What is the difference between a holonomic and an omnidirectional robot? I have seen the term "omnidirectional" used in robotics, but I have not found a precise definition. In the chapter "Omni-Directional Robots" from the book "Embedded Robotics" it is stated: "In contrast, a holonomic or omnidirectional robot is capable of driving in any direction", which seems to indicate that these words are synonyms. What is an omnidirectional robot? What is the difference between a holonomic and an omnidirectional robot (if any)? What are examples of omnidirectional and holonomic robots (or other objects)? mobile-robot

Holonomic is a precise mathematical term; one definition is that the velocity constraints can be integrated into the form $$f(q_1, q_2, \ldots, t) = 0.$$ Roughly, that means the controllable degrees of freedom at any instant match the total degrees of freedom of the robot. By this definition, a train is holonomic because it has one controllable degree of freedom (speed) and one motion degree of freedom (position along the track). However, most people would likely not consider a train to be omnidirectional. Note that a car is nonholonomic: at any given instant you have two controllable degrees of freedom (speed and steering angle) but three motion degrees of freedom (x, y, and orientation). – ryan0270
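As a concrete illustration of that distinction (a standard textbook example, not part of the original answer), the car's no-sideways-slip condition can be written out and compared against the integrability requirement above:

$$\dot{x}\,\sin\theta - \dot{y}\,\cos\theta = 0$$

This velocity constraint, which forces the car to move only along its current heading $\theta$, cannot be integrated into any relation $f(x, y, \theta, t) = 0$ among the coordinates alone, so the car is nonholonomic. The train's single constraint of staying on the track integrates trivially to "position along the track", which is why it meets the holonomic definition even though it is clearly not omnidirectional.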
Latex Plot Points t: Time points at which the solution should be reported. 5 times the size required in LATEX, with the default pointsize. Using script fonts in LATEX There are three "script-like" fonts available in most standard LATEX distributions. This paper examines the powerful components of SAS/GRAPH and highlights techniques for harnessing that power to create effective and attention-grabbing graphs. This plot contains two layers. Before we can ask Mathematica to plot this we have to have de nitions of the slope aand y-intercept b. I can get it by plotting two separate graphs and by using the hold on capability. You should be able to change r from 0 to plus or minus 0. The graphical argument used to specify point shapes is pch. The margins, font and header/footer sizes meet IEEE or ASME paper specifications. They are organized into seven classes based on their role in a mathematical expression. jl is a plotting metapackage which brings many different plotting packages under a single API, making it easy to swap between plotting "backends". Plotting in Julia is available through external packages. 0 [email protected] tmYLMajorOutwardLengthF = 0. You can use any string that is not a number as value for the missing data points or explicitly specify a missing data string using the set datafile missing command. Chart with Point Estimate and Confidence Interval Microsoft Excel Using a spreadsheet program, the point estimate and confidence interval of findings in rapid surveys can be presented graphically as High-Low-Close charts. Must validate the LaTeX for security reasons to prevent cross-site scripting. 1 plot command customization Customization of the data columns, line titles, and line/point style are specified when the plot command is issued. LaTeX can be used by The finer points may require a. The plot above represents 10 random points drawn from Normal distribution. Then select the Link to Data from the Plot menu. The list may be either a list of points p , where p is a list or Vector containing two numbers, or a flat list with an even number of values. These will be equivalent to since the y-axis represents the reliability and the values represent unreliability estimates. Plotting several 2d functions in a 3d graph [Open in Overleaf] Scatter plot with cubes [Open in Overleaf] Spiral cone [Open in Overleaf] Surface Plot [Open in Overleaf] Surface plot of a math function [Open in Overleaf] Surface plot with interior colors [Open in Overleaf] Torus. Gain additional perspective by studying polar plots, parametric plots, contour plots, region plots and many other types of visualizations of the. It also provides best solutions through artificial intelligence. Gnuplot was originally developed by Colin Kelley and Thomas Williams in 1986 to plot functions and data files on a variety of terminals. Other measurements, which are easier to obtain, are used to predict the age. Plotting Graphics From File. Texvc also spits back MathML, which is the W3C way to include equations in SVG -- maybe this can be used better in the future. To make a report:. In the following example the second E is lower than the first one because the operator _ is applied on /f which has a descending part, and not only on f which as no descending part. When such an equation contains both an x variable and a y variable, it is called an equation in two variables. 
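Since pgfplots is mentioned above as the easiest way to plot a set of points in LaTeX, here is a minimal sketch of such an example; the coordinates are invented purely for illustration, and the compat level is an assumption about the installed pgfplots version:

\documentclass{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.17} % assumed version; lower this for older installations
\begin{document}
\begin{tikzpicture}
  \begin{axis}[xlabel={$x$}, ylabel={$y$}]
    % plot a set of (x,y) points as markers only
    \addplot[only marks, mark=*] coordinates {(1,2) (2,4) (3,5) (4,4.5) (5,6)};
  \end{axis}
\end{tikzpicture}
\end{document}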
Plotting with Microsoft Excel 1 Plotting Data with Microsoft Excel Here is an example of an attempt to plot parametric data in a scientifically meaningful way, using Microsoft Excel. If the plot includes curve fits, these will be recalculated as well. If in the numbers you are working with are greater than 9999 pgfplot will use the same notation as in the example. Using LaTeX with EPS Figures. , there are no licence fees, etc. The basic command for sketching the graph of a real-valued function of one variable in MATHEMATICA is Plot[ f, {x,xmin,xmax} ]. hist() is a widely used histogram plotting function that uses np. However, sometimes we wish to overlay the plots in order to compare the results. 1 plot command customization Customization of the data columns, line titles, and line/point style are specified when the plot command is issued. This package is part of the SageMath distribution in the /examples/latex_embed sub-directory together with the documentation. One of these tools is the programming language R. A scatter plot or constellation diagram is used to visualize the constellation of a digitally modulated signal. We can save these plots as a file on disk with the help of built-in functions. To get exp to appear as a superscript, you type ^{exp}. the y coordinates of points in the plot, optional if x is an appropriate structure. P A point on extension of line OA passing closest & perpendicular to C. The easiest way is by using the pgfplots package, as illustrated in this example:. You can use any string that is not a number as value for the missing data points or explicitly specify a missing data string using the set datafile missing command. TikZ and pgf - Boston University 11. Plot - Graph a Mathematical Expression - powered by WebMath. With this approach, the curve is guaranteed to go through all of the data points. Use the TeX markup \pi for the Greek letter π. Gnuplot was originally developed by Colin Kelley and Thomas Williams in 1986 to plot functions and data les on a variety of terminals. The PLOT function draws a line plot of vector arguments. , there are no licence fees, etc. All I can see is 9 empty facets. The left graph in the figure below shows what a disaster that could be. ## Expand the size of the plot character. The easiest way is by using the pgfplots package, as illustrated in this example:. In this post, I'll show how to extract data from text files, CSV. Plotly OEM Pricing Enterprise Pricing About Us Careers Resources Blog Support Community Support Documentation JOIN OUR MAILING LIST Sign up to stay in the loop with all things Plotly — from Dash Club to product updates, webinars, and more! Subscribe. PGF is an acronym for 'Portable Graphics Format' and TikZ is a recursive acronym for 'TikZ ist kein Zeichenprogramm'. The second used a pdf device 1. Matplotlib comes with a set of default settings that allow customizing all kinds of properties. Plotting points in three dimensions How to plot points in three dimensions To plot points in three-dimensional coordinate space, we'll start with a three dimensional coordinate system, where the ???x???-axis comes toward us on the left, the ???y???-axis moves out toward the right, and the ???z???-axis is perfectly vertical. It comes as a separate package and has its own manualpgfplotstable. Not in a sence of the amount of document but the understandability of the contenct ist mostly extremely bad. Tips and Tricks for Troubleshooting LaTeX. For this example, plot y = x 2 sin ( x ) and draw a vertical line at x = 2. 
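The closing request, to plot y = x^2 sin(x) and draw a vertical line at x = 2, reads like a MATLAB-flavoured exercise; the same picture can be sketched in pgfplots as follows, reusing the preamble from the example above. The axis limits are chosen by hand (an assumption) so that the marker line spans the whole plot:

\begin{tikzpicture}
  \begin{axis}[xlabel={$x$}, ylabel={$y$}, ymin=-25, ymax=6]
    % y = x^2*sin(x); deg() converts radians to the degrees pgfmath expects
    \addplot[blue, thick, domain=0:5, samples=200] {x^2*sin(deg(x))};
    % dashed vertical line at x = 2, drawn as a two-point plot
    \addplot[black, dashed] coordinates {(2,-25) (2,6)};
  \end{axis}
\end{tikzpicture}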
Using script fonts in LATEX There are three "script-like" fonts available in most standard LATEX distributions. Must be specified for symbol drawing. reset () to reset to the default options. TikZ and pgf - Boston University 11. Gnuplot is undoubtedly a powerful tool, but getting it to do what you want can be a considerable challenge. We can plot a set of points to represent an equation. frame, or other object, will override the plot data. Right know I'm using pgfplots and I like the look of it best but I can't deny it's more easy to make a plot in excel. To make a report:. Result Plot of the birthday paradox LaTeX-Code. Introduction Overview over PGFPlots Summary and Outlook PGFPlots – Plotting in LATEX Consistent and high-quality plotting combined with LATEX Christian Feuersänger. It also has a comprehensive list of references, which will be of further use. Gnuplot Tutorial. Since this format always works, it can be turned into a formula:. Given two points, you can always plot them, draw the right triangle, and then find the length of the hypotenuse. Or, scratch that, it's pretty much impossible. High quality plots for LaTex with PGF-Tikz and Gnuplot. Winplot is an example of how interactive plotting can be accomplished in Ch with mathematical expressions entered by the user through a graphical user interface in Windows. This is not a comprehensive list. Is it possible in Matlab to plot an even piecewise function like: $ f(x) = \begin{cases} 3t , 0 < t < \pi \\ -3t , -\pi \le t \le 0 \end{cases}$ which has a period of $2\pi$. The Plot Details Label tab for 3D Symbol/Vector/Bars and XYY Bar graphs. Create a plot where x1 and y1 are represented by blue circles, and x2 and y2 are represented by a dotted black line. Plot the (u, v) components of a vector field emanating from equidistant points on the x-axis. These commands will cause a plot to be saved as a suitably-sized PDF file. tex is a 2-page paper in 2-column format. You may define the origin point and the dimension for the cube. For example, to plot bivariate data the plot command is used to initialize and create the plot. When some initial data is ready, prepare a plot from the data. Here graphs of numerous mathematical functions can be drawn, including their derivatives and integrals. 5 posts • Page 1 of 1. LaTeX is available as free software. Monday, July 22, 2019 " Would be great if we could adjust the graph via grabbing it and placing it where we want too. lty,lwd: the line types and widths for lines appearing in the legend. R Tutorial Series: Labeling Data Points on a Plot There are times that labeling a plot's data points can be very useful, such as when conveying information in certain visuals or looking for patterns in our data. For example, a list of the first ten prime numbers would be {2, 3, 5, 7, 11, 13, 17, 19, 23, 29}. In order to plot the points for the probability plot, the appropriate reliability estimate values must be obtained. If you want to move it to the position (X,Y)=(100,100), gnuplot> set key 100,100 The coordinate (100,100) is the position of the mid-point between a text and a line/symbol of the first line of the legend. For example:. The standard abbreviations drawn on a plotting sheet are: C The centre of a radar plotting sheet. The resulting plot should look like Figure 8: Figure 8. the plotting component (which you are currently reading) and 2. Return to Top. The plotting functions calculate a set of points and pass them to the plotting package together with a set of commands. LaTeX Tips from Overleaf. 
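One of the exercises above asks for a plot where x1 and y1 are shown as blue circles and x2 and y2 as a dotted black line; a hedged pgfplots version of that layout, with a legend in the upper left, might look like this (the data values and the sine reference curve are invented for illustration):

\begin{tikzpicture}
  \begin{axis}[legend pos=north west]
    % first series: blue circle markers only
    \addplot[blue, only marks, mark=o] coordinates {(0,0) (1,0.8) (2,0.9) (3,0.1) (4,-0.8)};
    \addlegendentry{x1, y1}
    % second series: dotted black line
    \addplot[black, dotted, thick, domain=0:4, samples=100] {sin(deg(x))};
    \addlegendentry{x2, y2}
  \end{axis}
\end{tikzpicture}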
You may have noticed on the plot of faithful there seems to be two clusters in the data. Tap into the extensive visualization functionality enabled by the Plots ecosystem, and easily build your own complex graphics components with recipes. sty (style file) refs. For example (0,2,0), (2,2,0) and (0,2,2). Matplotlib, and especially its object-oriented framework, is great for fine-tuning the details of a histogram. All rights reserved. PGF is an acronym for 'Portable Graphics Format' and TikZ is a recursive acronym for 'TikZ ist kein Zeichenprogramm'. LaTeX is the de facto standard for the communication and publication of scientific documents. First let's generate two data series y1 and y2 and plot them with the traditional points methods. Also, if you pre-set the dimensions of the window the plot is created in, you get even more control. Similar to the example above but: normalize the values by dividing by the total amounts. 50 Updated: 8/14 1. Water-t; Water-18 O; Water-d; Water-t 2; Deuterium oxide; Other names: Water vapor; Distilled water; Ice; H2O; Dihydrogen oxide; steam; Tritiotope Permanent link for this species. Monday, July 22, 2019 " Would be great if we could adjust the graph via grabbing it and placing it where we want too. Usually I draw this by hand. Profit or Age vs. Specify the components (in any order) as a character vector after the data arguments. Critical points: Thus we have saddles at (0, 0), (4, 0) and a stable spiral point at (2, 1). Stay on top of important topics and build connections by joining Wolfram Community groups relevant to your interests. 0 [email protected] tmXBMinorOutwardLengthF = 0. Biggles is another plotting library that supports multiple output formats, as is Piddle. The default linetypes for a particular terminal can be previewed by issuing the test command after setting the terminal type. The point is actually a circle drawn by \filldraw[black], this command will not only draw the circle but fill it with black colour. The Length slider controls the length of the vector lines. Here are a few more technical features that can either make your life easier or can make your plot look nicer: If you do not include x/y min/max, the plotting tool will choose defaults based on the range of your data (10% of the range of your data above/below the top/bottom of your data). Text scatter plot with Plotly Express¶. Right know I'm using pgfplots and I like the look of it best but I can't deny it's more easy to make a plot in excel. Hi, the tool looks fantastic. Using LaTeX with EPS Figures. When such an equation contains both an x variable and a y variable, it is called an equation in two variables. The command to plot each pair of points as an x-coordinate and a y-coorindate is "plot:" > plot ( tree $ STBM , tree $ LFBM ) It appears that there is a strong positive association between the biomass in the stems of a tree and the leaves of the tree. One of these tools is the programming language R. Author: Thomas Breloff (@tbreloff) To get started, see the tutorial. In the following example the second E is lower than the first one because the operator _ is applied on /f which has a descending part, and not only on f which as no descending part. 0 [email protected] tmXBMinorOutwardLengthF = 0. All of the points of the titration data can be connected to form a smooth curve. An example of script is present here to create a plot by exploiting both the quality of Tikz and the power of Gnuplot. 
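The \filldraw[black] circle described just above, together with the label node mentioned a little further on, can be reproduced with plain TikZ (only \usepackage{tikz} is needed); the coordinates here are made up so that the marked point really is the crossing of the two lines:

\begin{tikzpicture}
  % two straight paths that cross
  \draw (0,0) -- (3,2);
  \draw (0,2) -- (3,0);
  % mark the crossing with a small filled circle; the node is anchored at its
  % west side, so the text sits to the right of the point
  \filldraw[black] (1.5,1) circle (2pt) node[anchor=west] {intersection point};
\end{tikzpicture}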
The optional return value h is a vector of graphics handles to the created line objects. They are mostly standard functions written as you might expect. Depending on the output format (LaTeX or HTML), knitr will use the appropriate code like $3. smooth: Simple Panel Plot: par: Set or Query Graphical Parameters: pch. Given two points, you can always plot them, draw the right triangle, and then find the length of the hypotenuse. The third argument specifies the text. Plotting in Scilab www. Stem and leaf plot A stem and leaf plot organizes data by showing the items in order using stems and leaves. LaTeX forum ⇒ Graphics, Figures & Tables ⇒ pgfplots | Plot Size and Axis Scale Topic is solved Information and discussion about graphics, figures & tables in LaTeX documents. Alternatively, a single plotting structure, function or any R object with a plot method can be provided. The easiest way is by using the pgfplots package, as illustrated in this example:. The graph axis texters have been adjusted to work on the UnicodeEngine, as well. Make bar charts, histograms, box plots, scatter plots, line graphs, dot plots, and more. x was the last monolithic release of IPython, containing the notebook server, qtconsole, etc. Next to the point is a node, which is actually a box containing the "intersection point" text, is anchored at the west of the point. When such an equation contains both an x variable and a y variable, it is called an equation in two variables. 2 Upgrade remarks. Almost everything in Plots is done by specifying plot attributes. The areas in bold indicate new text that was added to the previous example. txt" using 1:($2) with lines title "first line" then Gnuplot will leave a gap. Instead, you can plot a line with all your data but then. Suppose we want to graph the equation [latex]y=2x - 1[/latex]. In this post I show the basics of tree drawing using TikZ and LuaLaTeX. csv files in LaTeX. They are mostly standard functions written as you might expect. We can plot it using a special library if the plot does not contain too many points (more on that later). 535072e+000. You may have noticed on the plot of faithful there seems to be two clusters in the data. PGF/TikZ is a pair of languages for producing vector graphics (e. The margins, font and header/footer sizes meet IEEE or ASME paper specifications. It seems that, for now at least, you can't combine plain text with math formatting in the same label. One can quickly identify relationships between the two measures, for example Sales vs. I am very much a visual person, so I try to plot as much of my results as possible because it helps me get a better feel for what is going on with my data. How to Graph Point Estimates and 95% Confidence Intervals Using Stata 11 or Excel The methods presented here are just several of many ways to construct the graph. Veusz is a scientific plotting and graphing program with a graphical user interface, designed to produce publication-ready 2D and 3D plots. 2 Basic Table Making in LATEX LATEX has excellent facilities for composing and typesetting tables. Gnuplot is a powerful and free plotting utility that is available for most architectures and operating systems. Clicking on the graph will reveal the x, y and z values at that particular point. To make a report:. That is, given a value for z, lines are drawn for connecting the (x,y) coordinates where that z value occurs. In this case, we'll use the summarySE() function defined on that page, and also at the bottom of this page. 
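For the example "Suppose we want to graph the equation y = 2x - 1" mentioned above, one plausible pgfplots rendering (assuming the same preamble as the first sketch) simply samples the equation at a few points and joins them:

\begin{tikzpicture}
  \begin{axis}[xlabel={$x$}, ylabel={$y$}, grid=both]
    % sample y = 2x - 1 at five points and join them with a line and markers
    \addplot[blue, mark=*, domain=-2:2, samples=5] {2*x - 1};
  \end{axis}
\end{tikzpicture}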
Through these tutorials, you will be able to explore the properties of points and quadrants in rectangular coordinate systems. The labeling of axes with letters x and y is a common convention, but any letters may be used. LaTEX (optional) Dimensions (lengths) Purpose; where used: Unit of length; \mfpic Size of a symbol; \point, \plot, and \plotsymbol Darkness of shading; \shade Cached Download Links. Text scatter plot with Plotly Express¶. As further data points are added to the sheet, the plot will be updated. For example, a list of the first ten prime numbers would be {2, 3, 5, 7, 11, 13, 17, 19, 23, 29}. Plotting complex numbers If the input to the Matlab plot command is a vector of complex numbers, the real parts are used as the x-coordinates and the imaginary parts as the y-coordinates. Graphics in LATEX using TikZ Zo a Walczak Faculty of Mathematics and Computer Science, University of Lodz zofiawal (at) math dot uni dot lodz dot pl Abstract In this paper we explain some of the basic and also more advanced features of. The last example for 3D plotting is about the representation of some spectroscopic data. The plot above represents 10 random points drawn from Normal distribution. With my question I intended to combine the best of both worlds: make the plot in excel and export it to LaTeX using pgfplots so that it looks like it's made with pgfplots in the first place. function plot To plot a curve where the vertical coordinate is a function of the horizontal coordinate, just give the function formula in terms of x within braces. However, sometimes we wish to overlay the plots in order to compare the results. plot(x,2*x, 'LineWidth', 2) I may want some data points drawn in the same color as the curve. The basic command is plot(x,y), where x and y are the co-ordinate. A complete list of available options can be found in the plot/options help page. We have not only include LaTex Editing app but also online LaTex editors along with LaTex plugins. The Bezier curve in Pstricks is cubic Bezier curve. The problem with that becomes clear if you have a large number of data points - you do not want to try to jam hundreds of symbols onto a curve. The x-intercept is the point where the line crosses the x-axis. com page 3/17 Step 1: Basic plot with LaTex annotations Here, we plot the function: U L 1 1 6 on the interval > F5,5. An example of script is present here to create a plot by exploiting both the quality of Tikz and the power of Gnuplot. Simplest method using Stata: One simple way in which to portray a graphical representation of the confidence intervals for the. The graphical argument used to specify point shapes is pch. Texvc also spits back MathML, which is the W3C way to include equations in SVG -- maybe this can be used better in the future. Letters are. After all, LaTeX offers a proper degree symbol in the tex companion fonts, indicating that someone there, too, decided that ^\circ is not perfect. b= y 1 The slope ais going to be approximately the rise over run between the rst and last data points, a= y N y 1 x N x 1. I Specify 1 or 2 \control points" between two points of the path. We've seen that Mathematica uses parentheses for grouping and square brackets in functions. One common task is to plot multiple data sets on the same plot. Input LaTeX, Tex, AMSmath or ASCIIMath notation (Click icon to switch to ASCIIMath mode) to make formula. T he captions for figures, tables, subfigures and subtables in LaTeX can be customized in various ways using the caption and subcaption packages. 
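Two ideas in the passage above, giving the function formula in terms of x within braces and wanting some data points drawn in the same colour as the curve (the MATLAB plot(x,2*x,'LineWidth',2) snippet), combine naturally into one short pgfplots sketch; the measured points are invented:

\begin{tikzpicture}
  \begin{axis}[xlabel={$x$}, ylabel={$y$}]
    % the function formula goes in braces; 'thick' stands in for LineWidth
    \addplot[blue, thick, domain=0:4, samples=2] {2*x};
    % a few data points drawn in the same colour as the curve
    \addplot[blue, only marks, mark=*] coordinates {(0.5,1.1) (1.5,2.9) (2.5,5.2) (3.5,6.8)};
  \end{axis}
\end{tikzpicture}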
CPA The predicted closest point of approach of. O The position of an initial detection of a target on the PPI. TikZ automatically draws a line between the points (0,0) and (1,2) and sets up the right space for the gure (by default, coordinates are in centimeters). Other measurements, which are easier to obtain, are used to predict the age. First we come to see how to plot a sector. The command to plot each pair of points as an x-coordinate and a y-coorindate is "plot:" > plot ( tree $ STBM , tree $ LFBM ) It appears that there is a strong positive association between the biomass in the stems of a tree and the leaves of the tree. The graphical argument used to specify point shapes is pch. Specify the components (in any order) as a character vector after the data arguments. Trinity tells him it's for their mutual protection and. You can merge standard plots of expressions, plots of points, plots of text (This is the utility of textplot: to label things), and animations. 3 of this license or. If NULL, the default, the data is inherited from the plot data as specified in the call to ggplot(). You don't have to pay for using LaTeX, i. With my question I intended to combine the best of both worlds: make the plot in excel and export it to LaTeX using pgfplots so that it looks like it's made with pgfplots in the first place. Scatter Plots and Constellation Diagrams. For example, suppose that we wish to typeset the following passage: This passage is produced by the following input:. To plot one vector as a function of another, use two parameters. Add points to a plot in R. 2 Upgrade remarks. Look below to see them all. This can be accomplished by setting the in and out angle in brackets. The TitleFontSizeMultiplier property of the axes contains the scale factor. Summary Plot Overview In the novel's foreword, the fictional John Ray, Jr. in Excel) makes it difficult to explore the visualised data as you can't really zoom in and scroll through the data. Introduction Overview over PGFPlots Summary and Outlook PGFPlots - Plotting in LATEX Consistent and high-quality plotting combined with LATEX Christian Feuersänger. T he captions for figures, tables, subfigures and subtables in LaTeX can be customized in various ways using the caption and subcaption packages. In order to achieve this we use the command meshgrid. Getting MATLAB Figures into Latex or Powerpoint The Important Points. Using Cartesian Coordinates we mark a point on a graph by how far along and how far up it is: The point (12,5) is 12 units along, and 5 units up. reset () to reset to the default options. The underlying rendering is done using the matplotlib Python library. Line Plots ¶. The plot appears centered on the page and scaled at 60 percent of its original size. We first define them as different plots and then display them together. We've seen that Mathematica uses parentheses for grouping and square brackets in functions. Get our help. O The position of an initial detection of a target on the PPI. Octave has powerful facilities for plotting graphs via a second open-source program GNU-PLOT. Many aspects of the LaTeX template used to create PDF documents can be customized using top-level YAML metadata (note that these options do not appear underneath the output section, but rather appear at the top level along with title, author, and so on). So it is the plane that consists of the following points. If you then plot the lines using. Label the symbols "sampled" and "continuous", and add a legend. The third argument specifies the text. 
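The remark that TikZ automatically draws a line between the points (0,0) and (1,2), together with the control-point and in/out-angle syntax mentioned elsewhere in this thread, is easy to see side by side in a small sketch (pure TikZ, no pgfplots required; the curve coordinates are arbitrary):

\begin{tikzpicture}
  % straight line segment between two coordinates (centimetres by default)
  \draw (0,0) -- (1,2);
  % cubic Bezier: the two end points plus two control points fix the curve
  \draw[red] (2,0) .. controls (2.5,2) and (3.5,2) .. (4,0);
  % a similar curve written with exit/entry angles instead of control points
  \draw[blue, dashed] (2,0) to[out=80, in=100] (4,0);
\end{tikzpicture}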
The subscripts and superscripts operators apply not only on one character but on all the "normal text" preceding them. There is indeed a rather steep learning curve, but I don't think there's many people that master the whole of it. This is not a comprehensive list. It comes as a separate package and has its own manualpgfplotstable. At the bottom of page F-6 there is a problem, F. Stacked bar plot with group by, normalized to 100%. Pychart is a library for creating EPS, PDF, PNG, and SVG charts. Here is the code you can use to change the line style. P A point on extension of line OA passing closest & perpendicular to C. For this example, plot y = x 2 sin ( x ) and draw a vertical line at x = 2. In RStudio the plot looks as intended, however, on my final pdf created with the above latex code all data points on the plot have simply vanished. ## Expand the size of the plot character. Plotting with Microsoft Excel 1 Plotting Data with Microsoft Excel Here is an example of an attempt to plot parametric data in a scientifically meaningful way, using Microsoft Excel. Plot the point given in polar coordinates, and find other polar coordinates (r, θ) of the point for which. It contains a number of examples that will be of immense use to readers who wish to try it out. The details may change with different versions, but the principle. In this 3D plot, all the axis are logarithmic axis but I cannot manage to import it properly into pgfplots. The command to plot each pair of points as an x-coordinate and a y-coorindate is "plot:" > plot ( tree $ STBM , tree $ LFBM ) It appears that there is a strong positive association between the biomass in the stems of a tree and the leaves of the tree. gnuplot, where by default is the name of the. Predicting the age of abalone from physical measurements. In 1988 and 1989 I created an alternate version, known as GnuTEX, that supported a new \terminal type" called latex, so gnuplot would output LATEX code. Again we'll use inline plotting, though it can be useful to skip the "inline" backend to allow interactive manipulation of the plots. A "perfect" fit (one in which all the data points are matched) can often be gotten by setting the degree of the regression to the number of data pairs minus one. All rights reserved. The help on inserting Greek letters and special symbols is also available in Help menu. plots y versus x using a dash-dot line (-. , technical illustrations and drawings) from a geometric/algebraic description, with standard features including the drawing of points, lines, arrows, paths, circles, ellipses and polygons. © 2016 CPM Educational Program. 簡単に使用できるオンラインLaTeXエディター。. Each example builds on the previous one. 2) Optionally color the points by a property - also read from the file It would. … Arguments to be passed to methods, such as graphical parameters (see par). Refer to the external references at the end of this article for more information. TikZ automatically draws a line between the points (0,0) and (1,2) and sets up the right space for the gure (by default, coordinates are in centimeters). Jeromy Anglim's Notes. When creating a theme, the margins should be placed on the side of the text facing towards the center of the plot. Advanced Techniques for Plotting Tool Use. When I hover over the point it will say Series 1 Point "48" and then it will give the position of the point as (33,420). When plotting in 3D we need evenly spaced x- and y-values, spaced on a grid where each function value z is taken of a point (x, y) on the grid. 
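The closing point about needing evenly spaced x- and y-values on a grid for 3D plotting is exactly what pgfplots' \addplot3[surf] sets up internally; a hedged sketch with an arbitrary test function (same preamble as before):

\begin{tikzpicture}
  \begin{axis}[xlabel={$x$}, ylabel={$y$}, zlabel={$z$}]
    % the expression is evaluated on an evenly spaced 25 x 25 grid of (x,y) points
    \addplot3[surf, domain=-2:2, y domain=-2:2, samples=25] {exp(-x^2-y^2)};
  \end{axis}
\end{tikzpicture}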
In 1988 and 1989 I created an alternate version, known as GnuTEX, that supported a new "terminal type" called latex, so gnuplot would output LATEX code. Surely I am missing something. Typing Greek letters with Keyboard Shortcuts To insert Greek letter type Ctrl+G ( Command G on Mac OS ) and then type Latin letter mentioned in the table below. smooth: Simple Panel Plot: par: Set or Query Graphical Parameters: pch. Simply begin the appropriate environment at the desired point within the current list. Label Curves, Make Keys, and Interactively Draw Points and Curves Description. Veusz is multiplatform, running on Windows, Linux/Unix and macOS. 0, the language-agnostic parts of the project: the notebook format, message protocol, qtconsole, notebook web application, etc. For interactive work the function plot is better suited. txt" using 1:($2) with lines title "first line" then Gnuplot will leave a gap. We believe free and open source data analysis software is a foundation for innovative and important work in science, education, and industry. If in the numbers you are working with are greater than 9999 pgfplot will use the same notation as in the example. For more detail in the plot, simply increase its number. plot "vikt_ma. And to to determine a cubic Bezier curve four control points are needed ("How to draw a cubic Bezier curve according to four given points" can be found in Wikipedia). Veusz is a scientific plotting and graphing program with a graphical user interface, designed to produce publication-ready 2D and 3D plots. In Paris, he is just starting to film his latest movie, a remake of Les vampires (1915), and has hired Hong Kong based Chinese actress Maggie Cheung as the title lead, "Irma Vep" (an anagram for "vampire"), despite she knowing no French and she not being. On the other hand, copying as PDF can also have one problem: the large file size created especially when attempting to convert Graphics3D objects to PDF. The details may change with different versions, but the principle. Matrices and other arrays in LaTeX. In this tutorial you will learn how to • plot data in Octave. JPE Precision Point sheet about how to compose a bode plot from a linear differential equation. Text size The text on a plot should be approximately the same as the text around it. You don't have to pay for using LaTeX, i. This can be useful for a variety of things but when I first learned about it, I was a bit confused by how the axes seem to be flipped sometimes when you do this. In the next example, we combine the point plot of the variable points defined above with a plot of sin(3*x). Using script fonts in LATEX There are three "script-like" fonts available in most standard LATEX distributions. How to Visualize and Compare Distributions in R By Nathan Yau Single data points from a large dataset can make it more relatable, but those individual numbers don't mean much without something to compare to. We first define them as different plots and then display them together. Stata: Visualizing Regression Models Using coefplot Partiallybased on Ben Jann's June 2014 presentation at the 12thGerman Stata Users Group meeting in Hamburg, Germany:. Coordinate Planes and Graphs A rectangular coordinate system is a pair of perpendicular coordinate lines, called coordinate axes, which are placed So that they intersect at their origins. Assorted notes on statistics, R, psychological research, LaTeX, computing, etc. 
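The sentence about combining "the point plot of the variable points defined above with a plot of sin(3*x)" describes two plots defined separately and then displayed together; in pgfplots the same effect comes from putting two \addplot commands in one axis. The point list below is invented, chosen to lie on the curve:

\begin{tikzpicture}
  \begin{axis}[xlabel={$x$}, ylabel={$y$}]
    % the point plot: discrete samples shown as markers only
    \addplot[only marks, mark=square*] coordinates {(0,0) (0.5,1.00) (1.0,0.14) (1.5,-0.98) (2.0,-0.28)};
    % the curve sin(3x); deg() because pgfmath trigonometry works in degrees
    \addplot[red, domain=0:2, samples=150] {sin(deg(3*x))};
  \end{axis}
\end{tikzpicture}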
A user comment from July 22, 2019: "Would be great if we could adjust the graph via grabbing it and placing it where we want to." We've seen that Mathematica uses parentheses for grouping and square brackets in functions. Plot size in pixels for EMF, GIF, JPEG, PBM, PNG, and SVG.
Which of the following is CORRECT with respect to grammar and usage? Mount Everest is ____________. (A) the highest peak in the world (B) highest peak in the world (C) one of highest peak in the world (D) one of the highest peak in the world Answer : (A) the highest peak in the world

The policeman asked the victim of a theft, "What did you _______ ?" (A) loose (B) lose (C) loss (D) louse Answer : (B) lose

Despite the new medicine's ______________ in treating diabetes, it is not ______________ widely. (A) effectiveness --- prescribed (B) availability --- used (C) prescription --- available (D) acceptance --- proscribed Answer : (A) effectiveness --- prescribed

In a huge pile of apples and oranges, both ripe and unripe mixed together, 15% are unripe fruits. Of the unripe fruits, 45% are apples. Of the ripe ones, 66% are oranges. If the pile contains a total of 5692000 fruits, how many of them are apples? (A) 2029198 (B) 2467482 (C) 2789080 (D) 3577422 Answer : (A) 2029198 Subject : Analytical Skills

Michael lives 10 km away from where I live. Ahmed lives 5 km away and Susan lives 7 km away from where I live. Arun is farther away than Ahmed but closer than Susan from where I live. From the information provided here, what is one possible distance (in km) at which I live from Arun's place? (A) 3.00 (B) 4.99 (C) 6.02 (D) 7.01 Answer : (C) 6.02

A person moving through a tuberculosis prone zone has a 50% probability of becoming infected. However, only 30% of infected people develop the disease. What percentage of people moving through a tuberculosis prone zone remains infected but does not show symptoms of disease?

In a world filled with uncertainty, he was glad to have many good friends. He had always assisted them in times of need and was confident that they would reciprocate. However, the events of the last week proved him wrong. Which of the following inference(s) is/are logically valid and can be inferred from the above passage? (i) His friends were always asking him to help them. (ii) He felt that when in need of help, his friends would let him down. (iii) He was sure that his friends would help him when in need. (iv) His friends did not help him last week. (A) (i) and (ii) (B) (iii) and (iv) (C) (iii) only (D) (iv) only Answer : (B) (iii) and (iv)

Leela is older than her cousin Pavithra. Pavithra's brother Shiva is older than Leela. When Pavithra and Shiva are visiting Leela, all three like to play chess. Pavithra wins more often than Leela does. Which one of the following statements must be TRUE based on the above? (A) When Shiva plays chess with Leela and Pavithra, he often loses. (B) Leela is the oldest of the three. (C) Shiva is a better chess player than Pavithra. (D) Pavithra is the youngest of the three. Answer : (D) Pavithra is the youngest of the three.

If $\frac{1}{2}q^{-a}=\frac{1}{r}$, $r^{-b}=\frac{1}{s}$, and $s^{-c}=\frac{1}{q}$, the value of $abc$ is ____. (A) $(rqs)^{-1}$ (D) $r+q+s$

P, Q, R and S are working on a project. Q can finish the task in 25 days, working alone for 12 hours a day. R can finish the task in 50 days, working alone for 12 hours per day. Q worked 12 hours a day but took sick leave in the beginning for two days. R worked 18 hours a day on all days. What is the ratio of work done by Q and R after 7 days from the start of the project?
(A) 10:11 (B) 11:10 (C) 20:21 (D) 21:20 Answer : (C) 20:21

The solution to the system of equations $\begin{bmatrix}2&5\\-4&3\end{bmatrix}\begin{Bmatrix}x\\y\end{Bmatrix}=\begin{Bmatrix}2\\-30\end{Bmatrix}$ is (A) 6, 2 (B) -6, 2 (C) -6, -2 (D) 6, -2 Answer : (D) 6, -2

If $f(t)$ is a function defined for all $t\geq0$, its Laplace transform $F(s)$ is defined as (A) $\int_0^\infty e^{st}\,f(t)\,\mathrm{d}t$ (B) $\int_0^\infty e^{-st}\,f(t)\,\mathrm{d}t$ (C) $\int_0^\infty e^{ist}\,f(t)\,\mathrm{d}t$ (D) $\int_0^\infty e^{-ist}\,f(t)\,\mathrm{d}t$ Answer : (B) $\int_0^\infty e^{-st}\,f(t)\,\mathrm{d}t$

$f(z)=u(x,y)+i\,v(x,y)$ is an analytic function of the complex variable $z=x+iy$, where $i=\sqrt{-1}$. If $u(x,y)=2xy$, then $v(x,y)$ may be expressed as (A) $-x^2+y^2+\mathrm{constant}$ (B) $x^2-y^2+\mathrm{constant}$ (C) $x^2+y^2+\mathrm{constant}$ (D) $-(x^2+y^2)+\mathrm{constant}$ Answer : (A) $-x^2+y^2+\mathrm{constant}$

Consider a Poisson distribution for the tossing of a biased coin. The mean for this distribution is $\mu$. The standard deviation for this distribution is given by (A) $\sqrt\mu$ (B) $\mu^2$ (C) $\mu$ (D) $\frac{1}{\mu}$ Answer : (A) $\sqrt\mu$

Solve the equation $x=10\cos(x)$ using the Newton-Raphson method. The initial guess is $x=\pi/4$. The value of the predicted root after the first iteration, up to second decimal, is ________ Answer : 1.53 : 1.59

A rigid ball of weight 100 N is suspended with the help of a string. The ball is pulled by a horizontal force $F$ such that the string makes an angle of $30^\circ$ with the vertical. The magnitude of force $F$ (in N) is __________ Answer : 55 : 60 Subject : Engineering Mechanics Topic : Free Body Diagrams and Equilibrium

A point mass M is released from rest and slides down a spherical bowl (of radius R) from a height H as shown in the figure below. The surface of the bowl is smooth (no friction). The velocity of the mass at the bottom of the bowl is (A) $\sqrt{gH}$ (B) $\sqrt{2gR}$ (C) $\sqrt{2gH}$ Answer : (C) $\sqrt{2gH}$

The cross sections of two hollow bars made of the same material are concentric circles as shown in the figure. It is given that $r_3 > r_1$ and $r_4 > r_2$, and that the areas of the cross-sections are the same. $J_1$ and $J_2$ are the torsional rigidities of the bars on the left and right, respectively. The ratio $J_2/J_1$ is (A) >1 (B) <0.05 (C) =1 (D) between 0.5 and 1 Answer : (A) >1 Subject : Mechanics of Materials Topic : Torsion of Circular Shafts

A cantilever beam having square cross-section of side $\alpha$ is subjected to an end load. If $\alpha$ is increased by 19%, the tip deflection decreases approximately by (A) 19% (B) 29% (C) 41% (D) 50% Answer : (D) 50% Subject : Mechanics of Materials Topic : Deflection of Beams

A car is moving on a curved horizontal road of radius 100 m with a speed of 20 m/s. The rotating masses of the engine have an angular speed of 100 rad/s in the clockwise direction when viewed from the front of the car. The combined moment of inertia of the rotating masses is 10 kg-$\mathrm{m}^2$. The magnitude of the gyroscopic moment (in N-m) is __________ Answer : 199 : 201 Subject : Theory of Machines Topic : Gyroscope
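As a quick check on the stated answer band for the gyroscopic-moment question (this derivation is an editorial addition, not part of the original key): the car precesses about the vertical as it follows the curve, so

$$\omega_p = \frac{v}{R} = \frac{20}{100} = 0.2\ \text{rad/s}, \qquad M = I\,\omega\,\omega_p = 10 \times 100 \times 0.2 = 200\ \text{N-m},$$

which lies inside the accepted range of 199 to 201 N-m.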
Quantum mechanics and everyday nature Is there a phenomenon visible to the naked eye that requires quantum mechanics to be satisfactorily explained? I am looking for a sort of quantic Newtonian apple. quantum-mechanics everyday-life $\begingroup$ Can't post an answer but the rainbow reflections that you see on a CD are a good example. They are caused by interference between the light reflected from a 'quantum grating' - a surface which is periodically reflective and less reflective. Different frequencies interfere differently. $\endgroup$ – jwg May 22 '13 at 14:00 $\begingroup$ @jwg: There is nothing quantum-mechanical about the fact that a CD acts as a reflection grating. That's pure classical E&M. $\endgroup$ – Ben Crowell May 25 '13 at 16:58 $\begingroup$ Fire and stars have different colors at different temperatures. This is explained by the quantum mechanical explanation of blackbody radiation. $\endgroup$ – raindrop May 26 '13 at 5:31 $\begingroup$ @BenCrowell, you are right of course. $\endgroup$ – jwg May 27 '13 at 8:19 $\begingroup$ Note: Please see the description of big-list: DO NOT USE THIS TAG! "big-list" is applied to list-like questions which have no single answer. A few of these were asked early in the site's history, but they are no longer allowed. $\endgroup$ – Abhimanyu Pallavi Sudhir Jun 2 '13 at 10:46 Use a prism (or a diffraction grating if you have one) to break up the light coming from a florescent bulb. You'll see a bunch of individual lines rather than a continuous band of colors. This comes from the discrete energy levels in atoms and molecules, which is a consequence of quantum mechanics. If the audience you have in mind is more advanced, you can present the ultraviolet catastrophe of classical mechanics. Classically, something with finite temperature would tend to radiate an infinite amount of energy. Quantum mechanics explains the intensity vs. wavelength curves that we actually see. $\begingroup$ Or see reflection on CD/DVD. $\endgroup$ – user11153 May 22 '13 at 7:12 $\begingroup$ Is it acceptable to describe the rainbow as a quantic phenomenon? $\endgroup$ – user1975053 Jun 3 '13 at 7:53 $\begingroup$ @user1975053: A rainbow can be more or less completely explained by water having an index of refraction that varies with frequency. Once you have this, no QM is needed to get a rainbow. I don't know if the dispersion of water is a QM phenomenon. $\endgroup$ – Dan Jun 3 '13 at 20:01 $\begingroup$ A diffraction grating doesn't give you a continuous band of colors when you look at a florescent light. It's line-dark gap-line-dark gap etc. That's the QM phenomenon. $\endgroup$ – Dan Jun 3 '13 at 20:03 $\begingroup$ I'm not convinced that spectral lines have to be quantum. A mechanical tuning fork has a mechanical response frequency and a pair of mirrors has an optical resonance frequency. Neither of those examples is quantum. $\endgroup$ – DanielSank May 19 '18 at 14:23 Reflections on Everyday Quantum Events In one sense, it's hard not to see quantum mechanics in everyday life. For example, the existence of complex chemistry and the volume occupied by ordinary matter are both direct consequences of something called Pauli exclusion. That's a quantum rule that requires that every electron in the universe maintains a unique address that consists of its location in space (three numbers), its momentum (more-or-less velocity for same-mass electrons, and also three numbers), and one more odd one called spin orientation (binary, either up or down). 
When negatively charged electrons are packed tightly together by, say, the positive attraction of an atomic nucleus, these unique-address rules cause the electrons to take up unique positions and orientations around atoms (chemistry), and to resist being squeezed together beyond a certain point (volume). Atomic bonding in chemistry -- without which we would not be here to discuss this! -- would largely disappear without that last odd address rule about up-down spins. The ability of two electrons to share the same space by having opposite spins gives some atoms the ability to steal an electron from other atoms by providing a cozy shared-address home away from home, an effect that in chemistry is called ionic bonding. In other cases the up-down pairing rule enables a pair of electrons to be shared equally by two atoms, which is also called covalent bonding. However, I think your question was really focused more on finding "a phenomenon visible to the naked eye that requires quantum mechanics," and that what you wanted was something a bit more profound and large than simply summing up the large-scale impacts of many very small quantum events. I suspect you were hoping for something you can see with your unaided eye, without the need for a lab filled with exotic equipment. Such things really do exist. In fact, you very likely looked straight into an example just this morning. They are called mirrors. That is, the ability of polished metals to reflect beautifully accurate images of the worlds around them, while most (not all!) other substances are dark, dull, or transparent, is a type of large-scale quantum event that is every bit as odd as exotica such as laboratory Bose condensates. It's a classic example of familiarity generating indifference: Metallic reflection is so common and easy to observe that we forget just how profoundly odd and non-classical it is.

Spacing Out, For Real

So why is metallic reflection deeply quantum? It's quantum in multiple ways, actually. The first step is that you have to send a huge number of electrons into a sort of curious alternative form of space, one in which the coordinate system for finding an electron no longer consists of three directions of space, but instead must be expressed in three directions of momentum. How can an electron possibly get "lost" in ordinary space? The way they get there is surprisingly uncomplicated and ordinary sounding: In metals, certain electrons are given the freedom to run around freely throughout the entire volume of the metal. That is, metal atoms are firm believers in a sort of community-wide sharing of some of their electron children, caring not in the least if their own electron ends up very far away indeed, as long as other electrons stay close enough to cancel out their positive charges. A roaming electron does not sound all that unusual until you realize that electrons are so very light that quantum mechanics cannot be ignored. What quantum mechanics does to very light objects is cause their quantum descriptions to start taking up space across the entire volume of the metal over which they roam. That is, instead of an electron moving back and forth across a crystal as a massive classical object would, an undisturbed and freely roaming electron is most accurately represented as being equally located at all locations in the metal at the same time. Try to pull that trick with your car!

What is Your Address, Please?
However, since in any given piece of metal the super-light shared electrons are all doing the same "I am everywhere!" trick at the same time, there arises a problem with that address issue I mentioned earlier: Every electron in the universe must have a completely unique address. If these lost electrons are all sharing space in the same chunk of metal, it means that they also are sharing essentially identical (even if odd) locations in ordinary space... and that simply will not do. It means that each such electron in the metal must find some new way to maintain a unique "address" within the universe. The up-and-down option helps, but only allows two electrons to share the same address. So, the only option left is for the electrons to start climbing into the only remaining set of coordinates, which is the diverse range of speeds and directions (velocities) called momentum space. Now I should point out that when observing this process from our perspective of ordinary space with XYZ coordinates, electrons climbing into momentum space just look like they are all acquiring different speeds, which doesn't sound all that exotic. But to electrons moving into momentum space, the view is very different indeed. Here's the main reason why: The electrons can actually bump into each other once they enter into momentum space, just like water molecules filling up a container in ordinary space. All of that bumping and jostling for momentum space forces the electrons to spread out and take up more room there, again in a way that is strikingly similar to how water molecules pile up in ordinary XYZ space.

Quantum Splish-Splash

In fact, the process of electrons jostling around and spreading out in momentum space is so similar to the way water molecules fill a container that such a collection of electrons in a metal is called a Fermi sea. (An aside: Enrico Fermi must have had a really good press agent working for him, given all the cool stuff that is named after him in physics.) This type of momentum-space liquid even has a well-defined surface, just like an ordinary liquid. However, recall that from our perspective in ordinary XYZ space, the electrons piled up in momentum space just appear to be moving at different velocities. This equivalence means that electrons closer to the surface of the Fermi sea in momentum space must necessarily also be moving faster in ordinary XYZ space. In fact, for a good conductor such as silver the electrons at the surface of the Fermi sea end up moving very fast indeed. Since speed for a small object is the same thing we call heat, just how hot do these electrons end up being?

We're Feeling Hot Hot Hot

Well, if the electrons at the top of the Fermi sea in a large piece of silver suddenly lost all of their energy, it would be emitted in the form of X-rays. The burst would be so energetic that anyone standing nearby would be killed. That's pretty hot! Fortunately for jewelry wearers, this flatly cannot happen because all of those electrons lower in the Fermi sea refuse to budge. They really like their much cooler locations in momentum space, and they are not about to give them up! Now it's time to bring all of this back around to your question of whether you can "see" quantum effects on the scale of ordinary life. The quantum magic begins whenever you look into an ordinary mirror. As soon as you do, you are already gazing into a sea of electrons that from a quantum mechanical perspective don't quite exist in ordinary space.
They are "lost" in the XYZ space we know best, a space in which their accurate quantum representations are in some cases as large as the entire surface of the mirror. And most of those lost electrons are also hidden! That's because light that we see bouncing off a mirror comes from only a very tiny percentage of the Fermi sea electrons, specifically only the extremely hot ones at the very top of the Fermi sea. This is because they are the only electrons that have any "wiggle room" left to accept a photon and play catch with it. What happens is this: An electron at the Fermi sea surface can accept a particle of light, a photon, and by doing so speed itself up just a little more. But unlike the electrons further down in the sea, when an electron at the surface speeds up it creates an "empty spot" in the Fermi sea. The process is closely akin to the ways a splash of water can rise up into air, but then realizes it no longer has any water below it to keep it supported. Unlike the water in the see, the splash above the surface is not stable: It has to fall back to the surface. Very much like such a splash of water, an electron at the Fermi surface that has been "splashed up" by an incoming particle of light (photon) has no support underneath it to keep it there. So, it must fall back to the surface of the Fermi sea. As it does so, it gives up the photon energy that it held ever so briefly by re-emitting a nearly identical version of the photon it just absorbed. This re-emission of a photon from an electron at the Fermi surface is the smallest and most fundamental unit of reflection, the event from which larger-scale reflections are composed. Simplicity from Complexity Now the really neat thing about such re-emissions is that if your metal is smooth and consistent and polished on the surface, each such re-emission effect ends up being directed by the high symmetry of both the flat metal surface and of its smooth Fermi sea of electrons, causing the emitted photon (or more precisely, many photons interacting over the entire surface) to emerge in a very precise fashion that we call the angle of reflection. It's a case where a lot of complicated physics guided by even more complicated math ends up having a gorgeously simple outcome, and event we simply call reflection. And most amazingly, that simplicity is deeply dependent on quantum effects that cross the entire mirror. It requires electrons that has collectively lost their way in ordinary space, and taken up refuge in a space that is not like the space we usually see, yet still allows them to bump into each other. They form a liquid in this peculiar momentum space, a sea that flips upside down our very understanding of what an "object" or "liquid" is and how it should behave. The tiniest sliver of these hidden electrons then wave back to us as they surf the surface of their hidden, showing off the incredible speeds they have reached by tossing photons back at us in a coordinate juggling act that we see as in the vivid brightness of a mirror, or the beauty of a sparkling ornament, or in a bit of bright silver or gold. Finale: Take a Moment to Reflect So, metallic reflection is a deeply quantum event, one that takes place on a human scale, and one that is uniquely beautiful and useful. If you find your universe a bit dull some mornings, take a moment to say hello to this lovely bit of quantum weirdness when you look into a morning mirror! And reflect a bit on your reflection to remind you how a remarkable universe we live in. 
Addendum 2015-06-20: Vision as Quantum Physics I must add an example of large-scale quantum phenomena that is much closer to home than a mirror. It is the fact that you can see at all. Lenses, including the ones in your eyes, are profoundly quantum devices. If it were not for quantum mechanical conversion of the large-scale shape of a lens into guidance for how microscopic particles of light (photons) travel, the lenses in your eyes would be as opaque as steel, and you would not be reading this text. The problem is this: Since light is emitted and received as tiny particle-like units of energy, or photons, classical physics requires that these photons remain particles during their travels between those two points. And that is a problem. After all, how does an electromagnetic photon travel through a lens full of electron-rich atoms that should bat it this way and that like a maze of mind-boggling complexity, let alone form an image? It might bounce around for a few moments in the outermost atomic layers of the lens, but it would have no chance of penetrating deeper before being lost or absorbed. It is quantum mechanics that rescues us from the paradox of classical-photon blindness. Mathematically, quantum mechanics allows a single photon to "explore" the entire shape of a lens through a process called the integration of all possible histories. This process makes no sense at all classically, since it is as if the photon had explored literally every possible path between its starting and ending points. Those virtual explorations are then added up in a special way to produce the wave function of the photon, which tells which bundle of paths is the most likely to contain the actual photon. It is this infinite array of virtual photon paths that allows a single photon to "sniff out" the overall shape and form of a lens such as the ones in your eyes. Given the incredibly tiny amount of energy contained in a single photon in comparison to a huge, human-scale lens, that is a rather remarkable feat. It is roughly like taking a small penlight into orbit and "seeing" the shape of the entire earth by shining it onto the night side. Remarkably, every photon must do this on its own, since the result of shining every photon in a beam of light one-by-one through a lens is the same as what you get by shining them all at once. The bottom line is this: Every single form of reflection, refraction, or transparency that you see using ordinary light is pretty much a miracle of quantum mechanics. None of those effects can exist without the photons "sniffing out" the large-scale shape of a mirror, lens, or window (which is really just a flat lens) in a way that allows them to ignore the incredible complexity of those objects, and focus instead on their overall shape and optical properties. So how far do you need to go to see profoundly quantum effects in everyday life? Not far at all, for the very act of using your eyes to look for such effects is itself deeply quantum. Terry BollingerTerry Bollinger $\begingroup$ Wow. I know very little about physics beyond a lot of very general "that's something like this", but this answer is absolutely fantastic. Thank you for this fascinating glimpse into the incredible things we know about the world. $\endgroup$ – Linuxios Apr 4 '14 at 22:11 $\begingroup$ This answer is too good. $\endgroup$ – Gaurav Jan 28 '15 at 12:26 $\begingroup$ @TerryBollinger Excellent answer indeed. It reminds me about the "how does a magnet work?" 
someone asked Feynman once: youtube.com/watch?v=wMFPe-DwULM . Of course quantum mechanics is at work in reflection. Yet reflection should appear sufficient for dealing with ... reflection ;-) Still, the Pauli principle is THE effect to mention indeed. – FraSchelle Jun 9 '15 at 11:52

Magnetism is a nice example: you can explain the spin-spin alignment only with quantum mechanics (see exchange interaction), and it is even possible to prove the Bohr-Van Leeuwen theorem, which states that no classical theory can explain how a magnet works.

Reference: Feynman's Lectures on Physics

Ikiperu

As a logical extension to this, you need QM to explain why a paper clip is attracted to a magnet. – zkf May 21 '13 at 20:19

The problem is that the macroscopic effects of magnetism are perfectly explained by Maxwell's equations, in a classical framework. What needs quantum mechanics is its microscopic description, and that is not a 'naked eye' thing. Of course your answer is OK, but in my opinion it is not really convincing. We should find something that had no explanation before QM. – Bzazz May 21 '13 at 22:30

@Bzazz The microscopic origin of magnetism was not a problem in classical physics in the sense that it was unable to speak about why materials are magnetic - they just "were" magnetic. I hardly think that counts as "explaining". – Emilio Pisanty May 21 '13 at 23:41

By this logic, the naked eye phenomenon of gravity also requires quantum mechanics because the classical framework only gives us non-explanatory cruft like $\frac{gmM}{r^2}$, and doesn't explain why there is gravity associated with mass. – Kaz May 22 '13 at 8:16

I don't see how QM explains gravitation; maybe you should share your discoveries with the world. However, I prefer discussing physics rather than philosophy. I've only pointed out a ubiquitous macroscopic phenomenon that is completely quantum mechanical. – Ikiperu May 22 '13 at 8:37

It's funny that you mention "the naked eye", because all you have to do is to close your eyes. As it turns out, the reason why we don't see anything when we close our eyes is quantum mechanics. Sean Carroll explains it nicely: There is a lot of black body radiation in the infrared range inside your eyes. Even though the total energy of this infrared light is much higher than that of the visible light that enters through our lenses, it isn't absorbed by the receptors, because according to quantum mechanics it can only be absorbed in quantized packages (photons). And each individual photon doesn't have enough energy to be absorbed.

Whilst the electronic transition of the chromophore molecule does require a specific band of energy, this is not the only mechanism for isomerisation. The other major source is thermal (transition energies of infrared radiation). Our eyes are sensitive to it, but not very. Most thermal fluctuations do nothing, and there are many more due to internal processes than external IR. The eye itself absorbs a lot of incoming IR and to observe a difference one would need to have enough infrared radiation to significantly warm your retina (quickly). We don't see IR because of the signal/noise ratio. – Lucas May 23 '13 at 11:34

Point is, there is plenty of isomerisation caused by the blackbody radiation; it's just background that we filter out.
– Lucas May 23 '13 at 11:39

Also, this is basically why colour vision doesn't work at night. – Lucas May 23 '13 at 11:43

You and your environment still exist! If it were not for quantum mechanics everything would spontaneously disintegrate, as atoms are not stable in classical mechanics, due to the radiation emitted by an accelerating electron.

Prathyush

Wonderfully pithy! So obvious and yet I never thought of it. – WetSavannaAnimal Aug 30 '13 at 15:05

The collapse of the classical atom Bohr talked about is valid only for an isolated system with retarded fields. In reality, atoms are not isolated but subject to EM fields of other atoms and to external background EM radiation, so the premises of the collapse argument are not really plausible in classical EM theory. – Ján Lalinský May 12 '15 at 19:59

@JánLalinský In general all accelerating electrons would radiate, as far as I understand. While I cannot do a detailed calculation to say that all matter is unstable, I would presume it would be true classically. – Prathyush May 15 '15 at 23:25

The definitive, visual proof that quantum mechanics is required to describe our world is the observation of superfluidity in liquid helium that has been cooled to below the lambda point. Below this temperature (2.17 K at STP) a macroscopic fraction of the atoms have condensed into the ground state. This leads to macroscopic correlations that cause the fluid to flow in highly non-classical, unusual ways. For example, the fluid can flow up (against gravity) the sides of its vessel to a nearby reservoir. In a more elaborate set-up, we can see the fountain effect. I find this to be the most convincing, visible phenomenon that requires QM.

MarkWayne

This video and this article both show you how to make a quantum eraser experiment at home using only a laser pointer and some polarising filters, which in the video are obtained from 3D glasses*. One could argue that this doesn't really count, since if you really think about it, you should expect the same result if light were just a wave. However, if you accept that light is photons then it very nicely demonstrates that the interference pattern disappears if there's a way to know which path the photon took, which is a very distinctly quantum phenomenon.

(*) In my experience, most 3D glasses tend to have circular polarising filters rather than linear ones. This doesn't seem to be addressed in the video, but it probably changes what you'd need to do in order to see the result. However, I have used at least one pair with linear filters, which were from an IMAX cinema.

Nathaniel

I think QM is not really needed to explain their experiment. The wave nature of light explains it well too. In order for two waves to form constructive or destructive interference their polarization should either be random or (anti-)parallel. – user10001 May 22 '13 at 2:27

@user10001 I know, that's what I meant when I said "one could argue that it doesn't really count, since if you really think about it, you should expect the same result if light were just a wave." (Though I suppose I could add that QM is required to understand why the laser pointer works!)
– Nathaniel May 22 '13 at 2:30

Sorry, I read only the first line :) – user10001 May 22 '13 at 2:32

Among the naked-eye visible effects that require a quantum explanation are fluorescence, phosphorescence, and electroluminescence. Concepts like band gap energy and the connection between energy and wavelength are needed to form plausible, detailed hypotheses which address the readily observed aspects of these phenomena.

Kaz

The transparency of glass is a quantum phenomenon. It is owed to the fact that the electrons in the silica require an inordinate amount of energy in order to get excited into a higher orbital. This means that low energy photons like visible light can pass through unhindered. Meanwhile, UV light has enough energy to be absorbed. Glass is transparent, but you don't get a sunburn.

Karl Damgaard Asmussen

As an illustration of Dan's answer I got myself a diffraction grating and started looking at fluorescent lamps. I was frankly really disappointed, because you really don't get to see the lines clearly with a grating alone. But I moved on and told myself that I could make myself a spectroscope, and I did just that! I had a big poster and rolled it into a tube. From a brochure I cut myself a thin slit and taped them together. It didn't take 10 minutes, but the result was really satisfactory. I was also able to see some of the Fraunhofer lines while I was looking at the sun 1 or 2 hours before sunset, but I couldn't take a photo of it because of my poor cellphone camera. I hope this helps illustrate quantum mechanics in daily life with the help of daily objects!

Gonenc

If you are looking for something visible to the naked eye, but that does not necessarily happen naturally around you, then the quintessential experiment displaying the quantum nature of light in a way that is visible to the naked eye is Young's double slit experiment. The great thing about this experiment is that it can easily be performed in the comfort of your own home. See this Physics.SE post: Is it possible to reproduce Double-slit experiment by myself at home?

joshphysics

Young's experiment actually just demonstrates the wave nature of light. If you wanted to show its quantumness, you would need to show wave-particle duality using single photon sources and single photon detectors, which I doubt people usually have lying around. – Ondřej Černotík May 21 '13 at 20:38

@OndřejČernotík I respectfully disagree with your characterization. The mere fact that (as you point out) we can shoot individual photons at the screen shows that the electromagnetic wave model of light is deficient. Once we note this and perform the experiment with a stream of particles, we are forced to invoke quantum mechanics to explain the resulting interference pattern. – joshphysics May 21 '13 at 21:14

Sure, you can explain Young's experiment with macroscopic light intensities using quantum physics. But the question asks for phenomena that can be explained only using quantum mechanics. Therefore you need single photons interfering to have the need to turn to quantum mechanics. Otherwise, you do not even need Maxwell's equations and are perfectly happy with a wave-optical explanation.
– Ondřej Černotík May 22 '13 at 7:32

@OndřejČernotík Here is my point: once you accept that a laser beam consists of a stream of photons, how do you explain the interference pattern without quantum mechanics? I agree that if the only experiment we ever did with light were the non-single-particle-at-a-time experiment, then the non-quantum wave model would be sufficient. If a student were to ask me whether the double slit experiment could be satisfactorily explained without quantum mechanics, I would feel like I were not being totally honest if I were to say yes. – joshphysics May 22 '13 at 7:55

but @joshphysics accepting the existence of photons in a beam of light is a circular argument, because a photon is the quantum of light. – anna v May 22 '13 at 9:19

One very "simple" example is the reason why we do not fall through the ground even though atoms (and hence matter) are mostly empty space. Although it is still debated which dominates in this effect between electrostatic repulsion and the so-called Pauli exclusion principle (a quantum effect), it is pretty much admitted that electrostatics alone is not sufficient. Quantitative estimates of this quantum repulsion are made on a daily basis by people calculating ab-initio (solving the equations of quantum mechanics for the electrons) intermolecular potential parameters to be put then in simulations at the molecular scale (basically these calculations explain why it is almost fair to represent atoms as hard beads, and therefore they already explain why two such "empty" atoms cannot overlap).

Another simple case is that of a piece of metal whose rigidity (or at least some measure of it) owes a lot to the exclusion principle (see Eq. 494 of the link and the following sentence) satisfied by the conducting electrons in the system.

gatsu

Superconductivity of course. Classically, you cannot explain perfect diamagnetism and perfect conduction in a disordered system at the same time. The most striking experiment is the levitation of a superconductor in a magnetic field, aka the Meissner effect. You just need a high-Tc superconductor and a bit of liquid nitrogen. The striking fact is the disappearance of the effect once the nitrogen has entirely vaporized. There are a lot of videos on the internet about that. See e.g. this one: http://www.ted.com/talks/boaz_almog_levitates_a_superconductor.html

By the way, superconductivity is THE demonstration of quantum weirdness at the macroscopic level. The quantum Hall effect is another interesting effect, but it requires more materials (fridge, ...). In general, one can safely say that almost (if not) all the true effects of quantum physics are matter-field interactions of some type... the Meissner effect and the quantum Hall effect are just two specific matter-field interactions (magnetic fields applied to low temperature collective excitations of electrons). It should in principle be possible to measure the spectrum of some atoms in a table-top experiment (after all, it's a late 19th-century experiment), but it's less impressive than levitation, I believe. All spectroscopic properties can only be perfectly understood using quantum mechanics, and they can well be "seen" by naked eye, like fluorescence (it sometimes requires IR glasses, but I still count that as naked eye since you can really see the fluorescence using these glasses).
Generically, all condensed matter problems require quantum mechanics to be perfectly understood: band theory (including band gaps and crystal symmetry, leading to the huge field of semiconductors, for instance), electronic propagation in disordered systems (including Mott insulators, for instance), ... (see also Kaz's answer https://physics.stackexchange.com/a/65464/16689 on this point). The tunnel effect can be thought of as a striking effect of quantum mechanics, even if it is hard to see with the naked eye. See nevertheless jinawee's answer https://physics.stackexchange.com/a/65416/16689 on this page.

FraSchelle

Although a bipolar junction transistor may require the tunnel effect to work, it is accurately described by the Ebers-Moll equations, which do not make a direct reference to quantum mechanics. At some level, everything requires quantum mechanics in order to work, even a 100 kilogram mass oscillating on a big steel spring. – Kaz May 22 '13 at 8:29

@Kaz Thanks for your remark, I modified the answer accordingly. – FraSchelle May 22 '13 at 10:50

You also need the tunnel effect in order to explain many aspects of electrical conductivity. For example, why oxidized copper wires are still good conductors instead of insulators. Another interesting quantum mechanical effect is produced in photosynthesis. The process is called "hopping" and it happens when a chlorophyll absorbs a photon and then emits an exciton that will propagate until it reaches a special type of chlorophyll molecule, which produces an electron transfer. There are some references like: http://www.chemphys.lu.se/old/Archive/annual_96/primarynew.htm . There is also the hypothesis that quantum entanglement is produced in certain birds to allow navigation. See: http://prl.aps.org/abstract/PRL/v106/i4/e040503 .

jinawee

In some sense all chemical reactions are fundamentally quantum mechanical, but in the case of chemiluminescence and related atomic light-emitting phenomena like the aurora, quantum physics enters another way: excited states of the molecule or ion can only decay and emit a photon because the electron in that state is being continually shaken by vacuum fluctuations, microscopic fluctuations in electric fields due to quantum uncertainty.

Chay Paterson

The sun is visible to the naked eye. The only reason the sun shines is quantum-mechanical tunneling. Without tunneling, fusion reactions would be impossible at the temperature of the sun's core.

But note the Sun would still shine without nuclear reactions - if it was hot for other reasons, like gravitational collapse. – Ján Lalinský May 12 '15 at 20:06

Stuff it explains:
Normal force
Electric conduction
Why some atoms are stable (motivating issue)
Zeeman effect (motivating issue)
Why the universe isn't just a continuous cloud of matter
Why masses are discretised (the difference between the masses of two particles)
Much of chemistry

Abhimanyu Pallavi Sudhir

Friction is not particularly quantum, and "electricity" is too vague; if you mean "electrical conduction", yes, true. But the rest are good. – Ron Maimon Aug 22 '13 at 22:26
Fishtank Math AGA (2021)
Fishtank Learning | High School

Fishtank Math AGA - High School

Alignment: Overall Summary

The materials reviewed for Fishtank Math AGA meet expectations for alignment to the CCSSM for high school. For focus and coherence, the series showed strengths in the following areas: attending to the full intent of the mathematical content contained in the standards, spending the majority of time on the content from CCSSM widely applicable as prerequisites, requiring students to engage in mathematics at a level of sophistication appropriate to high school, being mathematically coherent and making meaningful connections in a single course and throughout the series, and explicitly identifying and building on knowledge from Grades 6-8 to the high school standards. In Gateway 2, for rigor, the series showed strengths in the following areas: supporting the intentional development of students' conceptual understanding, opportunities for students to develop procedural skills, working with applications, and displaying a balance among the three aspects of rigor. The materials intentionally develop all of the eight mathematical practices, but do not explicitly identify them in the context of individual lessons.

Gateway One

The materials reviewed for Fishtank Math AGA meet expectations for Focus and Coherence. The materials meet expectations for: attending to the full intent of the mathematical content contained in the standards, spending the majority of time on the content from CCSSM widely applicable as prerequisites, requiring students to engage in mathematics at a level of sophistication appropriate to high school, being mathematically coherent and making meaningful connections in a single course and throughout the series, and explicitly identifying and building on knowledge from Grades 6-8 to the high school standards. The materials partially meet expectations for attending to the full intent of the modeling process and letting students fully learn each non-plus standard.

Criterion 1a - 1f: Materials are coherent and consistent with "the high school standards that specify the mathematics which all students should study in order to be college and career ready".

Indicator 1a: Materials focus on the high school standards.

Indicator 1a.i: Materials attend to the full intent of the mathematical content in the high school standards for all students.

The materials reviewed for Fishtank Math AGA meet expectations for attending to the full intent of the mathematical content contained in the high school standards for all students. Examples of standards addressed by the courses of the series include:

N-Q.1: In Algebra 1, Unit 2, Lesson 15, Anchor Problem 1, students label axes with appropriate variables and units. Guiding Questions include, "How did you determine an appropriate scale?" In Algebra 2, Unit 1, Lesson 4, students use graphs to interpret units. Guiding Questions include, "How could George use the graph of f(g) to find the number of quarts that equals three-quarters of a gallon?" and "Which function is more appropriate to use to find the number of quarts that equals a gallon?"
In Geometry, Unit 6, Lesson 16, Anchor Problem 2, students use metric unit conversions to solve the problem.

N-CN.2: In Algebra 2, Unit 2, Lesson 8, Anchor Problem 1, students add and subtract complex numbers, and determine if properties of operations apply to the addition and subtraction of complex numbers. In Anchor Problem 2, students find the product of two complex numbers, and determine if properties of operations apply to the multiplication of complex numbers.

A-REI.2: In Algebra 2, Unit 4, Lesson 14, Anchor Problem 3, students compare two radical equations graphically to see that there are no solutions, although it may appear there are solutions if students attempt to solve them algebraically. In Lesson 15, Anchor Problem 1, students generate two solutions, but when they test their solutions, they discover one does not work in the original equation.

F-IF.7c: In Algebra 2, Unit 3, Lesson 3, Anchor Problem 1, students are given sketches of two different functions and the factored form of one of them. Using Guiding Questions ("What is similar about the graphs? What is different?" and "How do these differences help you determine which graph matches the equation?"), students can match the factored form to the correct graph. Other Guiding Questions lead students to determine the end behaviors of the graphs. Within the same lesson, Anchor Problem 2 has students use the roots of a cubic function to sketch the function.

G-C.5: In Geometry, Unit 7, Lesson 11, Anchor Problem 2, students use Guiding Questions to develop a conceptual understanding of the proportional relationship between the radius of a circle and the length of an arc. Guiding Questions include, "What would be the 'arc length' if you were to measure the entire outside of the circle?" and "What portion of the entire circumference are you measuring (for a 30-degree angle)?" In Lesson 12, Anchor Problem 2, students use Guiding Questions and a diagram of concentric circles to develop a conceptual understanding of the proportional relationship between the radius and arc lengths. Finally, in Lesson 13, Anchor Problem 1, students find the area of a sector using its proportional relationship with the whole circle. With the use of Guiding Questions, students write a general formula to determine the sector area of a circle with respect to its central angle measure and radius length.

G-GPE.6: In Geometry, Unit 5, Lesson 2, Anchor Problem 1, students identify locations that would partition a piece of wood into a 3:5 ratio. Students use a number line to justify their answers. In Anchor Problem 2, students use a number line to partition a line segment into a 3:4 ratio. In Anchor Problem 3, students use coordinates on a plane, and partition the vertical line segment into a 1:2 ratio. In Lesson 3, Anchor Problem 1, students find the midpoint of a directed line segment on a coordinate plane. In Anchor Problem 2, students partition a directed line segment into a 1:3 ratio.

S-ID.4: In Algebra 1, Unit 2, Lesson 8, students annotate a standard deviation on a curve with correct values. Through calculations, they locate two points between which 68% of the contextual data falls and then calculate a percent of the data falling between two specified points. In Algebra 2, Unit 8, Lesson 8, Anchor Problem 2, students revisit and build a deeper understanding of normal distributions with the use of a contextual problem involving heights of 8-year-old boys.
Students use the information given in the problem and a graph of the normal distribution to calculate percentages related to the data. In Lesson 9 of the same unit, Anchor Problem 1, students revisit the problem involving the normal distribution of heights of 8-year-old boys to calculate z-scores (the number of standard deviations that a score is away from the mean).

S-IC.5: In Algebra 2, Unit 8, Lesson 13, Anchor Problem 1, students use a dotplot of the difference of means to compare plant growth in standard and nutrient-treated soils. The Guiding Questions include, "What would the distribution of data look like for you to confidently conclude that there is not enough evidence to say the nutrient-treated soil contributed to growth?" The Target Task has a histogram that represents another problem involving the difference of means in relation to plant growth in different soils. Students consider whether the difference of the means is due to the way the sample was taken or whether the treatment had an effect.

Indicator 1a.ii: Materials attend to the full intent of the modeling process when applied to the modeling standards.

The materials reviewed for Fishtank Math AGA partially meet expectations for attending to the full intent of the modeling process when applied to the modeling standards. In this series, various aspects of the modeling process are present in isolation or combinations, yet opportunities for the complete modeling process are absent for the modeling standards throughout the materials. Examples of problems that allow students to engage in some aspects of the modeling process include, but are not limited to:

Algebra 1, Unit 2, Lesson 21: An Anchor Problem contains data involving percent pass-completion rates for top-paid NFL quarterbacks and their salaries. Students organize the data, represent it visually, and calculate bivariate statistical measures. Students can use Guiding Questions to help them formulate a problem related to the data. Examples of Guiding Questions include: "How did you organize this data set when the values of the salaries are so high?," "Why did you decide to use the graph you did?," and "What formulas will you use to either find the measures you need to make the graph or calculate center and spread?" Students make decisions about how to organize and represent the data graphically and ways to use the data. How the data is used will determine what statistical measures they need to calculate. Students can analyze and interpret the data to solve a problem, but there are no explicit instructions for how they should report their findings (N-Q.1, S-ID.6, S-ID.7, S-ID.8, S-ID.9).

Algebra 1, Unit 8, Lesson 12, EngageNY Mathematics, Algebra 1, Module 4, Topic B, Lesson 16: "The Exploratory Challenge" provides a scenario where a fence is being constructed. Students are given the variables to use when writing an expression, thus not allowing students to formulate their own. Students find the maximum area and determine if their answer is surprising. This allows for students to interpret and validate their response; however, there is no clear way that students should report their answer (A-CED.2, F-IF.8a).

Algebra 2, Unit 2, Lesson 9: Students focus on quadratic functions in context. The Target Task has two people throw a baseball in the air. One ball is modeled by a function, the other by a graph.
One person says his ball goes higher and the students must decide if he is correct, determine how long each ball was in the air, and construct a graph of the given function to back up claims from the first two parts. Students do not have opportunities to formulate the mathematical model, but do have opportunities to validate and interpret their responses in relation to the problem. However, there is no directed way for students to communicate and/or report their findings to others (A-CED.1, A-CED.2, F-IF.6, F-BF.1).

Algebra 2, Unit 6, Lesson 14, Problem Set, EngageNY Mathematics, Algebra II, Module 2, Topic B, Lesson 13: In "Tides, Sound Waves, and Stock Markets," students write a sinusoidal function to fit a set of data. Students manipulate their function as the data will not lie exactly on the graph of the function. Students then analyze their model to predict a later time and height. Reflection questions prompt students to consider variance during different times of the year. There are other, similar modeling questions in the links to afford students opportunities to improve their ability to fit and analyze sinusoidal curves. However, there is no requirement for students to write a report and communicate their findings (F-TF.5).

Geometry, Unit 6, Lesson 15, Anchor Problem 2: Students are given an image of four packages that all contain the same amount of candy, and then asked to rank the packages based on the least amount of packaging used to the most. The Guiding Questions give students a series of questions to consider, then ask them to determine a different configuration that would have a better packaging choice than one of the four presented. Students do not use the Guiding Questions to represent this problem mathematically. Guiding Questions assist students with rationalizing their responses, and developing their thinking on how to know if their answers are reasonable. With the use of the Guiding Questions, students can manipulate the model and validate and interpret their responses in relation to the problem, but there are no explicit instructions for how they should report their findings (G-MG.1).

Indicator 1b: Materials provide students with opportunities to work with all high school standards and do not distract students with prerequisite or additional topics.

Indicator 1b.i: Materials, when used as designed, allow students to spend the majority of their time on the content from CCSSM widely applicable as prerequisites for a range of college majors, postsecondary programs, and careers.

The materials reviewed for Fishtank Math AGA, when used as designed, meet expectations for allowing students to spend the majority of their time on the CCSSM widely applicable as prerequisites (WAPs) for a range of college majors, postsecondary programs and careers. Examples of standards addressed by the series that have students engaging in the WAPs include:

N-Q: In Algebra 1, Unit 1, Lesson 1, students model contextual linear data graphically, using appropriate scales and key graph features (N-Q.1). In Geometry, Unit 6, Lesson 2, students calculate and justify composite area and circumference of circles by defining appropriate units and levels of precision of measurement (N-Q.2 and N-Q.3). In Algebra 2, Unit 1, Lesson 4, Anchor Problem 1, students consider appropriate scaling for graphs with respect to units and use graphs to define appropriate quantities related to a problem context (N-Q.1 and N-Q.2).
In Algebra 2, Unit 4, Lesson 17, students analyze unit relationships found in context data and associate units with variables to write expressions that model the data (N-Q.1).

A-CED: In Algebra 1, Unit 3, Lesson 6, Anchor Problem, students are given the formula that describes a quantity related to getting to a destination via walking and riding a bus. The Guiding Questions ask how they would solve for one of the variables to determine the meaning of a specific variable (A-CED.4). In addition, they find the units associated with the variable, attending to N-Q.1. Lastly, students determine the domain restrictions that would be placed on different variables in the context of the problem (F-IF.5). In Geometry, Unit 5, Lesson 14, students write a system of inequalities to represent the polygon in a coordinate plane (A-CED.3). In Algebra 2, Unit 1, Lesson 6, Anchor Problem 2, students write and solve a one-variable equation in context (A-CED.1), then use that solution to write a system of equations in two variables and answer questions in context (A-CED.2 and A-CED.3).

F-IF: In Algebra 1, Unit 1, Lesson 1, students calculate the average time per mile for various commutes (F-IF.6). In Algebra 1, Unit 4, Lesson 3, Anchor Problem 1, students calculate the slope of a function using a table of values (F-IF.6), and in Anchor Problem 2, compare properties of two functions represented algebraically and numerically in a table (F-IF.9). In Algebra 1, Unit 4, Lesson 4, Anchor Problems 1 and 2, students associate the domain with inputs as they explore contextual restrictions (F-IF.1), and evaluate functions for inputs and interpret function notation in context (F-IF.2). In Algebra 1, Unit 6, Lessons 11-15, and Algebra 2, Unit 5, Lesson 1, students define and write explicit and recursive formulas for arithmetic and geometric sequences (F-IF.3). In Algebra 2, Unit 3, Lessons 1 and 3, students begin graphing polynomials using tables. Students are given a sketch of two different graphs and the factored form of one of the graphs. Students need to match the correct graph to the factored form of the function. Additionally, they identify the end behavior of the function (F-IF.7c, F-IF.9).

G-CO: In Geometry, Unit 1, Lessons 1-5, students define and construct geometric figures using a straightedge and a compass, including: angles and angle bisectors, an equilateral triangle inscribed in a circle, perpendicular bisectors and altitudes of triangles (G-CO.1, G-CO.12 and G-CO.13). In Geometry, Unit 4, Lesson 1 and Unit 6, Lessons 4 and 5, students define parts of the right triangle, describe the terms "point, line, and plane," define polyhedrons (prisms and pyramids), and define cylinders and cones (G-CO.1). Throughout Geometry Units 1, 4 and 6, G-CO.1 is addressed and applied with other WAPs standards (G-SRT.4-8) to develop and use working definitions for angle, circle, perpendicular line, parallel lines, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a circular arc.

S-IC.1: In Algebra 1, Unit 2, Lesson 1, students explore graphs to discuss how randomness and statistics are used to make decisions. In Algebra 2, Unit 8, Lesson 7, students use sample results from several trials to make predictions about the shape of a population.

Indicator 1b.ii: Materials, when used as designed, allow students to fully learn each standard.

The materials reviewed for Fishtank Math AGA, when used as designed, partially meet expectations for allowing students to fully learn each standard.
Examples of the non-plus standards that would not be fully learned by students include:

N-RN.3: In Algebra 1, Unit 6, Lesson 9, students have opportunities to find products of rational numbers, irrational numbers, and a rational number and an irrational number. Guiding Questions found in Anchor Problems 1-3 of this lesson lead students to writing explanations and/or general rules for determining whether products will be rational or irrational. However, in Algebra 1, Unit 6, Lesson 10, students have opportunities to find sums of rational numbers, irrational numbers, and a rational number and an irrational number, but limited opportunities are provided for students to generalize a rule for whether the sums will be rational or irrational.

A-SSE.3b: In Algebra 1, Unit 8, Lessons 2, 3, and 4 and Algebra 2, Unit 2, Lesson 6, students have opportunities to complete the square to write a quadratic function in vertex form, and students have opportunities to associate minimum and maximum function values with the vertex of a graph. Students have opportunities to identify the vertex of a graph using the vertex form of the function. However, opportunities are not provided for students to complete the square with the explicit intent of revealing the minimum or maximum value of the function.

A-APR.6: In Algebra 2, Unit 4, Lesson 6, and Lessons 8-12, students have multiple opportunities to rewrite and simplify rational expressions in different forms. However, there are limited opportunities for students to use long division to find quotients that have remainders, which can be written in the form q(x) + r(x)/b(x).

A-REI.7: In Algebra 1, Unit 8, Lessons 14 and 15, and Algebra 2, Unit 2, Lesson 11, teachers are advised to create their own problem sets for students. Although a list of types of problems to be included in the problem sets is provided, there are limited resources from which to create the problem sets.

F-BF.1b: In Algebra 1, Unit 3, Lesson 6, Target Task, students write the area of two triangles in a trapezoid and are instructed to write a formula for the area of the trapezoid using the area of the two triangles. In Algebra 2, Unit 3, Lesson 6, students are given practice with adding and subtracting polynomial functions. No opportunities were found for students to combine functions using multiplication and division.

S-ID.6a: In Algebra 1, Unit 2, Lessons 15-19, students have multiple opportunities to fit linear models to data represented by scatter plots. Students do not have sufficient practice with fitting non-linear (quadratic and/or exponential) function models to data. In Algebra 1, Unit 2, Lesson 15, the Problem Set contains one link (EngageNY Mathematics: Algebra 1, Module 2, Topic D, Lesson 13) that contains a discussion about quadratic and exponential function models, but not in the context of a data set. In Algebra 1, Unit 6, Lesson 18, there is an EngageNY Mathematics link (Algebra 1, Module 3, Topic B, Lesson 14, Example 3) that contains an opportunity for students to model data with an exponential function, but this lesson is not tagged with the standard.

Throughout the materials, there are some standards for which Guiding Questions and/or problems from the resources listed under Problem Set must be incorporated for the students to fully learn the standard. Examples include, but are not limited to:

N-CN.7: In Algebra 2, Unit 2, Lesson 7, students do not solve quadratic equations with real coefficients that have complex solutions; rather, they only identify such equations.
In order for students to fully learn the standard, the EngageNY lesson linked in the Problem Set is needed. In addition, the Kuta free worksheets allow unlimited practice of solving quadratic equations with complex solutions.

F-IF.7a: In Algebra 1, Unit 7, Lesson 2, this standard is clearly addressed in the Criteria for Success, but students are not explicitly required to "show" the intercepts. The only situations where students are required to "show intercepts, maxima, and minima" are within the Problem Set links.

S-ID.7: In Algebra 1, Unit 2, Lessons 17 and 19-22, students have limited opportunities to interpret slopes (rates of change) and the intercepts (constant terms) of linear models in data contexts. In order for students to have sufficient opportunities, the Guiding Questions and the problems from the extra resources are needed. For example, in Lesson 17, the EngageNY lesson linked in the Problem Set is needed to interpret slope and y-intercept in context and is needed for students to fully learn the standard.

Indicator 1c: Materials require students to engage in mathematics at a level of sophistication appropriate to high school.

The materials reviewed for Fishtank Math AGA meet expectations for engaging students in mathematics at a level of sophistication appropriate to high school. The materials regularly use age-appropriate contexts, use various types of real numbers, and provide opportunities for students to apply key takeaways from Grades 6-8. Examples of problems that allow students to engage in age-appropriate contexts include:

Algebra 1, Unit 4, Lesson 4, Anchor Problem 1: Students analyze two graphs involving lemonade sales to determine which graph makes the most sense for predicting the number of sales needed to reach a fundraiser goal. Students interchange the independent and dependent variables to decide which graph is most sensible to use.

Algebra 2, Unit 6, Lesson 2, Anchor Problem 2: Students view a video referencing Dan Meyer's Ferris Wheel task. Students use graphs of trigonometric functions and their critical thinking skills to determine problem solutions.

Geometry, Unit 4, Lesson 19, Target Task: Students find the total length of a triathlon given that the race begins with a swim along the shore followed by a bike ride of a specific length. After the bike ride, racers turn through a specific angle and run a given distance back to the starting point.

Geometry, Unit 8, Lesson 4, Anchor Problem 1: Students utilize a Venn diagram to display the coffee preferences of diner customers and calculate probabilities related to the preferences. Students also determine characteristics of the events by answering a series of questions related to the probabilities (Which two events are complements of each other?, etc.).

Examples of problems that allow students to engage in the use of various types of real numbers include:

Algebra 1, Unit 1, Lesson 2, Problem Set, Illustrative Math, "The Parking Lot": Students calculate parking lot charges at the rate of $0.50 per half hour over a varied number of times in minutes. They create a graph that represents the various parking costs (in decimal increments) for the parking times. Within the same lesson, Problem Set, Mathematics Vision Project: Secondary Mathematics One, Module 5: Systems of Equations and Inequalities, Lesson 5.4, students write, solve, and graph equations and inequalities involving the use of fractions and decimals.
Algebra 1, Unit 3: Linear Expressions & Single-Variable Equations/Inequalities: The Post Unit Assessment contains problems involving fractions that contain multiple variables, with solutions that involve varied numbers, including decimals and fractions.

Algebra 2, Unit 5, Lesson 4: Students explore compound interest calculations to determine that the rate of growth approaches the irrational number defined as e.

Geometry, Unit 5, Lesson 1, Problem Set, EngageNY Mathematics Geometry, Module 4, Topic C, Lesson 10, "Perimeter and Area of Polygonal Regions in the Cartesian Plane": Students calculate area and perimeter of polygons graphed in the coordinate plane using various methods and varied numbers. For example, in Exercise 1, students find side lengths of a rectangle resulting in square root measures and use those measures to calculate the perimeter and area of the rectangle.

Geometry, Unit 7, Lesson 11: Students practice finding arc lengths and must determine appropriate ways to use π in their calculations. Students make judgments as to what solutions are reasonably precise in relation to a given context.

Examples of problems that provide opportunities for students to apply key takeaways from Grades 6-8 include:

Algebra 1, Unit 3, Lesson 2, Target Task: Students are given two equations and are asked to explain how they are equivalent without solving them (A-SSE.2 and A-REI.1). Equation A has all integer-value coefficients and constants and Equation B has all rational-number coefficients and constants (7.NS.1d and 7.NS.2c).

Algebra 1, Unit 4, Lesson 8, Target Task: Students write a linear inequality containing two variables to describe a context and graph the solutions in a coordinate plane (A-CED.3). This activity also builds on 7.EE.4b, solving and graphing solutions for inequality word problems with one variable.

Algebra 2, Unit 5, Lesson 10, Target Task: Students calculate logs with the use of technology and must demonstrate their understanding of the values the calculator produces (F-LE.4). This activity connects with experiences students have had with 8.EE.4, where students interpret scientific notation generated by technology.

Algebra 2, Unit 8, Lesson 3, Anchor Problem 1: Students use tree diagrams to calculate conditional probabilities (S-CP.3). This activity builds on 7.SP.8b.

Geometry, Unit 4, Lesson 6, Target Task: Students demonstrate mastery of finding the sine of an angle and understanding that, by similarity in right triangles, the sine is consistent for a particular angle (G-SRT.6). In doing this, students are applying ratio reasoning (6.RP.3).

Geometry, Unit 5, Lesson 12, Anchor Problem 1: Students work with dilations of polygons (triangle, parallelogram) and compare the areas of the original and scaled figures (G-GPE.7). This problem builds on 7.G.6.

Indicator 1d: Materials are mathematically coherent and make meaningful connections in a single course and throughout the series, where appropriate and where required by the Standards.

The materials reviewed for Fishtank Math AGA meet expectations for being mathematically coherent and making meaningful connections in a single course and throughout the series. Examples where the materials foster coherence within a single course include:

Algebra 1, Unit 3: Students work with algebraic expressions, linear equations and inequalities, many times in context, and connect Algebra, Function, and Number standards. Specifically, in Lesson 7, students write equations in context (N-Q.1, A-CED.1 and A-CED.2).
In the Target Task, students are given 4 linear equations and interpret the equations in order to answer specific questions about them (F-IF.5).

Algebra 2, Unit 3, Lesson 5, Anchor Problem 1: Students find the product of polynomials (A-APR.1). In Anchor Problem 2, students identify both algebraically and graphically how f(x) multiplied by g(x) results in h(x). The Guiding Questions make connections about the features of h(x) and how it relates to f(x) and g(x) (F-BF.3).

Geometry, Unit 3: Topic C in the Unit Summary describes how "students formalize the definition of 'similarity,' explaining that the use of dilations and rigid motions are often both necessary to prove similarity." In Lessons 1-6, students first learn that similar triangles have proportional sides. Then, in Unit 4, Lesson 6, they learn through experimentation in Anchor Problem 1 that similar triangles all have the same sine ratio for the same angles. In Lesson 7, Anchor Problem 1, students make similar conclusions about cosine ratios. Finally, in Unit 4, Lesson 9, Anchor Problem 1, students determine relationships with respect to tangent ratios (G-SRT.6).

Examples where the materials foster coherence across courses include:

Algebra 1, Unit 3, Lesson 12, Anchor Problem 2: Students explain how to algebraically manipulate a compound inequality to solve a contextual inequality problem. In Geometry, Unit 1, Lesson 6, Anchor Problem 1, before solving equations with respect to geometric relationships, students review equation solving by explaining possible steps that could be used to solve the equation (A-REI.1). In Lesson 7, students use solving equations to solve for angle measures (G-CO.9). In Algebra 2, Unit 2, Lesson 5, Anchor Problem 1, students explain different ways they could find the roots of a quadratic function without using certain methods (A-REI.1), and then find and describe the solutions (A-REI.4b).

Algebra 1, Unit 8, Lesson 1, Anchor Problem 1: Students learn to write quadratic functions in vertex form, first by identifying different written forms of the quadratic equation and describing the graph features that each form reveals. In Anchor Problem 2, students write a quadratic equation in vertex form using a graph. In Lesson 2, they transform a quadratic expression from standard form to vertex form (F-IF.4, F-IF.8). In Geometry, Unit 7, Lesson 3, students write the equation for a circle in standard form by completing the square (G-GPE.1). In Algebra 2, Unit 2, Lesson 6, students again complete the square, this time with an emphasis on seeing perfect square trinomials embedded in the vertex forms (A-REI.4).

Algebra 1, Unit 5, Lessons 12-16: The focus is on transformations of absolute value functions (F-BF.3). Later, in Algebra 1, Unit 8, Lessons 9 and 10, transformations with quadratic functions are found. Specifically, Lesson 10 focuses on transformations of quadratics in applications. In the Target Task, students are given a graph to model the given scenario, and students must choose which representation(s) of transformations written using function notation will allow a ball to clear a wall (F-BF.3). In Geometry, Unit 1, Lessons 8, 9 and 10, transformations with rigid motion are found (G-CO.2, G-CO.4, G-CO.5, and G-CO.6). Specifically, Lesson 9 uses "algebraic rules to translate points and line segments." Unit 2, Lesson 4 focuses on congruence of two-dimensional polygons using rigid motion (G-CO.2, G-CO.4, G-CO.5, and G-CO.7).
Unit 3, Lessons 1 and 2, focus on similarity and dilation, which continues to connect transformations across the courses (G-SRT.2 and G-SRT.3). In Algebra 2, Unit 4, Lesson 13, the focus is on transformations of rational functions (F-BF.3). Lastly, in Algebra 2, Unit 6, Lessons 9-11, the focus is on transformations of graphs of sine, cosine, and tangent functions (F-IF.7e, F-BF.3).

Indicator 1e: Materials explicitly identify and build on knowledge from Grades 6-8 to the High School Standards.

The materials reviewed for Fishtank Math AGA meet expectations for explicitly identifying and building on knowledge from Grades 6-8 to the High School Standards. Throughout the curriculum, Foundational Standards are cited which include standards from prior grades. In this series, the connections to Grades 6-8 standards can be found in the Guiding Questions. These standards are not simply being retaught; rather, they are extended, thus providing those students who did not master the standard at grade level another opportunity to master that standard, and other students opportunities to recall and stretch what they learned previously. Examples of lessons that allow students to build knowledge from Grades 6-8 to the High School Standards include:

Algebra 1, Unit 1, Lesson 1: 8.F.5 is cited as a Foundational Standard. In this lesson, students apply the idea of increasing and decreasing functions to analyze graphs of functions in contexts (F-IF.4). In addition to finding where the graphs are increasing or decreasing, students investigate average time per mile, the time it took for students to get to school, the distance the students live from the school, and more.

Algebra 1, Unit 4, Lessons 11-13: Students build on 8.EE.8 by exploring methods of solving systems of equations (A-CED.3, A-REI.5, and A-REI.6). Specifically, in Lesson 12, students are given a method by which a system of equations was solved. The Guiding Questions allow students to explore various methods for solving systems to determine if the solutions will be the same. In Target Task 1, two systems of equations are given and students must determine if they result in the same solution without solving them.

Algebra 2, Unit 8, Lesson 1: Four 7th-grade standards are cited as Foundational Standards. Students build on their understanding of probability being a number between 0 and 1 (7.SP.5), approximating anticipated outcomes (7.SP.6), comparing results from an experiment to a probability model (7.SP.7), and finding probabilities of compound events using lists, tables, and trees (7.SP.8) by writing probabilities in P(desired outcome) notation and calculating probabilities for independent and mutually exclusive events (S-CP.1).

Geometry, Unit 2, Lesson 2: 8.G.5 is cited as a Foundational Standard. Students expand their informal arguments of angle sums and angles formed by parallel lines cut by a transversal to write formal proofs about triangles (G-CO.10).

Geometry, Unit 3, Lesson 9: Students build on 8.G.3. Students dilate figures where the center of dilation is not the origin and answer questions about their similarity (G-CO.2 and G-SRT.2).

Indicator 1f: The plus (+) standards, when included, are explicitly identified and coherently support the mathematics which all students should study in order to be college and career ready.
Narrative Evidence Only

In the materials reviewed for Fishtank Math AGA, the plus (+) standards are not consistently identified, and they are not consistently used to coherently support the mathematics which all students should study in order to be college and career ready. The following standards are addressed in the materials, but are not explicitly identified as plus standards under the Lesson Map or Core Standards. Occasionally, in Tips for Teachers, some of these standards are identified as plus standards.

A-APR.7: In Algebra 2, Unit 4, Lessons 6 and 7 address A-APR.6 and A-APR.7. Both lessons focus on A-APR.6, with few to no problems addressing A-APR.7. While the connection is made with respect to working with fractions, no mention is made with respect to closure.

F-IF.7d: In Algebra 2, Unit 4, Lessons 8-13, students identify asymptotes and end behavior.

F-BF.4c: In Algebra 2, Unit 2, Lesson 4, students read values of inverse functions from graphs and tables.

F-BF.5: In Algebra 2, Unit 5, Lesson 9, students explore the relationships between exponential functions and logarithms.

F-TF.3: In Algebra 2, Unit 6, Lesson 7, students use special triangles to determine geometrically the values of sine, cosine, and tangent for π/3, π/4 and π/6. This lesson also addresses F-TF.1 and is embedded in the material with the plus standards. In addition, this is the only lesson in which this standard is included, so it is not a lesson that can be skipped. In Lesson 8, students continue this work with π-x, π+x, and 2π-x in terms of their values for x, where x is any real number.

F-TF.4: In Algebra 2, Unit 6, Lesson 9, students consider whether sine and cosine functions are even or odd.

F-TF.6: In Algebra 2, Unit 7, Lesson 4, students are given several functions that are equivalent. In the Guiding Questions, students determine how the domain could be restricted so that the inverse could only have one solution. This lesson also addresses F-TF.7.

F-TF.7: In Algebra 2, Unit 7, Lessons 5 and 6, students use a graphing calculator to evaluate solutions and interpret the terms.

F-TF.9: In Algebra 2, Unit 7, Lessons 11-14, F-TF.9 is listed as a Core Standard. Lessons 11-12 contain problems involving addition and subtraction formulas for sine, cosine and tangent. In Lesson 13, students use the sum formulas to derive the double angle formulas. Lesson 14 also lists F-TF.9 as a Core Standard, but the lesson does not address the standard.

G-SRT.9: In Geometry, Unit 4, Lesson 16, students need to derive the area formula for any triangle in terms of sine. Through the Anchor Problems and the Guiding Questions, students also find the area of a triangle, given two sides and the angle in between. Through the Guiding Questions, students derive the formula for the area of a triangle, A = (1/2)ab sin(C).

G-SRT.10: In Geometry, Unit 4, Lessons 17 and 18, students use the Law of Sines and the Law of Cosines to solve for side lengths and/or angles of triangles. In Lesson 17, the Guiding Questions allow for students to verify the Law of Sines algebraically. In Lesson 18, the Notes of Anchor Problem 1 state, "Students should algebraically verify the Law of Cosines during this Anchor Problem and class discussion." In Algebra 2, Unit 7, Lessons 15 and 16, students use the Law of Sines and Law of Cosines to find angle and side measures of acute triangles.

G-SRT.11: In Geometry, Unit 4, Lessons 16-19, students solve real-world problems using the Law of Sines and the Law of Cosines.

G-C.4: In Geometry, Unit 7, Lesson 9, G-C.2 is also addressed.
The Tips for Teachers section states that this standard "is represented in its fullest in the problem set guidance." This lesson can be taught without doing the Problem Set(s) that address the plus standard.

G-GMD.2: In Geometry, Unit 6, Lessons 10-12: Lesson 10 addresses G-GMD.1 and G-GMD.2. This lesson focuses on Cavalieri's Principle and uses it to derive the formula for the volume of a sphere. Lesson 11 addresses G-GMD.2 and G-GMD.3, and G-GMD.2 is identified as a plus standard in the Tips for Teachers. The lesson investigates how slices and/or cross sections of pyramids are related to pyramids. In Anchor Problem 2 of this lesson, a rectangular pyramid is given and students find the area of the base and the area of the cross section, and then look at the relationship between the volume of the top part of the pyramid, above the cross section, and the full pyramid. G-GMD.2 is integrated throughout Lesson 12, which addresses G-GMD.1, G-GMD.2, G-GMD.3, and N-Q.3. In Anchor Problem 2, there is an extension that would directly relate to G-GMD.2.

S-CP.8: In Geometry, Unit 8, Lesson 3, students apply the general Multiplication Rule, P(A and B) = P(A)P(B|A) = P(B)P(A|B), and use it to solve context problems.

S-CP.9: In Geometry, Unit 8, Lessons 9 and 10, combinations and permutations are covered.

Additionally, at the beginning of some units, plus standards may be listed on the Unit Summary page under Future Standards, but are not found in the materials. For example, A-APR.5 is listed under Future Standards in Algebra 1, Unit 7 Summary, but is not found in the materials. Plus standards not mentioned in this report were not found in the materials.

Gateway Two

The materials reviewed for Fishtank Math AGA meet expectations for rigor and balance. The materials meet expectations for providing students opportunities in developing conceptual understanding, procedural skills, and application, and the materials also meet expectations for balancing the three aspects of rigor. The materials meet expectations for Practice-Content Connections as the materials intentionally develop all of the mathematical practices to their full intent. However, the materials do not explicitly identify the mathematical practices in the context of individual lessons, so one point is deducted from the score in indicator 2e to reflect the lack of identification.

Criterion 2a - 2d: Materials reflect the balances in the Standards and help students meet the Standards' rigorous expectations, by giving appropriate attention to: developing students' conceptual understanding; procedural skill and fluency; and engaging applications.

The materials reviewed for Fishtank Math AGA meet expectations for rigor and balance. The materials meet expectations for providing students opportunities in developing conceptual understanding, procedural skills, and application, and the materials also meet expectations for balancing the three aspects of rigor.

Materials support the intentional development of students' conceptual understanding of key mathematical concepts, especially where called for in specific content standards or clusters.

The materials reviewed for Fishtank Math AGA meet expectations for developing conceptual understanding of key mathematical concepts, especially where called for in specific content standards or cluster headings. Lessons in the materials have Anchor Problems that include Guiding Questions for teachers to ask. These questions often assist with developing conceptual understanding.
The Guiding Questions also help students find solutions and write explanations to support those solutions. Examples of lessons that provide students with opportunities to demonstrate conceptual understanding include: Algebra 1, Unit 4, Lesson 7: In Anchor Problem 1, students write a linear inequality for a graph. Guiding Questions elicit students' justifications for their choices of inequalities (A-CED.3). Next, the questions prompt students to consider variations that would occur in their inequality with specific changes to the graph. Finally, students consider how restrictions in the domain and range would alter the solutions shown in the graph. Algebra 1, Unit 5, Lesson 12: Students investigate transformations of functions (F-BF.3). Anchor Problems 2 and 3 were adapted from a Desmos activity, Introduction to Transformations of Functions. In the Anchor Problems, students are instructed to go to the slides that correspond to the problems. Specifically, Anchor Problem 3 has a link to slide 7 of the Desmos activity. On the slide, students have a slider that changes the value of k in the function f(x)=|x|+k. Students answer three questions on the slide asking what happens when k is zero, what happens when k is negative, and what effect k has on the function. Also, the Problem Set includes a link to a Desmos activity, Absolute Value Translations, where students investigate vertical translations with absolute value functions. In the activity, students begin by graphing the function f(x) = x and comparing it with the absolute value function f(x) = |x|. Subsequent slides have students predict what functions will look like given k values for vertical/horizontal translations and then graph the function on the next slide to check their thinking. Algebra 2, Unit 3, Lesson 10: In Anchor Problem 1, students develop conceptual understanding through a series of diagrams of cubes with a portion cut out to show the total volume representing A^3 - B^3 (A-APR.4). Then the lesson continues to further develop students' understanding of the proof of A^3 - B^3 = (A-B)(A^2+AB+B^2) with the use of Guiding Questions and visuals of the cubes. Algebra 2, Unit 8, Lesson 2: In Anchor Problem 1, students use a Venn diagram to represent coffee preferences of diner customers. The diagram and Guiding Questions are used to develop students' conceptual understanding about determining whether two events are mutually exclusive or not, and assist students with developing conceptual understanding of unions, intersections, and complements of events ("or," "and," "not") (S-CP.1). In Anchor Problem 2, other types of diagrams are used to help students further develop a conceptual understanding of mutually exclusive and non-mutually exclusive events, and the probability addition rules that relate to these types of events: when two events are mutually exclusive, P(A or B) = P(A) + P(B); when two events are not mutually exclusive, P(A or B) = P(A) + P(B) - P(A and B) (S-CP.7). Geometry, Unit 2, Lesson 9: Students investigate triangle congruence using rigid motions (G-CO.7 and G-CO.8). In Anchor Problem 1, students use patty paper to trace two sides and an included angle. Then they explore how many different triangles can be made, starting with those two sides and the included angle. Guiding Questions prompt students to discover that all the triangles they make are congruent. In Anchor Problem 2, students answer "How can you use rigid motions to prove that if two triangles meet the side-angle-side criteria, the triangles are congruent?"
The Guiding Questions, when used, support conceptual understanding by asking students "what properties of rigid motions show that the corresponding line segments and corresponding angles are congruent?" thus connecting rigid motions with congruent triangles. Geometry, Unit 5, Lesson 5: The Problem Set contains a link to Open Middle - Parallel Lines and Perpendicular Transversals: Students use the digits 1-9 (at most once each) to fill in open boxes to complete three equations. The digits become coefficients of the x and y terms in the equations. Two equations should represent parallel lines, and the third equation should represent a transversal that is as close to being perpendicular to the parallel lines as possible (G-GPE.5).

Materials provide intentional opportunities for students to develop procedural skills and fluencies, especially where called for in specific content standards or clusters. The materials reviewed for Fishtank Math AGA meet expectations for providing intentional opportunities for students to develop procedural skills, especially where called for in specific content standards or clusters. Throughout the materials, there are Problem Sets which link to a variety of resources. Teachers can select problems and activities that align with the lessons with the use of these resources. Most of the opportunities for students to develop procedural skills related to the lessons are found in the linked resources. Students have limited practice with solving problems using the Anchor Problems and the Target Tasks provided in each lesson. However, for some lessons, the resource links are limited. Teachers are given instructions for what types of problems to include in an assignment and create their own sets of problems and activities. Examples of lessons that provide opportunities for students to develop procedural skills include: Algebra 1, Unit 1, Lesson 5: The Anchor Problems for this lesson give students opportunities to calculate average rates of change for functions through the use of multiple representations (graph, table, and context problem) (F-IF.6). Students are also asked to determine over which intervals functions are increasing or decreasing and whether or not the functions are linear. Algebra 1, Unit 7, Lessons 6 and 7: Students develop procedural skills for factoring quadratic equations. In Lesson 6, students first practice factoring quadratic expressions with a leading coefficient of one, and explore the relationship between the factors of the constant term and the coefficient of the middle term. Then students use their factoring skills to solve quadratic equations. In Lesson 7, students practice factoring quadratic expressions with leading coefficients that do not equal one and then use those skills to solve quadratic equations. Problem Sets for both lessons contain links to Engage NY lessons, Kuta Worksheets, and other resources that provide opportunities for students to practice factoring quadratics. Additionally, the Target Tasks in both lessons provide opportunities for students to show mastery of factoring different trinomials, including trinomials that have a greatest common factor, as well as one that has a leading negative coefficient (A-SSE.1a and A-SSE.3a). Algebra 2, Unit 3, Lesson 5: Students have opportunities to develop their procedural fluency with finding products of polynomials and representing the products graphically through the Problem Set (A-APR.1).
There are links for students to practice the procedural skills of polynomial operations in Engage NY Lesson, Mathematics Vision Project Modules and Kuta Worksheets. Through Guiding Questions, students are able to link the degrees of polynomials and polynomial factors to key graph features. Geometry, Unit 1, Lesson 3: Students develop procedural skills for constructing angle bisectors. The Problem Set links provide students with additional opportunities to practice with this geometric construction with the use of Engage NY Lesson links (G-CO.12). Geometry, Unit 2, Lesson 13: The Problem Set has links to two Engage NY lessons that have a note on both to "ask students to describe the transformations that will map the "givens" and show congruence". This provides students opportunities to develop procedural skills with using congruence and similarity to prove relationships in geometric figures (G.SRT.5). Geometry, Unit 7, Lesson 12: Students use the proportional relationships between the radius of a circle and the length of an arc in preparation for converting degrees to radians. Both the Practice Set and Target Task provide students with opportunities to practice this skill. Later, in Algebra 2, Unit 6, Lesson 7, students again learn to convert between degrees and radians and have many opportunities to practice this skill using the links provided in the Problem Set and the Target task (G-C.5). Materials support the intentional development of students' ability to utilize mathematical concepts and skills in engaging applications, especially where called for in specific content standards or clusters. The materials reviewed for Fishtank Math AGA meet expectations for supporting the intentional development of students' ability to utilize mathematical concepts and skills in engaging applications, especially where called for in specific content standards or clusters. Examples of lessons that include multiple opportunities for students to engage in routine and non-routine mathematics applications include: Algebra 1, Unit 1, Lesson 9: The Problem Set includes Yummy Math, "Harlem Shake" problem, in which students are presented with a situation in which a group of students want to investigate the lifespan of an internet meme using the song, "The Harlem Shake." Students are given a graph of data and are asked a series of questions about the graph. Students find how many days it took for the video to be mentioned 100,000 and then how much longer it took to get 200,000 mentions. Students must also determine if the data shows linear growth, the greatest rates of growth, and more. Lastly, students determine how many total mentions the post received (A-CED.2, F-IF.5, F-LE.3). Algebra 1, Unit 5, Lesson 5: For Anchor Problem 2, students are given a context problem involving bank account transactions. Students analyze a graph that is drawn to represent the transactions. However, this graph is misleading and inaccurate, because it represents the transactions as a continuous function. Based on the given information and a series of Guiding Questions, students determine the inaccuracies in the graph, and are prompted to draw a new graph in the form of a step-function that more accurately represents the problem context. Students are also asked to represent the context of the problem algebraically. This problem addresses graphing and writing step-functions contextually (A-CED.3, F-IF.7b). 
Algebra 2, Unit 5, Lesson 2: In Anchor Problem 2, students are given a situation where a fisherman introduces fish illegally into a lake, and the growth of the species is modeled by an exponential function. Students need to use the function to calculate how many fish were released initially; given the number of fish present after a specific time, find the base of the function; and, if the base is known, calculate the weekly percent growth rate and interpret what this means in everyday language (F-IF.8b). Geometry, Unit 6, Lesson 17: The Problem Set links students to the Illustrative Mathematics problem "How many cells are in the human body?" "The purpose of this task is for students to apply the concepts of mass, volume, and density in a real-world context." The given information includes facts about the volume and density of a cell, which are used to compute the mass of the cell. Students need to work with mass, density, and volume in a real-life context and understand what information they need to know and what information they can make assumptions about. Students need to understand what assumptions are reasonable to make for this problem, and they will need to convert the weight of a person into grams (N-Q.2, N-Q.3, G-GMD.3, G-MG.2). Geometry, Unit 8, Lesson 8: In the Anchor Problem, students make decisions about medical testing based on conditional probabilities. Students must complete a two-way frequency table and a tree diagram to make decisions about medical test results for a certain population percentage. Students need to find the number of people in a random sample of 2,000 people who take the test for the disease who get results that are true positive, false positive, true negative, and false negative. Students must also determine which of the given information is conditional and which is not, the probabilities of getting certain test results, and whether certain test results represent independent or dependent events. Guiding Questions include: What are the risks and rewards for taking this test?; Are testing positive and having the disease independent or dependent events?; and Are testing negative and not having the disease independent or dependent events? (S-CP.4 and S-CP.5)

The three aspects of rigor are not always treated together and are not always treated separately. The three aspects are balanced with respect to the standards being addressed. The materials reviewed for Fishtank Math AGA meet expectations for including lessons in which the three aspects of rigor are not always treated together and are not always treated separately. The aspects of rigor are balanced with respect to the standards being addressed. Examples of lessons that engage students in the aspects of rigor include: Algebra 1, Unit 1, Lesson 6: The Problem Set provides a link to the MARS Formative Assessment Lesson for High School, Representing Functions of Everyday Situations, which focuses on conceptual development of representing context situations as functions and graphs (F-IF.4). In the task, students work towards understanding how specific situations will look as a function on a graph. Students work together in collaborative groups to match situations to graphs. After this is completed, students then match functions to their paired graphs/situations. Throughout the lesson, there are questions for the teacher to ask students in order to further their understanding.
Algebra 1, Unit 7, Lesson 10: The Anchor Problems assist students with developing their conceptual understanding of solutions to quadratic equations by having students identify the roots to a quadratic equation with the use of three representations (equation, graph, table) (A-SSE.3a, F-IF.9). The Problem Set provides a link to Illustrative Mathematics, where students work on proving the zero product property. Additionally, links to Kuta worksheets are provided to give students practice with solving quadratic equations. In Algebra 1, Unit 7, Lesson 13, students have opportunities to interpret quadratic solutions in context problems (example: Using a quadratic equation and its graph to model the height of a ball as a function of time) (A-SSE.3a,F-IF.8a). Algebra 2, Unit 4, Lesson 16: In this lesson, Anchor Problems 1 and 2 have students explore two methods of solving rational equations; clearing the equation of fractions by multiplying each term in the equation by the least common denominator and solving the resulting equation, or by rewriting each term in the equation with a common denominator and setting the numerators equal. Students determine if the resulting solution is valid or extraneous (A-REI.2). Then students determine how each of these types of solutions would be interpreted graphically. Students answer: If the solution is valid, how can the y-coordinate be determined? If the solution is extraneous, what does this mean graphically? The Problem Set provides links to lessons that contain additional problems with which students can develop their procedural skills for solving rational equations and identifying the types of solutions. Algebra 2, Unit 5, Lesson 3: Anchor Problem 1 gives a real-world scenario that is modeled by an exponential function. Students write the equation, which is a procedural skill within the application. The Guiding Questions asks questions to check procedural skill and conceptual understanding. Students find the height after a given time and are asked how they know their solutions are correct. Anchor Problem 2 has two bank options for students to choose which would be better after ten years with a given investment and percentage rate (F-BF.1a, F.LE.5). Geometry, Unit 4, Lesson 12: Students use the Pythagorean Theorem and right triangle trigonometry to solve application problems (G-SRT.8). The Anchor Problems and the Target Task provide opportunities for students to solve application problems that focus on solving right triangles. For example, the Target Task has students find how far a friend will walk to meet a friend who is ziplining down from a building. The Problem Set provides suggestions and links to lessons that also focus on application problems involving right triangles. Geometry, Unit 5, Lesson 12: In Anchor Problem 1, students develop a conceptual understanding of G-GPE.7 through the Guiding Questions. Students use dilations to create similar figures of original polygons using given scale factors (G-SRT1b). Students compare areas of original and scaled figures to determine the relationship between the areas of the figures and the scale factors. Then students develop a general rule that represents this relationship (The value of the ratio of the area of the scaled figure to the area of the original figure is the square of the scale factor of dilation). The Problem Set provides links to EngageNY and CK12 resources that include additional practice problems for students to practice their procedural skills. 
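The general rule described in the Geometry, Unit 5, Lesson 12 example above can be summarized symbolically (the particular scale factor below is illustrative only and is not taken from the lesson): if a figure with area A is dilated by a scale factor k, the image has area A' = k^2A, so \frac{A'}{A} = k^2. For example, a dilation with k = 3 produces an image whose area is 3^2 = 9 times the area of the original figure.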
Criterion 2e - 2h Materials meaningfully connect the Standards for Mathematical Content and Standards for Mathematical Practice (MPs). The materials reviewed for Fishtank Math AGA meet expectations for Practice-Content Connections as the materials intentionally develop all of the mathematical practices to their full intent. However, the materials do not explicitly identify the mathematical practices in the context of individual lessons, so one point is deducted from the score in indicator 2e to reflect the lack of identification. Materials support the intentional development of overarching, mathematical practices (MPs 1 and 6), in connection to the high school content standards, as required by the mathematical practice standards. The materials reviewed for Fishtank Math AGA partially meet expectations for supporting the intentional development of overarching, mathematical practices (MPs 1 and 6), in connection to the high school content standards, as required by the Standards for Mathematical Practice. The Standards of Mathematical Practice are listed at the end of each Unit Summary, along with the Common Core Standards and Foundational Standards identified for the particular unit. Guiding Questions found within each lesson reflect the use of the Standards of Mathematical Practice for assisting students with solving problems, but they are not explicitly identified in the context of the individual lessons for teachers or students. As a result of this one point is deducted from the scoring of this indicator. Examples of where and how the materials use MPs 1 and/or 6 to enrich the mathematical content and demonstrate the full intent of the mathematical practices include: Algebra 1, Unit 6, Lesson 19: Students write exponential growth functions to model financial applications that include compound interest. Guiding Questions assist students with applying the correct formulas to determine account interests and balances to solve the Anchor Problems and Target Task found in the lesson. Students must persevere to solve the problems by identifying the meaning of the variables found in the formulas and assigning the correct values to them, and attend to precision when expressing the solutions to the problems (MPs 1 and 6); (F-IF.8, F-LE.2, F-LE.5). Algebra 1, Unit 8, Lesson 15: For Anchor Problem 3, students work with a word problem where they need to write two equations - one for a ball that is thrown, and one for a remote controlled toy plane that takes off at a constant rate. Students need to think through how those two functions are similar and how they are different. They will need to consider what type of function each of these will represent (MP1). The Guiding Questions walk students through some questions that will help students think through some of the process - asking students - what type of function represents the height of the ball, linear, exponential, or quadratic? Additionally the question asks what graph shape represents the motion path of the toy plane? Students must make a prediction as to whether the plane and the ball will ever reach the same height, explain the reasoning for their prediction, and sketch a graph to represent their prediction. Students must attend to precision by correctly using the units found in the problem context (MP6); (A-REI.7, A-REI.11). Algebra 2, Unit 6, Lesson 14: The Anchor Problems link to a Desmos activity called "Burning Daylight." 
In this activity, students need to work through the problems to find the period, amplitude, midline and phase shift through the context of geography and how much sunlight different areas of the country have (F-TF.3, F-TF.5). Students need to make sense of the problem they are given and understand the context of the graphs they are given to answer the questions in the Desmos problems, then write equations for the hours of sunlight for the given locations (MP1). Additionally, after students make their equation and graph related to one city in Alaska, the context of the problem changes to a more northern city in Alaska, which allows students to write another function, then see the actual graph and make sense of this graph. Students persevere to solve this problem by accurately interpreting the graphs associated with the problem and attending to precision when using the formulas needed to process problem solutions (MP1 and MP6). Algebra 2, Unit 8, Lesson 10: Students describe and compare statistical study methods. For each Anchor Problem in this lesson, students need to use the problem description to determine what type of study (survey, observational or experimental), will result in the most reliable data (S-IC.3). Students must also identify population parameters, and determine advantages and disadvantages of using one type of study compared to another (MP1). In Anchor Problem 1 students must determine if a correlation exists between the number of minutes a train is delayed and the number of violent acts that occur on the platform or on the train. Students are given various scenarios of how to conduct this study and must identify what type of study each scenario represents and which would result in the most reliable data. Guiding Questions assist students with analyzing the problem to determine which scenario would result in the most reliable data (MP6). Geometry, Unit 4, Lesson 2: The Problem Set provides a link to an Illustrative Mathematics task that contains 3 right triangles surrounding a shaded triangle. Students need to prove that the shaded triangle is a right triangle. Students must determine an approach for solving the problem. Throughout this proof, students need to both attend to precision (by keeping their answers in irrational numbers, not rounding them) and persevering through the calculations to complete the proof (MP1 and MP6). Students must also attend to precision when using the Pythagorean Theorem to find the hypotenuse of the smaller triangle to find some of the missing information needed to solve the problem (G-SRT.4). Geometry, Unit 5, Lesson 10: In this lesson, students find areas and perimeter of polygons sketched in the coordinate plane. Students must also identify and write the correct formulas needed to find the areas and perimeters of the polygons. Guiding Questions assist students with persevering to plan strategies for solving the problems and attending to precision when using the coordinates of polygon vertices to find the measures needed to find the areas and perimeters of the polygons (MP1 and MP6); (G-GPE.7). Materials support the intentional development of reasoning and explaining (MPs 2 and 3), in connection to the high school content standards, as required by the mathematical practice standards. The materials reviewed for Fishtank Math AGA meet expectations for supporting the intentional development of reasoning and explaining (MPs 2 and 3), in connection to the high school content standards, as required by the Standards for Mathematical Practice. 
Algebra 1, Unit 4, Lesson 6: In Anchor Problem 2, students reason abstractly and quantitatively as they analyze the graph of an inequality to determine solution points. Students are told to name a point that is part of the solution, then explain their reasoning using the graph and the algebraic expression for the inequality. Additionally, they need to do the same with a point that is not a solution, explaining their reasoning both graphically and algebraically (MP2). Guiding Questions assist students in constructing arguments to justify their conclusion (MP3); (A-REI.12). Algebra 1, Unit 4, Lesson 9: The Problem Set contains a link to Illustrative Mathematics: Estimating a Solution via Graph. In this lesson, students reason quantitatively and graphically to analyze a given solution to a system of equations to determine why the given solution is incorrect. Calculating the slopes and the y-intercepts for each of the equations indicates there is a unique solution to the system of equations. Using the graph of the system of equations shows that the intersection point (solution) for the system of equations is to the right of the y-axis and below the x-axis, indicating a positive x-coordinate and a negative y-coordinate. The given solution has a positive x-coordinate much less than the one indicated in the graph, and a positive y-coordinate (MP2 and MP3); (A-REI.6 and A-REI.11). Algebra 1, Unit 6, Lesson 16: In the Problem Set there is a link to Illustrative Mathematics: Boiling Water. In this problem, students are challenged to compare two related data sets that are modeled with a linear function, but when the data sets are combined, the combination is better represented by an exponential function (F-LE.1 and F-LE.2). Within the exploration, students consider why the data seems inaccurate. Then students construct an improved estimate for the slope of a linear equation to fit the data through the process of contextualizing and decontextualizing information in order to generalize a pattern that is not immediately clear (MP2 and MP3). Algebra 2, Unit 2, Lesson 4: In Anchor Problem 2, students are presented with information about how a quadratic function is increasing and decreasing over specific intervals and that it has a rate of change of zero at x=3. Students then need to describe how each of three given graphs fits the description. The Guiding Questions ask students what is the same about each graph and what is different. Students then reason through how the different graphs can all look different yet meet the same criteria given in the Anchor Problem (MP2); (F-IF.4, F-BF.3). Algebra 2, Unit 5, Lesson 2: In Anchor Problem 1, students are given a graph and an equation that represents the graph in the form of f(x)=ab^x and asked to find the values of a and b. Students must use the graph to test the accuracy of their equation, identify features of the exponential function they see in the graph, determine rates of percent change between points on the graph, and determine whether the graph represents exponential growth or decay (MP2); (F-IF.4 and F-LE.2). Geometry, Unit 3, Lesson 5: The Problem Set contains a link to an Illustrative Mathematics lesson, Dilating a Line, in which students need to locate images based on the dilation they perform, and interpret what they think will happen to the line when they perform the dilation. In this task, they are verifying their work experimentally, then verifying their work with a proof.
This helps students to construct their arguments and explain their reasoning to better understand the mathematical operation they are performing (MPs 2 and 3); (G-SRT.1). Geometry, Unit 3, Lesson 18: In this lesson, the Tips for Teachers contain a link to Shadow, A Solo Dance Performance Illuminated by Three Synchronized Spotlight Drones. An opening modeling scenario is provided in which students consider the behavior of three different spotlights directed onto a ballet performance (G-SRT.5). Students may reflect on a previous lesson, Deducting Relationships: Floodlight Shadows, where spotlights were placed at various angles, and consider how the spotlight arrangements affect the appearance of the performance shown in a video. The video can be paused and restarted to allow students to assess the reasonableness of their calculations. Reflection on prior learning guides students to manipulate symbolic representations to explain the visual effects (MP2).

Indicator 2g

Materials support the intentional development of modeling and using tools (MPs 4 and 5), in connection to the high school content standards, as required by the mathematical practice standards. The materials reviewed for Fishtank Math AGA meet expectations for supporting the intentional development of modeling and using tools (MPs 4 and 5), in connection to the high school content standards, as required by the Standards for Mathematical Practice. Algebra 1, Unit 2, Lesson 5: The Target Task has four questions that begin by having students draw a dot plot "representing the ages of twenty people for which the median and the mean would be approximately the same." Students are not given a data set or any other information about what numbers should be used. Students need to make some assumptions in order to come up with a dot plot that meets the criteria. Question 2 asks the same thing, but students need to find a dot plot where "the median is noticeably less than the mean". Students then have to determine and explain "which measure of spread" would be used for each data set. Lastly, students have to calculate the variance in the data sets they created in Questions 1 and 2 (MP4); (S-ID.3). Students are free to choose the tool(s) that they feel would be appropriate when creating the dot plots and calculating the variance (MP5). Algebra 1, Unit 6, Lesson 17: The Problem Set contains a link to Engage NY, Algebra 1, Module 3, Topic A, Lesson 5 - Problem Set. Problem #6 of the Problem Set involves two band members who each have a method for spreading the word about their upcoming concert. Students have to show why Meg's strategy will reach fewer people than Jack's in part a. In part b, students explore whether Meg's strategy will ever inform more people than Jack's if they were given more days to advertise. Lastly, in part c, students revise Meg's plan to reach more people than Jack within the 7 days (MP4); (F-LE.1). Algebra 2, Unit 2, Lesson 3: In the lesson, students factor quadratics to find the roots and other features of a quadratic (F-IF.8 and A-SSE.3). One of the Criteria for Success states, "Check solutions to problems using a graphing calculator." Graphing calculator use is also incorporated into the guiding questions in Anchor Problem 1 (MP5). Algebra 2, Unit 7, Lesson 8: In the Target Task, students are given estimated populations of rabbits and coyotes, as well as the graphs of the data.
The students need to write a function for each of the simulations and then calculate two different numbers of years where the population estimate will reach specific quantities for the rabbits and the coyotes (MP4); (F-TF.7). Geometry, Unit 1, Lesson 3: In Anchor Problem 2, students first place a set of construction instructions into the correct order. Then students follow the instructions to copy and construct an angle using a compass and straightedge (MP5); (G-CO.12). Geometry, Unit 6, Lesson 7: In Anchor Problem 2, the dimensions of a cylindrical glass filled with lemonade are given. Students need to determine how many cone-shaped cups, half the height of the glass and with the same radius, can be completely filled with the lemonade from the glass (G-GMD.3). Guiding Questions help students estimate and form conjectures about how to solve the problem (MP4), and select appropriate formulas to solve the problem (MP5).

Indicator 2h

Materials support the intentional development of seeing structure and generalizing (MPs 7 and 8), in connection to the high school content standards, as required by the mathematical practice standards. The materials reviewed for Fishtank Math AGA meet expectations for supporting the intentional development of seeing structure and generalizing (MPs 7 and 8), in connection to the high school content standards, as required by the Standards for Mathematical Practice. Examples of where and how the materials use MP7 to enrich the mathematical content and demonstrate the full intent of the mathematical practice include: Algebra 1, Unit 5, Lesson 7: Students construct understanding of solving absolute value equations by looking at graphs. In Anchor Problem 1, students consider the equation |x|=5 and look at it as a system of equations where f(x)=|x|, g(x)=5, and f(x)=g(x). In the Guiding Questions, students are asked to consider what each graph looks like and where the two functions intersect. Lastly, students are asked what the solution to |x|=5 looks like on a number line (A-REI.1 and A-REI.11). Algebra 2, Unit 7, Lesson 1: In Tips for Teachers, there is a link to Sam Shah's blog post: Dan Meyer Says Jump and I Shout How High? In his post, Sam describes having students graph functions that seem to be different but then produce the same graph. Then Sam provides triangles from which students produce symbolic logic that represents the equality in the graphs. Finally, students produce the algebraic proof for the logic (F-TF.8). Geometry, Unit 3, Lesson 6: In Anchor Problem 1, students explore a diagram containing two triangles drawn on a sheet of lined paper. The two triangles are contained in one figure, and the bases of the triangles coincide with the lines on the paper, which represent parallel lines. Students are asked what they would need to know to justify that the triangles are dilations of one another. Students attempt to answer the question with the use of the dilation theorem and the side-splitter theorem, by explaining how each of these theorems applies to the diagram (G-SRT.5). Algebra 1, Unit 6, Lesson 6: Students engage with N-RN.1 through a series of questions. Students consider why the equation 100^\frac{1}{2}=50 is not true. To help answer this question, students are given a pattern of 100 raised to different integer exponents, and asked where \frac{1}{2} fits into this pattern. Then students are asked to rewrite the base 100 as a power of 10 and place it in the equation (10^2)^\frac{1}{2}=?
to see if students recognize which exponent rule can be applied and what the outcome would be. Students are then asked to try evaluating other expressions that contain rational exponents. Through the Guiding Questions, students are further asked to explore their conceptual understanding by considering what they think it means to have a fractional exponent and, specifically, what the denominator of the fractional exponent indicates. Algebra 2, Unit 6, Lesson 8: In Anchor Problem 1, students rewrite angle values, such as \sin\frac{11\pi}{6}, in the form of 2\pi-x to find their values. As they do so, they relate their rewritten expressions to angles on the unit circle. They generalize this relationship and process to solve problems in the Problem Set link to Engage NY Mathematics: Precalculus and Advanced Topics, Module 4, Topic A, Lesson 1 (F-TF.3). Geometry, Unit 4, Lesson 9: In Anchor Problem 2, students are asked to describe why the tangent of 90° is undefined. With the use of special right triangles, students explore the values of the tangent for the following angles: 0°, 45°, 60°, and 90°. Students look at the pattern of these values, then describe what happens to the tangent as the value of the angle approaches 90° and as it approaches 0°. To extend the understanding of why the tangent of 90° is undefined, students are then asked what the relationship is between the tangents of complementary angles (G-SRT.6, G-SRT.7).

Gateway Three

Gateway Three Details

The materials reviewed for Fishtank Math AGA do not meet expectations for Usability. The materials partially meet expectations for Criterion 1, Teacher Supports, do not meet expectations for Criterion 2, Assessment, and do not meet expectations for Criterion 3, Student Supports.

Criterion 3a - 3h

The program includes opportunities for teachers to effectively plan and utilize materials with integrity and to further develop their own understanding of the content. The materials reviewed for Fishtank Math AGA partially meet expectations for Teacher Supports. The materials: include standards correlation information that explains the role of the standards in the context of the overall series; provide explanations of the instructional approaches of the program and identification of the research-based strategies; and provide a comprehensive list of supplies needed to support instructional activities. The materials partially contain adult-level explanations and examples of concepts beyond the current course so that teachers can improve their own knowledge of the subject, and they partially include sufficient and useful annotations and suggestions that are presented within the context of the specific learning objectives.

Materials provide teacher guidance with useful annotations and suggestions for how to enact the student materials and ancillary materials, with specific attention to engaging students in order to guide their mathematical development. The materials reviewed for Fishtank Math AGA partially meet expectations for providing teacher guidance with useful annotations and suggestions for how to enact the student materials and ancillary materials, with specific attention to engaging students in order to guide their mathematical development. The materials provide some general guidance that will assist teachers in presenting the student and ancillary materials, but they do not consistently include sufficient and useful annotations and suggestions that are presented within the context of the specific learning objectives.
Examples include, but are not limited to: The Algebra 1 and Algebra 2 course summaries include a section which states, "How do we order the units? In Unit 1...in Unit 2..," providing a synopsis of the work the students will be engaging in. However, the Geometry course summary does not offer such an explanation. In most lessons, solutions are not provided for Anchor Problems and/or Target Tasks. In a few lessons, Anchor Problem(s) and/or Target Task(s) solutions are available through a link to the source of the problem. For example, in Geometry, Unit 8, Lesson 1, the solutions for the Anchor Problems are available through links to the sources. Tips for Teachers provides strategies and guidance for lesson implementation; however, there are several lessons that contain no Tips for Teachers. Examples include, but are not limited to: Algebra 1, Unit 2, Lessons 8, 15, 18; Algebra 2, Unit 5, Lessons 10 - 13; and Geometry, Unit 3, Lessons 1 - 6, 8, 11, and 14 - 17. The materials provide minimal guidance that might assist teachers in presenting the ancillary materials. Examples include: The Preparing to Teach a Math Unit section gives seven steps for teachers to prepare to teach a unit, as well as questions teachers should ask themselves when organizing a lesson presentation. For example, Step 1 states, "Read and annotate the Unit Summary - Ask yourself: What content and strategies will students learn?", "What knowledge from previous grade levels will students bring to this unit?", and "How does this unit connect to future units and/or grade levels?". At the beginning of each unit there is a Unit Summary section, which provides a synopsis of the unit, Assessment links, and Unit Prep, and identifies Essential Understandings connected to the unit. For example, in Algebra 2, Unit 5, the Unit Summary is about Exponential Modeling and Logarithms. Teachers are informed that students have previously seen exponential functions in Algebra 1, and that this unit builds upon prior knowledge by revisiting exponential functions and including geometric sequences and series and continuous compounding situations. The Unit Prep contains suggestions for how teachers can prepare for unit instruction. This includes suggestions for taking the unit assessment, making sure to note which standards each question aligns to, the purpose of each question, strategies used in the lessons, relationships to the essential questions, and the lessons that the assessment is aligned to. It is suggested that teachers do all of the target tasks, and make connections to the essential questions and the assessment questions. A vocabulary section tells the teacher the terms and notation that students will learn or use in the unit but does not define them. Additionally, a materials section lists the materials, representations, and/or tools that teachers and students will need for this unit.

Materials contain adult-level explanations and examples of the more complex grade-level/course-level concepts and concepts beyond the current course so that teachers can improve their own knowledge of the subject. The materials reviewed for Fishtank Math AGA partially meet expectations for containing adult-level explanations and examples of the more complex course-level concepts and concepts beyond the current course so that teachers can improve their own knowledge of the subject. Materials contain adult-level explanations and examples of the more complex course-level concepts so that teachers can improve their own knowledge of the subject.
While adult-level explanations of concepts beyond the course are not present, Tips for Teachers, within some lessons, can support teachers in developing a deeper understanding of course concepts. Opportunities for teachers to expand their knowledge include: In Algebra 1, Unit 1, Lesson 2, the Tips for Teachers contain adult-level explanations of complex grade-level concepts. There is a link to a resource to show teachers all of the ways that function notation can be represented. The linked materials look at functions in a more sophisticated manner, asking what "y is a function of x" means and what the relationship is between x and y. Lesson 2 is the first lesson that introduces function notation to students. In Algebra 2, Unit 1, Lesson 4, the Tips for Teachers contain adult-level explanations of complex grade-level concepts. They state, "This lesson has components that extend beyond F-BF.4a into F-BF.4c and F-BF.4d. If you are not teaching an advanced Algebra 2 course, focus on the contextual meaning of inverse functions presented in this lesson rather than the tabular or graphical analysis of inverse functions. The following resource may be helpful for teachers to grasp the full conceptual understanding of inverse functions before planning this lesson. American Mathematical Society Blogs, Art Duval, 'Inverse Functions: We're Teaching It All Wrong!' November 28, 2016." In this piece, Duval explains the problems that can occur with switching variables in the sense that the meaning of the variables can change. This understanding of inverse relationships (course-level content) extends beyond the intent of the standards. In Geometry, Unit 4, Lessons 6 and 7, the Tips for Teachers contain adult-level explanations of complex grade-level concepts. They state, "The following resource can help to frame the overall study of introductory trigonometry: Continuous Everywhere but Differentiable Nowhere, Sam Shah, 'My Introduction to Trigonometry Unit for Geometry.'" The purpose of this blog is to help teachers develop a deeper understanding of the trigonometry concepts in these lessons.

Materials include standards correlation information that explains the role of the standards in the context of the overall series. The materials reviewed for Fishtank Math AGA meet expectations for including standards correlation information that explains the role of the standards in the context of the overall series. Generalized correlation information is present for the mathematics standards addressed throughout the series and can be found in the course summary standards map, unit summary lesson map, and the list of standards identified for each lesson. Examples include: In Algebra 1, Algebra 2, and Geometry, a Standards map for each course includes a table with each course-level unit in columns and aligned standards in the rows. Teachers can easily identify a unit and when each standard will be addressed. In most lessons, there is a list of content standards following the lesson objective. For example, in Algebra 1, Unit 1, Lesson 2, the Core Standards are identified as F.IF.1 and F.IF.2. The Foundational Standards are identified as 8.F.1. Lessons contain a consistent structure that includes an Objective, Common Core Standards, Criteria for Success, Tips for Teachers, Anchor Problems, Problem Set, and Target Task. Occasionally these contain additional references to standards.
For example, in Geometry, Unit 5, Lesson 5, the Tips for Teachers connect 4.G.2, "Classify two-dimensional figures based on the presence or absence of parallel or perpendicular lines," with the high school geometry lesson objective of "Describe and apply the slope criteria for parallel lines". Each Unit Summary includes a narrative outlining relevant prior and future content connections for teachers. There is also a Lesson Map which gives an objective and the standard(s) for each lesson. Examples include: The Algebra 1, Unit 2 summary includes an overview of how the unit builds from prior coursework. The materials state, "Scatterplots are explored heavily in this unit, and students use what they know about association from 8th grade to connect to correlation in Algebra 1." The Geometry, Unit 8 summary includes an overview of how the content learned will form a foundation for future learning. The materials state, "In Algebra 2, students will continue their study of probability by studying statistical inference and making decisions using probability." The Algebra 2, Unit 7 summary includes an overview that indicates trigonometry is needed for Calculus, yet also states that the unit builds on the previous unit on trigonometric functions to expand students' knowledge.

Materials provide strategies for informing all stakeholders, including students, parents, or caregivers about the program and suggestions for how they can help support student progress and achievement. The materials reviewed for Fishtank Math AGA do not provide strategies for informing all stakeholders, including students, parents, or caregivers about the program and suggestions for how they can help support student progress and achievement. The materials do not contain strategies for informing students, parents, or caregivers about the mathematics their student is learning. Additionally, no forms of communication with parents and caregivers and no suggestions for how stakeholders can help support student progress and achievement were found in the materials.

Materials provide explanations of the instructional approaches of the program and identification of the research-based strategies. The materials reviewed for Fishtank Math AGA meet expectations for providing explanations of the instructional approaches of the program and identification of the research-based strategies. Materials explain the instructional approaches of the program, or materials include or reference research-based strategies. Instructional approaches of the program and identification of the research-based strategies can be found within Our Approach and Math Teacher Tools. Examples where materials explain the instructional approaches of the program and describe research-based strategies include: Under the About Us section, there is a link to Our Approach, which includes a reference to "best practices," the Common Core State Standards, and the Massachusetts Curriculum Frameworks. The approach is stated as being one of flexibility for teachers to be able to adapt lessons. Well-known open educational resources are mentioned as being included in the Fishtank materials. Within Math Teacher Tools, there is a section called Preparing To Teach Fishtank Math; its Understanding the Components of a Fishtank Math Lesson section outlines the purpose of each lesson component.
It states that, "Each Fishtank math lesson consists of seven key components, such as the Objective, Standards, Criteria for Success, Tips for Teachers, Anchor Tasks/Problems, Problem Set, the Target Task, among others. Some of these connect directly to the content of the lesson, while others, such as Tips for Teachers, serve to ensure teachers have the support and knowledge they need to successfully implement the content." In Math Teacher Tools, Assessments, Overview, Works Cited lists, "Wiliam, Dylan. 2011. Embedded formative assessment. Bloomington, Indiana: Solution Tree Press." and "Principles to Action: Ensuring Mathematical Success for All. (2013). National Council of Teachers of Mathematics, p. 98." In Teacher Tools, there is a link to Academic Discourse. The Overview outlines the role discourse plays within Fishtank Math. The materials state, "Academic discourse is a key component of our mathematics curriculum. Academic discourse refers to any discussion or dialogue about an academic subject matter. During effective academic discourse, students are engaging in high-quality, productive, and authentic conversations with each other (not just the teacher) in order to build or clarify understanding of a topic." Additional documents, "Preparing for Academic Discourse," "Tiers of Academic Discourse," and "Strategies to Support Academic Discourse," that may provide more detail are not available in Fishtank Math AGA. Materials provide a comprehensive list of supplies needed to support instructional activities. The materials reviewed for Fishtank Math AGA meet expectations for providing a comprehensive list of supplies needed to support instructional activities. There is a material list at the beginning of some units. Examples of lists of supplies found at the beginning of a unit include: Algebra 1, Unit 1: The following materials are listed: Helpful to create graphs (if you have a Mac): Omni Graph Sketcher (free), Desmos, Three-Act Task. Algebra 2, Unit 5: The materials listed include: Equations, tables, graphs, and contextual situations. A calculator or other technology to graph and solve problems using exponential modeling and logarithms. Geometry, Unit 5: The material listed is the Massachusetts Comprehensive Assessment System Grade 10 Mathematics Reference Sheet. There are times when the materials list is not comprehensive and/or omitted. Examples include, but are not limited to: Algebra 1, Unit 4, Lesson 4, Target Task asks students to graph coordinates, connect coordinates to create a linear function, and then find the inverse; however, there are no materials listed for Unit 4 and no mention of materials needed in the lesson. Algebra 1, Unit 4, Lesson 10, Anchor Problem 3 requires students to "write and graph a system of inequalities" but there are no materials listed in the unit or lesson. Algebra 2, Unit 4, Lesson 8, Criteria for Success has reference to [TABLE] and [TBLSET], both functions on a graphing calculator. In addition, Anchor Problem 2 has these questions, "How can you use your graphing calculator to see possible differences?", "Show the table in your graphing calculator for each of these functions using the [TABLE] feature. Which values of x are of most interest to you? Why?", and "Why do both functions return an "ERROR" for the value at x = 2? Is the reason for the error the same? Different?" Unit 4 has no material(s) list nor is a graphing calculator mentioned as a needed material in the lesson. 
Geometry, Unit 6, Lesson 3, Target Task Part b requires students to find how much it will cost to paint the gym floor if cans of paint cost $15.97 each. The only material listed for Unit 6 is the Massachusetts Comprehensive Assessment System Grade 10 Mathematics Reference Sheet. There is no direction as to whether calculators should be used. This is not an assessed indicator in Mathematics.

Criterion 3i - 3l

The program includes a system of assessments identifying how materials provide tools, guidance, and support for teachers to collect, interpret, and act on data about student progress towards the standards. The materials reviewed for Fishtank Math AGA do not meet expectations for Assessment. The materials partially include assessment information that indicates which standards and practices are assessed and partially provide assessments that include opportunities for students to demonstrate the full intent of course-level standards. The materials do not provide multiple opportunities throughout the courses to determine students' learning or sufficient guidance to teachers for interpreting student performance and suggestions for follow-up.

Indicator 3i

Assessment information is included in the materials to indicate which standards are assessed. The materials reviewed for Fishtank Math AGA partially meet expectations for having assessment information included in the materials to indicate which standards are assessed. The materials identify the standards and practices assessed for some of the formal assessments. There is a Post-Unit Assessment for each unit in a course. Assessment item types include short-answer, multiple choice, and constructed response. For the Algebra 1 and Geometry courses, there are Post-Unit Assessment Answer Keys for each unit. The Algebra 1 and Geometry Post-Unit Assessment Answer Keys for each unit contain the following: question numbers, aligned standards, item types, point values, correct answers and scoring guides, and aligned aspects of rigor for each question. However, neither the Post-Unit Assessments nor the Post-Unit Answer Keys identify the mathematical practices. Examples of Algebra 1 and Geometry Post-Unit Assessment questions and aligned standards include: Algebra 1, Unit 3: For Post-Unit Assessment question 4, students solve a multi-step inequality. The answer key shows the aligned standard as A-REI.3. This is a short-answer question with a point value of 2, and the rubric explains how the two points are determined based on the detailed accuracy of the student's answer. The aspect of rigor for this question is referenced as P/F (procedural fluency). Algebra 1, Unit 5: For Post-Unit Assessment question 3, students are given a system of equations and the graph of both functions (one is a linear function and the other is an absolute value function). For part 3a, students need to identify the solution(s) for the system of equations; then for part 3b, they need to algebraically show that the point(s) are solution(s) to the system. Each part of this question aligns with standard A-REI.11, and each part has a point value of 2. Part 3a's item type is identified as short answer, and part 3b's item type is identified as constructed response. Aspects of rigor for this question are referenced as C P/F (conceptual understanding/procedural fluency).
Geometry, Unit 5: For Post-Unit Assessment question 3, students are given a constructed response task consisting of the graph of a triangle (CAB) and three related questions; students need to calculate \frac{2}{3} of the distance between points C and A (part a), and between points C and B (part b). Students label the new coordinate points found by completing these calculations D and E, respectively. Students then calculate the perimeter of triangle CDE in radical form (part c). The aligned standard for the first two parts of this question is G-GPE.6, and the aligned standard for the third part is G-GPE.7. A point value of 1 is assigned to each of the first two parts of the question, and a point value of 2 is assigned to the third part of the question. For the Algebra 2 course, Post-Unit Assessments have no answer keys, and there is no alignment of questions to the standards. Examples of Algebra 2 Post-Unit Assessments that have no answer keys or standards referenced include, but are not limited to: Algebra 2, Units 1, 2, 5, and 9. The following Algebra 2 Post-Unit Assessments have some solutions and standards referenced in links to original sources: Algebra 2, Unit 3 only has references for questions 8, 9, and 13; Algebra 2, Unit 4 only has references for questions 2, 4, 5, and 14; and Algebra 2, Unit 7 only has references for questions 1, 2, 4, 6, and the bonus question. Many of these reference links do not work, such as those for the Regents Exams in Units 4 and 7 and for the "Algebra II Paper-based Practice Test from Mathematics Practice Tests" from Unit 3.

Indicator 3j

Assessment system provides multiple opportunities throughout the grade, course, and/or series to determine students' learning and sufficient guidance to teachers for interpreting student performance and suggestions for follow-up. The materials reviewed for Fishtank Math AGA do not meet expectations for including an assessment system that provides multiple opportunities throughout the grade, course, and/or series to determine students' learning and sufficient guidance to teachers for interpreting student performance and suggestions for follow-up. The assessment system does not provide multiple opportunities to determine students' learning and sufficient guidance to teachers for interpreting student performance and suggestions for follow-up with students. The assessments for the materials include a post-unit assessment after every unit and Target Task(s) at the end of each lesson. These provide little guidance to teachers for interpreting student performance or suggestions for follow-up. In Algebra 1 and Geometry, there is an Answer Key for each Post-Unit Assessment with point values assigned for each question. However, there are no rubrics or other explanations as to how many points different kinds of responses are worth. An example of this includes, but is not limited to: Algebra 1, Unit 8: For Post-Unit Assessment question 6, students are given the following question: "Jervell makes the correct claim that the function below does not cross the x-axis. Describe how Jervell could know this and show that his claim is true." The answer key states the following: "The discriminant of the quadratic formula tells how many real roots a quadratic function has (or how many times a parabola intersects with the x-axis). Since the discriminant of this function is −32, there are no real roots to this function.
(Equivalent answers acceptable)" This is worth 3 points, but there is no guidance and no sample responses indicating when 0 points, 1 point, or 2 points might be assigned. In the Target Tasks, there are no answer keys, scoring criteria, guidance to teachers for interpreting student performance, or suggestions for follow-up. Indicator 3k Assessments include opportunities for students to demonstrate the full intent of grade-level/course-level standards and practices across the series. The materials reviewed for Fishtank Math AGA partially meet expectations for providing assessments that include opportunities for students to demonstrate the full intent of course-level standards and practices across the series. The Assessments section found under Math Teacher Tools contains the following statement: "Pre-unit and mid-unit assessments as well as lesson-level Target Tasks offer opportunities for teachers to gather information about what students know and don't know while they are still engaged in the content of the unit. Post-unit assessments offer insights into content that students may need to revisit throughout the rest of the year to ensure continued work towards mastery. Student self-assessments provide space for students to reflect on their learning and monitor their own progress." The materials reviewed do not contain pre-unit, mid-unit, or student self-assessments; the system of assessments included is twofold: Target Tasks and Post-Unit Assessments. All of the Post-Unit Assessments have to be printed and administered in person. For the Algebra 1 and Geometry unit assessments, answer keys are provided; however, no answer keys are provided for the Algebra 2 unit assessments. The unit assessment item types include multiple choice, short answer, and constructed response. However, the assessment system leaves standards unassessed. Examples of how standards are not assessed or only partially assessed in Post-Unit Assessments include, but are not limited to: In Algebra 1, Unit 3, students solve equations and inequalities; however, students are not prompted to explain each step (A-REI.1). In Algebra 1, Unit 8, students identify the vertex, minimum or maximum, axis of symmetry, and y-intercept, but they do not indicate where the function is increasing or decreasing or whether the function is positive or negative (F-IF.4). In Algebra 2, Unit 8, there are several questions that involve random sampling, but students do not explain how randomization relates to the context. In Geometry, Unit 1, students construct an equilateral triangle by copying a segment (G-CO.12). G-CO.13 is identified as being addressed in the same unit, but it is not assessed in the Post-Unit Assessment. In Post-Unit Assessment Keys for Algebra 1 and Geometry, Common Core Standards are identified for each assessment item, but mathematical practices are not identified for any of the assessment items. Examples of Post-Unit Assessment multiple choice items include: Algebra 2, Unit 2, question 2 asks, "Which equation has non-real solutions? a. 2x^2+4x-12=0 b. 2x^2+3x=4x+12 c. 2x^2+4x+12=0 d. 2x^2+4x=0" Geometry, Unit 7, question 1 states, "A designer needs to create perfectly circular necklaces. The necklaces each need to have a radius of 10 cm. What is the largest number of necklaces that can be made from 1000 cm of wire? A. 16, B. 15, C. 31, D. 32."
Examples of Post-Unit Assessment short answer items include: Algebra 1, Unit 6, question 5 states, "Multiply and simplify as much as possible: \sqrt{8x^3} \cdot \sqrt{50x}" Algebra 2, Unit 1, question 1 states, "Let the function f be defined as f(x)=2x+3a where a is a constant. a. If f(-5)=-4, what is the value of the y-intercept? b. The point (5,k) lies on the line of the function f. What is the value of k?" Examples of Post-Unit Assessment constructed response items include: Algebra 1, Unit 5, question 6 states, "A new small company wants to order business cards with its logo and information to help spread the word of their business. One website offers different rates depending on how many cards are ordered. If you order 100 or fewer cards, the rate is $0.40 per card. If you order over 100 and up to and including 200 cards, the rate is $0.36 per card. If you order over 200 and up to and including 500 cards, the rate is $0.32 per card. Finally, if you order over 500 cards, the rate is $0.29 per card. Part A: Write a piecewise function, p(x), to model the pricing policy of the website. Part B: Calculate p(250)-p(200), and interpret its meaning in the context of the situation. Part C: The manager of the company decides to order 500 business cards, but the marketing director says they can order more cards for less money. Is the marketing director's claim true? Explain and justify your response using calculations from the piecewise function." Algebra 2, Unit 8, question 9 states, "A brown paper bag has five cubes, 2 red and 3 yellow. A cube is chosen from the bag and put on the table, and then another cube is taken from the bag. Part A: What is the probability of two red cubes being chosen in a row? Part B: Is the event of choosing a red cube the first time you pick and choosing a red cube the second time you pick from the bag independent events? Why or why not?" Indicator 3l Assessments offer accommodations that allow students to demonstrate their knowledge and skills without changing the content of the assessment. The materials reviewed for Fishtank Math AGA do not provide assessments which offer accommodations that allow students to demonstrate their knowledge and skills without changing the content of the assessment. According to Math Teacher Tools, Assessment Resource Collection, "The post-unit assessment is designed to assess students' full range of understanding of content covered throughout the whole unit... It is recommended that teachers administer the post-unit assessment soon, if not immediately, after completion of the unit. The assessment is likely to take a full class period." While all students take the post-unit assessment, there are no accommodations offered that ensure all students can access the assessment. Criterion 3m - 3v The program includes materials designed for each child's regular and active participation in grade-level/grade-band/series content. The materials reviewed for Fishtank Math AGA do not meet expectations for Student Supports. The materials provide extensions and/or opportunities for students to engage with course-level mathematics at higher levels of complexity, and they provide manipulatives, both virtual and physical, that are accurate representations of the mathematical objects they represent and, when appropriate, are connected to written methods. The materials partially provide strategies and supports for students who read, write, and/or speak in a language other than English to regularly participate in learning course-level mathematics.
The materials do not provide strategies and supports for students in special populations to support their regular and active participation in learning course-level mathematics. Indicator 3m Materials provide strategies and supports for students in special populations to support their regular and active participation in learning grade-level/series mathematics. The materials reviewed for Fishtank Math AGA do not meet expectations for providing strategies and supports for students in special populations to support their regular and active participation in learning series mathematics. There are no strategies, supports, or resources for students in special populations to support their regular and active participation in grade-level mathematics. The materials have Special Populations under the Math Teacher Tools link. Within Special Populations, there is a link to Areas of Cognitive Functioning. Eight areas of cognitive functioning (Conceptual Processing, Visual Spatial Processing, Language, Executive Functioning, Memory, Attention and/or Hyperactivity, Social and/or Emotional Learning, and Fine Motor Skills) are discussed in this section. While these areas of cognitive functioning are discussed in relation to mathematics learning, there are no specific suggestions and/or strategies for how teachers can assist students with their learning if presented with these behaviors. The Overview for the section on Areas of Cognitive Functioning includes a statement that says: "To learn more about how teachers can effectively incorporate strategies to support students in special populations in their planning, see our Teacher Tools, Protocols for Planning for Special Populations and Strategies for Supporting Special Populations." However, the protocols and strategies teacher tools are not available in Fishtank Math AGA. Indicator 3n Materials provide extensions and/or opportunities for students to engage with grade-level/course-level mathematics at higher levels of complexity. The materials reviewed for Fishtank Math AGA meet expectations for providing extensions and/or opportunities for students to engage with course-level mathematics at higher levels of complexity. Opportunities for students to investigate course-level mathematics at a higher level of complexity are found within the lesson Anchor Problems. Each lesson contains Anchor Problems that are accompanied by Guiding Questions. The Guiding Questions assist students with critically engaging in the math content of the problem. Guiding Questions also prompt students to engage in purposeful investigations and extensions related to the problem. Examples of lessons that include the use of Guiding Questions for prompting students to engage in lesson content at higher levels of complexity include: Algebra 1, Unit 2, Lesson 19: In Anchor Problem 1, students use screenshots of a battery charge indicator to determine when a laptop will be fully charged. Students need to represent the data in a scatter plot, determine the correlation coefficient for this data to determine the strength of the association, assign a line of best fit either through least squares regression or estimation, and determine if a linear function is the best model for this data through plotting the residuals. Guiding Questions that accompany this problem include: How do the correlation coefficient and the residual plot help you to assess the validity of the answer to the question? Why is it useful to have a line of best fit for this problem?
How does this allow you to make a prediction? How can you communicate your confidence in your answer to the question using correlation and the residual plot? Algebra 1, Unit 5, Lesson 16: For Anchor Tasks Problem 2, students use the Desmos activity, Transformations Practice, and are tasked with writing an equation that represents the blue graph for each transformation. At the end of the activity, there are two challenge transformations for students to complete. Guiding Questions for this problem include: How can you tell if a reflection is involved? How can you tell if a dilation or scaling of the graph is involved? How can you tell if a translation of the graph is involved? How are these moves represented in the equation? Algebra 2, Unit 4, Lesson 18: Anchor Problem 1 involves two participants in a 5-kilometer race. The participants' distances are modeled by the following equations: a(t)=\frac{t}{4} and b(t)=\sqrt{2t-1}, where t represents time in minutes. Students need to determine who gets to the finish line first. Guiding Questions for this problem include: What is the time for each person when the total distance run is 5 kilometers? How can you use this information to determine who wins the race? If the participants have a constant speed, how many different times would you expect they would be side by side? How would you determine at what time(s) the participants are side by side? Sometimes teachers are directed to create problem sets for students that engage students in mathematics at higher levels of complexity. Examples include: Geometry, Unit 7, Lesson 2: The lesson objective is: Given a circle with a center translated from the origin, write the equation of the circle and describe its features. For the Problem Set, teachers are to create the problems for students. Teacher directives for creating the problems include three bullet points labeled EXTENSION. These bullet points read as follows: Include problems such as "What features are the same/different between the two circles given by the equations: x^2+y^2=16 and 2x^2+2y^2=16? Show your reasoning algebraically." Include problems with systems of equations between two circles, which is discussed in Algebra 2. Include problems such as "What are the x-intercepts of the circle?" Algebra 2, Unit 1, Lesson 12: The objective for this lesson is: Write and evaluate piecewise functions from graphs. Graph piecewise functions from algebraic representations. For the Problem Set, one of the directives to teachers states to include problems "where based on a description of number of pieces, continuous or discontinuous, students create a piecewise function graphically and algebraically (This is an extension, and we'll come back to this at the end of the unit.)" Algebra 2, Unit 2, Lesson 2: The lesson objective is: Identify the y-intercept and vertex of a quadratic function written in standard form through inspection and finding the axis of symmetry. Graph quadratic equations on the graphing calculator. For the Problem Set, teachers create the problems, and one of the directives to teachers is: "Include problems where students are challenged to write multiple quadratic equations given the constraint of vertex AND y-intercept in standard form. Ask students to explain what they have discovered about the possible values." Additionally, there are no instances of advanced students doing more assignments than their classmates.
Indicator 3o Materials provide varied approaches to learning tasks over time and variety in how students are expected to demonstrate their learning with opportunities for students to monitor their learning. The materials reviewed for Fishtank Math AGA partially provide varied approaches to learning tasks over time and variety in how students are expected to demonstrate their learning with opportunities for students to monitor their learning. The materials provide a variety of approaches for students to learn the content over time. Each lesson has Anchor Problems/Tasks that guide students with a series of questions to ponder and discuss, and the Problem Set gives students the option to select problems and activities to create their own problem set. The Tips for Teachers, when included in the lesson, guide teachers to additional resources that the students can use to deepen their understanding of the lesson. However, while students are often asked to explain their reasoning, there are no paths or prompts provided for students to monitor their learning or self-reflect. Indicator 3p Materials provide opportunities for teachers to use a variety of grouping strategies. The materials reviewed for Fishtank Math AGA partially provide opportunities for teachers to use a variety of grouping strategies. Some general guidance regarding grouping strategies is found within the Math Teacher Tools, Academic Discourse section; however, there is limited guidance on how to group students throughout the Fishtank Math AGA materials. Grouping strategies are suggested within lessons, however these suggestions are not consistently present or specific to the needs of particular students. Occasionally, there will be some guidance in the Tips for Teachers on how to facilitate a lesson, but this is limited and inconsistent. Examples include: In Algebra 1, Unit 8, Lesson 13: In the Tips for Teachers, there is a bullet point that states, "There is only one Anchor Problem for this lesson, as there is a lot to dig into with this one problem. Students can also spend an extended amount of time on independent, pair, or small-group practice working through applications from Lessons 11–13." While grouping students is suggested, no guidance is given to teachers on how to group students based on their needs. In Geometry, Unit 6, Lesson 6: Problem 3 Notes state: "Students should spend time discussing and defending their estimates before being given the dimensions of the glasses. Students should first identify information that is necessary to determine a solution and ask the teacher for this information, which can be given through the image 'The Dimensions of the Glasses.'" However, no particular grouping arrangement is mentioned. Indicator 3q Materials provide strategies and supports for students who read, write, and/or speak in a language other than English to regularly participate in learning grade-level mathematics. The materials reviewed for Fishtank Math AGA partially meet expectations for providing strategies and supports for students who read, write, and/or speak in a language other than English to regularly participate in learning grade-level mathematics. Materials provide strategies and supports for students who read, write, and/or speak in a language other than English to meet or exceed grade-level standards through active participation in grade-level mathematics, but not consistently.
While there are resources within Math Teacher Tools, Supporting English Learners, that provide teachers with strategies and supports to help English Learners meet grade-level mathematics, these strategies and supports are not consistently embedded within lessons. The materials state, "Our core content provides a solid foundation for promoting English language development, but English learners need additional scaffolds and supports in order to develop English proficiency as they build their content knowledge. In this resource we have outlined the process our teachers use to internalize units and lessons to support the needs of English learners, as well as three major strategies that can help support English learners in all classrooms (scaffolds, oral language protocols, and graphic organizers). We have also included suggestions for how to use these strategies to provide both light and heavy support to English learners. We believe the decision of which supports are needed is best made by teachers, who know their students' English proficiency levels best. Since each state uses different scales of measurement to determine students' level of language proficiency, teachers should refer to these scales to determine if a student needs light or heavy support. For example, at Match we use the WIDA ELD levels; students who are levels 3-6 most often benefit from light support, while students who are levels 1-3 benefit from heavy support." Regular and active participation of students who read, write, and/or speak in a language other than English is not consistently supported because specific recommendations are not connected to daily learning objectives, standards, and/or tasks within grade-level lessons. Within Supporting English Learners there are four sections; however, only one section, Planning for English Learners, is available in Fishtank Math AGA. Planning for English Learners is divided into two sections, Intellectually Preparing a Unit and Intellectually Preparing a Lesson. The "Intellectually Preparing a Unit" section states, "Teachers need a deep understanding of the language and content demands and goals of a unit in order to create a strategic plan for how to support students, especially English learners, over the course of the unit. We encourage all teachers working with English learners to use the following process to prepare to teach each unit." The process is divided into four steps where teachers are prompted to ask themselves a series of questions such as: "What makes the task linguistically complex?", "What are the overall language goals for the unit?", and "What might be new or unfamiliar to students about this particular mathematical context?" The "Intellectually Preparing a Lesson" section states, "We believe that teacher intellectual preparation, specifically internalizing daily lesson plans, is a key component of student learning and growth. Teachers need to deeply know the content and create a plan for how to support students, especially English learners, to ensure mastery. Teachers know the needs of the students in their classroom better than anyone else, therefore, they should also make decisions about where to scaffold or include additional supports for English learners. We encourage all teachers working with English learners to use the following process to prepare to teach a lesson."
The process is divided into two steps where teachers are prompted to complete certain tasks, such as "Unpack the Objective, Target Task, and Criteria for Success" or "Internalize the Mastery Response to the Target Task", or to ask themselves a series of questions such as: "What does a mastery answer look like?", "What are the language demands of the particular task?", and "If students don't understand something, is there a strategy or way you can show them how to break it down?" Regular and active participation of students who read, write, and/or speak in a language other than English is not consistently supported because specific recommendations are not connected to daily learning objectives, standards, and/or tasks within grade-level lessons. Indicator 3r Materials provide a balance of images or information about people, representing various demographic and physical characteristics. The materials reviewed for Fishtank Math AGA do not provide a balance of images or information about people, representing various demographic and physical characteristics. No images are used in these materials. However, lessons do include a variety of problem contexts that could interest students of various demographic populations. Examples include: Algebra 1, Unit 1, Lesson 4: Any student could relate to Anchor Problem 1: "You are selling cookies for a fundraiser. You have a total of 25 boxes to sell, and you make a profit of $2 on each box." In Lesson 5, Anchor Problem 3: "You leave from your house at 12:00 p.m. and arrive to your grandmother's house at 2:30 p.m. Your grandmother lives 100 miles away from your house. What was your average speed over the entire trip from your house to your grandmother's house?" Algebra 1, Unit 7, Lesson 13: There is a link in the Problem Set to Engage NY Mathematics: Algebra 1, Module 4, Topic A, Lesson 9, Exercise 3, Example 3: "A science class designed a ball launcher and tested it by shooting a tennis ball straight up from the top of a 15-story building. They determined that the motion of the ball could be described by the function: h(t)=-16t^2+144t+160, where t represents the time the ball is in the air in seconds and h(t) represents the height, in feet, of the ball above the ground at time t. What is the maximum height of the ball? At what time will the ball hit the ground?" Students graph the function and use the graph to determine problem solutions. Names used in problem contexts are not representative of various demographic and physical characteristics. The names used can typically be associated with one population and therefore lack representation of various demographics. Examples include, but are not limited to: Algebra 1, Unit 3, Lesson 7: Anchor Problem 1 begins with: "Joshua works for the post office and drives a mail truck." Algebra 2, Unit 1, Lesson 1: Anchor Problem 3 begins with: "Allison states that the slope of the following equation is 3." In Lesson 3: Anchor Problem 2 begins with: "Alex is working on a budget after getting a new job." Geometry, Unit 8, Lesson 2: Anchor Problem 2 begins with: "Dan has shuffled a deck of cards." Other names found in the materials that are not representative of all populations include: Mary, Beverly, Andrea, Lisa, Greg, and Jessie. Indicator 3s Materials provide guidance to encourage teachers to draw upon student home language to facilitate learning. The materials reviewed for Fishtank Math AGA do not provide guidance to encourage teachers to draw upon student home language to facilitate learning.
There is no evidence of promoting home language knowledge as an asset to engage students in the content material or purposefully utilizing student home language in context with the materials. Indicator 3t Materials provide guidance to encourage teachers to draw upon student cultural and social backgrounds to facilitate learning. The materials reviewed for Fishtank Math AGA do not provide guidance to encourage teachers to draw upon student cultural and social backgrounds to facilitate learning. Within the About Us, Approach, Culturally Relevant, the materials state, "We are committed to developing curriculum that resonates with a diversity of students' lived experiences. Our curriculum is reflective of diverse cultures, races and ethnicities and is designed to spark students' interest and stimulate deep thinking. We are thoughtful and deliberate in selecting high-quality texts and materials that reflect the diversity of our country." Although this provides a general overview of the cultural relevance within program components, materials do not embed guidance for teachers to amplify students' cultural and/or social backgrounds to facilitate learning. Indicator 3u Materials provide supports for different reading levels to ensure accessibility for students. The materials reviewed for Fishtank Math AGA do not provide supports for different reading levels to ensure accessibility for students. There are no supports to accommodate different reading levels to ensure accessibility for students. The Guiding Questions, found within the lessons, offer some opportunities to identify entrance points for students. However, these questions provide teacher guidance that may or may not support struggling readers to access and engage in course-level mathematics. Indicator 3v Manipulatives, both virtual and physical, are accurate representations of the mathematical objects they represent and, when appropriate, are connected to written methods. The materials reviewed for Fishtank Math AGA meet expectations for providing manipulatives, both virtual and physical, that are accurate representations of the mathematical objects they represent and, when appropriate, are connected to written methods. While there are missed opportunities to use manipulatives, there is strong usage of virtual manipulatives such as Desmos and GeoGebra throughout the materials to help students develop a concept or explain their thinking. They are used to develop conceptual understanding and connect concrete representations to a written method. Examples of the usage of virtual manipulatives include: Algebra 1, Unit 1, Lesson 10, Anchor Problem 2: In the notes, teachers are instructed to show students a video of three people eating popcorn at different rates. The notes state that "This video is essential to show students so they can graph this scenario accurately. You will likely need to show it several times." Algebra 2, Unit 9, Lesson 1: The Problem Set contains a link to the Desmos activity, "Domain and Range Introduction." In Geometry, Unit 3, Lesson 10, Anchor Problem 2, animation in GeoGebra is used for students to describe the transformation(s) that map one figure onto the other figure. Opportunities for students to use manipulatives are sometimes missed, as the materials provide pictures but do not prescribe manipulatives. An example of this includes, but is not limited to: In Algebra 2, Unit 8, Lesson 1, Anchor Problem 2, a picture of a spinner is shown, but no physical or virtual spinner is provided.
Cubes are mentioned in Anchor Problem 3, but there are no suggestions as to how to make simple cubes. In the Target Task, Game Tools listed include a spinner and a card bag, but there are no suggestions to teachers to provide these manipulatives. Criterion 3w - 3z The program includes a visual design that is engaging and references or integrates digital technology, when applicable, with guidance for teachers. The materials reviewed for Fishtank Math AGA integrate technology such as interactive tools, virtual manipulatives/objects, and/or dynamic mathematics software in ways that engage students in series standards. The materials do not include or reference digital technology that provides opportunities for collaboration among teachers and/or students. The materials have a visual design that supports students in engaging thoughtfully with the subject, and the materials do not provide teacher guidance for the use of embedded technology to support and enhance student learning. Indicator 3w Materials integrate technology such as interactive tools, virtual manipulatives/objects, and/or dynamic mathematics software in ways that engage students in the grade-level/series standards, when applicable. The materials reviewed for Fishtank Math AGA integrate technology such as interactive tools, virtual manipulatives/objects, and/or dynamic mathematics software in ways that engage students in the series standards, when applicable. DESMOS is used throughout the materials and it is customizable as teachers can copy and change activities or completely design their own. Examples include: Algebra 1, Unit 2, Lesson 17: Tips for Teachers encourages teachers to use Desmos to help students understand Regressions. Algebra 1, Unit 5, Lesson 12: Tips for Teachers explains: "Desmos activities are featured in these lessons in order to capture the movements inherent in these transformations". Algebra 2, Unit 1, Lesson 12: The Problem Set contains a link to a DESMOS activity where students explore Domain and Range of different functions. Examples of other technology tools include: Algebra 1, Unit 2, Lesson 6: Contains a link in the Problem Set to an applet with which students can explore Standard Deviation. Algebra 2, Unit 8, Lesson 11: Contains a link in the Teacher Tips to a "Sample Size Calculator" that can be used to determine the sample size needed to reflect a particular population with the intended precision. Geometry, Unit 1, Lesson 2: Tips for Teachers contains links to Math Open Reference "Constructions", and an online game called "Euclid: The Game" designed with Geogebra that assists students in understanding geometric constructions. Geometry, Unit 6, Lesson 10: In Tips for Teachers the following suggestion is made: "The following GeoGebra applet may be helpful to demonstrate Cavalieri's principle, which can be done after Anchor Problem #1: GeoGebra, "Cavalieri's Principle," by Anthony C.M OR." Indicator 3x Materials include or reference digital technology that provides opportunities for teachers and/or students to collaborate with each other, when applicable. The materials reviewed for Fishtank Math AGA do not include or reference digital technology that provides opportunities for teachers and/or students to collaborate with each other, when applicable. While there are opportunities within activities in this series for students to collaborate with each other, the materials do not specifically include or reference student-to-student or student-to-teacher collaboration with digital technology. 
Indicator 3y The visual design (whether in print or digital) supports students in engaging thoughtfully with the subject, and is neither distracting nor chaotic. The materials reviewed for Fishtank Math AGA have a visual design (whether in print or digital) that supports students in engaging thoughtfully with the subject, and is neither distracting nor chaotic. There is a consistent design within the units and lessons that supports learning on the digital platform. Each lesson contains the following components: Lesson Objective, Common Core Standards, Foundational Standards, Criteria for Success, Anchor Problems, and Target Tasks. In addition to these components, most lessons contain Tips for Teachers and Problem Set links. While there is a consistent design within the units and lessons that supports learning on the digital platform, this design mainly supports teachers by giving guidance for lesson presentations and providing links to learning resources. There are no separate materials for students. Student versions of the materials have to be created by teachers. While the visual layout is appealing, there are various errors within the materials. Examples include, but are not limited to: Algebra 1, Unit 1, Lesson 8, Anchor Problem 1 has a link to a video of a ball rolling down a ramp so that students can sketch a graph of the distance the ball travels over time; however, the YouTube video says it is unavailable and is a private video. Also, in Anchor Problem 2, the fourth bullet under Guiding Questions is incomplete: "The equation that represents a quadratic function is. How can you verify the points you created on the graph using this equation?" In Algebra 1, Unit 5, Lesson 13, Anchor Problem 2, the first and second questions under Guiding Questions have an equation and then the word "{{ h}}ave" following it. The brackets should not be in either question. In Algebra 2, Unit 7, Lesson 13, Problem Set, two of the three links do not work: the first one gives an "Error 404 - Not Found" when clicked, and the third link says "Classzone has been retired." In Geometry, Unit 5, Lesson 8, Anchor Problem 3, under Notes, there is a link to NCTM's Illuminations. However, when clicked, a "Members-Only Access" page appears. Indicator 3z Materials provide teacher guidance for the use of embedded technology to support and enhance student learning, when applicable. The materials reviewed for Fishtank Math AGA do not provide teacher guidance for the use of embedded technology to support and enhance student learning, when applicable. While teacher implementation guidance is included for Anchor Problems/Tasks, Notes, Problem Set, and Target Task, there is no embedded technology within the materials. Report Published Date: 2022/01/12 Report Edition: 2021 Please note: Reports published beginning in 2021 use version 1.5 of our review tools. Math High School Review Tool The high school review criteria identify the indicators for high-quality instructional materials. The review criteria support a sequential review process that reflects the importance of alignment to the standards and then considers other high-quality attributes of curriculum as recommended by educators.
For math, our review criteria evaluate materials based on: Focus and Coherence; Rigor and Mathematical Practices; and Instructional Supports and Usability. The High School Evidence Guides complement the review criteria by elaborating details for each indicator, including the purpose of the indicator, information on how to collect evidence, guiding questions and discussion prompts, and scoring criteria. The EdReports rubric supports a sequential review process through three gateways. These gateways reflect the importance of alignment to college and career ready standards and consider other attributes of high-quality curriculum, such as usability and design, as recommended by educators. Materials must meet or partially meet expectations for the first set of indicators (Gateway 1) to move to the other gateways. Gateways 1 and 2 focus on questions of alignment to the standards. Are the instructional materials aligned to the standards? Are all standards present and treated with appropriate depth and quality required to support student learning? Gateway 3 focuses on the question of usability. Are the instructional materials user-friendly for students and educators? Materials must be well designed to facilitate student learning and enhance a teacher's ability to differentiate and build knowledge within the classroom. In order to be reviewed and attain a rating for usability (Gateway 3), the instructional materials must first meet expectations for alignment (Gateways 1 and 2). Alignment and usability ratings are assigned based on how materials score on a series of criteria and indicators, with reviewers providing supporting evidence to determine and substantiate each point awarded. For ELA and math, alignment ratings represent the degree to which materials meet expectations, partially meet expectations, or do not meet expectations for alignment to college- and career-ready standards, including that all standards are present and treated with the appropriate depth to support students in learning the skills and knowledge that they need to be ready for college and career. For science, alignment ratings represent the degree to which materials meet expectations, partially meet expectations, or do not meet expectations for alignment to the Next Generation Science Standards, including that all standards are present and treated with the appropriate depth to support students in learning the skills and knowledge that they need to be ready for college and career. For all content areas, usability ratings represent the degree to which materials meet expectations, partially meet expectations, or do not meet expectations for effective practices (as outlined in the evaluation tool) for use and design, teacher planning and learning, assessment, differentiated instruction, and effective technology use.
A fourth order block-hexagonal grid approximation for the solution of Laplace's equation with singularities Adiguzel A Dosiyev1 & Emine Celiker1 Advances in Difference Equations volume 2015, Article number: 59 (2015) Cite this article The hexagonal grid version of the block-grid method, which is a difference-analytical method, has been applied for the solution of Laplace's equation with Dirichlet boundary conditions, in a special type of polygon with corner singularities. It has been justified that in this polygon, when the boundary functions away from the singular corners are from the Hölder classes \(C^{4,\lambda}\), \(0<\lambda<1\), the uniform error is of order \(O(h^{4})\), h is the step size, when the hexagonal grid is applied in the 'nonsingular' part of the domain. Moreover, in each of the finite neighborhoods of the singular corners ('singular' parts), the approximate solution is defined as a quadrature approximation of the integral representation of the harmonic function, and the errors of any order derivatives are estimated. Numerical results are presented in order to demonstrate the theoretical results obtained. It is well known that angular singularities arise in many applied problems when the solution of Laplace's equation is considered, and that finite-difference and finite-element methods may become less accurate when singularities are not taken into account. In the last two decades, for the solution of singularity problems, various combined and highly accurate methods have been proposed (see [1–8], and references therein). Among these methods the block-grid method (BGM), presented in [6–8], on polygons with interior angles \(\alpha_{j}\pi\), \(j=1,2,\ldots,N\), where \(\alpha_{j}\in \{ \frac{1}{2},1,\frac{3}{2},2 \} \) (staircase polygons), requires the finite neighborhood of the singular vertices to be covered by sectors (blocks), and the remaining part of the domain by overlapping rectangles ('nonsingular' part). The finite-difference method with square grids is used for the approximate solution in the 'nonsingular' part, and in the blocks the integral representations of the harmonic function are approximated by the exponentially convergent mid-point quadrature rule (see [9]). Finally these subsystems are connected together by constructing an appropriate order matching operator. BGM is a highly accurate method not only for the approximation of the solution, but also for the approximation of its derivatives around singular points. In this paper, the fourth order BGM is extended and justified for the Dirichlet problem of Laplace's equation on polygons with interior angles \(\alpha_{j}\pi\), where \(\alpha_{j}\in \{ \frac{1}{3},\frac{2}{3} ,1,2 \} \) (non-staircase), by gluing with the matching operator the 7-point approximation on a hexagonal grid in the 'nonsingular' part and the approximation of the integral representations around the singular points (on 'singular' parts). An advantage of using the hexagonal grid version of BGM in this domain is that a highly accurate approximation on the irregular grids is not required as in [8]. Thus the realization of the total system of algebraic equations becomes simpler. This may not be the case for this type of domain when square or rectangular grids are applied. 
Furthermore it is justified that, when the boundary functions on the sides except the adjacent sides of the singular vertices are given in \(C^{4,\lambda}\), \(0<\lambda<1\), the proposed hexagonal grid version of BGM has an accuracy of \(O ( h^{4} ) \), h is the mesh step. The same order of accuracy is obtained for the 9-point scheme on a square grid (see [10, 11]). Finally in the last section of the paper, numerical experiments are demonstrated to support the theoretical results obtained. Boundary value problem on a special type of polygon Let D be an open simply connected polygon, \(\gamma_{j}\), \(j=1,2,\ldots,N\), be its sides, including the ends, enumerated counterclockwise (\(\gamma _{0}\equiv\gamma_{N}\), \(\gamma_{1}\equiv\gamma_{N+1}\)), and let \(\alpha _{j}\pi\), \(\alpha_{j}\in \{ \frac{1}{3},\frac{2}{3},1,2 \} \), be the interior angles formed by the sides \(\gamma_{j-1}\) and \(\gamma_{j}\). Furthermore, let \(\dot{\gamma}_{j}=\gamma_{j-1}\cap\gamma_{j}\) be the jth vertex of D, \(\gamma=\bigcup_{j=1}^{N}\gamma_{j}\) be the boundary of D; s is the arclength measured along the boundary of D in the positive direction, and \(s_{j}\) is the value of s at \(\dot{\gamma }_{j}\). We denote by \(r_{j}\), \(\theta_{j}\) the polar system of coordinates with poles in \(\dot{\gamma}_{j}\) and the angle \(\theta _{j}\) is taken counterclockwise from the side \(\gamma_{j}\). Consider the boundary value problem $$\begin{aligned}& \Delta u = 0 \quad\text{on } D, \end{aligned}$$ $$\begin{aligned}& u = \varphi_{j} \quad\text{on } \gamma_{j}, j=1,2, \ldots,N, \end{aligned}$$ where \(\Delta\equiv\partial^{2}/\partial x^{2}+\partial^{2}/\partial y^{2}\), \(\varphi_{j}\), \(j=1,2,\ldots,N\), are given functions and $$ \varphi_{j}\in C^{4,\lambda} ( \gamma_{j} ) ,\quad0< \lambda<1, 1\leq j\leq N. $$ In addition, at the vertices \(\dot{\gamma}_{j}\), for \(\alpha _{j}=1/3 \), the following conjugation conditions are satisfied: $$ \varphi_{j-1}^{ ( 3p ) } ( s_{j} ) =\varphi _{j}^{ ( 3p ) } ( s_{j} ) ,\quad p=0,1. $$ No compatibility conditions are required at the vertices for \(\alpha _{j}\neq1/3\). Moreover, it is required that when \(\alpha_{j}\neq 1/3\), the boundary functions on \(\gamma_{j-1}\) and \(\gamma_{j}\) are given as algebraic polynomials of the arclength s measured along γ. Let \(E= \{ j:\alpha_{j}\neq1/3, 1\leq j\leq N \} \). We construct two fixed block sectors in the neighborhood of \(\dot{\gamma}_{j}\), \(j\in E\), denoted by \(T_{j}^{i}=T_{j}(r_{ji})\subset D\), \(i=1,2\), where \(0< r_{j2}<r_{j1}<\min\{ s_{j+1}-s_{j},s_{j}-s_{j-1} \} \), \(T_{j}(r)= \{ ( r_{j},\theta _{j} ) :0< r_{j}<r, 0<\theta_{j}<\alpha_{j}\pi \} \). On the closed sector \(\overline{T}_{j}^{1}\), \(j\in E\), we consider the carrier function \(Q_{j}(r_{j},\theta_{j})\) in the form given in [12], which satisfies the boundary conditions (2) on \(\gamma_{j-1}\cap\overline{T}_{j}^{1}\) and \(\gamma_{j}\cap\overline{ T}_{j}^{1}\), \(j\in E\). We set (see [12]) $$ R_{j}(r_{j},\theta_{j},\eta)= \frac{1}{\alpha_{j}}\sum_{k=0}^{1}(-1)^{k}R\biggl( \biggl( \frac{r}{r_{j2}} \biggr) ^{1/\alpha_{j}},\frac{\theta }{\alpha _{j}},(-1)^{k} \frac{\eta}{\alpha_{j}} \biggr) ,\quad j\in E, $$ $$ R(r,\theta,\eta)=\frac{1-r^{2}}{2\pi(1-2r\cos(\theta-\eta)+r^{2})} $$ is the kernel of the Poisson integral for a unit circle. The following lemma acts as a basis for the approximation of the solution around the vertices \(\dot{\gamma}_{j}\) with angle \(\alpha_{j}\pi\), \(\alpha_{j}\neq1/3\). 
The solution u of problem (1)-(4) can be represented on \(\overline{T}_{j}^{2}\backslash V_{j}\), \(j\in E\), in the form $$ u(r_{j},\theta_{j})=Q_{j}(r_{j},\theta_{j})+\int_{0}^{\alpha_{j}\pi } \bigl(u(r_{j2},\eta)-Q_{j}(r_{j2},\eta) \bigr)R_{j}(r_{j},\theta_{j},\eta)\,d\eta, $$ where \(V_{j}\) is the curvilinear part of the boundary of the sector \(T_{j}^{2}\). For the approximation of problem (1), (2) in the domain \(\overline{D}\), we apply the hexagonal grid version of the block-grid method (see [6–8]). The application of this method first of all requires the construction of two more sectors \(T_{j}^{3}\) and \(T_{j}^{4}\), where \(0< r_{j4}<r_{j3}<r_{j2}\). Let \(D_{T}=D\backslash ( \bigcup_{j\in E}\overline{T}_{j}^{4} ) \). The following steps are taken for the realization: We blockade the singular corners \(\dot{\gamma}_{j}\), \(j\in E\), by the double sectors \(T_{j}^{i}(r_{ji})\), \(i=2,3\), with \(T_{k}^{2}\cap T_{l}^{3}=\emptyset\), \(k\neq l\), \(k,l\in E\), and cover the polygon D by overlapping parallelograms denoted by \(D_{l}^{\prime}\), \(l=1,2,\ldots,M\), and sectors \(T_{j}^{3}\), \(j\in E\), such that the distance from \(\overline{D}_{l}^{\prime}\) to \(\dot{\gamma}_{j}\) is not less than \(r_{j4}\) for all \(l=1,2,\ldots,M\). On the parallelograms \(\overline{D}_{l}^{\prime}\), \(l=1,2,\ldots,M\), we use the 7-point scheme for the hexagonal grid with step size \(h_{l}\leq h\), h a parameter, for the approximation of Laplace's equation, and the singular part \(\overline{T}_{j}^{3}\), \(j\in E\), is approximated by using the harmonic function defined in Lemma 2.1. The fourth order matching operator in a hexagonal grid is applied for connecting the subsystems. Schwarz's alternating procedure is used for solving the finite-difference system formed for Laplace's equation on the parallelograms covering \(D_{T}\). Let \(D_{l}^{\prime}\subset D_{T}\), \(l=1,2,\ldots,M\), be open fixed parallelograms and \(D\subset ( \bigcup_{l=1}^{M}D_{l}^{\prime} ) \cup ( \bigcup_{j\in E}T_{j}^{3} ) \subset D\). We denote by \(\eta_{l}\) the boundary of \(D_{l}^{\prime}\), \(l=1,2,\ldots,M\), by \(V_{j}\) the curvilinear part of the boundary of the sector \(T_{j}^{2}\), and let \(t_{j}= ( \bigcup_{l=1}^{M}\eta_{l} ) \cap\overline{T}_{j}^{3}\). For the arrangement of the parallelograms \(D_{l}^{\prime}\), \(l=1,2,\ldots,M\), it is required that any point P lying on \(\eta_{l}\cap D_{T}\), \(1\leq l\leq M\), or lying on \(V_{j}\cap D\), \(j\in E\), falls inside at least one of the parallelograms \(D_{l(P)}^{\prime}\), \(1\leq l(P)\leq M\), depending on P, where the distance from P to \(D_{T}\cap\eta_{l(P)}\) is not less than some constant \(\kappa_{0}\) independent of P. Let \(h\in( 0,\kappa_{0}/4 ] \) be a parameter, and define a hexagonal grid on \(D_{l}^{\prime}\), \(1\leq l\leq M\), with maximal positive step \(h_{l}\leq h\), such that the boundary \(\eta_{l}\) lies entirely on the grid lines. Let \(D_{lh}^{\prime}\) be the set of grid nodes on \(D_{l}^{\prime}\), \(\eta_{l}^{h}\) be the set of nodes on \(\eta_{l}\), and let \(\overline{D}_{lh}^{\prime}=D_{lh}^{\prime}\cup\eta_{l}^{h}\). Furthermore, \(\eta_{l0}^{h}\) denotes the set of nodes on \(\eta_{l}\cap D_{T}\), \(\eta_{l1}^{h}=\eta_{l}^{h}\backslash\eta_{l0}^{h}\), and \(t_{j}^{h}\) denotes the set of nodes on \(t_{j}\).
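As a computational aside, the kernels (6) and (5) and the mid-point quadrature applied to the integral representation above are straightforward to program. The following Python sketch is an illustration only: the function and argument names (r_j2 for \(r_{j2}\), alpha_j for \(\alpha_{j}\), Q for the carrier function \(Q_{j}\) supplied as a callable, u_on_arc for values of the harmonic function on the arc of \(T_{j}^{2}\)) are our own notation rather than part of any library.

from math import pi, cos

def poisson_kernel(r, theta, eta):
    # Kernel R(r, theta, eta) of the Poisson integral for the unit circle, formula (6).
    return (1.0 - r * r) / (2.0 * pi * (1.0 - 2.0 * r * cos(theta - eta) + r * r))

def kernel_R_j(r, theta, eta, r_j2, alpha_j):
    # Kernel R_j of formula (5): a difference of two Poisson kernels after the
    # stretching (r/r_j2)**(1/alpha_j), theta/alpha_j, which opens the sector of
    # angle alpha_j*pi onto the upper half of the unit disk.
    rho = (r / r_j2) ** (1.0 / alpha_j)
    total = 0.0
    for k in (0, 1):
        total += (-1) ** k * poisson_kernel(rho, theta / alpha_j, (-1) ** k * eta / alpha_j)
    return total / alpha_j

def block_value(r, theta, u_on_arc, Q, r_j2, alpha_j):
    # Mid-point quadrature of the integral representation above, with len(u_on_arc)
    # nodes eta_q = (q - 1/2) * beta_j, beta_j = alpha_j * pi / n, on (0, alpha_j * pi).
    n_j = len(u_on_arc)
    beta_j = alpha_j * pi / n_j
    acc = 0.0
    for q in range(1, n_j + 1):
        eta_q = (q - 0.5) * beta_j
        acc += kernel_R_j(r, theta, eta_q, r_j2, alpha_j) * (u_on_arc[q - 1] - Q(r_j2, eta_q))
    return Q(r, theta) + beta_j * acc

For \(0<r<r_{j2}\) the stretched radius in kernel_R_j lies in \((0,1)\), so the denominator in poisson_kernel never vanishes; the sum in block_value is the same expression that appears further below in the quadrature approximation on the blocks.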
We also specify a natural number \(n\geq [ \ln^{1+\chi}h^{-1} ] +1\), where \(\chi>0\) is a fixed number, and the quantities \(n(j)=\max \{ 4, [ \alpha_{j}n ] \} \), \(\beta _{j}=\alpha_{j}\pi/n(j)\), and \(\theta_{j}^{m}=(m-1/2)\beta_{j}\), \(j\in E\), \(1\leq m\leq n(j)\). On the arc \(V_{j}\) we choose the points \(( r_{j2},\theta_{j}^{m} ) \), \(1\leq m\leq n(j)\), and denote the set of these points by \(V_{j}^{n}\). Finally, we have $$ \omega^{h,n}= \Biggl( \bigcup_{l=1}^{M} \eta_{l0}^{h} \Biggr) \cup\biggl( \bigcup _{j\in E}V_{j}^{n} \biggr) ,\qquad \overline{D}_{\ast}^{h,n}=\omega^{h,n}\cup\Biggl( \bigcup_{l=1}^{M}\overline{D}^{\prime}_{lh} \Biggr) . $$ For the approximation of the solution at the points of the set \(\omega ^{h,n}\) we use the fourth order linear matching operator \(S^{4}\) constructed in [8], which can be represented as follows: $$ S^{4}(u_{h},\varphi)=\sum_{k=0}^{16} \lambda_{k}u_{h} ( P_{k} ) , $$ where \(\varphi= \{ \varphi_{j} \} _{j=1}^{N}\), $$ \lambda_{k}\geq0,\quad\sum_{k=0}^{16} \lambda_{k}=1. $$ Consider the system of difference equations $$\begin{aligned}& u_{h} = Su_{h}\quad\text{on }D_{lh}^{\prime}, \end{aligned}$$ $$\begin{aligned}& u_{h} = \varphi\quad\text{on }\eta_{l1}^{h}, \end{aligned}$$ $$\begin{aligned}& u_{h}(r_{j},\theta_{j}) = Q_{j}(r_{j}, \theta_{j}) +\beta_{j}\sum_{k=1}^{n(j)}R_{j} \bigl(r_{j},\theta_{j},\theta_{j}^{k} \bigr) \bigl[ u_{h}\bigl(r_{j2},\theta_{j}^{k} \bigr)-Q_{j}\bigl(r_{j2},\theta_{j}^{k} \bigr) \bigr]\quad\text{on }t_{j}^{h}, \end{aligned}$$ $$\begin{aligned}& u_{h}=S^{4} ( u_{h},\varphi)\quad\text{on } \omega^{h,n}, \end{aligned}$$ where \(1\leq l\leq M\), \(j\in E\), and the operator S is defined as $$\begin{aligned} Su(x,y) =&\frac{1}{6}\biggl( u ( x+h,y ) +u \biggl( x+ \frac{h}{2},y+ \frac{\sqrt{3}}{2}h \biggr) +u \biggl( x- \frac{h}{2},y+\frac{\sqrt{3}}{2} h \biggr) \\ &{} +u ( x-h,y ) +u \biggl( x-\frac{h}{2},y-\frac{\sqrt{3}}{2} h \biggr) +u \biggl( x+\frac{h}{2},y-\frac{\sqrt{3}}{2}h \biggr) \biggr) . \end{aligned}$$ The solution of this system is the approximation of the solution of problem (1), (2) on \(\overline{D}_{\ast}^{h,n}\). There is a natural number \(n_{0}\) such that for all \(n\geq n_{0}\) the system of equations (8)-(11) has a unique solution. Taking into account the corresponding homogeneous system of system (8)-(10), the proof follows by analogy to Lemma 2 in [6]. □ Now consider the sector \(T_{j}^{\ast}=T_{j}(r_{j}^{\ast})\), where \(r_{j}^{\ast}=(r_{j2}+r_{j3})/2\), \(j\in E\). Let \(u_{h}\) be the solution of the system of equations (8)-(11). The function $$ U_{h}(r_{j},\theta_{j})=Q_{j}(r_{j}, \theta_{j})+\beta_{j}\sum_{q=1}^{n(j)}R_{j} \bigl(r_{j},\theta_{j},\theta_{j}^{q} \bigr) \bigl( u_{h}\bigl(r_{j2},\theta_{j}^{q} \bigr)-Q_{j}\bigl(r_{j2},\theta_{j}^{q} \bigr) \bigr) $$ defined on \(T_{j}^{\ast}\) is an approximate solution of problem (1), (2), on the closed block \(\overline{T}_{j}^{3}\), \(j\in E\). Error analysis of the 7-point approximation on the special parallelogram Let \(D^{\prime}\) be one of the parallelograms covering the 'nonsingular' part of the polygon D defined in Section 2. 
The boundaries of the parallelogram \(D^{\prime } \) are denoted by \(\gamma_{j}^{\prime}\), enumerated counterclockwise starting from left, including the ends, \(\dot{\gamma}_{j}^{\prime }=\gamma_{j-1}^{\prime}\cap\gamma_{j}^{\prime}\), \(j=1,2,3,4\), denotes the vertices of \(D^{\prime}\), \(\gamma^{\prime}=\bigcup _{j=1}^{4}\gamma _{j}^{\prime}\), and \(\overline{D}^{\prime}=D^{\prime}\cup\gamma ^{\prime } \). Furthermore \(\gamma\cap\gamma^{\prime}\neq\emptyset\), but the vertices \(\dot{\gamma}_{m}^{\prime}\) with an interior angle of \(\alpha_{m}\pi\neq\pi/3\) are located either inside of D, or on the interior of a side \(\gamma_{m}\) of D, \(1\leq m\leq N\). We define the open parallelogram \(D^{\prime}\) as \(D^{\prime}= \{ ( x,y ) :0< y<\sqrt{3}a/2, d-y/\sqrt{3} <x<e-y/\sqrt{3} \}\). The boundary value problem (1)-(4) is considered on \(D^{\prime}\): $$\begin{aligned}& \Delta v = 0\quad\text{on }D^{\prime}, \end{aligned}$$ $$\begin{aligned}& v = \psi_{j}\quad\text{on }\gamma_{j}^{\prime}, j=1,2,3,4, \end{aligned}$$ where \(\psi_{j}\) are the values of the solution of problem (1)-(4) on \(\gamma^{\prime}\). Let \(h>0\), where \((e-d)/h\geq2\), \(a/\sqrt{3}h \geq2\) are integers. We assign to \(D^{\prime}\) a hexagonal grid of the form \(D_{h}^{\prime}= \{ (x,y)\in D^{\prime}:x=\frac{h}{2}(1-l)+kh, y=l\frac{\sqrt{3}h}{2}, k,l=0,\pm1,\pm2,\pm3,\ldots\} \). Let \(\gamma_{jh}^{\prime}\) be the set of nodes on the interior of \(\gamma _{j}^{\prime}\), and $$\begin{aligned}& \dot{\gamma}_{jh}^{\prime} = \gamma_{j-1}^{\prime} \cap\gamma_{j}^{\prime},\qquad\gamma_{h}^{\prime}= \bigcup_{j=1}^{4}\gamma_{jh}^{\prime}, \quad j=1,2,3,4, \\& \overline{D}_{h}^{\prime} = D_{h}^{\prime}\cup \gamma_{h}^{\prime}. \end{aligned}$$ We consider the system of finite-difference equations: $$\begin{aligned}& v_{h} = Sv_{h}\quad\text{on }D_{h}^{\prime}, \end{aligned}$$ $$\begin{aligned}& v_{h} = \psi_{j}\quad\text{on }\gamma _{jh}^{\prime}, j=1,2,3,4, \end{aligned}$$ $$\begin{aligned} Sv(x,y) =&\frac{1}{6}\biggl( v ( x+h,y ) +v \biggl( x+ \frac{h}{2},y+ \frac{\sqrt{3}}{2}h \biggr) +v \biggl( x- \frac{h}{2},y+\frac{\sqrt{3}}{2} h \biggr) \\ &{} +v ( x-h,y ) +v \biggl( x-\frac{h}{2},y-\frac{\sqrt{3}}{2} h \biggr) +v \biggl( x+\frac{h}{2},y-\frac{\sqrt{3}}{2}h \biggr) \biggr) . \end{aligned}$$ Since (17) has nonnegative coefficients and their sum is equal to 1, the solution of system (15), (16) exists and is unique (see [13]). Everywhere below we will denote constants which are independent of h and of the cofactors on their right by \(c,c_{0},c_{1},\ldots\) , generally using the same notation for different constants for simplicity. $$ \psi_{j}(s)\in C^{4,\lambda}\bigl(\gamma_{j}^{\prime} \bigr),\quad0< \lambda<1 $$ $$ \psi_{j-1}^{ ( 3p ) }(s_{j})=\psi_{j}^{ ( 3p ) }(s_{j}), \quad p=0,1, $$ be satisfied on the vertices whose interior angles are \(\alpha_{j}\pi =\pi/3\), where \(j=1,2,3,4\). Then the solution of problem (13), (14) $$ v\in C^{4,\lambda}\bigl(\overline{D}^{\prime}\bigr). $$ The closed parallelogram \(\overline{D}^{\prime}\) lies inside the polygon D defined in Section 2, and the vertices \(\dot{\gamma} _{m}^{\prime}\) with an interior angle of \(\alpha_{m}\pi\neq\pi/3\) are located either inside of D or on the interior of a side \(\gamma_{m}\) of D, \(1\leq m\leq N\). Since the boundary functions (14), by the definition of the boundary functions \(\varphi_{j}\) in problem (1), (2) satisfy conditions (3), (4), from the results in [14], (20) follows. 
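The averaging operator S of (17) and the resulting difference problem (15), (16) are easy to experiment with. The sketch below (same Python conventions as the earlier fragment) first codes S as the arithmetic mean over the six hexagonal neighbours, and then, as a rough numerical check of the fourth order accuracy proved in this section (estimate (22) below), solves u = Su by Gauss-Seidel sweeps on a lattice-aligned parallelogram with Dirichlet data taken from a smooth harmonic function. The indexing convention, the test function, and the iterative solver are our own choices, so this is a simplified verification sketch rather than a transcription of the scheme in the text.

from math import sqrt, exp, cos

def hex_average(v, x, y, h):
    # The operator S of (17): mean of v over the six neighbours of (x, y) on the
    # hexagonal grid with step h.
    s = sqrt(3.0) * h / 2.0
    pts = [(x + h, y), (x + h / 2, y + s), (x - h / 2, y + s),
           (x - h, y), (x - h / 2, y - s), (x + h / 2, y - s)]
    return sum(v(px, py) for px, py in pts) / 6.0

def solve_parallelogram(N, M, h, boundary, tol=1e-12, max_sweeps=100000):
    # Nodes are indexed by (k, l), 0 <= k <= N, 0 <= l <= M, with coordinates
    # x = k*h - l*h/2, y = l*sqrt(3)*h/2, so the slanted sides follow lattice lines
    # and the interior angles of the parallelogram are pi/3 and 2*pi/3.
    def xy(k, l):
        return k * h - l * h / 2.0, l * sqrt(3.0) * h / 2.0
    u = {}
    for l in range(M + 1):
        for k in range(N + 1):
            on_boundary = k in (0, N) or l in (0, M)
            u[(k, l)] = boundary(*xy(k, l)) if on_boundary else 0.0
    nbrs = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]   # index offsets of the six neighbours
    for _ in range(max_sweeps):
        change = 0.0
        for l in range(1, M):
            for k in range(1, N):
                new = sum(u[(k + dk, l + dl)] for dk, dl in nbrs) / 6.0   # enforce u = Su at interior nodes
                change = max(change, abs(new - u[(k, l)]))
                u[(k, l)] = new
        if change < tol:
            break
    return u, xy

v_exact = lambda x, y: exp(x) * cos(y)        # a harmonic test function, smooth up to the boundary
for N in (8, 16):
    u, xy = solve_parallelogram(N, N, 1.0 / N, v_exact)
    err = max(abs(u[(k, l)] - v_exact(*xy(k, l))) for l in range(N + 1) for k in range(N + 1))
    print(N, err)

For the harmonic cubic \(v=x^{3}-3xy^{2}\) the six-point stencil reproduces v exactly, and for general functions with bounded fourth order derivatives Taylor expansion gives \(Sv-v=O(h^{4})\); accordingly, halving h in the experiment above should reduce the printed maximum error by roughly a factor of 16, in line with estimate (22).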
□ Let \(D_{h,k}^{\prime}\) be the set of nodes whose distance from the point \(P\in D_{h}^{\prime}\) to \(\gamma_{h}^{\prime}\) is \(\frac{\sqrt{3}}{2} kh\), \(1\leq k\leq a^{\ast}\), where \(a^{\ast}= [ \frac{2d_{t}}{\sqrt{3} h} ] \), \([ c ] \) denotes the integer part of c, and \(d_{t}\) is the minimum of the half-lengths of the sides of the parallelogram. Let \(w_{h}^{k}\neq\mathrm{const.}\) be the solution of the system of equations $$\begin{aligned}& w_{h}^{k} = Sw_{h}^{k}+f_{h}^{k} \quad\textit{on }D_{h,k}^{\prime}, \\& w_{h}^{k} = Sw_{h}^{k}\quad \textit{on }D_{h}^{\prime}\backslash D_{h,k}^{\prime }, \\& w_{h}^{k} = 0\quad\textit{on }\gamma_{h}^{\prime}, \end{aligned}$$ and \(z_{h}^{k}\neq\mathrm{const.}\) be the solution of the system of equations $$\begin{aligned}& z_{h}^{k} = Sz_{h}^{k}+g_{h}^{k} \quad\textit{on }D_{h,k}^{\prime}, \\& z_{h}^{k} = Sz_{h}^{k}\quad \textit{on }D_{h}^{\prime}\backslash D_{h,k}^{\prime }, \\& z_{h}^{k} = 0\quad\textit{on }\gamma_{h}^{\prime}, \end{aligned}$$ where \(1\leq k\leq a^{\ast}\). If \(\vert f_{h}^{k}\vert \leq g_{h}^{k}\), then $$ \bigl\vert w_{h}^{k}\bigr\vert \leq z_{h}^{k}, \quad1\leq k\leq a^{\ast}. $$ The proof follows analogously to the proof of the comparison theorem given in [13]. □ Let v be the trace of the solution of problem (13), (14) on \(\overline{D}_{h}^{\prime}\), and \(v_{h}\) be the solution of system (15), (16). If $$ \psi_{j}(s)\in C^{4,\lambda}\bigl(\gamma_{j}^{\prime} \bigr),\quad0< \lambda<1, j=1,2,3,4 $$ on the vertices with an interior angle of \(\alpha_{j}\pi=\pi/3\), \(j=1,2,3,4\), then $$ \max_{\overline{D}_{h}^{\prime}} \vert v-v_{h}\vert \leq ch^{4}. $$ Let \(\epsilon_{h}=v_{h}-v\) on \(\overline{D}_{h}^{\prime}\). Clearly $$\begin{aligned}& \epsilon_{h} = S\epsilon_{h}+(Sv-v)\quad\text{on }D_{h}^{\prime}, \end{aligned}$$ $$\begin{aligned}& \epsilon_{h} = 0\quad\text{on }\gamma_{h}^{\prime}. \end{aligned}$$ Let \(D_{1h}^{\prime}\) contain the set of nodes whose distance from the boundary \(\gamma^{\prime}\) is \(\frac{\sqrt{3}h}{2}\), and hence for \((x,y)\in D_{1h}^{\prime}\), \((x+sH,y+sK)\in\overline{D}^{\prime}\) for \(0\leq s\leq1\), \(H=\pm\frac{h}{2}, \pm h\), \(K=0, \pm\frac{\sqrt{3}h}{2} \), \(H^{2}+K^{2}>0\), and \(D_{2h}^{\prime}=D_{h}^{\prime}\backslash D_{1h}^{\prime}\). Moreover, let $$ \epsilon_{h}=\epsilon_{h}^{1}+\epsilon _{h}^{2}. $$ We rewrite problem (23), (24) as $$\begin{aligned}& \epsilon_{h}^{1} = S\epsilon_{h}^{1}+(Sv-v) \quad\text{on }D_{1h}^{\prime}, \\& \epsilon_{h}^{1} = S\epsilon_{h}^{1} \quad\text{on }D_{2h}^{\prime}, \\& \epsilon_{h}^{1} = 0\quad\text{on }\gamma _{h}^{\prime} \end{aligned}$$ $$\begin{aligned}& \epsilon_{h}^{2} = S\epsilon_{h}^{2} \quad\text{on }D_{1h}^{\prime}, \\& \epsilon_{h}^{2} = S\epsilon_{h}^{2}+(Sv-v) \quad\text{on }D_{2h}^{\prime}, \\& \epsilon_{h}^{2} = 0\quad\text{on }\gamma _{h}^{\prime}. \end{aligned}$$ In order to obtain an estimation for \(Sv-v\) on \(D_{1h}^{\prime}\), we use Taylor's formula. On the basis of Lemma 3.1, we have $$ \vert Sv-v\vert \leq c_{3}h^{4}\quad\text{on }D_{1h}^{\prime}. $$ Since at least two values of \(\epsilon_{h}^{1}\) in \(S\epsilon _{h}^{1}\) are lying on the boundary \(\gamma_{h}^{\prime}\), on which \(\epsilon _{h}^{1}=0\), from (26), (28), and the maximum principle (see [13]), we obtain $$ \max_{\overline{D}_{h}^{\prime}}\bigl\vert \epsilon_{h}^{1} \bigr\vert \leq\frac{2}{3}\max_{\overline{D}_{h}^{\prime}}\bigl\vert \epsilon_{h}^{1}\bigr\vert +c_{3}h^{4}. 
$$ $$ \max_{\overline{D}_{h}^{\prime}}\bigl\vert \epsilon_{h}^{1} \bigr\vert \leq c_{4}h^{4}, $$ where \(c_{4}=3c_{3}\). Next, we consider the estimation of \(\epsilon_{h}^{2}\). Let \(D_{2h,k}^{\prime}\) be the set of nodes whose distance from the point \(P\in D_{2h}^{\prime}\) to \(\gamma_{h}^{\prime}\) is \(\frac{\sqrt{3}}{2}kh\), \(2\leq k\leq a^{\ast}\), where \(a^{\ast}= [ \frac{2d_{t}}{ \sqrt{3}h} ] \), \([ c ] \) denotes the integer part of c, and \(d_{t}\) is the minimum of the half-lengths of the sides of the parallelogram. Furthermore, \(D_{2h,1}^{\prime}\equiv D_{1h}^{\prime}\) and \(D_{2h,0}^{\prime}\equiv\gamma_{h}^{\prime}\). Since the vertices with \(\alpha_{j}=\frac{1}{3}\) of the parallelogram \(D^{\prime}\) are never used as a node of the hexagonal grid for the estimation of \(\vert Sv-v\vert \) on \(D_{2h,k}^{\prime}\), \(2\leq k\leq a^{\ast}\), we use the inequality $$ \max_{p+q=6}\biggl\vert \frac{\partial^{6}v(x,y)}{\partial x^{p}\, \partial y^{q}}\biggr\vert \leq c_{0}\rho^{\lambda-2}\quad\text{on }\overline{D}^{\prime } \backslash\gamma_{m}^{\prime}, $$ for the sixth order derivatives, where ρ is the distance from \((x,y)\in D^{\prime}\) to \(\gamma_{m}^{\prime}\). Hence, we obtain $$ \vert Sv-v\vert \leq c_{5}h^{6}/(kh)^{2-\lambda} \quad\text{on } D_{2h,k}^{\prime}, 2\leq k\leq a^{\ast}. $$ Consider a majorant function of the form $$ Y_{k}=\left \{ \begin{array}{l@{\quad}l} 3m&\text{if }P\in D_{2h,m}^{\prime}, 0\leq m\leq k, \\ 3k&\text{if }P\in D_{2h,m}^{\prime}, m>k. \end{array} \right . $$ Hence \(Y_{k}\) is a solution of the finite-difference problem $$\begin{aligned}& Y_{k} = SY_{k}+\mu_{k}\quad\text{on }D_{2h,k}^{\prime}, \\& Y_{k} = SY_{k}\quad\text{on }D_{h}^{\prime} \backslash D_{2h,k}^{\prime}, \\& Y_{k} = 0\quad\text{on }\gamma_{h}^{\prime}, \end{aligned}$$ where \(1\leq\mu_{k}\leq3\), \(1\leq k\leq a^{\ast}\). We represent the solution of system (27) as the sum of the solution of the following subsystems: $$\begin{aligned}& \epsilon_{h,k}^{2} = S\epsilon_{h,k}^{2}+ \mu_{k}^{\prime}\quad\text{on } D_{2h,k}^{\prime}, \\& \epsilon_{h,k}^{2} = S\epsilon_{h,k}^{2} \quad\text{on }D_{h}^{\prime }\backslash D_{2h,k}^{\prime}, \\& \epsilon_{h,k}^{2} = 0\quad\text{on }\gamma _{h}^{\prime}, \end{aligned}$$ where \(1\leq k\leq a^{\ast}\), \(\mu_{k}^{\prime}=0\) when \(k=1\) and \(\vert \mu_{k}^{\prime} \vert \leq c_{6}\frac{h^{4+\lambda}}{k^{2-\lambda}}\) when \(k=2,3,\ldots,a^{\ast}\). By (32), (33), and Lemma 3.2, it follows that $$ \bigl\vert \epsilon_{h,k}^{2}\bigr\vert \leq c_{6}\frac{h^{4+\lambda}}{k^{2-\lambda}}Y_{k}. $$ Hence, by taking (33) and (34) into consideration, we have $$\begin{aligned} \max_{D_{h}^{\prime}}\bigl\vert \epsilon_{h}^{2} \bigr\vert \leq&\sum_{k=1}^{a^{\ast}}\epsilon _{h,k}^{2}\leq\sum_{k=1}^{a^{\ast}}c_{6} \frac{h^{4+\lambda}}{k^{2-\lambda}}Y_{k} \\ \leq&3c_{6}h^{4+\lambda}\sum_{k=1}^{a^{\ast}} \frac{1}{k^{1-\lambda}} \leq c_{7}h^{4}. \end{aligned}$$ On the basis of (25), (29), and (35), we have estimation (22). □ Error analysis of the hexagonal block-grid equations $$ \epsilon_{h}=u_{h}-u, $$ where \(u_{h}\) is the solution of system (8)-(11), and u is the trace of the solution of problem (1), (2) on \(\overline{D} _{\ast}^{h,n}\). 
It is easy to show that (36) satisfies the system of equations $$ \begin{aligned} &\epsilon_{h} = S\epsilon_{h}+r_{h}^{1} \quad\text{on }D_{lh}^{\prime}, \\ &\epsilon_{h} = 0\quad\text{on }\eta_{l1}\cap\gamma _{m}, \\ &\epsilon_{h}(r_{j},\theta_{j}) = \beta _{j}\sum_{k=1}^{n(j)}R_{j} \bigl(r_{j},\theta_{j},\theta_{j}^{k} \bigr)\epsilon_{h}\bigl(r_{j2},\theta_{j}^{k} \bigr)+r_{jh}^{2}\quad\text{on }t_{j}^{h}, \\ &\epsilon_{h} = S^{4}(\epsilon_{h},0)+r_{h}^{3} \quad\text{on }\omega^{h,n}, \end{aligned} $$ where \(1\leq m\leq N\), \(1\leq l\leq M\), \(j\in E\), and $$\begin{aligned}& r_{h}^{1} = Su-u\quad\text{on }\bigcup _{l=1}^{M}D_{lh}^{\prime}, \end{aligned}$$ $$\begin{aligned}& r_{jh}^{2} = \beta_{j}\sum _{k=1}^{n(j)}R_{j}\bigl(r_{j}, \theta_{j},\theta_{j}^{k}\bigr) \bigl[ u \bigl(r_{j2},\theta_{j}^{k}\bigr)-Q_{j} \bigl(r_{j2},\theta_{j}^{k}\bigr)\bigr] \\& \hphantom{r_{jh}^{2} =}{}-\bigl(u(r_{j},\theta_{j})-Q_{j}(r_{j}, \theta_{j})\bigr)\quad\text{on }\bigcup_{j\in E}t_{j}^{h}, \end{aligned}$$ $$\begin{aligned}& r_{h}^{3} = S^{4}(u,0)-u\quad\text{on }\omega ^{h,n}. \end{aligned}$$ Let the boundary functions \(\varphi_{j}\), \(j=1,2,3,4\), in problem (1), (2) satisfy conditions (3), (4). Then $$ \max_{\omega^{h,n}}\bigl\vert r_{h}^{3}\bigr\vert \leq c_{5}h^{4}, $$ where \(\varphi=\bigcup_{j=1}^{4}\varphi_{j}\). The function \(S^{4}(u,\varphi)\) is defined as (3.14) in [8]. Keeping in mind the positioning of the points in \(\omega^{h,n}\), conditions (3), (4), and estimation (4.64) in [14], it follows that the fourth order partial derivatives of the exact solution of problem (1), (2) are bounded on \(D_{T}\). Then estimation (41) follows from the construction of the operator \(S^{4}\). □ There exists a natural number \(n_{0}\) such that for all \(n\geq\max \{ n_{0}, [ \ln^{1+\chi}h^{-1} ] +1 \} \), \(\chi>0\) being a fixed number, $$ \max_{j\in E}\bigl\vert r_{jh}^{2}\bigr\vert \leq c_{6}h^{4}. $$ The proof follows by analogy to the proof of Lemma 6.2 in [7]. □ Assume that conditions (3), (4) hold. Then there exists a natural number \(n_{0}\) such that for all \(n\geq\max \{ n_{0}, [ \ln^{1+\chi}h^{-1} ] +1 \} \), \(\chi>0\) being a fixed number, $$ \max_{\overline{D}_{\ast}^{h,n}}\vert u_{h}-u\vert \leq ch^{4}. $$ Consider an arbitrary parallelogram \(D_{l^{\ast}}^{\prime}\) and let \(t_{l^{\ast}j}^{h}=\overline{D}^{\prime}_{l^{\ast}}\cap t_{j}^{h}\). Assume that \(t_{l^{\ast}j}^{h}\neq\emptyset\), \(z_{h}\) is the solution of system (37), and \(r_{h}^{1}\), \(r_{jh}^{2}\), \(r_{h}^{3}\) are defined in the same way as (38)-(40) on \(D_{l^{\ast}}^{\prime}\), but are zero on \(\overline{D}_{\ast}^{h,n}\backslash D_{l^{\ast}}^{\prime}\). Hence, $$ V=\max_{\overline{D}_{\ast}^{h,n}}\vert z_{h}\vert =\max _{\overline{D}_{l^{\ast}}^{\prime}} \vert z_{h}\vert . 
$$ We represent the function \(z_{h}\) as $$ z_{h}=\sum_{k=1}^{4}z_{h}^{k}, $$ $$\begin{aligned}& \begin{aligned} &z_{h}^{2} = Sz_{h}^{2}+r_{h}^{1} \quad\text{on }D_{l^{\ast}}^{\prime}, \\ &z_{h}^{2} = 0\quad\text{on }\eta_{l^{\ast}1}^{h} \cap\gamma_{m}, \\ &z_{h}^{2} = 0\quad\text{on }t_{l^{\ast}j}^{h}, \\ &z_{h}^{2} = 0\quad\text{on }\omega^{h,n}\cap \overline{D}^{\prime}_{l^{\ast}}, \end{aligned} \end{aligned}$$ $$\begin{aligned}& \begin{aligned} &z_{h}^{3} = Sz_{h}^{3} \quad\text{on }D_{l^{\ast}}^{\prime}, \\ &z_{h}^{3} = 0\quad\text{on }\eta_{l^{\ast}1}^{h} \cap\gamma_{m}, \\ &z_{h}^{3} = r_{jh}^{2}\quad\text{on }t_{l^{\ast}j}^{h}, \\ &z_{h}^{3} = 0\quad\text{on }\omega^{h,n}\cap \overline{D}^{\prime}_{l^{\ast}}, \end{aligned} \end{aligned}$$ $$\begin{aligned}& \begin{aligned} &z_{h}^{4} = Sz_{h}^{4} \quad\text{on }D_{l^{\ast}}^{\prime}, \\ &z_{h}^{4} = 0\quad\text{on }\eta_{l^{\ast}1}^{h} \cap\gamma_{m}, \\ &z_{h}^{4} = 0\quad\text{on }t_{l^{\ast}j}^{h}, \\ &z_{h}^{4} = r_{h}^{3}\quad\text{on } \omega^{h,n}\cap\overline{D}^{\prime} _{l^{\ast}} \end{aligned} \end{aligned}$$ $$ z_{h}^{k}=0,\quad k=2,3,4\text{ on }\overline{D}_{\ast}^{h,n} \backslash D_{l^{\ast}}^{\prime}. $$ Hence by (44)-(48), \(z_{h}^{1}\) satisfies the system of equations $$ \begin{aligned} &z_{h}^{1} = Sz_{h}^{1} \quad\text{on }D_{l}^{\prime}, \\ &z_{h}^{1} = 0\quad\text{on }\eta_{l1}^{h} \cap\gamma_{m}, \\ &z_{h}^{1} = \beta_{j}\sum _{k=1}^{n(j)}R_{j}\bigl(r_{j}, \theta_{j},\theta_{j}^{k}\bigr)\sum _{k=1}^{4}z_{h}^{k} \bigl(r_{j2},\theta_{j}^{k}\bigr)\quad\text{on }t_{lj}^{h}, \\ &z_{h}^{1} = S^{4} \Biggl( \sum _{k=1}^{4}z_{h}^{k} \Biggr) \quad\text{on }\omega^{h,n}, \end{aligned} $$ where \(1\leq m\leq N\), \(1\leq l\leq M\), \(j\in E\), and the functions \(z_{h}^{k}\), \(k=2,3,4\), are assumed to be known. As the solution of system (45), \(z_{h}^{2}\), is the error function of the finite-difference solution with step size \(h_{l^{\ast}}\leq h\) of system (15), (16), by (48), the maximum principle and Lemma 3.3, we have $$ V_{2}=\max_{\overline{D}_{\ast}^{h,n}}\bigl\vert z_{h}^{2} \bigr\vert \leq c_{9}h^{4}. $$ Also, for the solutions of systems (46) and (47), as the operator S has coefficients which are nonnegative and their sum does not exceed 1, by the maximum principle, (48), Lemma 4.1, and Lemma 4.2, we obtain the inequalities $$\begin{aligned}& V_{3} = \max_{\overline{D}_{\ast}^{h,n}}\bigl\vert z_{h}^{3} \bigr\vert \leq c_{10}h^{4}, \end{aligned}$$ $$\begin{aligned}& V_{4} = \max_{\overline{D}_{\ast}^{h,n}}\bigl\vert z_{h}^{4} \bigr\vert \leq c_{11}h^{4}. \end{aligned}$$ Now we consider the solution of \(v_{h}^{1}\). Taking into consideration (49), the gluing condition of \(D_{l}^{\prime}\), \(l=1,2,\ldots,M\), and \(T_{j}^{2}\), \(j\in E\), for all \(n\geq\max \{ n_{0}, [ \ln^{1+\chi}h^{-1} ] +1 \} \) we have the inequality $$ V_{1}=\max_{\overline{D}_{\ast}^{h,n}}\bigl\vert z_{h}^{1} \bigr\vert \leq\lambda^{\ast}V+\sum_{k=2}^{4} \max_{\overline{D}_{\ast}^{h,n}}\bigl\vert z_{h}^{k}\bigr\vert , $$ where \(0<\lambda^{\ast}<1\). By (43), (44), (50), (51), (52), and (53), we have $$ V=\max_{\overline{D}_{\ast}^{h,n}}\vert z_{h}\vert \leq ch^{4}. $$ Hence (42) follows. □ For the approximation of (12), we consider the following theorem. Let \(u_{h}\) be the solution of the system of equations (8)-(11) and let an approximate solution of problem (1), (2) be found on the blocks \(\overline{T}_{j}^{3}\), \(j\in E\), by (12). 
There is a natural number \(n_{0}\) such that for all \(n\geq\max \{ n_{0}, [ \ln^{1+\chi}h^{-1} ] \} \), \(\chi>0\) being a fixed number, the following estimations hold: For \(\alpha_{j}=1\), \(p\geq1\), $$ \biggl\vert \frac{\partial^{p}}{\partial x^{p-q}\, \partial y^{q}} \bigl( U_{h}(r_{j},\theta _{j})-u(r_{j},\theta_{j}) \bigr) \biggr\vert \leq c_{p}h^{4}\quad\textit{on }\overline{T}_{j}^{3}. $$ For \(\alpha_{j}=\frac{2}{3},1,2\), \(0\leq p\leq\frac{1}{\alpha_{j}}\), $$ \biggl\vert \frac{\partial^{p}}{\partial x^{p-q}\, \partial y^{q}} \bigl( U_{h}(r_{j},\theta _{j})-u(r_{j},\theta_{j}) \bigr) \biggr\vert \leq c_{p}h^{4}r_{j}^{1/\alpha_{j}-p}\quad\textit{on }\overline{T}_{j}^{3}. $$ For \(\alpha_{j}=\frac{2}{3},2\), \(p>\frac{1}{\alpha_{j}}\), $$ \biggl\vert \frac{\partial^{p}}{\partial x^{p-q}\, \partial y^{q}} \bigl( U_{h}(r_{j},\theta _{j})-u(r_{j},\theta_{j}) \bigr) \biggr\vert \leq c_{p}h^{4}/r_{j}^{p-1/\alpha_{j}}\quad \textit{on }\overline{T}_{j}^{3}\backslash\dot{\gamma }_{j}, $$ where \(j\in E\), \(0\leq q\leq p\), and \(c_{p}\), \(p=0,1,\ldots\) , are constants independent of \(r_{j}\), \(\theta_{j}\), and h. By taking estimation (42) into account, the proof follows by analogy to the proof of Theorem 6.4 in [7]. □ Numerical results Two examples have been solved in order to test the effectiveness of the proposed method. In Example 5.1, it is assumed that there is a slit in the domain D, thus causing a strong singularity at the origin. The vertex \(\dot{\gamma}_{1}\) containing the singularity has an interior angle of \(\alpha_{1}\pi=2\pi\). The exact solution of this problem is assumed to be known. In Example 5.2, we consider a problem with two singularities. The vertices which contain the singularities have interior angles of \(\alpha_{j}\pi=\frac{2}{3}\pi\), \(j=2,4\). In this example, the exact solution is not known. After separating the 'singular' part in Example 5.1, the remaining part of the domain is covered by 5 overlapping parallelograms, whereas in Example 5.2, the 'nonsingular' part of the domain is covered by only two parallelograms. For the solution of the block-grid equations, Schwarz's alternating method is used. In each Schwarz iteration the system of equations on the parallelograms are solved by the block Gauss-Seidel method. The carrier function is constructed for each example, taking into consideration the boundary conditions given on the adjacent sides of the vertices in the 'singular' parts. Furthermore, the derivatives are approximated in the 'singular' parts for both of the examples. The results are provided in Tables 1-5, and Figures 1-5. Domain of the slit problem. Table 1 Results obtained for the slit problem Example 5.1 Consider the open parallelogram \(D= \{ ( x,y ) :-\frac{\sqrt{3}}{2}< y<\frac{\sqrt{3}}{2},-1- \frac{y}{\sqrt{3}}<x<1-\frac{y}{\sqrt{3}} \} \). We assume that there is a slit along the straight line \(y=0\), \(0\leq x\leq1\). Let \(\gamma_{j}\), \(j=1,2,\ldots,7\), be the sides of D, including the ends, enumerated counterclockwise starting from the upper side of the slit (\(\gamma _{0}\equiv\gamma_{7} \)), \(\gamma=\bigcup_{j=1}^{7}\gamma_{j}\), and \(\dot{\gamma}_{j}=\gamma_{j}\cap\gamma_{j-1}\) be the vertices of D. Let \(( r,\theta ) \equiv ( r_{1},\theta_{1} ) \) be a polar system of coordinates with pole in \(\dot{\gamma}_{1}\), where the angle θ is taken counterclockwise from the side \(\gamma_{1}\). 
We consider the boundary value problem $$ \begin{aligned} &\Delta u = 0\quad\text{on }D, \\ &u = \varphi_{j}\quad\text{on }\gamma_{j}, j=1,2, \ldots,7, \end{aligned} $$ where \(\varphi_{j}\) is the value of the function \(v ( r,\theta ) =0.5r^{1/2}\sin\frac{\theta}{2}+0.8r^{3/2}\sin \frac{3\theta}{2}+2r^{2}\cos2\theta+2.5r^{3}\cos3\theta+2\theta\) on \(\gamma_{j}\). As \(\varphi_{0}=2x^{2}+2.5x^{3}+4\pi\) and \(\varphi_{1}=2x^{2}+2.5x^{3}\), we obtain the carrier function in the form $$\begin{aligned} Q_{1}(r,\theta) =&2\theta+2 \bigl( \xi_{2}(r,\theta)+ \xi_{2}(r,2\pi-\theta) \bigr) \\ &{}+2.5 \bigl( \xi_{3}(r,\theta)+\xi_{3}(r,2\pi- \theta) \bigr) , \end{aligned}$$ where \(\xi_{2}(r,\theta)=r^{2} ( ( 2\pi-\theta ) \cos 2(2\pi-\theta)+\ln r\sin2(2\pi-\theta) ) /2\pi\) and \(\xi_{3}(r,\theta)=r^{3} ( ( 2\pi-\theta ) \cos3(2\pi -\theta)+\ln r\sin3(2\pi-\theta) ) /2\pi\). The following notation is used in the table of results. Let \(D_{l}^{\prime }\), \(l=1,2,\ldots,5\), be the open overlapping parallelograms, \(D_{NS}=\bigcup_{l=1}^{5}\overline{D}_{l}^{\prime}\) be the 'nonsingular' part, and \(D_{S}=\overline{D}\backslash D_{NS} \) denote the 'singular' part of D (see Figure 1). In Table 1, the values are obtained in the maximum norm of the difference between the exact and the approximate solutions, for the values of \(h=2^{-k}\), \(k=4,5,6,7\), and n, which is the number of quadrature nodes on \(V_{j}\). The order of convergence, \(R_{D}^{m}= \frac{\Vert v-v_{2^{-m}}\Vert _{D}}{\Vert v-v_{2^{-(m+1)}}\Vert _{D}}\) has also been included. We also present the error obtained between the derivatives of the exact and the block-grid solutions \(\epsilon _{h}^{ ( 1 ) }=r^{1/2} ( \frac{\partial u}{\partial x}-\frac{\partial U_{h}}{\partial x} ) \) and \(\epsilon_{h}^{ ( 2 ) }=r^{3/2} ( \frac{\partial^{2}u}{\partial x^{2}}-\frac{\partial ^{2}U_{h}}{\partial x^{2}} ) \), in the maximum norm, in Tables 2 and 3, respectively. Figures 2 and 3 illustrate the shapes of the derivatives \(\frac{\partial u}{\partial x}\) and \(\frac{\partial^{2}u}{\partial x^{2}}\) of the approximate (a) and the exact (b) solutions. These figures also demonstrate the highly accurate approximation of the derivatives. Approximate solution (a) and exact solution (b) of \(\pmb{\frac{\partial u}{\partial x}}\) , respectively, using polar coordinates. Approximate solution (a) and exact solution (b) of \(\pmb{\frac{\partial^{2}u}{\partial x^{2}}}\) , respectively, using polar coordinates. Table 2 Results obtained for first derivative of the slit problem Table 3 Results obtained for second derivative of the slit problem Let P be the open parallelogram \(P= \{ (x,y):0< y<\frac{\sqrt{3}}{2},-\frac{y}{\sqrt{3}}<x<1-\frac{y}{\sqrt{3}} \} \), let \(\gamma_{j}\), \(j=1,2,3,4\), be the sides of P, including the ends, enumerated counterclockwise starting from left (\(\gamma_{0}\equiv\gamma_{4}\), \(\gamma_{1}\equiv\gamma_{5}\)), \(\gamma=\bigcup_{j=1}^{4}\gamma_{j}\), and \(\dot{\gamma_{j}}=\gamma _{j}\cap\gamma_{j-1}\) be the vertices of P. We look at a problem with two corner singularities at the vertices \(\dot{\gamma}_{2}\) and \(\dot{\gamma}_{4}\), where \(\alpha_{j}\pi=\frac{2}{3}\pi\), \(j=2,4\). The two 'singular' corners of P are covered by sectors and these areas are denoted by \(P_{S}^{i}\), \(i=1,2\), and two overlapping parallelograms cover the 'nonsingular' part of the domain, denoted by \(P_{NS}^{i}\), \(i=1,2\) (see Figure 4). Domain of the problem with two singularities. 
$$\begin{aligned}& \Delta u = 0\quad\text{on }P, \\& u = 0\quad\text{on }\gamma_{j}, j=1,4, \\& u = 1\quad\text{on }\gamma_{j}, j=2,3. \end{aligned}$$ The carrier functions constructed for each singularity are \(Q_{2} ( r_{2},\theta_{2} ) =1-\frac{3\theta_{2}}{2\pi}\) and \(Q_{4}(r_{4},\theta_{4})=\frac{3\theta_{4}}{2\pi}\). We have checked the accuracy of the obtained approximate results \(u_{h}\) by looking at the order of convergence using the formula \(\widetilde{R}_{P}^{m}=\frac{\Vert u_{2^{-m}}-u_{2^{-m+1}}\Vert _{P}}{\Vert u_{2^{-m-1}}-u_{2^{-m}}\Vert _{P}}\), which corresponds to 24, for the pairs \((h,n)=(2^{-4},80)\), \((2^{-5},100)\), \((2^{-6},100)\), \((2^{-7},90)\). The results are presented in Table 4. Moreover, \(\frac{\partial ^{2}u}{\partial x^{2}}\) has been approximated in the 'singular' part, where u is the unknown exact solution of problem (55). The results are presented in Table 5 and illustrated further in Figure 5. \(\pmb{\frac{\partial^{2}U_{h}}{\partial x^{2}}}\) in 'singular' part \(\pmb{P_{S}^{1}}\) shown by (a) and in \(\pmb{P_{S}^{2}}\) shown by (b). Table 4 Order of convergence of Example 5.2 Table 5 Order of convergence of derivatives in 'singular' parts of Example 5.2 A fourth order square and hexagonal grid version of the block-grid method, for the solution of the boundary value problem of Laplace's equation on staircase polygons, with interior angles \(\alpha_{j}\in \{ \frac{1}{2},1,\frac {3}{2},2 \}\), is extended for the polygons with interior angles \(\alpha_{j}\pi\), \(\alpha_{j}\in \{ \frac{1}{3},\frac{2}{3},1,2 \}\), by constructing and justifying the block-hexagonal grid method. Moreover, the smoothness requirement on the boundary functions away from the singular vertices (outside of the 'singular' parts) is lowered down from the Hölder classes \(C^{6,\lambda}\), \(0<\lambda<1\), as in [8], to \(C^{4,\lambda}\), \(0<\lambda<1\), which was proved for the 9-point scheme on square grids (see [10, 11]). The proposed version of the BGM can be applied for the mixed boundary value problem of Laplace's equation on the above mentioned polygons. Furthermore, by this method any order derivatives of the solution can be highly approximated on the 'singular' parts, which are difficult to obtain in other numerical methods. This method can also be used for the solution of the biharmonic equation by representing the problem with two problems for the Laplace and Poisson equations. Li, ZC: A nonconforming combined method for solving Laplace's boundary value problems with singularities. Numer. Math. 49(5), 475-497 (1986) Article MATH MathSciNet Google Scholar Li, ZC: Combined Methods for Elliptic Problems with Singularities, Interfaces and Infinities. Kluwer Academic, Dordrecht (1998) Olson, LG, Georgiou, GC, Schultz, WW: An efficient finite element method for treating singularities in Laplace's equation. J. Comput. Phys. 96(2), 391-410 (1991) Xenophontos, C, Elliotis, M, Georgiou, G: A singular function boundary integral method for Laplacian problems with boundary singularities. SIAM J. Sci. Comput. 28(2), 517-532 (2006) Georgiou, GC, Smyrlis, YS, Olson, L: A singular function boundary integral method for the Laplace equation. Commun. Numer. Methods Eng. 12(2), 127-134 (1996) Dosiyev, AA: A block-grid method of increased accuracy for solving Dirichlet's problem for Laplace's equation on polygons. Comput. Math. Math. Phys. 34(5), 591-604 (1994) Dosiyev, AA: The high accurate block-grid method for solving Laplace's boundary value problem with singularities. SIAM J. 
Numer. Anal. 42(1), 153-178 (2004) Dosiyev, AA, Celiker, E: Approximation on the hexagonal grid of the Dirichlet problem for Laplace's equation. Bound. Value Probl. 2014, 73 (2014) Volkov, EA: An exponentially converging method for solving Laplace's equation on polygons. Math. USSR Sb. 37(3), 295-325 (1980) Dosiyev, AA: On the maximum error in the solution of Laplace equation by finite difference method. Int. J. Pure Appl. Math. 7, 229-242 (2003) MATH MathSciNet Google Scholar Dosiyev, AA, Buranay, SC: A fourth order accurate difference-analytical method for solving Laplace's boundary value problem with singularities. In: Tas, K, Machado, JAT, Baleanu, D (eds.) Mathematical Methods in Engineering, pp. 167-176. Springer, Berlin (2007) Volkov, EA: Block Method for Solving the Laplace Equation and Constructing Conformal Mappings. CRC Press, Boca Raton (1994) Samarskii, AA: The Theory of Difference Schemes. Dekker, New York (2001) Volkov, EA: Differentiability properties of solutions of boundary value problems for the Laplace equation on a polygon. Tr. Mat. Inst. Steklova 77, 113-142 (1965) Department of Mathematics, Eastern Mediterranean University, Famagusta, KKTC, Mersin 10, Turkey Adiguzel A Dosiyev & Emine Celiker Adiguzel A Dosiyev Emine Celiker Correspondence to Adiguzel A Dosiyev. All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript. Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. Dosiyev, A.A., Celiker, E. A fourth order block-hexagonal grid approximation for the solution of Laplace's equation with singularities. Adv Differ Equ 2015, 59 (2015). https://doi.org/10.1186/s13662-015-0407-9 Laplace's equation singularity problem block-grid method 3rd International Eurasian Conference on Mathematical Sciences and Applications (IECMSA-2014)
FibroBox: a novel noninvasive tool for predicting significant liver fibrosis and cirrhosis in HBV infected patients Xiao-Jie Lu1 na1, Xiao-Jun Yang2 na1, Jing-Yu Sun1, Xin Zhang ORCID: orcid.org/0000-0002-1748-68583, Zhao-Xin Yuan4,5 & Xiu-Hui Li6 China is a highly endemic area of chronic hepatitis B (CHB). The accuracy of existed noninvasive biomarkers including TE, APRI and FIB-4 for staging fibrosis is not high enough in Chinese cohort. Using liver biopsy as a gold standard, a novel noninvasive indicator was developed using laboratory tests, ultrasound measurements and liver stiffness measurements with machine learning techniques to predict significant fibrosis and cirrhosis in CHB patients in north and east part of China. We retrospectively evaluated the diagnostic performance of the novel indicator named FibroBox, Fibroscan, aspartate transaminase-to-platelet ratio index (APRI), and fibrosis-4 index (FIB-4) in CHB patients from Jilin and Huai'an (training sets) and also in Anhui and Beijing cohorts (validation sets). Of 1289 eligible HBV patients who had liver histological data, 63.2% had significant fibrosis and 22.5% had cirrhosis. In LASSO logistic regression and filter methods, fibroscan results, platelet count, alanine transaminase (ALT), prothrombin time (PT), type III procollagen aminoterminal peptide (PIIINP), type IV collagen, laminin, hyaluronic acid (HA) and diameter of spleen vein were finally selected as input variables in FibroBox. Consequently, FibroBox was developed of which the area under the receiver operating characteristic curve (AUROC) was significantly higher than that of TE, APRI and FIB-4 to predicting significant fibrosis and cirrhosis. In the Anhui and Beijing cohort, the AUROC of FibroBox was 0.88 (95% CI, 0.72–0.82) and 0.87 (95% CI, 0.83–0.91) for significant fibrosis and 0.87 (95% CI, 0.82–0.92) and 0.90 (95% CI, 0.85–0.94) for cirrhosis. In the validation cohorts, FibroBox accurately diagnosed 81% of significant fibrosis and 84% of cirrhosis. FibroBox has a better performance in predicting liver fibrosis in Chinese cohorts with CHB, which may serve as a feasible alternative to liver biopsy. Hepatitis B virus (HBV) infection has become a major public health threat for its high prevalence (attacking 257 million people worldwide in 2016) [1]. The major complications of CHB include cirrhosis and hepatocellular carcinoma, leading to poor prognosis [2]. Chronic hepatitis B (CHB) is highly endemic in China, with over 74 million hepatitis B surface antigen (HBsAg)-positive patients [2, 3]. The number of CHB patients undergoing antiviral treatment remains uncalculated [4]. To control the spread of CHB in China, it is essential to conduct early diagnosis and intervention of HBV infection. Fibrosis staging, an approach to assess HBV-induced liver diseases, is efficient to estimate the prognosis of patients and identify those requiring antiviral treatment [5]. Liver biopsy is traditionally recommended as a standard for staging fibrosis [6], but it is restricted with by invasiveness, cost [7, 8], and unavoidable errors from sampling [9, 10]. Therefore, a variety of noninvasive tests have been developed in recent years. As summarized in EASL-ALEH clinical practice guidelines [11], noninvasive staging usually depends on serum biomarkers-based mathematic calculation and elasticity-based imaging techniques, such as transient elastography (TE) and magnetic resonance elastography (MRE). 
Although several strategies combining TE and computer algorithm are introduced in the guidelines, they are only applicable for patients infected with hepatitis C virus (HCV). Moreover, no measurements or macro characteristics of imaging methods have been described in strategies. With machine learning that can tease out the complex, non-linear relationships in the data [12, 13], we conducted a retrospective multicenter study and established a novel multivariate algorithmic model, named FibroBox, in a cohort of CHB patients in Huaian and Jilin, and then evaluated its predictive accuracy in external validation sets from Anhui and Beijing. We selected 1843 treatment-naïve CHB patients who underwent liver biopsy, blood test, B-ultrasound examination and Fibroscan (FS402, Echosens, France) at four centers, including Huai'an Fourth People's Hospital (Huai'an, China) (June 2010–October 2017), Beijing You-An Hospital (Beijing, China) (December 2013–April 2017), Hepatology Hospital of Jilin Province (Jilin, China) (July 2008–October 2016) and The First Affiliated Hospital of Anhui University of Chinese Medicine (Anhui, China) (February 2012–November 2017). Their clinical data were retrospectively collected through hospital information system. Included were those who underwent liver biopsy and at least one of the following criteria: aspartic transaminase (AST) or alanine transaminase (ALT) ≥40 IU/L, liver stiffness ≥6.5 kPa, HBV DNA ≥2000 IU/mL or family history of liver diseases. The exclusion criteria included co-infection with HCV, hepatitis D virus (HDV) or human immunodeficiency virus (HIV), focal hepatic lesion (e.g. HCC, hepatic tuberculosis and any other), significant alcohol intake (> 20 g/day), severe hepatic failure (complications such as jaundice and ascites or transaminases level over 10 times the upper limit of normal (ULN)), acute heart failure and pregnancy and BMI greater than 30 kg/m2. Percutaneous liver biopsy (LB) was performed under the ultrasonic guidance by experienced ultrasonologists. Liver samples were formalin-fixed and paraffin-embedded for subsequent histological analysis. Histological analysis was performed by three senior pathologists in every center. If three different results came from one sample, the consensus was taken as the final decision. Liver samples with less than three portal tracts were considered as poor quality and excluded from the analysis. All the pathologists were blinded to the clinical information. The liver fibrosis was staged by the Metavir system [14]. F ≥ 2 was considered as significant fibrosis and F4 as cirrhosis. Transient elastography (Fibroscan) All liver stiffness measurements (LSMs) were performed using Fibroscan devices (FS402, Echosens, France) by skilled technicians according to the manufacturer's protocol [15]. The TE results were presented as kilopascal (kPa). For each patient, the median of 10 successfully measured TE values was regarded as the final TE. A measurement was considered invalid if its TE median > 7.1 kPa and interquartile ratio (IQR)/LSM > 0.30 [16]. Traditional serum index calculation Aspartate transaminase (AST)-to-platelet ratio index (APRI) [17] and the fibrosis-4 (FIB-4) [18] are two common compound surrogates that use simple formulas to score easily acquired parameters. 
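The exact definitions are reproduced in the next paragraph; purely as an illustration of how these two indices are computed from routine laboratory values, a minimal sketch follows (the function names and the example numbers are ours, not taken from the study).

```python
def apri(ast_iu_l, ast_uln, platelet_10e9_l):
    """AST-to-platelet ratio index."""
    return (ast_iu_l / ast_uln) * 100.0 / platelet_10e9_l

def fib4(age_years, ast_iu_l, alt_iu_l, platelet_10e9_l):
    """Fibrosis-4 index: age x AST / (platelet count x ALT^(1/2))."""
    return age_years * ast_iu_l / (platelet_10e9_l * alt_iu_l ** 0.5)

# Made-up example: a 45-year-old patient with AST 80 IU/L (ULN 40 IU/L),
# ALT 64 IU/L and a platelet count of 150 x 10^9/L.
print(apri(80, 40, 150))      # 1.33
print(fib4(45, 80, 64, 150))  # 3.0
```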
The formulas of APRI and FIB-4 are as follows: $$ \mathrm{APRI}=\frac{\left(\mathrm{AST}\left(\mathrm{IU}/\mathrm{L}\right)/\mathrm{ULN}\right)\times 100\ }{\mathrm{Platelet}\ \mathrm{count}\ \left({10}^9/\mathrm{L}\right)} $$ $$ \mathrm{FIB}-4=\frac{\mathrm{age}\left(\mathrm{years}\right)\times \mathrm{AST}\left(\mathrm{IU}/\mathrm{L}\right)}{\mathrm{Platelet}\ \mathrm{count}\ \left({10}^9/\mathrm{L}\right)\times \mathrm{ALT}\left(\mathrm{IU}/\mathrm{L}\right)^{1/2}} $$ These input parameters were measured when patients were admitted to the hospitals, before any intervention. Ultrasonic measurement In this study, the parameters measured during ultrasonic examinations included the size of the spleen (mm2, length × thickness), the diameter of the splenic vein (mm) and the diameter of the portal vein (mm). Every parameter was measured at least three times by experienced ultrasonologists and the mean value was taken as the final score of each measurement. Two training data sets of treatment-naïve HBV-infected patients who fully met the study criteria from Huai'an and Jilin (n = 549) were subjected to the algorithmic model (FibroBox). The sets were not absolutely comparable, but the model could normalize them. Validation sets The diagnostic performances of FibroBox and the other noninvasive markers were evaluated with external validation sets from the Anhui and Beijing cohorts. In the Anhui (n = 408) and Beijing cohorts (n = 332), the CHB patients who underwent biopsy and had available data on TE, AST, ALT and platelet count were included in the analysis. FibroBox construction The data characteristics, preprocessing and training/testing procedures of FibroBox are described in Supplement Material 1. All variables were normalized in order to minimize systematic errors from different centers. Algorithm models (Supplement Material 1) were then used to select significant variables and to conduct training and validation. The machine learning algorithm was implemented using Python 3.7 (Amsterdam, Netherlands). The diagnostic accuracy of FibroBox and the conventional fibrosis markers (APRI, FIB-4 and Fibroscan) was estimated using the area under the receiver operating characteristic curve (AUROC) and the rate of correctly classified fibrosis/cirrhosis. DeLong's test [19] with a significance level of 0.05 was used to compare AUROC values of FibroBox and the other markers. Agreements between them were described using Cohen's kappa coefficient. The decision curve analysis (DCA) and ROC analysis were computed with R 3.5.1. Statistical analysis was conducted using SPSS 19.0 (SPSS Inc., Chicago, IL, USA). Between July 2008 and November 2017, 1843 HBV-infected patients were retrospectively enrolled in this study (Fig. 1). After exclusion of patients with HCC or other tumors (n = 193) and liver abscess (n = 86), histological specimens of 1393 (75.6%) patients were eligible. A total of 171 (9.3%) patients refused to participate in this study. After the investigation of clinical information, 14 patients were found co-infected with HDV and 26 with HIV (Fig. 1). The data of 64 patients were incomplete. Therefore, 1289 patients were finally included in the study. The TE results of all the included patients were reliable according to the guidelines proposed by Boursier et al. [16]. The main characteristics of the study patients are summarized in Table 1. Flow diagram of the study population and reasons for exclusion.
CHB, chronic HBV; HCC, hepatocellular carcinoma; HDV, hepatitis D virus Table 1 Baseline characteristics of the study population in training set (Huai'an, Jilin and Anhui) and in validation sets (Beijing) No complication was reported after liver biopsy. The significant fibrosis and cirrhosis account for 63.2% (815) and 22.5% (290) of all included patients, respectively. Almost a quarter of patients (382; 29.6%) had liver activity (A2/A3) and no steatosis was reported by the histopathologists. Meanwhile, 994 (77.1%) specimens showed consistent results rendered by 2 pathologists and a final determined diagnosis was reached by a third experienced histopathologist for the remaining specimens that showed biases. Training sets in Huai'an and Jilin In spearman correlation analyses of original variables, the stage of liver fibrosis was associated with age, AST, GGT, total bilirubin, platelet count, WBC, PT, ALP, albumin, INR, PIIINP, type IV collagen, laminin, HA, size of spleen, diameter of spleen vein, diameter of portal vein, velocity of portal vein and Fibroscan results (Table 2). Subsequent multivariable analysis using the least absolute shrinkage and selection operator (LASSO) logistic regression (Fig. 2) and the filter method [20] (supplement material 1) selected Fibroscan results, platelet count, AST, PT, PIIINP, type IV collagen, laminin, HA and diameter of portal vein as input parameters of diagnostic models for significant fibrosis and cirrhosis. Table 2 Selection for orginal variables associated with the presence of fibrosis stage in the training set Feature selection by using a parametric method, the least absolute shrinkage and selection operator (LASSO) regression. a Significant fibrosis feature selection of tuning parameter (λ) in the LASSO model used 10-fold cross-validation via minimum criteria. The AUC curve was plotted versus log(λ). Dotted vertical lines were drawn at the optimal values by using the minimum criteria and the 1 standard error of the minimum criteria (the 1 – standard error criteria). The optimal log(λ) of − 3.96 was chosen. b Cirrhosis feature selection and the optimal log(λ) of − 4.83 was chosen. c LASSO coefficient profiles of the 18 initially selected features. A vertical line was plotted at the optimal λ value, which resulted in 9 features with nonzero coefficients. d LASSO coefficient profiles of the 16 initially selected features. A vertical line was plotted at the optimal λ value, which resulted in 9 features with nonzero coefficients In the training cohort, the AUROC of the FibroBox for predicting significant fibrosis (0.914, 95% CI 0.890 to 0.938) was higher than that of the models using TE alone (0.886, 95% CI 0.856 to 0.917), APRI (0.692, 95% CI 0.643 to 0.741) or FIB-4 (0.707, 95% CI 0.659 to 0.755). The optimal cut-off value of FibroBox was 0.38. For predicting cirrhosis, the AUROC of FibroBox (0.914, 95% CI 0.885 to 0.943) was better than that of TE (0.880, 95% CI 0.844 to 0.917), APRI (0.705, 95% CI 0.659 to 0.752) and FIB-4 (0.758, 95% CI 0.713 to 0.804). The optimal cut-off value of FibroBox was 0.56. Validation set in Anhui In the Anhui cohort (n = 408), fibrosis stage based on histopathology was shown as follows: 10 (2.5%) in F0, 144 (35.3%) in F1, 129 (31.6%) in F2, 66 (16.2%) in F3 and 59 (14.5%) in F4 (Table 1). The diagnostic performance (Fig. 3a) of FibroBox was better than TE, APRI and FIB-4: AUROC at 0.88 (95% CI 0.84 to 0.92) for predicting significant fibrosis and 0.87 (95% CI 0.82 to 0.92) for predicting cirrhosis (Table 3). 
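Returning briefly to the construction step described earlier in this section (LASSO logistic regression with 10-fold cross-validation), a minimal illustrative sketch is given below. It is a reconstruction rather than the study code from Supplement Material 1: the candidate column names are invented, and scikit-learn's LogisticRegressionCV (L1 penalty, AUROC as the selection criterion) stands in for whatever implementation the authors actually used; predictors with non-zero coefficients at the selected penalty are retained.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegressionCV

# Hypothetical candidate predictors (column names are assumptions)
candidates = ["TE_kPa", "platelet", "AST", "ALT", "PT", "PIIINP",
              "collagen_IV", "laminin", "HA", "spleen_size",
              "spleen_vein_mm", "portal_vein_mm"]

def lasso_select(df: pd.DataFrame, label: str):
    """Return the fitted L1-penalised model and the retained predictors."""
    X = StandardScaler().fit_transform(df[candidates].values)
    y = df[label].values                       # 1 = F>=2 (or F4), 0 otherwise
    model = LogisticRegressionCV(Cs=50, cv=10, penalty="l1",
                                 solver="liblinear", scoring="roc_auc",
                                 max_iter=5000).fit(X, y)
    kept = [name for name, c in zip(candidates, model.coef_.ravel())
            if abs(c) > 1e-8]
    return model, kept

# model, selected = lasso_select(training_table, "significant_fibrosis")
```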
Applying the optimal cut-off values (0.38 for significant fibrosis and 0.56 for cirrhosis) determined in the training set, the correctly classified rate was 0.81 for both significant fibrosis (sensitivity 0.80, specificity 0.82) and cirrhosis (sensitivity 0.51, specificity 0.94). The performances of the prediction models including FibroBox, TE, APRI and FIB-4 for significant fibrosis and cirrhosis in the Anhui cohort (a) and Beijing cohort (b) are assessed by the area under a receiver operating characteristic (ROC) curve Table 3 Diagnostic performance of FibroBox, TE, APRI and FIB-4 in the validation cohorts (Anhui and Beijing) Across the range of reasonable threshold probabilities in this cohort, DCA graphically demonstrated that FibroBox provided a larger net benefit compared with TE, APRI and FIB-4 in diagnosing significant fibrosis and cirrhosis (Fig. 4a). This served as supplementary evidence for the comparison between FibroBox and TE (p = 0.058) in predicting cirrhosis. Decision curve analysis (DCA) of the prediction models including FibroBox, TE, APRI and FIB-4 for significant fibrosis and cirrhosis in the Anhui cohort (a) and Beijing cohort (b) Validation set in Beijing In the Beijing cohort (n = 332), 26 (7.9%) were F0, 127 (38.3%) were F1, 73 (22%) were F2, 32 (9.6%) were F3 and 74 (22.3%) were F4 according to the liver histology results (Table 1). For the prediction of significant fibrosis (Fig. 3b), the AUROC of FibroBox (0.87, 95% CI 0.83 to 0.91) was significantly higher than that of TE (0.82, 95% CI 0.77 to 0.87, p < 0.001), APRI (0.70, 95% CI 0.65 to 0.76, p < 0.001) and FIB-4 (0.67, 95% CI 0.61 to 0.73, p < 0.001) (Table 3). For predicting cirrhosis (Fig. 3b), the performance of FibroBox (0.90, 95% CI 0.85 to 0.94) was significantly better than that of APRI (0.75, 95% CI 0.67 to 0.82, p < 0.001) and FIB-4 (0.70, 95% CI 0.62 to 0.79, p < 0.001) (Table 3). There was no significant difference between FibroBox and TE (0.89, 95% CI 0.85 to 0.94, p = 0.863). DCA also showed consistent results (Fig. 4b). In China, assessing the severity of CHB infection is a critical step before timely intervention [4]. TE has also been widely applied in Chinese hospitals in recent years, despite its high price. To stage liver fibrosis noninvasively in patients with HBV, our study established and validated a multivariable model based on machine learning and incorporating Fibroscan results, serum biomarker indices and ultrasonic measurements. This FibroBox model demonstrated favorable diagnostic performance in two external validation cohorts for the prediction of significant fibrosis and was superior to TE, APRI and FIB-4. The diagnostic performance of FibroBox for predicting cirrhosis was potentially better than that of TE, although this requires further validation. It has been reported that Fibroscan performs better than serum biomarker indices in predicting significant fibrosis and cirrhosis in Chinese cohorts [21, 22]. In our study, TE measurements were obtained within a month after liver biopsy. The optimal cut-off values of Fibroscan for significant fibrosis and cirrhosis in both validation sets were 7.8 and 11.3 kPa, both close to those proposed in other countries [23,24,25]. Regardless of set types and prediction goals, all the AUROC results of TE were over 0.8, which is acceptable but not accurate enough. Our study excluded obese patients (BMI ≥30 kg/m2), thus ruling out a known source of unreliable TE results.
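The head-to-head numbers reported in this section (AUROC values, and the sensitivity, specificity and correctly classified rate at the fixed cut-offs 0.38 and 0.56) can be computed as in the following sketch. It is our own illustration rather than the authors' R/SPSS code; the Youden-index cut-off returned at the end is the criterion the paper later applies to APRI and FIB-4.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate(score, label, cutoff):
    """AUROC plus the operating characteristics of `score` at a fixed cut-off."""
    score = np.asarray(score, dtype=float)
    label = np.asarray(label, dtype=int)       # 1 = F>=2 (or F4), 0 otherwise
    pred = (score >= cutoff).astype(int)
    fpr, tpr, thr = roc_curve(label, score)
    return {
        "AUROC": roc_auc_score(label, score),
        "sensitivity": (pred[label == 1] == 1).mean(),
        "specificity": (pred[label == 0] == 0).mean(),
        "correctly_classified": (pred == label).mean(),
        "youden_cutoff": thr[np.argmax(tpr - fpr)],   # maximises se + sp - 1
    }

# e.g. evaluate(fibrobox_scores, biopsy_F2_or_worse, cutoff=0.38)
```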
Fibroscan is not widespread because of its high cost (€34,000 for a portable device and €5000 for its annual maintenance), but its high diagnostic accuracy still makes it recommendable [5, 26]. FibroBox behaved better than TE according to the AUROC comparisons (Table 3, Fig. 3) and DCA curves (Fig. 4). Although the difference between FibroBox and TE for cirrhosis was not significant, the imbalance of the data may also have affected the validation results; for instance, less than a quarter of the included patients were cirrhotic (Anhui: 14.5%; Beijing: 22.3%). The application of Fibroscan is limited by ascites, and it is less reliable than two-dimensional (2D) shear wave elastography (SWE) [27, 28]. However, 2D-SWE has not been as widely adopted as Fibroscan in China; therefore, this study took TE as the only elastography input variable. In addition, TE has the advantage of staging liver fibrosis regardless of cause (HBV, HCV or nonalcoholic fatty liver disease [NAFLD]). FibroBox focuses only on HBV-induced liver fibrosis, so similar studies are needed for fibrosis of other etiologies. The prediction accuracy of APRI and FIB-4 observed in this study was unacceptable. The AUROC of APRI was 0.66 (0.60 to 0.73) in the Anhui cohort and 0.70 (0.65 to 0.76) in the Beijing cohort in predicting significant fibrosis, and 0.72 (0.65 to 0.79) in the Anhui cohort and 0.75 (0.67 to 0.82) in the Beijing cohort in predicting cirrhosis. The diagnostic performance of APRI was better for predicting cirrhosis than for predicting significant fibrosis. The AUROC value of FIB-4 in predicting cirrhosis in the Anhui cohort was significantly higher than that of APRI (P = 0.009), suggesting that the predictive performance of FIB-4 lies between those of APRI and TE. In addition, the optimal cut-off values of APRI and FIB-4 were both calculated with the Youden index (sensitivity + specificity - 1), and the optimal cut-off value of APRI was quite different from that recommended by the WHO guidelines [29], suggesting that the guideline-recommended APRI cut-off values are unstable and unreliable for predicting fibrosis in Chinese cohorts. There are several limitations in this study. First, the robustness of the data was limited by the retrospective design. However, the data set is large and four centers participated in this study, which supports the applicability and reliability of the established models. We designed a two-validation-set study similar to that conducted by Lemoine et al. [25]. Second, inconsistencies between the data samples affected the model validations. For instance, the proportion of cirrhosis was only 14.5% in the Anhui cohort, meaning that it could not be taken as a training set, because this proportion is not enough to discriminate cirrhosis (F4) from non-cirrhosis (F0–3). Third, FibroBox is relatively complicated and involves 10 parameters. However, its cost-effectiveness is not necessarily poor, because all 10 input parameters can be obtained through routine clinical examinations and the run time of FibroBox is only a few seconds. Finally, several parameters such as PIIINP, type IV collagen, laminin and HA are not readily available in clinical laboratories. Easily obtained ratios could be developed instead, similar to the approach of Yuan et al. [30]. Future versions of FibroBox should focus on simplification without sacrificing accuracy. In conclusion, compared with TE, APRI and FIB-4, FibroBox may be a superior noninvasive fibrosis indicator to predict the fibrosis stage in Chinese patients with CHB.
The FibroBox requires further validation in other parts of China or other countries. The data is not available because of patients' privacy. ALP: ALT: APRI: Aspartate transaminase-to-platelet ratio index AST: Aspartic transaminase AUROC: CHB: Chronic hepatitis B DCA: Decision curve analysis FIB-4: Fibrosis-4 GGT: Gamma-glutamyl transpeptidase HA: HBsAg: Hepatitis B surface antigen HBV: Hepatitis B virus HCC: HCV: HDV: Hepatitis D virus HIV: INR: International normalized ratio kPa: kilopascal LSMs: Liver stiffness measurements MRE: Magnetic resonance elastography PIIINP: Type III procollagen aminoterminal peptide Transient elastography ULN: The upper limit of normal WBC: White blood cell 2D-SWE: Two-dimensional shear wave elastography Seto WK, Lo YR, Pawlotsky JM, Yuen MF. Chronic hepatitis B virus infection. Lancet. 2018;392(10161):2313–24. Schweitzer A, Horn J, Mikolajczyk RT, Krause G, Ott JJ. Estimations of worldwide prevalence of chronic hepatitis B virus infection: a systematic review of data published between 1965 and 2013. Lancet. 2015;386(10003):1546–55. Ott JJ, Horn J, Krause G, Mikolajczyk RT. Time trends of chronic HBV infection over prior decades - a global analysis. J Hepatol. 2017;66(1):48–54. Wang FS, Fan JG, Zhang Z, Gao B, Wang HY. The global burden of liver disease: the major impact of China. Hepatology. 2014;60(6):2099–108. European Association for the Study of the Liver. EASL 2017 clinical practice guidelines on the management of hepatitis B virus infection. J Hepatol. 2017;67(2):370–98 Electronic address: [email protected]. Rockey DC, Caldwell SH, Goodman ZD, Nelson RC. Smith AD; American Association for the Study of Liver Diseases. Liver Biopsy Hepatology. 2009;49(3):1017–44. Perrault J, McGill DB, Ott BJ, Taylor WF. Liver biopsy: complications in 1000 inpatients and outpatients. Gastroenterology. 1978;74(1):103–6. Strassburg CP, Manns MP. Approaches to liver biopsy techniques--revisited. Semin Liver Dis. 2006;26(4):318–27 Review. Maharaj B, Maharaj RJ, Leary WP, Cooppan RM, Naran AD, Pirie D, Pudifin DJ. Sampling variability and its influence on the diagnostic yield of percutaneous needle biopsy of the liver. Lancet. 1986;1(8480):523–5. Regev A, Berho M, Jeffers LJ, Milikowski C, Molina EG, Pyrsopoulos NT, Feng ZZ, Reddy KR, Schiff ER. Sampling error and intraobserver variation in liver biopsy in patients with chronic HCV infection. Am J Gastroenterol. 2002;97(10):2614–8. European Association for Study of Liver; Asociacion Latinoamericana para el Estudio del Higado. EASL-ALEH clinical practice guidelines: non-invasive tests for evaluation of liver disease severity and prognosis. J Hepatol. 2015;63(1):237–64. Obermeyer Z, Emanuel EJ. Predicting the future - big data, machine learning, and clinical medicine. N Engl J Med. 2016;375(13):1216–9. Chen JH, Asch SM. Machine learning and prediction in medicine - beyond the peak of inflated expectations. N Engl J Med. 2017;376(26):2507–9. Bedossa P, Poynard T. An algorithm for the grading of activity in chronic hepatitis C. The METAVIR Cooperative Study Group. Hepatology. 1996;24(2):289–93. Sandrin L, Fourquet B, Hasquenoph JM, Yon S, Fournier C, Mal F, Christidis C, Ziol M, Poulet B, Kazemi F, Beaugrand M, Palau R. Transient elastography: a new noninvasive method for assessment of hepatic fibrosis. Ultrasound Med Biol. 2003;29(12):1705–13. Boursier J, Zarski JP, de Ledinghen V, Rousselet MC, Sturm N, Lebail B, Fouchard-Hubert I, Gallois Y, Oberti F, Bertrais S, Calès P, Multicentric Group from ANRS/HC/EP23 FIBROSTAR Studies. 
Determination of reliability criteria for liver stiffness evaluation by transient elastography. Hepatology. 2013;57(3):1182–91. Wai CT, Greenson JK, Fontana RJ, Kalbfleisch JD, Marrero JA, Conjeevaram HS, Lok AS. A simple noninvasive index can predict both significant fibrosis and cirrhosis in patients with chronic hepatitis C. Hepatology. 2003;38(2):518–26. Vallet-Pichard A, Mallet V, Pol S. FIB-4: a simple, inexpensive and accurate marker of fibrosis in HCV-infected patients. Hepatology. 2006;44(3):769 author reply 769-70. Delong ER, Clarke-Pearson DLL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44(3):837–45. Torlay L, Perrone-Bertolotti M, Thomas E, Baciu M. Machine learning-XGBoost analysis of language networks to classify patients with epilepsy. Brain Inform. 2017;4(3):159–69. Chen Y, Wang Y, Chen Y, Yu Z, Chi X, Hu KQ, Li Q, Tan L, Xiang D, Shang Q, Lei C, Chen L, Hu X, Wang J, Liu H, Lu W, Chi W, Dong Z, Wang X, Li Z, Xiao H, Chen D, Bai W, Zhang C, Xiao G, Qi X, Chen J, Zhou L, Sun H, Deng M, Qi X, Zhang Z, Qi X, Yang Y. A novel noninvasive program for staging liver fibrosis in untreated patients with chronic hepatitis B. Clin Transl Gastroenterol. 2019;10(5):1–12. Lu XJ, Li XH, Yuan ZX, Sun HY, Wang XC, Qi X, Zhang X, Sun B. Assessment of liver fibrosis with the gamma-glutamyl transpeptidase to platelet ratio: a multicentre validation in patients with HBV infection. Gut. 2018;67(10):1903–4. Marcellin P, Ziol M, Bedossa P, Douvin C, Poupon R, de Lédinghen V, Beaugrand M. Non-invasive assessment of liver fibrosis by stiffness measurement in patients with chronic hepatitis B. Liver Int. 2009;29(2):242–7. Castéra L, Bernard PH, Le Bail B, Foucher J, Trimoulet P, Merrouche W, Couzigou P, de Lédinghen V. Transient elastography and biomarkers for liver fibrosis assessment and follow-up of inactive hepatitis B carriers. Aliment Pharmacol Ther. 2011;33(4):455–65. Lemoine M, Shimakawa Y, Nayagam S, Khalil M, Suso P, Lloyd J, Goldin R, Njai HF, Ndow G, Taal M, Cooke G, D'Alessandro U, Vray M, Mbaye PS, Njie R, Mallet V, Thursz M. The gamma-glutamyl transpeptidase to platelet ratio (GPR) predicts significant liver fibrosis and cirrhosis in patients with chronic HBV infection in West Africa. Gut. 2016;65(8):1369–76. Terrault NA, Lok ASF, McMahon BJ, Chang KM, Hwang JP, Jonas MM, Brown RS Jr, Bzowej NH, Wong JB. Update on prevention, diagnosis, and treatment of chronic hepatitis B: AASLD 2018 hepatitis B guidance. Hepatology. 2018;67(4):1560–99. Dietrich CF, Bamber J, Berzigotti A, Bota S, Cantisani V, Castera L, Cosgrove D, Ferraioli G, Friedrich-Rust M, Gilja OH, Goertz RS, Karlas T, de Knegt R, de Ledinghen V, Piscaglia F, Procopet B, Saftoiu A, Sidhu PS, Sporea I, Thiele M. EFSUMB guidelines and recommendations on the clinical use of liver ultrasound Elastography, update 2017 (long version). Ultraschall Med. 2017;38(4):e16–47. Jeong JY, Cho YS, Sohn JH. Role of two-dimensional shear wave elastography in chronic liver diseases: a narrative review. World J Gastroenterol. 2018;24(34):3849–60. WHO. World Health Organization. Guidelines for the Prevention, Care and Treatment of Persons with chronic Hepatitis B infection. 2015. http://who.int/hiv/pub/hepatitis/hepatitis-b-guidelines/en/ (accessed 17 Mar 2015). Yuan X, Duan SZ, Cao J, Gao N, Xu J, Zhang L. Noninvasive inflammatory markers for assessing liver fibrosis stage in autoimmune hepatitis patients. Eur J Gastroenterol Hepatol. 2019;31(11):1467–74. 
We would like to express our great appreciation to the R Development Core Team and contributors for R packages used in our study. This work was supported by the National Natural Science Fundation of China (to Xiao-Jie Lu, Grant No. 81772596; 81972283), Beijing Youan Hospital Capital Medical University, and the Natural Science Foundation of Beijing (7172103). Xiao-Jie Lu and Xiao-Jun Yang are co-first authors. Department of General Surgery, The First Affiliated Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing, China Xiao-Jie Lu & Jing-Yu Sun Department of Infection, the First Affiliated Hospital of Anhui University of Chinese Medicine, Hefei, China Xiao-Jun Yang Department of Medical Imaging, The Fourth People's Hospital of Huai'an, Huai'an, China Xin Zhang Changchun Medical College, Changchun, Jilin, China Zhao-Xin Yuan Department of Hepatology, Hepatobiliary Disease Hospital of Jilin Province, Changchun, Jilin, China Department of Integrated Traditional Chinese Medicine and Western Medicine, Beijing Youan Hospital, Capital Medical University, Beijing, China Xiu-Hui Li Jing-Yu Sun X. Lu, X. Li, J. Sun, X. Zhang, Z. Yuan and X. Yang collected data; X. Lu, X. Li and J. Sun analyzed data; X. Lu, Y. Mao, Z. Yuan and X. Yang participated in research design; X. Lu, X. Li and J. Sun contributed to the writing of the manuscript discussing data and supervised the study; and all authors performed data analysis and interpretation and read and approved the final manuscript. Correspondence to Xin Zhang or Zhao-Xin Yuan or Xiu-Hui Li. This study was approved by the participating institutional ethics review boards (Fourth Hospital of Huai'an, First Affiliated Hospital of Anhui University of Chinese Medicine, Beijing Youan Hospital and Hepatobiliary Disease Hospital of Jilin Province), and the requirement for written informed consent was waived. Lu, XJ., Yang, XJ., Sun, JY. et al. FibroBox: a novel noninvasive tool for predicting significant liver fibrosis and cirrhosis in HBV infected patients. Biomark Res 8, 48 (2020). https://doi.org/10.1186/s40364-020-00215-2 Liver fibrosis Noninvasive diagnosis
Two-agent integrated scheduling of production and distribution operations with fixed departure times JIMO Home Some scheduling problems with sum of logarithm processing times based learning effect and exponential past sequence dependent delivery times doi: 10.3934/jimo.2021095 Online First articles are published articles within a journal that have not yet been assigned to a formal issue. This means they do not yet have a volume number, issue number, or page numbers assigned to them, however, they can still be found and cited using their DOI (Digital Object Identifier). Online First publication benefits the research community by making new scientific discoveries known as quickly as possible. Readers can access Online First articles via the "Online First" tab for the selected journal. Inertial Tseng's extragradient method for solving variational inequality problems of pseudo-monotone and non-Lipschitz operators Gang Cai 1,, , Yekini Shehu 2, and Olaniyi S. Iyiola 3, School of Mathematics Science, Chongqing Normal University, Chongqing 401331, China Department of Mathematics, Zhejiang Normal University, Jinhua, 321004, China Department of Mathematics and Physical Sciences, , California University of Pennsylvania, PA, USA * Corresponding author: G. Cai Received September 2020 Revised January 2021 Early access May 2021 Fund Project: The first author is supported by the NSF of China (Grant No. 11771063), the Natural Science Foundation of Chongqing(cstc2020jcyj-msxmX0455), Science and Technology Project of Chongqing Education Committee (Grant No. KJZD-K201900504) and the Program of Chongqing Innovation Research Group Project in University (Grant no. CXQT19018) Full Text(HTML) Figure(43) / Table(8) In this paper, we propose a new inertial Tseng's extragradient iterative algorithm for solving variational inequality problems of pseudo-monotone and non-Lipschitz operator in real Hilbert spaces. We prove that the sequence generated by proposed algorithm converges strongly to an element of solutions of variational inequality problem under some suitable assumptions imposed on the parameters. Finally, we give some numerical experiments for supporting our main results. The main results obtained in this paper extend and improve some related works in the literature. Keywords: Tseng's extragradient method, variational inequality, pseudo-monotone operators, strong convergence. Mathematics Subject Classification: Primary: 47H09, 47H10; Secondary: 47J20, 65K15. Citation: Gang Cai, Yekini Shehu, Olaniyi S. Iyiola. Inertial Tseng's extragradient method for solving variational inequality problems of pseudo-monotone and non-Lipschitz operators. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2021095 T. O. Alakoya, L. O. Jolaoso and O. T. Mewomo, Modified inertial subgradient extragradient method with self adaptive stepsize for solving monotone variational inequality and fixed point problems, Optimization, 70 (2021), 545-574. doi: 10.1080/02331934.2020.1723586. Google Scholar F. Alvarez and H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set. Valued Anal., 9 (2001), 3-11. doi: 10.1023/A:1011253113155. Google Scholar F. Alvarez, Weak convergence of a relaxed and inertial hybrid projection proximal point algorithm for maximal monotone operators in Hilbert space, SIAM J. Optim., 14 (2004), 773-782. doi: 10.1137/S1052623403427859. Google Scholar C. Baiocchi and A. 
Figure 1. Example 1: $ k = 20 $, $ N = 10 $
Figure 9. Example 1: Different $ \gamma $ with $ (N, k) = (20, 10) $
Figure 10. Example 1: Different $ \gamma $ with $ (N, k) = (20, 21) $
Figure 17. Example 1: Different $ \mu $ with $ (N, k) = (20, 10) $
Figure 25. Example 2: Case I
Figure 26. Example 2: Case II
Figure 27. Example 2: Case III
Figure 28. Example 2: Case IV
Figure 29. Example 2: Case V
Figure 30. Example 2: Case VI
Figure 31. Example 2: Case I with different $ \gamma $
Figure 32. Example 2: Case II with different $ \gamma $
Figure 33. Example 2: Case III with different $ \gamma $
Figure 34. Example 2: Case IV with different $ \gamma $
Figure 35. Example 2: Case V with different $ \gamma $
Figure 36. Example 2: Case VI with different $ \gamma $
Figure 37. Example 2: Case I with different $ \mu $
Figure 38. Example 2: Case II with different $ \mu $
Figure 39. Example 2: Case III with different $ \mu $
Figure 40. Example 2: Case IV with different $ \mu $
Figure 41. Example 2: Case V with different $ \mu $
Figure 42. Example 2: Case VI with different $ \mu $
Figure 43. The value of error versus the iteration numbers for Example 3
Table 1. Methods Parameters Choice for Comparison
Proposed Alg.: $ \epsilon_n = \frac{1}{n^2} $, $ \theta = 0.1 $, $ l=0.001 $, $ \beta_n = \frac{1}{n} $, $ \gamma = 0.99 $, $ \mu = 0.99 $
Thong Alg. (1): $ \epsilon_n = \frac{1}{(n + 1)^2} $, $ \theta = 0.1 $, $ \beta_n = \frac{1}{n + 1} $, $ \lambda = \frac{1}{1.01L} $
Thong Alg. (2): $ l=0.001 $, $ \gamma = 0.99 $, $ \mu = 0.99 $
Thong Alg. (3): $ \alpha_n = \frac{1}{n + 1} $, $ l=0.001 $, $ \gamma = 0.99 $, $ \mu = 0.99 $
Gibali Alg.: $ \alpha_n = \frac{1}{n + 1} $, $ l=0.001 $, $ \gamma = 0.99 $, $ \mu = 0.99 $
Table 2. Example 1: Comparison among methods with different values of $ N $ and $ k $ (each row gives Iter., Time for $ N=10 $, $ N=20 $, $ N=30 $, $ N=40 $)
$ k=20 $:
Proposed Alg.: 3, 1.3843; 3, 1.7672; 3, 1.7564; 4, 2.2017
Thong Alg. (1): 76, 1.2902; 139, 2.7111; 111, 2.1715; 232, 37.7743
Thong Alg. (2): 2136, 36.6812; 1561, 30.7776; 1370, 31.8672; 1160, 4.0453
Gibali Alg.: 150, 12.0085; 235, 20.3243; 319, 41.0421; 315, 4.2520
Thong Alg. (1): 72, 1.1548; 142, 2.6436; 136, 2.888; 207, 4.4416
Thong Alg. (2): 1771, 30.2921; 1325, 28.4023; 1132, 28.6053; 920, 26.5714
Thong Alg. (3): 101, 1.5058; 90, 1.4923; 156, 2.9515; 162, 3.6149
Gibali Alg.: 203, 17.1568; 255, 30.849; 282, 31.6244; 303, 35.2953
Table 3. Example 1 Comparison: Proposed Alg. with different values $ \gamma $ (columns: $ \gamma = 0.1 $, $ \gamma = 0.5 $, $ \gamma = 0.7 $, $ \gamma = 0.99 $)
$ (20, 10) $ No. of Iterations: 3, 3, 3, 3
CPU (Time): 1.6337, 1.4830, 1.4773, 1.3843
CPU (Time): 1.8664, 0.85257, 1.7597, 1.7564
Table 4. Example 1 Comparison: Proposed Alg. with different values $ \mu $ (columns: $ \mu = 0.1 $, $ \mu = 0.5 $, $ \mu = 0.7 $, $ \mu = 0.99 $)
Proposed Alg.: $ \epsilon_n = \frac{1}{(n + 1)^2} $, $ \theta = 0.5 $, $ l=0.01 $, $ \beta_n = \frac{1}{n + 1} $, $ \gamma = 0.99 $, $ \mu = 0.99 $
Gibali Alg.: $ \alpha_n = \frac{1}{1 + n} $, $ l=0.01 $, $ \gamma = 0.99 $, $ \mu = 0.99 $
Table 6. Example 2: Prop. Alg. vs Gibali Alg. (Unaccel. Alg.); columns give No. of Iterations (Prop. Alg., Gibali Alg.) and CPU Time (Prop. Alg., Gibali Alg.)
Case I: 17, 1712; 0.001243, 0.1244
Case II: 17, 1708; 0.001518, 0.1248
Case III: 17, 1713; 0.001261, 0.1276
Case IV: 17, 1729; 0.001202, 0.1297
Case V: 17, 1715; 0.001272, 0.1258
Case VI: 18, 1835; 0.001339, 0.1564
$ \mu = 0.1 $, $ \mu = 0.5 $, $ \mu = 0.7 $, $ \mu = 0.99 $
Case I No. of Iterations: 17, 17, 17, 17
CPU (Time): 0.0011992, 0.0012179, 0.0013264, 0.0012430
Case II No. of Iterations: 17, 17, 17, 17
Case III No. of Iterations: 17, 17, 17, 17
Case IV No. of Iterations: 17, 17, 17, 17
Case V No. of Iterations: 17, 17, 17, 17
Case VI No. of Iterations: 18, 18, 18, 18
$ \gamma = 0.1 $, $ \gamma = 0.5 $, $ \gamma = 0.7 $, $ \gamma = 0.99 $
December 2017, 10(4): 1205-1233. doi: 10.3934/krm.2017046
Generalized Huygens' principle for a reduced gravity two and a half layer model in dimension three
Zhigang Wu and Weike Wang
Department of Applied Mathematics, Donghua University, Shanghai 201620, China
Department of Mathematics, Shanghai Jiao Tong University, Shanghai 200240, China
* Corresponding author: Weike Wang
Received September 2016; Revised December 2016; Published March 2017.
Fund Project: The research of Z.G. Wu was sponsored by Natural Science Foundation of Shanghai (No. 16ZR1402100) and the Fundamental Research Funds for the Central Universities (No. 2232015D3-33). The research of W.K. Wang was supported by Natural Science Foundation of China (No. 11231006).
The Cauchy problem of the reduced gravity two and a half layer model in dimension three is considered. We obtain the pointwise estimates of the time-asymptotic shape of the solution, which exhibit two kinds of generalized Huygens' waves. This is a significantly different phenomenon from the Navier-Stokes system. Lastly, as a byproduct, we also extend the $L^2(\mathbb{R}^3)$-decay rate to the $L^p(\mathbb{R}^3)$-decay rate with $p>1$.
Keywords: Green's function, reduced gravity two and a half layer model, Huygens' principle.
Mathematics Subject Classification: Primary: 35J08, 35B40; Secondary: 35A09, 35Q35.
Citation: Zhigang Wu, Weike Wang. Generalized Huygens' principle for a reduced gravity two and a half layer model in dimension three. Kinetic & Related Models, 2017, 10 (4) : 1205-1233. doi: 10.3934/krm.2017046
Physics Meta Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics. It only takes a minute to sign up. Using differentials in physics [duplicate] Asked 1 year, 4 months ago What are the "inexact differentials" in the first law of thermodynamics? (1 answer) Closed 1 year ago. I was lately wondering about the use of differentials in physics. I mean, usually $dx$ is thought of as a small increment in $x$, but does this have any rigorous meaning mathematically. Doubts started to appear when I saw the first law of thermodynamics: $$dU = dQ + dW.$$ What does this even mean? (not thermodynamically, but more mathematically). Again, does this have a somehow rigorous meaning? If it isn't rigorous, how are we able to manipulate them? Qmechanic♦ 182k3838 gold badges468468 silver badges20942094 bronze badges Gaston CastilloGaston Castillo $\begingroup$ Possible duplicates: How to treat differentials and infinitesimals? , Rigorous underpinnings of infinitesimals in physics physics.stackexchange.com/q/563629/2451 , physics.stackexchange.com/q/65724/2451 , physics.stackexchange.com/q/153791/2451 and links therein. $\endgroup$ – Qmechanic ♦ $\begingroup$ IMHO differentials in physics are often formally defined from a differential geometry point of view. You may want to take a look at en.wikipedia.org/wiki/Differential_form and ideas therein. $\endgroup$ – FriendlyLagrangian $\begingroup$ See also: en.wikipedia.org/wiki/Hyperreal_number $\endgroup$ – Filip Milovanović It's a shorthand for$$\frac{dU}{dp}=\frac{dQ}{dp}+\frac{dW}{dp}$$for all choices of a parameter $p$ that tracks the system's evolution. The special case where $p$ is the time elapsed is equivalent to the general case by the chain rule. J.G.J.G. $\begingroup$ Is this something that you have seen anywhere or is it your own idea? $\endgroup$ – md2perpe $\begingroup$ I don't think this is a good way of writing it, because $W$ and $Q$ aren't state functions; there is no "amount of work/heat" in a body that changes. That's why people use the differential notation instead; it's the amount of work done or heat added during a process that's meaningful. $\endgroup$ – knzhou $\begingroup$ See also "implicit differentiation" for differentiation with respect to an as-yet-to-be-specified independent variable. $\endgroup$ – Eric Towers $\begingroup$ @knzhou Just because they aren't state function doesn't mean they don't have derivatives. $\endgroup$ $\begingroup$ @Acccumulation Sure, for example in this case you can define $Q(p)$ as "the total amount of heat absorbed so far at parameter value $p$ along this particular path", so that $dQ/dp$ is a totally valid derivative. But I think that makes things pretty confusing because then the function $Q$ is implicitly different for every path. $\endgroup$ This was a little too long to insert into a comment, so I write it as an answer here. Here's an answer I wrote on MSE How does the idea of a differential dx work if derivatives are not fractions?. There I explain the idea of interpreting them as 1-forms (this is just fancy vocabulary for a simple idea). Also, if you want to see how some of these basic differential geometric ideas are used in physics, I'd strongly recommend you read Bamberg and Sternberg's A Course of Mathematics for Students of Physics, which is a very readable text. Check out both volumes 1 and 2 (there's stuff on electrostatics, magnetostatics, optics, Maxwell's equations). 
Thermodynamics specifically is covered in Chapter 22 (the last chapter in Volume 2), and they follow the geometric approach of Caratheodory. To understand this, you don't need to read every single chapter beforehand; you only need to read chapter 5 (basic differential calculus in several variables) and occasionally you'll need the material of chapter 15 (on exterior derivatives, and closed and exact forms). In this language the first law says that if we take the work 1-form $\omega$ and the "heat" $1$-form $\alpha$, then their sum $\alpha+\omega$ is closed (i.e. $d(\alpha+\omega)=0$) and thus can be locally written as $dU=\alpha+\omega$ for some smooth function $U$. So, it is not $\omega$ nor $\alpha$ alone which is closed, but rather their sum. Typically, to reflect this assertion, the classical notation writes it as $\delta Q$ or $\delta W$, or even $dQ, dW$ with a little line crossed over the $d$ (not too sure how to write this in mathjax). Another book (more advanced) which develops the exterior calculus and shows its applications to physics is Applied Exterior Calculus, by Dominic G.B. Edelen.
peek-a-boo
Thermodynamics deals with real mathematical differentials for the thermodynamic state functions. State functions are real functions of several variables, and different state functions are related via the Legendre transformation (see for its application in Thermodynamics). As an example, the internal energy can be written as $$ dU = TdS - PdV + \sum_i\mu_idN_i, $$ and interpretation of the coefficients in the differential expansion as the partial derivatives allows one to obtain the Maxwell relations, which in mathematical terms are nothing but the expression of the existence of the total differential. The last term in the equation above already shows that physicists tend to take liberties with mathematical notation, as $N_i$ in the last term is the number of particles, i.e., a discrete variable. Similarly, one often stresses that $dQ$ and $dW$ are not real differentials, since they depend on the path that one chooses between the two points, and only their sum is not ambiguous - just as is the case in math. However, as @J.G. has pointed out, the path is usually implied, and $Q,W$ can be thought of as functions of a parameter along this path. Some books specifically use in this case symbols with crossed $d:$ đQ, đW.
Roger Vadim
I could be wrong, but the first law of thermodynamics in the general case is not expressed in total differentials. As correctly noted above (@Roger Vadim), many authors even introduce separate notation for these quantities. Both expressions ($dU$ and $\delta Q$) are physically infinitesimal increments, but the latter is not a differential. If you remember the Newton-Leibniz formula $$ \int\limits_{a}^{b}{dF} = F(b) - F(a), $$ then the integral (otherwise: the algebraic sum over the whole "trajectory" of summation) will depend only on the initial and final states of the system, and does not depend on how the system got from state (a) to state (b). At the same time, there are quantities that clearly depend on the shape of the trajectory of motion from (a) to (b). In this regard, the answer to your question should sound approximately as follows. Some physical functions are total differentials in the sense of physically infinitesimal quantities. "Physically" in this case implies that the incremental magnitude itself is much smaller than some reference dimension, such as the size of a system or a possible quantum of measurement of some instrument. Only when the basic assumptions are made can the corresponding properties be used, and then you should always keep an eye on the meaning of this or that value, such as the same elementary work: $\delta W = \int\limits_{a}^{b}{\left( \vec{F} \cdot d\vec{r} \right)}$.
Sssur
A derivative can be thought of as a ratio between differentials. For instance, $\frac {dy}{dx}=2$ can be interpreted as saying $dy =2dx$. The equation $dU=dQ+dW$ is just a version of such an equation with three terms. If you'd like, you can divide both sides by $dU$ to get $1 = \frac{dQ}{dU}+\frac{dW}{dU}$. And according to the chain rule, you can divide by any differential; as J.G. said, it's equivalent to $\frac{dU}{dp}=\frac{dQ}{dp}+\frac{dW}{dp}$ for any parameter $p$. Saying that two expressions of infinitesimals are equal can be considered to be a claim that they are equal "in the limit". Stated rigorously, we can say $\Delta U = \Delta Q+\Delta W+\epsilon$ where $\lim_{\Delta U \rightarrow 0}\frac {\Delta \epsilon}{\Delta U}=0$ (of course, since the relationship between terms is constant, it turns out that $\epsilon$ is identically zero).
Acccumulation
Strictly speaking, it's probably wrong to write your above equation as you have done, since a differential in one variable has to be referenced to another independent variable. But most people know what is meant by it, i.e. the corresponding equation written in increments: ΔU = ΔQ + ΔW. Written like that, we can manipulate the equation w.r.t. other variables and convert to differential form by division by another increment and applying a 'tending towards zero' limit.
Trunk
April 2021, 14(4): 1465-1477. doi: 10.3934/dcdss.2020395
Finite-time exponential synchronization of reaction-diffusion delayed complex-dynamical networks
M. Syed Ali, L. Palanisamy, Nallappan Gunasekaran, Ahmed Alsaedi and Bashir Ahmad
Department of Mathematics, Thiruvalluvar University, Vellore-632115, Tamil Nadu, India
Department of Mathematical Sciences, Shibaura Institute of Technology, Saitama 337-8570, Japan
Nonlinear Analysis and Applied Mathematics (NAAM)-Research Group, Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80257, Jeddah 21589, Saudi Arabia
* Corresponding author: M. Syed Ali
Received October 2019; Revised March 2020; Published April 2021; Early access June 2020.
Fund Project: This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant no. (RG-39-130-38). The authors, therefore, acknowledge with thanks DSR technical and financial support.
This investigation looks at the issue of finite-time exponential synchronization of complex dynamical systems with a reaction-diffusion term. This report studies complex networks consisting of $ N $ linearly and diffusively coupled networks. By building a new Lyapunov-Krasovskii functional (LKF) and using Jensen's inequality and a convex combination approach, stability conditions are established. Finally, a numerical example is given to demonstrate the practicality of the theoretical results.
Keywords: Complex dynamical systems, finite-time, reaction-diffusion, synchronization.
Mathematics Subject Classification: 34K20, 34K50, 92B20, 94D05.
Citation: M. Syed Ali, L. Palanisamy, Nallappan Gunasekaran, Ahmed Alsaedi, Bashir Ahmad. Finite-time exponential synchronization of reaction-diffusion delayed complex-dynamical networks. Discrete & Continuous Dynamical Systems - S, 2021, 14 (4) : 1465-1477. doi: 10.3934/dcdss.2020395
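Since the abstract refers to Jensen's inequality in the construction of the Lyapunov-Krasovskii functional, the standard integral form of that inequality used in delay-dependent analysis is recalled here as background (this statement is not quoted from the paper itself): for any constant matrix $ R = R^{T} > 0 $, scalar $ \tau > 0 $ and vector-valued function $ x(\cdot) $ for which the integrals below are well defined,
$$ \Big(\int_{t-\tau}^{t} x(s)\,ds\Big)^{T} R \Big(\int_{t-\tau}^{t} x(s)\,ds\Big) \le \tau \int_{t-\tau}^{t} x^{T}(s) R\, x(s)\,ds. $$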
Figure 1. Error trajectories of the system in Example 1 with node 6
Applied Network Science Towards explainable community finding Sophie Sadler1, Derek Greene2 & Daniel Archambault1 Applied Network Science volume 7, Article number: 81 (2022) Cite this article The detection of communities of nodes is an important task in understanding the structure of networks. Multiple approaches have been developed to tackle this problem, many of which are in common usage in real-world applications, such as in public health networks. However, clear insight into the reasoning behind the community labels produced by these algorithms is rarely provided. Drawing inspiration from the machine learning literature, we aim to provide post-hoc explanations for the outputs of these algorithms using interpretable features of the network. In this paper, we propose a model-agnostic methodology that identifies a set of informative features to help explain the output of a community finding algorithm. We apply it to three well-known algorithms, though the methodology is designed to generalise to new approaches. As well as identifying important features for a post-hoc explanation system, we report on the common features found made by the different algorithms and the differences between the approaches. Explainability is a growing area of study in machine learning, due to the "black-box" nature of many algorithms in this field (Adadi and Berrada 2018). Models can have millions of parameters, which rely on long training processes for optimal tuning. This often results in a lack of understanding as to why a model returns the outputs it does, and there is a growing concern among experts that this can lead to hidden biases in the trained models. Even when such flaws are present in the model's reasoning, these can remain undetected due to its good performance on a specific set of data. To combat this problem, machine learning experts have been developing techniques that provide explanations for the outputs produced by a trained model (Wachter et al. 2017). Most Explainable AI (XAI) techniques have focused on algorithms which are typically applied to tabular or image data. However, black-box algorithms, which provide little explanation for their outputs, also exist in tasks outside of the traditional realm of machine learning. Network analysis is another field which relies on complex, stochastic algorithms to solve its well-known problems. One such problem is that of community finding, also known as community detection. In network analysis, relational data is represented by a graph structure, where data points known as nodes are connected by edges. Communities in this context are loosely defined as sets of densely-connected nodes in the graph, where connections between the identified sets are more sparse. These sets may overlap, although more commonly algorithms partition the nodes into disjoint communities, which is our focus here. In previous work, the definition of a community has sometimes varied slightly depending on the domain context. Nevertheless, a range of well-known algorithms have been developed to try and solve the problem of identifying community structure in networks (Fortunato 2010). Although these algorithms aim to optimize a quality function, usually through a heuristic, this optimization process can be very complex, leaving even experts with little intuitive understanding of the outputs. As with machine learning algorithms, little to no explanation for these outputs is provided. 
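As a minimal illustration of the kind of algorithm under discussion, the sketch below runs one representative community finding heuristic and reports the quality score it optimises. It is not part of the paper: it assumes Python with the networkx library, uses Zachary's karate club graph purely as an example network, and picks greedy modularity maximisation, which is not necessarily one of the three algorithms studied later in the article.

import networkx as nx
from networkx.algorithms import community

# A small benchmark network often used to illustrate community structure.
G = nx.karate_club_graph()

# One representative heuristic: greedily merge groups of nodes so as to
# increase modularity, yielding a disjoint partition of the node set.
partition = community.greedy_modularity_communities(G)

# Modularity is the quality function this heuristic tries to optimise; it compares
# within-community edge density against a degree-preserving random baseline.
score = community.modularity(G, partition)

for i, nodes in enumerate(partition):
    print(f"Community {i}: {sorted(nodes)}")
print(f"Modularity of the partition: {score:.3f}")

Even on such a small graph, the returned partition by itself gives no indication of why a particular node was placed in a particular community, which is the gap the present article targets.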
One common factor which prevents a machine learning model's performance being reasonably understood is the presence of many (sometimes hundreds or thousands) of input features. Therefore, some of the popular techniques in explainability have focused on the idea of feature importance—i.e., generating an explanation for a previously-constructed model in the form of a set of important features and their relative importance (Guidotti et al. 2018; Saarela and Jauhiainen 2021). In general, such explanations ideally present the user with features that are readily interpretable. Here an interpretable feature can be considered to be one which carries meaning to a human domain expert, such as a patient's temperature to a doctor. This is in contrast to an uninterpretable feature, such as an individual pixel in an image. Interpretable features can be used to improve understanding in many ways, including but not limited to incorporating them as the inputs to a simpler, surrogate model which mimics the performance of the one to be explained (Keane and Kenny 2019). In a community finding context, some nodes consistently participate in a single community (a core node of that community) across several runs of the algorithm, whereas others may oscillate between two or more communities. At the moment, what distinguishes these two node types remains opaque to expert users, without network features to explain it. However, it may be that if one were to know that a node is of high betweenness centrality, one could infer that this node would likely oscillate, whereas a node with high clustering coefficient may not. This said, whether the current set of social network analysis metrics can be reliably used across algorithms to explain such phenomena, including the question of whether such a set even exists, remains unknown. Outside the direct application of this work to social networks, explainable network analysis, and in particular, explainable community finding, would bring benefits for helping public health network interventions. In this area, social network analysis is used to understand phenomena with social components and accelerate behaviour change (e.g. alcohol misuse). Interventions are designed by using social network analysis metrics and community finding algorithms (Valente 2012; Park et al. 2020; Hunter et al. 2015, 2019; Valente et al. 2013, 2015; Gesell et al. 2013) as well as developing understanding of how these phenomena spread through a community, known as social contagion (Valente and Yon 2020; Brown et al. 2017). Therefore, as researchers in this field are well versed in social network metrics, permitting more explainable community finding results would bring benefits to studies in these areas. The incorporation of interpretable features in a post-hoc explanation may be one promising approach for improving our understanding of community finding algorithms. In particular, we focus on interpretable features which can be understood by end-users who have social network analysis expertise and wish to apply community detection techniques to applied problems such as those in public health. Thus, our contributions are: A novel methodology for identifying those interpretable features which provide most insight into the outputs from stochastic community finding algorithms, from an initial longlist of candidate features. An application of this methodology to three well-known community finding algorithms in two experiments, and thus a list of interpretable features which relate to their performance. 
A discussion of the insight gained into these algorithms from the results. In our experiments, we find that the same features are identified across three algorithms, indicating common underlying optimisation among these algorithms, as well as a basis for believing that these features are relevant for explainable community finding as a whole. At the single node level, these features were: clustering coefficient; triangle participation; eigenvector centrality; and expansion. At the node-pair level, these features were: the Jaccard coefficient; the cosine similarity; and to a lesser degree, the maximum betweenness centrality of an edge along the shortest path between the two nodes. All of these features are defined in "Problem formulation" section. As well as the insight gained here by the identification of the relevant features, we also envision that our proposed approach could be incorporated into future work which generates detailed explanations for the communities found in specific graphs. Background and related work Recently there has been an extensive interest in explaining the outputs of "black-box" AI algorithms, frequently referred to as Explainable AI (XAI) (Adadi and Berrada 2018). One strand of this work has prioritised "model transparency" (Rudin 2019), where the core idea is that a model is transparent only if we can understand its inner workings. However, for certain types of data or for more complex algorithms, such an approach might not be feasible or effective. As an alternative, "post-hoc explanations" have become popular in the field of XAI, which are more concerned with why an algorithm produced a given output, and usually involve providing some kind of rationale or evidence for that output (Lipton 2018; Bach et al. 2015; Fong and Vedaldi 2017; Sundararajan et al. 2017). Work by Lundberg and Lee (2017) provided a unified framework for interpreting predictions by assigning each input feature an importance value for a particular input. This approach is based on the early work by Shapley (2016). Another of the most well-known post-hoc approaches is local interpretable model-agnostic explanations (LIME), proposed by Ribeiro et al. (2016), which tries to understand a previously-built model by perturbing the input data to see how the resulting predictions change. The output is a set of local feature importances which explain the classification of a specific observation. The authors also introduced SP-LIME (submodular pick LIME), which differs in that it provides global explanations for the model as a whole, rather than for individual observations. Both approaches are model-agnostic in the sense that they can be applied in conjunction with any classifier and do not require inspecting the internal workings of that classifier. Other work in post-hoc explanation by Keane and Kenny (2019) examined the use of a twin-systems strategy, where a complicated neural network model is mapped to a simpler, more interpretable "twin" model. This allowed the authors to understand the outputs of the former "black-box" model by using the latter "white-box" model. Despite the extensive attention paid to XAI in recent years, the majority of this work has focused on either image or tabular data. In particular, little attention has been paid to tasks involving network data. Some initial work has begun to incorporate explainability into graph neural networks (GNNs) (Ying et al. 2019; Yuan et al. 
2020), but network analysis tasks such as community detection remain unexplored, though there is some work on a similar problem in clustering algorithms (Morichetta et al. 2019; Loyola-Gonzalez et al. 2020). In this paper, our focus is specifically on community finding techniques for network data, rather than classification. We aim to identify sets of useful features which can allow us to explain the outputs of these algorithms in a post-hoc manner.
Community finding
As discussed, there has been little work on explainability or interpretability for community finding algorithms and network analysis in general, to the best of our knowledge. However, existing work on comparing the performance of several algorithms on benchmark graphs has guided our choice of algorithms and data for the experimental evaluation of our proposed features. Lancichinetti et al. (2008) propose the LFR benchmark graph dataset generator, which creates graphs with ground truth community labels on each of the nodes. They assume that both the node degrees and the community sizes follow power law distributions, and define a mixing parameter, \(\mu\), which introduces noise to the communities relative to its value. For low values of \(\mu\), the communities remain well separated and thus easy to detect, but as the mixing parameter increases, communities become harder to identify. In a subsequent paper (Lancichinetti and Fortunato 2009), the performance of several well-known community finding algorithms is then compared on this benchmark data. Lee and Archambault (2016) find that humans behave in a similar way to Lancichinetti et al. when observing their own social network, confirming that the Infomap, Louvain and Girvan-Newman algorithms were the best-performing. This led to our decision to include Infomap (Rosvall and Bergstrom 2008) and the Louvain algorithm (Blondel 2008) in our experimental evaluation. Previous work in computational social science has also compared the performance of community finding algorithms on other datasets, including real data (Dao et al. 2020; Ghasemian et al. 2019; Peel et al. 2017). For this study, we choose to focus on the LFR data as it allows us to generate much larger datasets and to vary the mixing parameter to observe its effect on the results. The study by Bothorel et al. (2020) proposes a methodology to describe results of the algorithm to non-experts; however, this differs from ours in that their aim is to assist in making a choice of algorithm for a particular problem, not to specifically explain the algorithm's results. In addition to the LFR data and the community detection algorithms needed for our experiments, we also rely on the notion of a node's ease of clustering. Nodes which are easy to cluster are those which are consistently assigned to the same community, while a node which the algorithm finds hard to cluster will oscillate between two or more communities across successive runs. Existing literature in this vein originates in papers unrelated to networks and community finding, but focused on more general clustering algorithms, e.g. k-means clustering (von Luxburg 2010; Ben-David et al. 2007). However, our proposed work differs from theirs as we centre our definition of a node's ease of clustering on its entropy in a coassociation matrix. Entries in the matrix describe how frequently two nodes are clustered into the same community. The concept of a coassociation matrix describing the relationship between pairs of nodes was derived from work proposed by Strehl (2002).
In a similar approach, the authors of the LFR benchmark explore the use of consensus clustering to determine community structure over successive runs of the algorithm (Lancichinetti and Fortunato 2012a). Other works addressing the consistency of community finding algorithms (Chakraborty et al. 2013; Francisco and Oliveira 2011) are not directly relevant to our node feature experiments described in this paper, but may have relevance to the future work we propose in identifying community features. Due to their widespread adoption and suitability for our proposed methodology, in our experiments we focus on stochastic algorithms, where the community structure can change between successive runs, and on algorithms which find node partitions (i.e. each node belongs to exactly one community). Extending our approach to algorithms which generate overlapping communities will require additional steps, so we reserve this for future work. As the intention is to identify features which contribute intuitive understanding, our emphasis is on selecting features which are simple and easily understood by end-users, though specifically those with social network analysis expertise. We propose a model-agnostic methodology which can be adapted to any stochastic algorithm of interest; however, we test it here on three in particular. We distinguish between two "levels" of graph feature, allowing for understanding of the nodes' community membership from two different perspectives. The first of these is at the node-level. Features at this level are calculated for individual nodes of the graph, with the aim to understand the community membership of that specific node. To motivate this problem in a social network context, suppose a node is occasionally classified as belonging to a community with extremist views on certain runs of a community finding algorithm. Understanding why this node has this varying classification would be important as this classification is not certain and could have important repercussions for the individual. The second is at the node-pair-level, where features are calculated for pairs of nodes. The aim is to understand why two nodes belong to either the same or different communities. In a social network context, if two nodes belong to a community that holds an extremist view, it is important to understand why they have been placed into this community. Similarly, if one node belongs to this community and another does not, it is important to understand why the nodes have been separated. In this work we use a large number of synthetically-generated graphs to verify our approach. We employ the use of synthetic data to ensure the results are not a consequence of the characteristics of a single network (as real data is sparsely available) and to allow us to vary parameters of the network structure consistently to observe how these parameters affect the results. However, with the aim to apply these results to real data in the future, we use a synthetic generation process which can closely mimic the observed structure of real-world networks. Specifically, the synthetic graphs were generated using the LFR benchmark algorithm (Lancichinetti et al. 2008). An additional benefit of this approach is that existing work has already evaluated the performance of community finding algorithms on LFR graphs. We use several values of the LFR mixing parameter \(\mu\) in order to ascertain whether the separation of the communities affects the identified features.
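To make the generation step concrete, the following is a minimal sketch of how graphs of this kind can be produced with the NetworkX LFR generator. The helper name is ours; the parameter values mirror those given later in "Experimental methodology", and a different library or generator could be substituted.

```python
import networkx as nx

# Minimal sketch: generate an LFR benchmark graph similar to those used in this
# study (1000 nodes, average degree 20, maximum degree 50, tau1 = 3, tau2 = 2).
# The mixing parameter mu controls how well separated the communities are.
def make_lfr_graph(mu, seed=0):
    G = nx.LFR_benchmark_graph(
        n=1000, tau1=3, tau2=2, mu=mu,
        average_degree=20, max_degree=50, seed=seed,
    )
    # The generator attaches each node's ground-truth community as a set-valued
    # node attribute; collect the distinct communities for convenience.
    communities = {frozenset(G.nodes[v]["community"]) for v in G}
    return G, communities

if __name__ == "__main__":
    G, comms = make_lfr_graph(mu=0.3)
    print(G.number_of_nodes(), G.number_of_edges(), len(comms))
```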
Our approach is to identify a longlist of features at both the node-level and the node-pair-level. We then use these features as the input data for a classification task, and extract the most informative features using permutation importance (Algorithm 1) for our trained model. Since some of the features in our longlist depend on nodes' community labels, we calculate these from many runs of the community finding algorithm using a mean average. If the feature does not depend on community label, it can be directly calculated once for each node or pair of nodes. To inform our choice of features, we consult the survey by Chakraborty et al. (2017) and select features which are widely adopted, state-of-the-art metrics for community evaluation such that they can be easily recognised and interpreted by experts on network analysis. The features selected for our experiments are described below.
Node features
The node-level features selected are defined as follows for node i (letting w be the number of nodes in the same community as i). Let the graph be \(G = (V, E)\) with V denoting the set of nodes and E the set of edges:
Degree: The number of edges adjacent to i, deg(i).
\(E_{in}\): The number of edges adjacent to i within its community. Adapted to a single node from the original definition by Radicchi et al. (2004).
\(E_{out}\): The number of edges adjacent to i which connect it to nodes outside its community. Adapted to a single node from the definition by Radicchi et al. (2004). Note that \(E_{in} + E_{out} =\) deg(i).
\(E_{in}\) over \(E_{out}\): For a given node, the ratio of the number of edges connecting it to other nodes within the same community, relative to the number of edges it has to nodes in other communities: $$\begin{aligned} \frac{E_{in}}{E_{out}}. \end{aligned}$$
Out Degree Fraction (ODF): Adapted to a single node from the definition by Flake et al. (2000), this is the ratio of edges connecting node i to nodes in other communities, relative to its total degree: $$\begin{aligned} \frac{E_{out}}{\mathrm{deg}(i)} \end{aligned}$$
Expansion: Adapted from the definition by Radicchi et al. (2004), this is the number of edges from a single node i to nodes assigned to other communities, normalised with respect to the number of nodes in the same community as i: $$\begin{aligned} \frac{E_{out}}{w} \end{aligned}$$
Cut Ratio: Adapted to a single node from the graph cut measure discussed by Fortunato (2010). As with the metric above, this considers the number of outgoing edges from i to other communities, but in this case normalised with respect to the number of nodes not in the same community as i: $$\begin{aligned} \frac{E_{out}}{|V|-w} \end{aligned}$$
Conductance: Adapted to a single node from the clustering objective described by Shi and Malik (2000), this measure is the ratio between the connections for node i within its community and its total number of connections: $$\begin{aligned} \frac{E_{out}}{\mathrm{deg}(i) + E_{in}} \end{aligned}$$
Average Shortest Path: The mean of the shortest path lengths from node i to all other nodes in the graph.
Triangle Participation: Let \(c_i\) be the number of nodes with which node i shares a common neighbour within its assigned community. Then triangle participation is given by the fraction: $$\begin{aligned} \frac{c_i}{w} \end{aligned}$$
Clustering Coefficient: The local clustering coefficient of a node measures how close its neighbours are to forming a clique (Watts and Strogatz 1998).
Formally, let \(T_i\) be the number of triangles containing i across the whole graph. Then the clustering coefficient of node i is given by: $$\begin{aligned} \frac{2T_{i}}{\mathrm{deg}(i)(\mathrm{deg}(i)-1)} \end{aligned}$$
Betweenness Centrality: Let \(\sigma (j,k)\) be the number of shortest (j, k) paths, and \(\sigma (j,k|i)\) be the number of those paths that pass through i. Then the betweenness centrality of node i is given by Brandes (2001): $$\begin{aligned} \sum _{j,k \in V } \frac{\sigma (j,k|i)}{\sigma (j,k)} \end{aligned}$$ Note that if \(j = k\), \(\sigma (j,k) = 1\) and if either j or \(k = i\), then \(\sigma (j,k|i) = 0\). Intuitively, a high betweenness centrality score for a node often indicates that it holds a bridging position in a network.
Eigenvector Centrality: Proposed by Bonacich (1986). The eigenvector centrality of node i is the ith entry in the vector \({\textbf {x}}\) which solves the eigenvector equation: $$\begin{aligned} {\textbf {Ax}} = \lambda {\textbf {x}} \end{aligned}$$ where \({\textbf {A}}\) is the adjacency matrix with node i represented in the ith row/column. Based on the definition above, this measure deems that a node is important if it is connected to other important nodes.
Closeness Centrality: Refers to the centrality measure proposed by Freeman (1979). Let d(i, j) be the length of the shortest path between nodes i and j. Then the closeness centrality of node i is given by: $$\begin{aligned} \frac{|V|-1}{ \sum _{j \ne i} d(j,i)} \end{aligned}$$ This provides us with an assessment of the extent to which node i is close to all other nodes in a network, either directly or indirectly.
Node-pair features
Given a pair of nodes (i, j), we define a number of node-pair-level features:
Shortest Path Length: The least number of edges separating nodes i and j.
Common Neighbours: The number of shared nodes adjacent to both i and j, which we denote as \(n_{ij}\).
Max Edge Centrality: The maximum over centralities of all edges along the shortest path. The edge centrality is defined in a similar manner to betweenness centrality for nodes (Brandes 2001). That is, for a given edge e, we compute $$\begin{aligned} \sum _{j,k \in V } \frac{\sigma (j,k|e)}{\sigma (j,k)} \end{aligned}$$ where \(\sigma (j,k|e)\) now refers to the number of shortest paths between j and k passing through an edge e rather than a node i.
Cosine Similarity: Frequently used to measure similarity for textual data, but can also be applied to assess node-pair similarity in the context of graphs: $$\begin{aligned} \frac{n_{ij}}{\sqrt{\mathrm{deg}(i)}\sqrt{\mathrm{deg}(j)}} \end{aligned}$$
Jaccard Coefficient: A common set similarity measure, originally proposed in Jaccard (1912). In a graph context, let \(\Gamma (i)\) be the set of neighbours of node i. Then the Jaccard coefficient of nodes i and j is given by: $$\begin{aligned} \frac{|\Gamma (i) \cap \Gamma (j)|}{|\Gamma (i) \cup \Gamma (j)|} \end{aligned}$$ A higher value for this measure indicates a greater level of overlap between the neighbours of i and j, relative to their full sets of individual connections.
Classification problems
For node-pair-level features, there is an obvious binary classification problem where pairs of nodes are labelled as belonging to the "same community" or to "different communities". Since the algorithms of interest in our work are stochastic in nature, a pair of nodes may sometimes be in the same community, while for other runs of the algorithm the pair may not appear in the same community.
Over the course of many runs of a given algorithm, pairs can simply be labelled as "same community" if they are in the same community for more than half of the runs, and "different community" if they are in the same community for less than half of the runs. In the unlikely event they are in the same community for exactly half of the runs, we have chosen arbitrarily to label them "same community". For node-level features, defining a classification problem is harder since, on consecutive runs of the community detection algorithm, the number of communities can vary, or the community labels can be permuted. Thus, classifying a node into its "correct" community is not a well-defined problem. Instead, we propose a binary classification problem determining whether the node is "easy" or "hard" to assign to a community, by observing how frequently it flips between communities on successive algorithmic runs. To define this mathematically, we require a coassociation matrix, described in "Coassociation matrix" section below. This will allow us to identify features that are predictive of whether a node is strongly associated with a specific community (near its "centre"), or whether it lies on the border between two or more communities. Nodes of the latter type may be of particular interest in certain domains, such as public health. In order to label the nodes as "easy" or "hard" to assign to a community, we incorporate the use of a coassociation matrix, defined below.
Coassociation matrix
For a given graph and community detection algorithm, we can construct a coassociation matrix, C, using the outputs of many runs of the algorithm on the graph. In our methodology, we use the same set of runs to calculate both the community-dependent features and the coassociation matrix. Let \(r_{ij}\) be the number of runs for which nodes i and j are in the same community, and let R be the total number of runs. The value for the entry ij in the matrix is given by: $$\begin{aligned} C_{ij} = \frac{r_{ij}}{R} \end{aligned}$$ Intuitively, the coassociation matrix represents the proportion of runs for which two nodes are in the same community, for every pair of nodes. In order to classify nodes as either "easy to cluster" or "hard to cluster", we then calculate the entropy of each node from the coassociation matrix as follows: $$\begin{aligned} E_{i} = \frac{\sum _{j} p_{ij}}{N} \end{aligned}$$ where N is the number of nodes and \(p_{ij}\) is defined as follows: $$\begin{aligned} p_{ij} =\left\{ \begin{array}{ll} -C_{ij}\log _{2}(C_{ij}) & \text{ if } C_{ij} > 0 \\ 0 & \text{ otherwise } \end{array}\right. \end{aligned}$$ Unfortunately, these entropy values are not as intuitively understood as the raw coassociation matrix entries. Thus, it is not as simple to label nodes as "easy to cluster" or "hard to cluster" directly from their entropy values as it is to label pairs as "same community" or "different community" directly from the coassociation matrix. Instead, once every node is assigned an entropy, we use one-dimensional k-means clustering (with \(k=2\) clusters) to separate nodes into two training classes: those with low entropy belong to the "easy to cluster" class, and those with high entropy belong to the "hard to cluster" class. Intuitively, these correspond to nodes which are often assigned to the same community by the algorithm and those which are often assigned to different communities.
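For concreteness, both the feature definitions and the labelling procedure above can be implemented with standard Python tooling. The first sketch below computes a handful of the community-independent node and node-pair features using NetworkX. The helper names are ours; community-dependent features such as \(E_{in}\), expansion or triangle participation additionally require a community assignment, which is omitted here.

```python
import math
import networkx as nx

# Minimal sketch (helper names are ours): a few of the community-independent
# features defined in "Problem formulation", computed with NetworkX.
def node_features(G):
    return {
        "degree": dict(G.degree()),
        "clustering_coefficient": nx.clustering(G),
        "betweenness_centrality": nx.betweenness_centrality(G),
        "eigenvector_centrality": nx.eigenvector_centrality_numpy(G),
        "closeness_centrality": nx.closeness_centrality(G),
    }

def pair_features(G, i, j):
    common = len(list(nx.common_neighbors(G, i, j)))
    jaccard = next(nx.jaccard_coefficient(G, [(i, j)]))[2]   # (u, v, score)
    denom = math.sqrt(G.degree(i) * G.degree(j))
    return {
        "shortest_path_length": nx.shortest_path_length(G, i, j),
        "common_neighbours": common,
        "jaccard_coefficient": jaccard,
        "cosine_similarity": common / denom if denom > 0 else 0.0,
    }
```

The second sketch builds the coassociation matrix from repeated runs of a stochastic partitioner, computes the per-node entropy defined above, and splits the nodes into "easy" and "hard" classes with one-dimensional k-means. Louvain (as provided by NetworkX 2.8 or later) is used here purely as an example algorithm; this is an illustration of the procedure under those assumptions, not the authors' exact implementation.

```python
import numpy as np
from networkx.algorithms.community import louvain_communities
from sklearn.cluster import KMeans

# Minimal sketch: coassociation matrix over repeated runs, per-node entropy
# scores, and "easy"/"hard" labels via 1-D k-means.
def coassociation_matrix(G, runs=1000):
    nodes = list(G)
    index = {v: k for k, v in enumerate(nodes)}
    C = np.zeros((len(nodes), len(nodes)))
    for r in range(runs):
        # Louvain used only as an example of a stochastic partitioning algorithm.
        partition = louvain_communities(G, seed=r)
        for community in partition:
            members = [index[v] for v in community]
            C[np.ix_(members, members)] += 1.0
    return C / runs, nodes

def entropy_labels(C):
    # p_ij = -C_ij * log2(C_ij) when C_ij > 0, and 0 otherwise (see above).
    P = np.where(C > 0, -C * np.log2(np.clip(C, 1e-12, None)), 0.0)
    entropy = P.sum(axis=1) / C.shape[0]
    km = KMeans(n_clusters=2, n_init=10).fit(entropy.reshape(-1, 1))
    hard = int(np.argmax(km.cluster_centers_.ravel()))  # higher-entropy cluster
    return entropy, km.labels_ == hard                   # True = "hard to cluster"
```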
Our aim is to identify human-interpretable graph features which relate to the community membership determined by a community finding algorithm. In order to select the more informative features from a predefined longlist of candidates, we define two simple binary classification problems: one for node-level features, where we will predict a node's ease of assignment to a community; and one for node-pair-level features, where we will predict whether the two nodes belong to the same community or not. We will then find the permutation importance of each feature from our model to identify which features provide the most information about the output label. Our experiments take place on more than one graph, \(\mu\) value (as described in "Community finding" section), algorithm, and even classification task. Having several independent variables enables us to answer the following research questions:
RQ1: Do the most informative node features depend on the community finding algorithm used?
RQ2: Do the most informative node-pair features depend on the community finding algorithm used?
RQ3: How do the most informative node features vary with the degree of community separation, as defined by the mixing parameter, \(\mu\)?
RQ4: How do the most informative node-pair features vary with the degree of community separation, as defined by the mixing parameter, \(\mu\)?
RQ5: In all cases, what are the most predictive features?
Although we did not form a strong hypothesis for the latter three questions, we hypothesise that the most predictive features would vary by algorithm.
H1: The most informative node features will depend on the community finding algorithm.
H2: The most informative node-pair features will depend on the community finding algorithm.
In order to answer the questions above, we now present our experimental and statistical methodology, which may prove useful in tackling the evaluation of community finding algorithms more generally. This methodology is also illustrated in Fig. 1.
Fig. 1: Experiments for determining explainable social network analysis metrics in the node feature experiment. A similar methodology is applied to the node-pair experiment. After R runs of the algorithm, a coassociation matrix is constructed encoding how often two nodes are classified in the same community. Feature values are computed and provided as input to a random forest classifier to determine permutation importance. The distributions of permutation importance can be compared across all graphs to identify explainable metrics.
Experimental methodology
We test our approach using three popular methods to detect community structure, each based on different concepts (for example, we only use one modularity optimization algorithm):
Infomap (Rosvall and Bergstrom 2008), also known as the map equation, uses an information-theoretic approach, where nodes are represented by codewords composed of two parts, the first of which is provided by the community it belongs to. The community memberships are optimised by minimising the average code length describing random walks on the network.
Louvain (Blondel 2008) is a modularity optimization approach which involves two steps. Firstly, the common modularity objective is optimized at a local level to create small communities. Next, each small community is treated as a single node and the first step is repeated. By following this agglomerative process, a hierarchy of communities is constructed.
LPA, the label propagation algorithm proposed by Raghavan et al.
(2007), assumes that nodes should belong to the same community as most of their neighbours. To begin, each node is initialised with a unique community and these labels are then iteratively propagated through the network. After each iteration, a node receives the same label as the majority of its neighbours. Once this process is complete, nodes sharing the same label are grouped together as communities.
When constructing our networks, we selected \(\mu\) values of 0.2, 0.3, and 0.4. As described in "Community finding" section, this parameter controls the level of separation or mixing between communities, where the higher the value of \(\mu\), the less easy it is to distinguish between different communities (Fig. 2).
Fig. 2: Example graphs with 200 nodes at the three \(\mu\) values. Communities shown with colour. Increased mixing parameter increases the prevalence of edges between communities.
At each value of \(\mu\), a set of graphs, \(\Gamma\), is generated before any experiments take place. This set of graphs is the same size, \(|\Gamma |\), for each value of \(\mu\). In order to match the hyperparameters used by Lancichinetti et al. (2008) in the original LFR benchmark paper, we use the LFR generator in NetworkX to generate networks with 1000 nodes of average degree 20 and maximum degree 50. We set the hyperparameters \(\tau _1\) and \(\tau _2\) to 3 and 2 respectively. Each experiment is then defined by three categories: the \(\mu\) value; the community detection algorithm; and the feature type (node vs node-pair). This results in 18 possible experiments from the 3 algorithms, 3 mixing parameters and 2 feature types. Data from the \(|\Gamma |\) graphs at the relevant value of \(\mu\) are used for the experiment. For each \(\mu\)-algorithm-feature type combination, the following procedure is then performed. Firstly, the algorithm is run 1000 times on each of the \(|\Gamma |\) graphs. Using these runs, any community-dependent features are calculated, along with the coassociation matrix. Features which are community-independent are also calculated at this stage, although they do not depend on the runs. The nodes or pairs-of-nodes must then be labelled according to the binary classification problem. The labelling procedures are described separately for the two feature-types in the relevant experiment sections. Now, for each of the graphs of the experiment, we have a dataset of either nodes or pairs of nodes, each member of which is labelled and has a list of feature values. A random forest with 100 trees is then trained to classify the dataset for the specific graph. During training we use 5-fold cross-validation and repeat this for 10 training runs. A permutation importance is calculated for each node or node-pair feature after each of the 50 runs, using the held-out test data. At the end of the 50 cross-validation runs, a mean average of the 50 gathered permutation importance scores is taken for each node or node-pair feature. This gives us its final importance score as generated by this graph. Overall, this results in \(|\Gamma |\) permutation importance values for each feature. The full experimental methodology for node features is represented in Algorithm 2. For node-pair features, the algorithm is identical, looping over node-pairs instead of nodes. For both experiments above, we have distributions of our features over the runs of the experiment.
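A minimal sketch of this per-graph training and scoring loop, using scikit-learn, is given below. It assumes a feature matrix X (one row per node or node-pair) and binary labels y have already been assembled; the helper name and the number of permutation repeats per fold are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import RepeatedStratifiedKFold

# Minimal sketch of the per-graph scoring loop: a 100-tree random forest,
# 5-fold cross-validation repeated 10 times (50 fits in total), with the
# permutation importance computed on each held-out fold and then averaged.
def mean_permutation_importance(X, y, seed=0):
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=seed)
    fold_scores = []
    for train_idx, test_idx in cv.split(X, y):
        model = RandomForestClassifier(n_estimators=100, random_state=seed)
        model.fit(X[train_idx], y[train_idx])
        result = permutation_importance(
            model, X[test_idx], y[test_idx], n_repeats=5, random_state=seed
        )
        fold_scores.append(result.importances_mean)
    return np.mean(fold_scores, axis=0)  # one averaged importance per feature
```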
These distributions can be compared to determine statistical significance of the difference between them, and the size of this difference, in order to identify the features of interest. This statistical analysis and the final conclusions drawn are specific to the \(\mu\)-algorithm-feature type combination of the experiment. In order to develop an appropriate statistical methodology, we performed a pilot study using 20 graphs at each \(\mu\) value (giving 60 graphs in total). For each experiment, this gave us 20 values of permutation importance for each feature, on which we carried out Shapiro-Wilk tests. In this pilot study, \(67\%\) of the features across all algorithms, feature types and \(\mu\) values were normally distributed, so we started with a normal assumption. On this basis, the statistical methodology would be as follows:
1. Perform a power analysis with a normal assumption to determine the value of \(|\Gamma |\) required to draw statistically significant conclusions.
2. Carry out the experiments to obtain \(|\Gamma |\) values for each feature of each experiment.
3. Confirm with a repeat of the Shapiro-Wilk tests that these \(|\Gamma |\) values are indeed normally distributed in the majority of cases.
4. If the distributions are normal, perform pairwise t-tests with Bonferroni–Holm corrections using these values. Otherwise, perform pairwise Wilcoxon tests with Bonferroni–Holm corrections using these values.
Power analysis was conducted with the following parameters: Cohen's effect size of 0.3, significance level of 0.05, and a power of 0.9. The power analysis concluded 119 graphs were necessary for our experiment, which we rounded to 120. At this stage, we generated 360 new graphs (120 at each \(\mu\) level) for our experiment. The application of this methodology to our new data set revealed that the distributions of metric values were not normally distributed. Therefore, pairwise Wilcoxon tests with Bonferroni–Holm correction were applied to our data to determine the significant results.
Exp. 1: node feature experiment
Table 1: The numbers of communities identified by each algorithm on graphs with 1000 nodes.
Table 2: Statistics on the normalised mutual information scores for two algorithms on graphs of the same \(\mu\) value.
Once the experimental data was collected, the 0.4-LPA-node and 0.4-LPA-node-pair experiments were omitted. This is because LPA clustered a majority of nodes into one large community at this \(\mu\) value, generating features and labels that were not suitable for our experiments. Essentially, LPA was unable to recognise community structure at this high degree of mixing. All other experimental data is reported. Tables 1 and 2 respectively show statistics on the number of communities detected by each algorithm across graphs of a common \(\mu\) value, and normalised mutual information (NMI) scores comparing the performance of pairs of algorithms. In the first of these, we can see that communities range in number from 24 to 40, resulting in a mean community size between 25 and 45 nodes. In reality, sizes of communities created using the LFR generator follow a power law, so many will be much larger or smaller than the mean. In the second table, NMI scores are generally high in all cases, although they decrease as the \(\mu\) value increases, as one might expect.
Overall these results suggest that there are some communities with very large, stable cores, and that the nodes which frequently change across multiple algorithmic runs are single nodes on the periphery of these large communities, or belong to the much smaller communities. The classification labels for the node feature experiments are calculated for a single graph as follows. The entropy of each node is calculated from the coassociation matrix of the current graph, and k-means clustering of these entropy values is performed to separate the nodes into "easy to cluster" and "hard to cluster" nodes. However, using this process, we have a very low proportion of "hard to cluster" nodes. The proportion of nodes labelled as "hard to cluster" is reported in Table 3. For low mixing parameter values, this can be as low as \(9\%\). This reinforces the finding from Tables 1 and 2 that there are large, central cores to the communities with a small number of nodes on the periphery or in smaller communities. However, the proportion of "hard to cluster" nodes can rise to as high as \(25\%\) with an increased mixing parameter, indicating that this is a distinct class of nodes. Due to the low proportions of "hard to cluster" nodes, we propose using undersampling. Rather than undersampling randomly, we propose using the "easiest" nodes to cluster (those with the lowest entropy) until the number of "hard" nodes is \(75\%\) that of the number of "easy" nodes. Using this strategic undersampling method enables us to identify node features which distinguish between truly separate classes, rather than distinguishing between nodes with an entropy either side of the arbitrary cut-off generated by the k-means clustering.
Table 3: Statistics on the proportion of nodes labelled as "hard to cluster" after running each algorithm on graphs of varying \(\mu\) value.
From these three experiments (displayed in Fig. 3), we see that four of the features consistently have a non-zero permutation importance: clustering coefficient, eigenvector centrality, expansion and triangle participation. We focus on reporting the significant differences for these features and provide full results in the supplementary material. Across all experiments at all \(\mu\) levels, our pairwise Wilcoxon tests confirmed that these four features were significantly more important than the rest of the features, with the following exceptions:
For Louvain at \(\mu = 0.2\), clustering coefficient was not significantly different from betweenness centrality, cut ratio, or \(E_{out}\).
For Infomap at \(\mu = 0.2\), clustering coefficient was not significantly different from degree, \(E_{in}\), \(E_{out}\) or shortest path. Triangle participation was not significantly more important than degree or \(E_{in}\).
For Infomap at \(\mu = 0.3\), clustering coefficient was not significantly different from closeness centrality, degree, \(E_{in}\) or shortest path.
For LPA at \(\mu = 0.2\), clustering coefficient was not significantly different from any of betweenness centrality, closeness centrality, cut ratio, degree, \(E_{in}\) or average shortest path.
These exceptions align with what can be seen qualitatively: at the lowest \(\mu\) level, some other features appear to be important such as degree, \(E_{in}\) and perhaps even closeness centrality, cut ratio, \(E_{out}\) and average shortest path. However, the effect size for all of these is much smaller than for the four most prominent features, and their significance vanishes at the two higher \(\mu\) levels.
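For reference, the pairwise significance testing described in the statistical methodology can be reproduced along the following lines with SciPy and statsmodels. This is a sketch under the assumption that each feature's permutation importances are paired across the same \(|\Gamma |\) graphs; the helper name is ours.

```python
from itertools import combinations
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

# Minimal sketch: pairwise Wilcoxon signed-rank tests between the per-graph
# importance distributions of every pair of features, with Bonferroni-Holm
# correction. `importances` maps feature name -> list of per-graph values.
def pairwise_wilcoxon(importances, alpha=0.05):
    pairs = list(combinations(sorted(importances), 2))
    p_values = [wilcoxon(importances[a], importances[b]).pvalue for a, b in pairs]
    reject, corrected, _, _ = multipletests(p_values, alpha=alpha, method="holm")
    return {pair: (p, bool(sig)) for pair, p, sig in zip(pairs, corrected, reject)}
```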
Fig. 3: Results of the node feature experiments. Plots are of permutation importance of the metrics. Mean indicated as a black dot and median as a red dot. Lines indicate 95% bootstrapped confidence intervals.
With respect to our research question RQ1 and in contradiction to our hypothesis H1, we observed that the four prominent features for predicting whether a node was difficult to classify did not depend on the algorithm used: clustering coefficient, triangle participation, eigenvector centrality, and expansion. In relation to the first two, this is not unexpected as these characteristics have previously been shown to be broadly indicative of good community structure (Harenberg et al. 2014). One could conjecture that nearly all community finding approaches would try to preserve cliques (accounting for the importance of clustering coefficient and triangle participation). In fact, cliques have often been used as seeds for constructing larger communities in the context of quite different community detection algorithms (Palla et al. 2005; Lee et al. 2010). Meanwhile, it seems reasonable that a node with many links to nodes in other communities relative to the number of nodes in its own community would be harder to classify, as it likely lies on the periphery of its community, close to one or more others (accounting for the importance of expansion). At a surface level, the prominence of eigenvector centrality is more surprising, especially given the level of its performance. This centrality measure has similarities to PageRank (Page et al. 1999), where high values correspond to nodes at short distances from many high degree nodes. Nodes within a community's core are more likely to have high degree and to be a short distance from other high degree nodes, with edges that connect other nodes within the community. The relationship between eigenvector centrality and regions of high density within the core versus periphery of a network was recently highlighted by Jayne Bienenstock and Bonacich (2021). Thus, in our case high values of eigenvector centrality might correspond to an increased chance that this node forms a part of the stable community core, rather than being an unstable node on a community's periphery. The results of our experiment with regards to changing mixing parameters \(\mu\) (RQ3) indicate that these four features remain prominent. There is some evidence as well that the other features diminish in prominence as \(\mu\) increases and the communities become more difficult to find. Thus, the same features are involved for explaining why a node is part of a stable core or changes communities between runs, and all become statistically significant at higher mixing parameters. Further investigation is required to understand why the important features consistently performed the best, and to examine the relative differences between them across other community finding algorithms.
Exp. 2: pairwise community membership
As mentioned in "Exp. 1: node feature experiment" section, the 0.4-LPA-node-pair experiment is omitted here as LPA classified the entire graph as one community on a number of occasions at the higher mixing parameter level. As with the node feature experiments, labelling all pairs directly as "same community" or "different community" results in imbalanced classes. However, we have vastly more data for the pairs of nodes than for the single nodes. Therefore, we propose undersampling both classes by randomly selecting the same number of "same community" and "different community" pairs from the available data.
We choose to undersample randomly here rather than "strategically" since there are no pairs of nodes close to the threshold of 0.5 between "same" and "different" community, but choosing the highest and lowest values leads to a classification problem which is too easy. We select 1000 training examples for each class. As with our previous experiment, we found that two features were consistently important across the three community finding algorithms: cosine similarity and the Jaccard coefficient (displayed below in Fig. 4). We also found that the maximum edge centrality along the shortest path became more important at higher mixing parameter levels. This varied a little by algorithm; for Louvain it became important even at the lowest mixing parameter level of 0.2, whereas for Infomap and LPA it did not become important until the mixing parameter of 0.3.
Fig. 4: Results of the pair feature experiments. Plots are of permutation importance of the metrics. Mean indicated as a black dot and median as a red dot. Lines indicate 95% bootstrapped confidence intervals.
We focus on the significant difference between these features. For a complete set of results, we refer to the supplementary material. The pairwise Wilcoxon tests confirmed that all three were significantly different across all experiments, including for max edge centrality at the \(\mu = 0.2\) level despite the small effect size. In contradiction to hypothesis H2, all algorithms performed similarly, with the most important features being Jaccard and cosine similarity. Both features compare the neighbourhoods of the selected nodes. Their importance is supported by the local consistency assumption (Zhou et al. 2004), frequently discussed in the context of instance-based classification, which asserts that neighbouring instances will frequently share similar characteristics. In the context of unsupervised community detection, this corresponds to neighbouring nodes belonging to the same community, rather than having the same class label. This result is also congruous with the result of the first experiment, which found similar features to be important. In response to RQ4, maximum edge centrality proved increasingly important as the \(\mu\)-level of the generated data increased. This measure, which is the maximum edge centrality measure along the shortest path between the two nodes, could be indicative of important edges that bridge two communities, i.e. the weak ties, and has been used in the past for divisive methods of community detection (Girvan and Newman 2002). The increased importance at higher values for the mixing parameter could be explained by how important local information is to determining if two nodes are within the same community. If \(\mu\) is low, communities are well separated and local information almost completely describes if two nodes are in the same community. However, as \(\mu\) increases, local information diminishes in importance. Instead, global information, such as determining if the path between the two nodes likely contains an edge that lies between communities, becomes critical in determining whether nodes belong to the same community. Further investigation is required to understand why these features consistently performed the best, and to examine the relative differences between them across other community finding algorithms.
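As an illustration of how the max edge centrality feature discussed above can be obtained in practice, the following NetworkX sketch computes the largest edge betweenness centrality along a shortest path between two nodes. The helper name is ours; in practice the edge centralities would be computed once per graph and reused.

```python
import networkx as nx

# Minimal sketch (helper name is ours): maximum edge betweenness centrality
# along a shortest path between nodes i and j.
def max_edge_centrality(G, i, j, edge_bc=None):
    if i == j:
        return 0.0
    if edge_bc is None:
        edge_bc = nx.edge_betweenness_centrality(G)  # expensive: compute once per graph
    path = nx.shortest_path(G, i, j)
    edges = zip(path[:-1], path[1:])
    # Undirected edge keys may be stored in either orientation, so check both.
    return max(edge_bc.get((u, v), edge_bc.get((v, u), 0.0)) for u, v in edges)
```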
Discussion and limitations
General results discussion
Our hypothesis for both experiments was that the important metrics would be dependent on the community finding algorithm; however, the same metrics were identified consistently. This indicates that there are common metrics that can be used to explain these phenomena, at least when producing explanations on the same dataset for the three algorithms tested. As our study is limited to networks generated by the LFR algorithm, the common metrics of importance could be indicative of structure produced by this method. Further work should be carried out on other datasets to see how this affects the metrics of importance. If the community finding algorithm can be taken into account, then these important metrics can also be weighted in a way that is in line with the algorithm and degree of mixing of the communities. To validate the results of these experiments, we also ran them using Shapley values in place of permutation importance. Shapley values are well known among the explainability community, and are known to have mathematically desirable properties for producing explanations. Since our power analysis was performed using values of permutation importance, we report these as our main results, and report the results with Shapley values in the supplementary material. In the case of the node experiments, we see clearly that for Louvain and LPA, the same four node features are more important with increasing \(\mu\) value for Shapley values as we saw with permutation importance: clustering coefficient, expansion, eigenvector centrality, and triangle participation. With Infomap, there is some variation; the same four features are seen to be important, though not at all \(\mu\) values. In addition, \(E_{in}\) is shown to be important, as are degree and closeness centrality at the lower \(\mu\) values. In the case of the node-pair experiments, the same trends are seen for Shapley values as for permutation importance across all algorithms. Jaccard and cosine similarity are consistently the most important, with max edge centrality increasing in importance with rising \(\mu\) value. The features used in both experiments, which are well understood by the network analysis community, can be used to gain a greater understanding of community structure in online social networks. For public health applications, these metrics may be used to understand phenomena such as social contagion (Brown et al. 2017; Valente and Yon 2020) and to plan interventions (Valente 2012), but their practical use remains future work. In future studies, we plan on integrating visualisation methods and performing studies to ensure that approaches such as the ones proposed here have impact in explaining network phenomena to address real-world user needs. We envisage these results could be used with a visualisation system where the communities assigned by an algorithm can be explored by selecting individual nodes or pairs of nodes to understand their community assignment. When a node flips between different communities on consecutive runs of the community finding algorithm, important feature values such as those identified in this work could be visually reported and compared relative to other nodes in the same or in different communities for further study by an expert. Consensus clustering (Lancichinetti and Fortunato 2012b) is a way of dealing with nodes that are difficult to classify: run the community finding algorithm many times and determine the average result of these runs.
Given that this study indicates that the metrics to determine if nodes are easy or hard to cluster by community finding algorithms are consistent across algorithms, the results of consensus clustering approaches could be augmented with these metrics to help determine which community these nodes should be clustered into, but this remains an area of future work. Also, values for these metrics could be used to seed the stable core of a community and then find other nodes that are less easy to cluster. This approach could lead to other partitioning algorithms or potentially overlapping community finding algorithms where "hard to cluster" nodes are partially contained by multiple communities. However, the effectiveness of such an approach would still need to be evaluated. Although partitioning algorithms are usually used to find communities in networks in public health settings, overlapping community finding algorithms (Lancichinetti et al. 2011; Palla et al. 2005) can assign a single node to multiple communities. The studies that we present here suggest metrics used in social network analysis can be used to explain partitioning algorithms. In future work, it would be interesting to see if these, or other metrics, extend to overlapping community finding. Although there were benefits to our use of synthetic LFR data, these networks are ultimately an approximation for real data. As discussed previously, the use of real data for this analysis would have been tricky due to the lack of large datasets of networks with consistent structure over which we could draw statistically significant conclusions. Additionally, we would not have been able to vary parameters such as the mixing parameter \(\mu\) to observe their effect on the results. Nevertheless, it is a limitation of this study that we are only able to confirm these results across a dataset of synthetic networks based on the parameters specified in an important community finding experiment (Lancichinetti and Fortunato 2009) on three community finding algorithms. In future work, the performance of these informative features could be verified on a smaller number of real networks, to investigate whether this affects the metrics of greatest importance. Even in the case where the metrics of greatest importance are heavily dependent on the dataset, the methodology presented here could beneficially be applied to new settings in order to gain insight into complex networks relevant to different applications, such as social, biological, and financial (Avdjiev et al. 2019; Giudici et al. 2017) networks. Another area for future work would be to consider applying the more recent Shapley-Lorenz approach to explainability (Giudici and Raffinetti 2021) in place of using Shapley values or permutation importance; this approach is well-suited to settings such as ours, where the response variable is categorical.
This paper presents the results of two experiments designed to move towards explainable community finding and explainable network analysis in general. Despite the different methods used by the algorithms in our study, consistent social network analysis metrics can be used to explain community structure in a post-hoc manner for these three algorithms on LFR networks. The results of our study indicate that commonly understood metrics used for network analysis can be used by an expert to explain community structure, bringing benefits to application areas where network data is prevalent, from computational social science (Lazer et al.
2009) to public health studies (Luke and Harris 2007).

Data and code implementations for the main experiments are available in the GitHub repository at https://github.com/sophiefsadler/community_finding. A zip file containing results and Python scripts for statistical analysis can also be found on OSF at: https://osf.io/g4bwt/.

References
Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6:52138–52160
Avdjiev S, Giudici P, Spelta A (2019) Measuring contagion risk in international banking. J Financ Stab. https://doi.org/10.1016/j.jfs.2019.05.014
Bach S, Binder A, Montavon G, Klauschen F, Müller K-R, Samek W (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7):1–46. https://doi.org/10.1371/journal.pone.0130140
Ben-David S, Pál D, Simon HU (2007) Stability of k-means clustering. In: Bshouty NH, Gentile C (eds) Learning theory, pp 20–34
Blondel VD, Guillaume J-l, Lefebvre E (2008) Fast unfolding of communities in large networks, pp 1–12. arXiv:0803.0476v2
Bonacich P (1986) Power and centrality: a family of measures. Am J Sociol 92(5):1170–1182
Bothorel C, Brisson L, Lyubareva I (2020) How to choose community detection methods in complex networks: the case study of Ulule crowdfunding platform
Brandes U (2001) A faster algorithm for betweenness centrality. J Math Sociol 25(2):163–177
Brown RC, Fischer T, Goldwich AD, Keller F, Young R, Plener PL (2017) #cutting: non-suicidal self-injury (NSSI) on Instagram. Psychol Med 48(2):337–346. https://doi.org/10.1017/s0033291717001751
Chakraborty T, Srinivasan S, Ganguly N, Bhowmick S, Mukherjee A (2013) Constant communities in complex networks. Nat Sci Rep 3(1):1825. https://doi.org/10.1038/srep01825
Chakraborty T, Dalmia A, Mukherjee A, Ganguly N (2017) Metrics for community analysis: a survey. ACM Comput Surv. https://doi.org/10.1145/3091106
Dao VL, Bothorel C, Lenca P (2020) Community structure: a comparative evaluation of community detection methods. Netw Sci 8(1):1–41. https://doi.org/10.1017/nws.2019.59
Flake GW, Lawrence S, Giles CL (2000) Efficient identification of web communities. In: Proceedings of the Sixth ACM SIGKDD international conference on knowledge discovery and data mining (KDD '00), pp 150–160
Fong R, Vedaldi A (2017) Interpretable explanations of black boxes by meaningful perturbation. CoRR. arXiv:1704.03296
Fortunato S (2010) Community detection in graphs. Phys Rep 486(3–5):75–174
Francisco AP, Oliveira AL (2011) On community detection in very large networks. In: da Costa FL, Evsukoff A, Mangioni G, Menezes R (eds) Complex networks. Springer, Berlin, pp 208–216
Freeman LC (1979) Centrality in networks: I. conceptual clarification. Soc Netw 1:215–239
Gesell SB, Barkin SL, Valente TW (2013) Social network diagnostics: a tool for monitoring group interventions. Implement Sci. https://doi.org/10.1186/1748-5908-8-116
Ghasemian A, Hosseinmardi H, Clauset A (2019) Evaluating overfit and underfit in models of network community structure. IEEE Trans Knowl Data Eng. https://doi.org/10.1109/tkde.2019.2911585
Girvan M, Newman MEJ (2002) Community structure in social and biological networks. Proc Natl Acad Sci U S A 99(12):7821–7826. https://doi.org/10.1073/pnas.122653799. arXiv:01121
Giudici P, Raffinetti E (2021) Shapley-Lorenz explainable artificial intelligence. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2020.114104
Giudici P, Sarlin P, Spelta A (2017) The interconnected nature of financial systems: direct and common exposures. J Bank Finance. https://doi.org/10.1016/j.jbankfin.2017.05.010
Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D (2018) A survey of methods for explaining black box models. ACM Comput Surv (CSUR) 51(5):1–42
Harenberg S, Bello G, Gjeltema L, Ranshous S, Harlalka J, Seay R, Padmanabhan K, Samatova N (2014) Community detection in large-scale networks: a survey and empirical evaluation. Wiley Interdiscip Rev Comput Stat 6(6):426–439
Hunter RF, McAneney H, Davis M, Tully MA, Valente TW, Kee F (2015) Hidden social networks in behavior change interventions. Am J Public Health 105(3):513–516. https://doi.org/10.2105/AJPH.2014.302399
Hunter RF, de la Haye K, Murray JM, Badham J, Valente TW, Clarke M, Kee F (2019) Social network interventions for health behaviours and outcomes: a systematic review and meta-analysis. PLoS Med 16(9):1–25. https://doi.org/10.1371/journal.pmed.1002890
Jaccard P (1912) The distribution of flora in the alpine zone. New Phytol 11(2):37–50
Jayne Bienenstock E, Bonacich P (2021) Eigenvector centralization as a measure of structural bias in information aggregation. J Math Sociol 46:1–19
Keane MT, Kenny EM (2019) How case-based reasoning explains neural networks: a theoretical analysis of XAI using post-hoc explanation-by-example from a survey of ANN-CBR twin-systems. In: Proceedings of international conference on case-based reasoning (ICCBR'19). Springer, pp 155–171
Lancichinetti A, Fortunato S (2009) Community detection algorithms: a comparative analysis. Phys Rev E - Stat Nonlinear Soft Matter Phys 80(5):1–12. https://doi.org/10.1103/PhysRevE.80.056117. arXiv:0908.1062
Lancichinetti A, Fortunato S (2012a) Consensus clustering in complex networks. Sci Rep 2(1):1–7
Lancichinetti A, Fortunato S (2012b) Consensus clustering in complex networks. Nat Sci Rep 2:336
Lancichinetti A, Fortunato S, Radicchi F (2008) Benchmark graphs for testing community detection algorithms. Phys Rev E - Stat Nonlinear Soft Matter Phys 78(4):1–6. https://doi.org/10.1103/PhysRevE.78.046110. arXiv:0805.4770
Lancichinetti A, Radicchi F, Ramasco JJ, Fortunato S (2011) Finding statistically significant communities in networks. PLoS ONE 6(4):1–18. https://doi.org/10.1371/journal.pone.0018961
Lazer D, Pentland AS, Adamic L, Aral S, Barabasi AL, Brewer D, Christakis N, Contractor N, Fowler J, Gutmann M et al (2009) Life in the network: the coming age of computational social science. Science 323(5915):721
Lee A, Archambault D (2016) Communities found by users—not algorithms. In: Proceedings of the 2016 CHI conference on human factors in computing systems, pp 2396–2400. https://doi.org/10.1145/2858036.2858071
Lee C, Reid F, McDaid A, Hurley N (2010) Detecting highly overlapping community structure by greedy clique expansion. In: Proceedings of the 4th international workshop on social network mining and analysis (SNA-KDD), pp 33–42
Lipton ZC (2018) The mythos of model interpretability: in machine learning, the concept of interpretability is both important and slippery. Queue 16(3):31–57
Loyola-Gonzalez O, Gutierrez-Rodríguez AE, Medina-Pérez MA, Monroy R, Martínez-Trinidad JF, Carrasco-Ochoa JA, Garcia-Borroto M (2020) An explainable artificial intelligence model for clustering numerical databases. IEEE Access 8:52370–52384
Luke DA, Harris JK (2007) Network analysis in public health: history, methods, and applications. Annu Rev Public Health 28:69–93
Lundberg SM, Lee S-I (2017) A unified approach to interpreting model predictions. In: Proceedings of the 31st international conference on neural information processing systems. NIPS'17. Curran Associates Inc., Red Hook, pp 4768–4777
Morichetta A, Casas P, Mellia M (2019) EXPLAIN-IT: towards explainable AI for unsupervised network traffic analysis. In: Proceedings of 3rd ACM CoNEXT workshop on big data, machine learning and artificial intelligence for data communication networks, pp 22–28
Page L, Brin S, Motwani R, Winograd T (1999) The pagerank citation ranking: bringing order to the web. Technical Report 1999-66, Stanford InfoLab
Palla G, Derényi I, Farkas I, Vicsek T (2005) Uncovering the overlapping community structure of complex networks in nature and society. Nature 435(7043):814–818. https://doi.org/10.1038/nature03607
Park M, Lawlor MC, Solomon O, Valente TW (2020) Understanding connectivity: the parallax and disruptive-productive effects of mixed methods social network analysis in occupational science. J Occup Sci. https://doi.org/10.1080/14427591.2020.1812106
Peel L, Larremore DB, Clauset A (2017) The ground truth about metadata and community detection in networks. Sci Adv. https://doi.org/10.1126/sciadv.1602548
Radicchi F, Castellano C, Cecconi F, Loreto V, Parisi D (2004) Defining and identifying communities in networks. PNAS 101(9):2658–2663
Raghavan UN, Albert R, Kumara S (2007) Near linear time algorithm to detect community structures in large-scale networks. Phys Rev E 76:036106. https://doi.org/10.1103/PhysRevE.76.036106
Ribeiro MT, Singh S, Guestrin C (2016) "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining, pp 1135–1144. https://doi.org/10.1145/2939672.2939778. arXiv:1602.04938v3
Rosvall M, Bergstrom CT (2008) Maps of random walks on complex networks reveal community structure. Proc Natl Acad Sci U S A 105(4):1118–1123. https://doi.org/10.1073/pnas.0706851105. arXiv:0707.0609
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206–215
Saarela M, Jauhiainen S (2021) Comparison of feature importance measures as explanations for classification models. SN Appl Sci 3(2):1–12
Shapley LS (2016) A value for n-person games, chap 17. In: Kuhn HW, Tucker AW (eds). Princeton University Press, pp 307–318. https://doi.org/10.1515/9781400881970-018
Shi J, Malik J (2000) Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell 22(8):888–905
Strehl A (2002) Relationship-based clustering and cluster ensembles for high-dimensional data mining. Master's Thesis, The University of Texas at Austin
Sundararajan M, Taly A, Yan Q (2017) Axiomatic attribution for deep networks. In: Proceedings of the 34th international conference on machine learning—volume 70. ICML'17, pp 3319–3328
Valente TW (2012) Network interventions. Science 337(6090):49–53. https://doi.org/10.1126/science.1217330
Valente TW, Yon GGV (2020) Diffusion/contagion processes on social networks. Health Educ Behav 47(2):235–248. https://doi.org/10.1177/1090198120901497
Valente TW, Fujimoto K, Unger JB, Soto DW, Meeker D (2013) Variations in network boundary and type: a study of adolescent peer influences. Soc Netw 35(3):309–316. https://doi.org/10.1016/j.socnet.2013.02.008
Valente TW, Palinkas LA, Czaja S, Chu K-H, Brown CH (2015) Social network analysis for program implementation. PLoS ONE. https://doi.org/10.1371/journal.pone.0131712
von Luxburg U (2010) Clustering stability: an overview. Found Trends Mach Learn 2(3):235–274. https://doi.org/10.1561/2200000008
Wachter S, Mittelstadt B, Russell C (2017) Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv J Law Technol 31:841
Watts DJ, Strogatz SH (1998) Collective dynamics of small-world networks. Nature 393:440–442
Ying R, Bourgeois D, You J, Zitnik M, Leskovec J (2019) GNNExplainer: a tool for post-hoc explanation of graph neural networks. CoRR. arXiv:1903.03894
Yuan H, Tang J, Hu X, Ji S (2020) XGNN: towards model-level explanations of graph neural networks. CoRR. arXiv:2006.02587
Zhou D, Bousquet O, Lal TN, Weston J, Schölkopf B (2004) Learning with local and global consistency. In: Advances in neural information processing systems, pp 321–328

Acknowledgements
We thank the extensive support provided by the AIMLAC CDT team and its administrators, in addition to the computer science department at Swansea University's Computational Foundry. This work is supported by the UKRI AIMLAC CDT, funded by Grant EP/S023992/1 and by the UKRI EPSRC Grant EP/V033670/1. The research was also partly supported by Science Foundation Ireland (SFI) under Grant No. SFI/12/RC/2289_P2.

Author information
Swansea University, Swansea, UK: Sophie Sadler & Daniel Archambault
School of Computer Science, University College Dublin, Dublin, Ireland: Derek Greene
Correspondence to Sophie Sadler, Derek Greene or Daniel Archambault.

Contributions
SS proposed the methods, implemented the code, and performed the experiments. SS, DG and DA wrote, read, and approved the manuscript.

Supplementary material is made available containing detailed descriptions and results plots of statistical analysis.

Sadler, S., Greene, D. & Archambault, D. Towards explainable community finding. Appl Netw Sci 7, 81 (2022). https://doi.org/10.1007/s41109-022-00515-6

Keywords: Graph mining, Community detection, Explainability
Past Probability Seminars Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements then please send an email to [email protected]

January 31, Oanh Nguyen, Princeton
Title: Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title: When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.

February 7, Yu Gu, CMU
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.

February 14, Timo Seppäläinen, UW-Madison
Title: Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

February 21, Diane Holcomb, KTH
Title: On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Probability related talk in PDE Geometric Analysis seminar: Monday, February 22, 3:30pm to 4:30pm, Van Vleck 901, Xiaoqin Guo, UW-Madison
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
Please note the unusual day and time.
Title: Functional Limit Laws for Recurrent Excited Random Walks
Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison
Title: Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).

April 4, Philip Matchett Wood, UW-Madison
Title: Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.

April 11, Eviatar Procaccia, Texas A&M
Title: Stabilization of Diffusion Limited Aggregation in a Wedge
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.

April 18, Andrea Agazzi, Duke
Title: Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.

April 25, Kavita Ramanan, Brown
Title: Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.

Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title: Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer's theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.

Tuesday, May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
Please note the unusual day.
Title: The directed landscape
Abstract: I will describe the construction of the full scaling limit of (Brownian) last passage percolation: the directed landscape. The directed landscape can be thought of as a random scale-invariant `directed' metric on the plane, and last passage paths converge to directed geodesics in this metric. The directed landscape is expected to be a universal scaling limit for general last passage and random growth models (i.e. TASEP, the KPZ equation, the longest increasing subsequence in a random permutation). Joint work with Janosch Ortmann and Balint Virag.
Challenges of the epidemiological and economic burdens associated with hypertension in middle income countries: evidence from Mexico
Armando Arredondo1,2, Silvia Magali Cuadra2 & Maria Beatriz Duarte2

Abstract
In order to identify the challenges resulting from hypertension in a middle income country, this study has developed probabilistic models to determine the epidemiological and economic burden of hypertension in Mexico. Considering a population base of 654,701 reported cases of adults with hypertension, we conducted a longitudinal analysis in order to identify the challenges of epidemiological changes and health care costs for hypertension in the Mexican health system. The cost-evaluation method used was based on the instrumentation technique. To estimate the epidemiological changes for 2015–2017, probabilistic models were constructed according to the Box-Jenkins technique. Regarding changes in expected cases for 2015 vs. 2017, an increase of 12 % is expected (p < 0.001). Comparing the economic impact in 2015 versus 2017 (p < 0.001), there is a 23 % increase in financial requirements. The total amount for hypertension in 2016 (US dollars) will be $6,306,685,320. Of these, $2,990,109,035 will be direct costs and $3,316,576,285 indirect costs. If the risk factors and care models remain as they are currently in the health system, the financial consequences will have a major impact on out-of-pocket users and, in order of importance, on social security providers and on public assistance providers.

Background
In middle-income countries (MICs), hypertension is a leading public health challenge for the health system and for society. Hypertension is a health problem that requires an integrated approach. The increasing trend in the prevalence of hypertension in recent years has been constant, and it seems that the increase in hypertension cases will continue [1]. In Mexico, the prevalence of hypertension increased from 26.6 % in 1993 to 31.8 % in 2012 [2]. The impact of this disease is evident not only on mortality, but also on morbidity and the quality of life. This morbidity, compared to other chronic diseases, represents an important burden for individuals and their families, as well as for the health system and society in general [3]. The observed and forecasted changes in the incidence of hypertension in adults will generate constant increases in the need for health services [4]. The observed changes represent a high economic burden for health systems in the medium and long term [5]. To date, in middle-income countries, there are no detailed studies of the financial requirements for health services needed for hypertension in the coming years [6]. Thus, having no knowledge of the costs, it is impossible to develop resource allocation patterns for a more efficient use of hypertension-related expenditures [7]. The constantly increasing costs of medical care, and the unknown costs of ambulatory and hospital case management, justify the development and use of indicators for the increasing changes in the demand and costs of case management in the future [8]. The purpose of this study is to identify expected cases and financial requirements for hypertension during the 2015–2017 period. We highlight the implications of the economic impact of hypertension on users' pockets and on the providers of health services as one of the main challenges to be solved by health planners in resource allocation for public health interventions.
Methods
An evaluative study was carried out, based on a longitudinal design, to determine the costs, epidemiological changes and financial requirements to deliver health care for hypertension during the 2015–2017 period. The three-year period was chosen for several reasons: projections of more than 3 years may generate uncertainty in the number of expected cases and hence are not recommended for strategic planning; the health budget is revised, adjusted and approved for a three-year period by members of the Budget Commission of the legislative power; and the inflationary consumer price index, applied to the direct health costs, is estimated by the Banco de Mexico for consecutive three-year periods. Thus, to estimate indirect health costs under the human capital focus, it is preferable to limit the study to short-term periods [9, 10].

Whereas analysis of economic impact relates to changes in the demand for services required for expected cases, case demand concerns the number of cases that request services and are under treatment and annual monitoring at each institution of the Mexican health system. The studied institutions belong to the public health sector of the Mexican health system: SSA (services for the uninsured population) and IMSS/ISSSTE (health care services for the insured population). (For more details on the health system in Mexico and the institutions included in the study, see Appendix 1.) The basic protocol of this project was reviewed and ethically approved by the Committee on Health Research of the National Council of Science and Technology [11].

The demand for expected cases was estimated based on the following methodological steps [12] (see the illustrative sketch below):
Step 1. Modeling: a) identification of the tentative model for the time series, for use in the prediction; b) checking the quality of the information and number of observations; and c) analysis of the autocorrelation of the historical observations.
Step 2. Estimation: a) determination of the estimates of the parameters, using the least squares criterion; b) application of an iterative procedure searching for a sum of squares function, previously specifying the preliminary estimates of the unknown parameters.
Step 3. Diagnostic check: a) adequacy test, after the models were fitted to the data; b) analysis of the difference between the observed and expected results; c) application of the Box-Pierce chi-squared test.
Step 4. Prediction: a) selection and design of the definitive model; b) data processing; c) prediction of the future values of the time series.

The time series analysis was based on the total annual cases observed in the last 18 years in each studied institution. In our study we delimit the record to the 1996–2013 period, because before 1996 quality standards in the records were not high: there were no well trained personnel for the differentiated recording of cardiovascular diseases according to the International Classification of Diseases. The population base of the study included 654,701 cases of hypertension, medically diagnosed and reported in the year prior to the study (2014). This information was obtained from the health impairment statistics bulletin of the National Health System [13]. This is anonymous information that is used for the purpose of analysis. Direct costs refer to the average medical costs of annual case management per patient and for 3 main complications.
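The four Box-Jenkins steps listed above can be illustrated with standard time-series tooling. The sketch below is not the authors' code (which is not reproduced in the article); it uses the Python statsmodels library on a synthetic monthly series, and all values, variable names and the seasonal period of 12 are illustrative assumptions only.

```python
# Minimal Box-Jenkins sketch on a synthetic monthly series (illustrative only;
# neither the registry data nor the paper's exact estimates are reproduced here).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)

# Stand-in for monthly reported hypertension cases, 1996-2013: upward trend plus
# a yearly seasonal peak plus noise (hypothetical numbers).
idx = pd.date_range("1996-01-01", "2013-12-01", freq="MS")
t = np.arange(len(idx))
cases = 2000 + 8 * t + 300 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 60, len(idx))
y = pd.Series(np.log(cases), index=idx)  # log-transform to reduce asymmetry

# Step 1 (modeling): difference once non-seasonally and once seasonally,
# i.e. Z_t = Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13}, and inspect the autocorrelations.
z = y.diff(1).diff(12).dropna()
print("ACF of Z_t:", np.round(acf(z, nlags=13), 2))

# Step 2 (estimation): fit a seasonal ARIMA with non-seasonal and seasonal MA(1) terms.
res = SARIMAX(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12), trend="c").fit(disp=False)
print(res.params)

# Step 3 (diagnostic check): Box-Pierce / Ljung-Box test on the residuals.
print(acorr_ljungbox(res.resid, lags=[12], boxpierce=True))

# Step 4 (prediction): forecast 36 months ahead and back-transform to case counts.
forecast = np.exp(res.get_forecast(steps=36).predicted_mean)
print(forecast.groupby(forecast.index.year).sum().round())  # expected annual totals
```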
Information on health care services was obtained from the management of standardized cases, adjusted by type of institution. The standardization and adjustment by type of institution were performed with the application of a discount rate of 2 % annually, based on the cost of annual average case handling and the cost of inputs (human resources, medicines, healing materials, utilities, furniture and medical equipment) by type of institution. In order to control the costs attributable to hypertension, the cost-evaluation method was designed according to an instrumentation technique that identified production and supply functions for each case management, validated by consensus of experts from each institution. For each production function (medical visit, hospitalization, diagnostic studies, etc.) to be evaluated, management of the average case was defined, based on the disease's natural history and the results from a shadow study of all stages of the process which patients go through when requesting health services for hypertension [14].

The indirect costs were determined using the human capital model developed for chronic diseases in Latin America [15]. This model was adjusted and designed to include three categories of costs attributable to hypertension: premature mortality, and permanent and temporary disability. To calculate the financial consequences of changes in demand by type of institution, in optimal scenarios and controlling for the effects of inflation, an inflationary index projected to 2015–2017 was developed and applied, based on the latest Banco de Mexico consumer price index [16]. In order to have an overview of the annual direct and indirect costs of hypertension by item, the analysis was restricted to 2016, since it corresponds to one-half of the projected time period.

Results
The resulting model for estimating the expected cases of hypertension was a model with a moving average operator of order 1. The estimated model of prognosis of demand refers to the 2015–2017 period. Historical data were the annual numbers of hypertension cases reported over the 1996–2013 period. These numbers of cases, denoted Y1, Y2, ..., Y18, showed an increasing trend throughout the study period, resulting in a slope coefficient of the trend analysis that was significantly different from zero (t = -.12). The data also showed a seasonal pattern with an increasingly higher peak each year, which coincides with the seasonal factor. Data were log-transformed to reduce the observed asymmetry; the natural logarithm of the number of cases is again denoted by Y1, Y2, ..., Y18. One can see that the series is non-stationary, given the slow decrease in the autocorrelation function. After applying several seasonal and non-seasonal transformations, we inferred that the seasonal time series can be explained by the transformation
$$ Z_t = Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13}, $$
which applies one non-seasonal and one seasonal difference. Upon analyzing the autocorrelation function, it seems that it cuts the borderline of statistical significance of zero after lag 12. Some peaks are observed for smaller lags, indicating the need to include a seasonal moving average operator together with some autoregressive operator and/or a moving average operator of smaller order. Based on the analysis of both functions, the following models are proposed:

MODEL 1. Seasonal moving average operator, order 1:
$$ Z_t = \delta + \varepsilon_t - \theta_{1,12}\,\varepsilon_{t-12} $$

MODEL 2. Seasonal moving average operator, order 1, and non-seasonal moving average operator, order 1:
$$ Z_t = \delta + \varepsilon_t - \theta_{1,12}\,\varepsilon_{t-12} - \theta_1\,\varepsilon_{t-1} + \theta_1\,\theta_{1,12}\,\varepsilon_{t-13} $$

MODEL 3. The same as model 2, but including the mean.

MODEL 4. Seasonal autoregressive operator, order 1:
$$ Z_t = \delta + \phi_{1,12}\,Z_{t-12} + \varepsilon_t $$

MODEL 5. Non-seasonal autoregressive operator, order 1, and seasonal autoregressive operator, order 1:
$$ Z_t = \delta + \phi_{1,12}\,Z_{t-12} + \phi_1\,Z_{t-1} - \phi_1\,\phi_{1,12}\,Z_{t-13} + \varepsilon_t $$

To select among the possible models, it was necessary to estimate each one's parameters and to examine its properties. Model 1, which included only the seasonal moving average operator, fits the logarithms of the data with reasonably good results, but leaves a residual autocorrelation different from zero at lag 1, indicating that the model may be improved by adding an autoregressive and/or non-seasonal moving average operator. Besides the seasonal moving average operator, model 2 included the non-seasonal moving average operator, which gave a quite significant improvement: the standard deviation decreased from 0.4798 to 0.38007, and no residual autocorrelations different from zero remained. The discussion of model 3 is carried out after discussing the predictions. Models 4 and 5, which included seasonal and non-seasonal autoregressive operators, were also tested and were discarded because they did not meet the adequacy conditions (substantially greater values of the Box-Pierce chi-square statistic and of the standard deviation than in the other two models). The model that adequately depicted the series of hypertension cases was
$$ Z_t = \delta + \varepsilon_t - \theta_{1,12}\,\varepsilon_{t-12} - \theta_1\,\varepsilon_{t-1} + \theta_1\,\theta_{1,12}\,\varepsilon_{t-13}, $$
with the prediction equation
$$ \hat{Y}_t = \delta + Y_{t-1} + Y_{t-12} - Y_{t-13} - \theta_{1,12}\,\varepsilon_{t-12} - \theta_1\,\varepsilon_{t-1} + \theta_1\,\theta_{1,12}\,\varepsilon_{t-13}. $$
Therefore, the proposed model was model 3, which besides the moving average and seasonal moving average operators includes the mean estimator and the variables mentioned in the methodology section. The outcomes of this model were better than those obtained using model 2 (lower standard error), whereas the t value for the mean estimator indicates that the trend is negative and significantly different from zero (see Table 1).

Table 1 Statistical results of models 2 and 3

The findings on expected hypertension cases during the study period show constant incremental trends in the population served at the 3 major health institutions in Mexico (see Table 2). These trends are stronger in the case of the insured population, showing an increase for 2015–2017 (p < 0.001).
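As a companion to the model-selection discussion above, the sketch below fits specifications loosely corresponding to Models 1-3 (seasonal MA(1) alone, with a non-seasonal MA(1) added, and with an additional mean term) and compares residual standard deviations, AIC and the Box-Pierce statistic. It runs on the same synthetic series as the earlier sketch; it is a hypothetical illustration, not a reproduction of the estimates summarized in Table 1.

```python
# Compare candidate seasonal-MA specifications (illustrative only, synthetic data).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
idx = pd.date_range("1996-01-01", "2013-12-01", freq="MS")
t = np.arange(len(idx))
y = pd.Series(np.log(2000 + 8 * t + 300 * np.sin(2 * np.pi * t / 12)
                     + rng.normal(0, 60, len(idx))), index=idx)

candidates = {
    "Model 1: seasonal MA(1) only":     dict(order=(0, 1, 0), seasonal_order=(0, 1, 1, 12), trend="n"),
    "Model 2: + non-seasonal MA(1)":    dict(order=(0, 1, 1), seasonal_order=(0, 1, 1, 12), trend="n"),
    "Model 3: as Model 2, with a mean": dict(order=(0, 1, 1), seasonal_order=(0, 1, 1, 12), trend="c"),
}

for name, spec in candidates.items():
    res = SARIMAX(y, **spec).fit(disp=False)
    bp = acorr_ljungbox(res.resid, lags=[12], boxpierce=True)  # includes Box-Pierce columns
    print(f"{name:34s} sigma={np.sqrt(res.params['sigma2']):.4f} "
          f"AIC={res.aic:8.1f} Box-Pierce p={bp['bp_pvalue'].iloc[0]:.2f}")
```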
Adding up the cases that increase annually by type of institution, the result is 72,895 cases for the insured population (IMSS and ISSSTE) vs 38,109 cases for the uninsured population (SSA).

Table 2 Expected cases for hypertension in Mexico for the period 2015–2017 by type of institution

The results on case management costs and on the financial requirements to satisfy the demand for hypertension services in the coming years (see Tables 3 and 4) are new, pertinent and relevant evidence for an efficient management of hypertension in the near future in Mexico. Table 3 describes the main results of the expected costs and financial requirements for each year of the study (2015–2017) for the entire health system. Table 4 describes the expected economic burden only for 2016 as a cut-off point, differentiating costs by type of cost, type of institution and type of complication.

Table 3 Cost from epidemiological changes of hypertension expected for the years 2015, 2016, 2017 in Mexico (in US$)
Table 4 Direct, indirect and total costs (in US$) for health care service providers and users attributable to hypertension expected for the year 2016 in Mexico

Comparing the economic impact in 2015 with the forecast for 2017 (p < 0.001), a 21 % increase in financial requirements has been forecasted (see Table 2). The total cost for hypertension in 2016 is forecasted to be US $6,306,685,320 (see Table 3). This includes $2,990,109,035 in direct costs and $3,316,576,285 in indirect costs. The total costs for each institution are: $926,982,959 for the Ministry of Health, serving the uninsured population; $2,163,295,383 for the insured population (the Mexican Institute for Social Security, and the Institute for Social Security and Services for State Workers); and $3,216,406,987 for the out-of-pocket users. Nephropathy is the largest single contributor to the total cost of overall management ($1,036,073,067). This is followed by the costs of treating nonfatal myocardial infarction ($470,942,093) and nonfatal stroke ($376,753,221). The overall contribution of direct costs was 47 % of total costs and that of indirect costs 53 %. With regard to the four categories of estimated direct costs, the results were: consultations and diagnoses, 12 %; drugs, 14 %; hospitalizations, 11 %; and complications, 63 %. The distribution of the total costs of complications was as follows: nephropathy, 55 %; nonfatal myocardial infarction, 25 %; and nonfatal stroke, 20 %. Results for the three estimated categories of indirect costs were: mortality costs, 15 %; costs of permanently disabled patients, 74 %; and costs of temporarily disabled patients, 11 % (see Table 3). The institution for the insured population was found to have the highest average direct costs per managed case, as well as the highest economic impact of global management of hypertension for 2015–2017. Financial requirements for health care services for hypertension represent 19.5 % of the total budget assigned to the uninsured population, and 12.5 % of that allocated to the insured population.

Discussion
This study addressed the need to generate information that is necessary, relevant and pertinent for a more strategic planning of public health actions oriented towards a better management of chronic diseases such as hypertension. The estimation of expected cases and financial requirements for each year of the studied period and for each of the main institutions of the health system in Mexico represents one of its main contributions.
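The economic figures above combine the forecast demand with average case-management costs, an inflation index and human-capital estimates of indirect costs, as described in the Methods. The toy sketch below only illustrates that arithmetic; the case counts, unit costs and indirect-cost components in it are made-up placeholders, not values taken from Tables 3 and 4.

```python
# Toy projection of financial requirements: expected cases x average direct cost per
# managed case, inflated to the target year, plus indirect (human-capital) components.
# Every number below is hypothetical and is NOT taken from the study's tables.
inflation_rate = 0.04  # assumed annual medical-services inflation (price-index stand-in)

expected_cases_2016 = {"SSA (uninsured)": 260_000, "IMSS/ISSSTE (insured)": 330_000}
unit_direct_cost_2014 = {"SSA (uninsured)": 1_100.0, "IMSS/ISSSTE (insured)": 1_600.0}  # US$ per case

def direct_requirements(target_year: int, base_year: int = 2014) -> dict:
    """Direct financial requirements per provider, inflated from base_year to target_year."""
    factor = (1 + inflation_rate) ** (target_year - base_year)
    return {p: expected_cases_2016[p] * unit_direct_cost_2014[p] * factor
            for p in expected_cases_2016}

# Indirect costs under the human-capital approach (hypothetical totals, US$):
indirect_2016 = {"premature mortality": 450e6,
                 "permanent disability": 2_300e6,
                 "temporary disability": 350e6}

direct_2016 = direct_requirements(2016)
total = sum(direct_2016.values()) + sum(indirect_2016.values())
print({k: round(v / 1e6, 1) for k, v in direct_2016.items()}, "direct costs, US$ millions")
print("total 2016 burden (US$ millions):", round(total / 1e6, 1))
```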
The finding related to the high contribution of patients and families to the total expenditure for hypertension in Mexico is a fact that was previously unknown for this indicator. The constant increase in both the epidemiological changes and the economic burden of hypertension represents a very important finding that can be applied to the analysis of this problem in middle-income countries like Mexico, as well as in other Latin American countries, because of the great similarity of the health systems and of the epidemiological changes in chronic diseases in all these countries [17].

With respect to the differences in expected costs and financial requirements for the insured health subsystem vs. the uninsured subsystem, the differences in these results are mainly due to 3 reasons: the costs of inputs for social security institutions are higher, the population seeking care for hypertension is greater in uninsured care centers, and there are differences in standards of quality of care and input mix in each institution.

There are also several studies of epidemiological trends in recent years, but they are framed in very general terms and do not specify the disease or the expected epidemiological trends for the expected cases in the future. Our results have similarities, in terms of constant incremental pressure trends, with other case studies published by different authors [18, 19], particularly with the hypertension results of the last national health survey conducted in Mexico in 2013 [20]. Indeed, a tendency is expected for a constant increase in the required health care services and in costs, although the increase is greater for the insured population than for the uninsured population. On the other hand, there is much talk about the high health expenditures of patients in MICs, but the exact measurement of expenditures by families and patients with chronic health conditions like hypertension is not being considered. Our results certainly provide information about the high health expenditure by patients that is attributable to hypertension.

In addition to some problems listed in the methods section, this study also has several external limitations. First, in the epidemiological analysis of the frequency of cases, we only included cases of hypertension and co-morbidity of the 3 main complications that required health services for diagnosis, treatment and control in major health institutions in Mexico. In this sense, the analysis does not include patients who ignore their hypertension symptoms, or who know they are ill but cannot access health services for different reasons. Second, the main limitation of the time series analysis using the Box-Jenkins method is the great number of reports required, with high quality information on observed cases (15 years minimum), for a rigorous estimate of expected cases in future years. Our study covers an 18 year period (1996–2013), since before 1996 quality standards in the records were not high enough; however, there may be important limitations when trying to replicate our forecasting model in other middle income countries where the quality of the information may not be good enough in terms of the minimum number of annual observations that is required. With respect to indirect costs, another limitation of this study, based on the human capital focus, is the possible overestimation of indirect costs attributable to temporary disability when friction costs are not determined.
In the short-to-medium term, the indirect costs could be overestimated because patients may compensate for the loss of production in the long term if they return to their job, or their work may be carried out by their co-workers. In the long term, the indirect costs could be zero if the patient improves and returns to his/her job, or else if an unemployed worker took his/her place at work after a "friction period," in which case costs must be included that relate to searching for and training the unemployed worker in his/her new position; this depends on the unemployment rate and on disability work policies, such as those proposed by some authors [21]. However, in our study, indirect costs were limited to a 3-year period and it is not the objective of our study to disaggregate the effect of other variables.

Conclusions
This study estimated the changes in the number of expected hypertension cases during the 2015–2017 period in a middle income country. Direct and indirect costs were also identified for the management of an average hypertension case and its 3 main complications. The results are proposed as relevant information for decision-making in the planning and allocation of resources for health services for chronic diseases in Mexico. This is an example of what is happening in the management of hypertension in middle income countries. The study's results also reveal important evidence of the economic burden on users' pockets arising from hypertension management. In this sense, and in the context of the health reforms in MICs such as Mexico, aspects such as equity and access to health services are still an unresolved challenge.

Abbreviations
MICs: middle income countries; SSA: Ministry of Health (health services for the uninsured population); IMSS: Mexican Institute for Social Security; ISSSTE: Institute for Social Security and Services for State Workers

References
1. Arredondo A, Aviles R. Hypertension and its effects on the economy of the health system for patients and society: suggestions for developing. Am J Hypertens. 2014;27(4):335–6.
2. SSA. Encuesta Nacional Salud y Nutrición: Hipertensión. Tercera Edición, SSA, México D.F. 2014: 186–91.
3. Pacheco-Ureña A, Corona-Sapien C, Osuna-Ahumada M, Jiménez-Castellanos S. Prevalencia de hipertensión arterial, sobrepeso y obesidad en poblaciones urbanas del estado de Sinaloa, México. Rev Mex de Card. 2012;23(1):7–11.
4. National Institute of Public Health. Métodos de estimación de cambios epidemiológicos y demanda esperada de enfermedades crónico-degenerativas. Cuernavaca, México: Informe Técnico de Memoria Metodológica; 2014. p. 12–22.
5. Snowdon A, Schnarr K, Hussein A, Alessi CH. Measuring what matters: the cost vs. values of health care. Ontario CA: International Centre for Health Innovation, Richard Ivey School of Business, Western University; 2012. p. 12–20.
6. Arredondo A. Type 2 diabetes and health care costs in Latin America: exploring the need for greater preventive medicine. BMC Med. 2014;12(136):11–9.
7. Panamerican Health Organization. Health in the Americas. Cardiovascular diseases: hypertension. Washington D.C: PAHO; 2014. p. 105–9.
8. World Health Organization. Guidelines to assess the social and economic impact consequences of the diseases. Geneva: World Health Organization; 2009: 37–56.
9. Arredondo A, Zuñiga A. Epidemiological changes and economic burden of hypertension in Latin America: evidences from Mexico. Am J Hypertens. 2006;19(6):553–9.
10. National Institute of Public Health. Métodos de estimación de cambios epidemiológicos y demanda esperada de enfermedades crónico-degenerativas. Cuernavaca, México: Informe Técnico Parcial; 2013. p. 9–15.
11. CONACYT. Fondo Sectorial Salud. Convocatoria (2012). Terminos de Referencia para el desarrollo de proyectos de investigación. Mexico DF: 6–14.
12. Murray A. Statistical Modelling and Statistical Inference: Measurement error in the explanatory variables. Box-Jenkins technique, in Statistical Modelling in GLIM. Oxford Science Publications, Ox. Uni. Press, New York, USA. Third Ed. Chap. 2; 2005: 112–132.
13. SSA, IMSS, ISSSTE. Boletín de Información Estadística. Casos de morbilidad hospitalaria por demanda específica, 1996–2013. SSA Ed. México, D.F, 2014: 57–81.
14. Sánchez y Alejandro Flores H. Métodos e indicadores para la evaluación de los servicios de salud: estudio de sombra. Ed Univ de Barcelona. 2011;90–98.
15. Barcelo A, Aedo C, Rajpathak S, Robles S. The cost of diabetes in Latin America and the Caribbean. Bull World Health Organ. 2003;81:19–27.
16. BM Banco de México. Indice Nacional de Precios por Servicios Médicos en México. Cuadernos Mensuales, Base 2000 = 100. La Actividad Económica en México. 1992–2014. Gerencia de Investigación Económica. Banco de México Ed. México DF, México. 2015: 36–49.
17. Arredondo A, Avilés R. Health disparities in and transdisciplinary approach to cardiovascular disease in Mexico. Am J Public Health. 2015;105(10):e3-e4.
18. Villarreal E, Matew-Quiroz A. Costo de la atención de la hipertensión arterial y su impacto en el gasto en salud en México. Sal Pub de Mex. 2001;44(1):7–13.
19. García C, Thorogood M, Reyes S, Salmerón J, Durán C. The prevalence and treatment of hypertension in elderly population. Sal Pub de Mex. 2011;43(3):415–20.
20. Campos I, Hernández B, Rojas R. Hipertensión arterial: prevalencia, diagnóstico oportuno, control y tendencias en adultos mexicanos. Salud Publica Mex. 2013;55 supl 2:S144–50.
21. Koopmanschap M, Rutten FF, van Ineveld BM, van Roijen L. The cost method for measuring indirect costs of disease. J Health Econ. 1995;14:171–89.

Acknowledgements
This project was funded by the National Council of Science and Technology of Mexico, 2014.

Author information
National Institute of Public Health-Mexico, School of Public Health, University of Montreal, Cuernavaca, QC, H3T 1 J4, Mexico: Armando Arredondo
National Institute of Public Health, Av Universidad 655, CP 62508, Cuernavaca, Mexico: Armando Arredondo, Silvia Magali Cuadra & Maria Beatriz Duarte
Correspondence to Armando Arredondo.

Competing interests
The authors declared no conflict of interest.

Contributions
AA participated in the study concept and design. AA and MC participated in the acquisition of subjects and data, analysis and interpretation of data, and preparation of the manuscript. AA, MC and MBD participated in the analysis and interpretation of data, and preparation and review of the manuscript. All authors read and approved the final manuscript.

Appendix 1: Some considerations on Mexico's health system
At their inception, in 1942, health policies in Mexico were put into effect by municipal actions oriented by the Consejo Superior de Salubridad (Higher Council for Health). These actions proved to be clearly deficient. An office of the Federal Executive had to be created, with sufficient power to allocate resources and regulate activities to control epidemics and improve urban sanitation. The Departamento de Salubridad (Department of Health) was created, supported by the Consejo de Salubridad General (Council for General Health) which served, along with legislative agencies, as the advisory and managing bodies to issue specific health interventions.
Until 1929, this policy allowed for the implementation of Health Cooperative Units (primary health centers) working with states and municipalities. During the 30's, health policies that had been set during the revolutionary period (1910–1920) were still followed. In addition, a new health care model was created: joint health services offered by the government, agricultural development banks and peasants endowed with vast land extensions. Under this new policy, the Department of Health worked to start and maintain intensive care and curative care services. The new health policy became quite dynamic on account of government support for health services for workers.

The foundation of the present health system can be traced to 1943, the year in which IMSS and the SSA were established to convey tripartite contributions (state, enterprises and workers' funds) to support the industrial development of the main cities and the public supply of comprehensive services. This model prevailed almost worldwide; its technical core was drawn from the International Labor Organization. The SSA arose from the fusion of the Secretaría de Asistencia (Welfare Secretariat) and the Departamento de Salubridad (Department of Health). Its healthcare mission was extended to provide more comprehensive care for the population left unprotected by social security, which included the majority of peasants, the unemployed, and informal economy workers. SSA was also in charge of launching massive campaigns to control epidemics and specific health problems. An additional advancement of this period was the creation of ISSSTE in 1959, which brought together the diversity of pension and fringe benefit systems for state workers.

Presently, according to data from the 2003–2018 National Health Program, the Mexican public health system provides care for 90 % of the population and is mainly formed by the three health institutions which were included in this study. It is important to highlight the fact that IMSS provides services for the insured population and covers 45 % of the population, while ISSSTE serves the insured population of state workers and covers approximately 5 % of the population. Finally, in recent years the SSA, in the context of universal coverage reforms, implemented the 'Seguro Popular' program as its main strategy to cover the uninsured population, which is 40 % of the population. The remaining 10 % are served by private health providers.

Arredondo, A., Cuadra, S.M. & Duarte, M.B. Challenges of the epidemiological and economic burdens associated with hypertension in middle income countries: evidence from Mexico. BMC Public Health 15, 1106 (2015). https://doi.org/10.1186/s12889-015-2430-x

Keywords: Epidemiological changes, Financial consequences
Search Results: 1 - 10 of 462001 matches for "A. Zezas"

The X-ray Luminosity Function of "The Antennae" Galaxies (NGC4038/39) and the Nature of Ultra-Luminous X-ray Sources
A. Zezas, G. Fabbiano
Abstract: We derive the X-ray luminosity function (XLF) of the X-ray source population detected in the Chandra observation of NGC4038/39 (the Antennae). We explicitly include photon counting and spectral parameter uncertainties in our calculations. The cumulative XLF is well represented by a flat power law ($\alpha=-0.47$), similar to those describing the XLFs of other star-forming systems (e.g. M82, the disk of M81), but different from those of early type galaxies. This result associates the X-ray source population in the Antennae with young High Mass X-ray Binaries. In comparison with less actively star-forming galaxies, the XLF of the Antennae has a highly significant excess of sources with luminosities above 10^{39} erg/s (Ultra Luminous Sources; ULXs). We discuss the nature of these sources, based on the XLF and on their general spectral properties, as well as their optical counterparts discussed in Paper III. We conclude that the majority of the ULXs cannot be intermediate mass black-hole (M > 10-1000 M_sun) binaries, unless they are linked to the remnants of massive Population III stars (the Madau & Rees model). Instead, their spatial and multiwavelength properties can be well explained by beamed emission as a consequence of supercritical accretion. Binaries with a neutron star or moderate mass black-hole (up to 20 M_sun), and B2 to A type star companions would be consistent with our data. In the beaming scenario, the XLF should exhibit characteristic breaks that will be visible in future deeper observations of the Antennae.

Discovery of X-ray pulsations in the Be/X-ray binary IGR J21343+4738
P. Reig, A. Zezas
Physics, 2014, DOI: 10.1093/mnras/stu898
Abstract: We report on the discovery of X-ray pulsations in the Be/X-ray binary IGR J21343+4738 during an XMM-Newton observation. We obtained a barycentric corrected pulse period of 320.35+-0.06 seconds. The pulse profile displays a peak at low energy that flattens at high energy. The pulse fraction is 45+-3% and independent of energy within the statistical uncertainties. The 0.2-12 keV spectrum is well fit by a two component model consisting of a blackbody with kT=0.11+-0.01 keV and a power law with photon index Gamma=1.02+-0.07. Both components are affected by photoelectric absorption with an equivalent hydrogen column density NH=(1.08+-0.15)x10^{22} cm^{-2}. The observed unabsorbed flux is 1.4x10^{-11} erg cm^{-2} s^{-1} in the 0.2-12 keV energy band. Despite the fact that the Be star's circumstellar disc has almost vanished, accretion continues to be the main source of high energy radiation. We argue that the observed X-ray luminosity (LX~10^{35} erg s^{-1}) may result from accretion via a low-velocity equatorial wind from the optical companion.

Chandra observations of NGC4698: a Seyfert-2 with no absorption
I. Georgantopoulos, A. Zezas
Physics, 2003, DOI: 10.1086/377120
Abstract: We present Chandra ACIS-S observations of the enigmatic Seyfert-2 galaxy NGC4698. This object together with several other bona-fide Seyfert-2 galaxies show no absorption in the low spatial resolution ASCA data, in contrast to the standard unification models. Our Chandra observations of NGC4698 probe directly the nucleus, allowing us to check whether nearby sources contaminate the ASCA spectrum. Indeed, the Chandra observations show that the ASCA spectrum is dominated by two nearby AGN. The X-ray flux of NGC4698 is dominated by a nuclear source with luminosity L(0.3-8 keV) ~ 10^39 erg s^-1 coincident with the radio nucleus. Its spectrum is well represented by a power-law with photon index ~2.2, obscured by a small column density of 5x10^20 cm^-2, suggesting that NGC4698 is an atypical Seyfert galaxy. On the basis of its low luminosity we then interpret NGC4698 as a Seyfert galaxy which lacks a broad-line region.

Disc-loss episode in the Be shell optical counterpart to the high-mass X-ray binary IGR J21343+4738
Physics, 2013, DOI: 10.1051/0004-6361/201321408
Abstract: The main goal of this work is to determine the properties of the optical counterpart to the INTEGRAL source IGR J21343+4738, and study its long-term optical variability. We present optical photometric BVRI and spectroscopic observations covering the wavelength band 4000-7500 A. We find that the optical counterpart to IGR J21343+4738 is a V=14.1 B1IVe shell star located at a distance of ~8.5 kpc. The Halpha line changed from an absorption dominated profile to an emission dominated profile, and then back again into absorption. In addition, fast V/R asymmetries were observed once the disc developed. Although the Balmer lines are the most strongly affected by shell absorption, we find that shell characteristics are also observed in He I lines. The optical spectral variability of IGR J21343+4738 is attributed to the formation of an equatorial disc around the Be star and the development of an enhanced density perturbation that revolves inside the disc. We have witnessed the formation and dissipation of the circumstellar disc. The strong shell profile of the Halpha and He I lines and the fact that no transition from shell phase to a pure emission phase is seen imply that we are seeing the system near edge-on.

New insights into the Be/X-ray binary system MXB 0656-072
E. Nespoli, P. Reig, A. Zezas
Abstract: The X-ray transient MXB 0656-072 is a poorly studied member of high-mass X-ray binaries. Based on the transient nature of the X-ray emission, the detection of pulsations, and the early-type companion, it has been classified as a Be X-ray binary (Be/XRB). However, the flaring activity covering a large fraction of a giant outburst is somehow peculiar. Our goal is to investigate the multiwavelength variability of the high-mass X-ray binary MXB 0656-072. We carried out optical spectroscopy and analysed all RXTE archive data, performing a detailed X-ray-colour, spectral, and timing analysis of both normal (type-I) and giant (type-II) outbursts from MXB 0656-072. This is the first detailed analysis of the optical counterpart in the classification region (4000-5000 A). From the strength and ratio of the elements and ions, we derive an O9.5Ve spectral type, in agreement with previous classification. This confirms its Be nature. The characterisation of the Be/XRB system relies on Balmer lines in emission in the optical spectra, long-term X-ray variability, and the orbital period vs. spin period and EW(Halpha) relation. The peculiar feature that distinguishes the type-II outburst is flaring activity, which occurs during the whole outburst peak, before a smoother decay. We interpret it in terms of magneto-hydrodynamic instability. Colour and spectral analysis reveal a hardening of the spectrum as the flux increases. We explored the aperiodic X-ray variability of the system for the first time, finding a correlation of the central frequency and rms of the main timing component with luminosity, which extends up to a "saturation" flux of 1E-8 erg/cm^2/s. A correlation between timing and spectral parameters was also found, pointing to an interconnection between the two physical regions responsible for both phenomenologies.

Spatial Structures In the Globular Cluster Distribution of the Ten Brightest Virgo Galaxies
R. D'Abrusco, G. Fabbiano, A. Zezas
Physics, 2015, DOI: 10.1088/0004-637X/805/1/26
Abstract: We report the discovery of significant localized structures in the projected two-dimensional (2D) spatial distributions of the Globular Cluster (GC) systems of the ten brightest galaxies in the Virgo Cluster. We use catalogs of GCs extracted from the HST ACS Virgo Cluster Survey (ACSVCS) imaging data, complemented, when available, by additional archival ACS data. These structures have projected sizes ranging from $\sim\!5$ arcsec to few arc-minutes ($\sim\!1$ to $\sim\!25$ kpc). Their morphologies range from localized, circular, to coherent, complex shapes resembling arcs and streams. The largest structures are preferentially aligned with the major axis of the host galaxy. A few relatively smaller structures follow the minor axis. Differences in the shape and significance of the GC structures can be noticed by investigating the spatial distribution of GCs grouped by color and luminosity. The largest coherent GC structures are located in low-density regions within the Virgo cluster. This trend is more evident in the red GC population, believed to form in mergers involving late-type galaxies. We suggest that GC over-densities may be driven by either accretion of satellite galaxies, major dissipationless mergers or wet dissipation mergers. We discuss caveats to these scenarios, and estimate the masses of the potential progenitor galaxies. These masses range in the interval $10^{8.5}\!-\!10^{9.5}$ solar masses, larger than those of the Local Group dwarf galaxies.

On the need for simultaneity between X-ray and radio observations in Doppler factor estimates
I. Liodakis, A. Zezas, V. Pavlidou
Abstract: We use archival X-ray and radio VLBA data to calculate inverse Compton Doppler factors for four high-power radio, $\gamma$-loud Flat Spectrum Radio Quasars frequently monitored by the F-GAMMA project. We explore the effect of the non-simultaneity between X-ray and radio observations by calculating Doppler factors for simultaneous and non-simultaneous observations. By comparing the newly re-calculated values from this work and archival values with variability Doppler factors, we show that simultaneous/quasi-simultaneous X-ray and radio observations can provide a reliable estimate of the true Doppler factor in blazar jets. For the particular case of PKS0528+134 we find that a time-difference of up to 1 week provides inverse Compton Doppler factor estimates consistent with the variability Doppler factor of this source at the 19% level. In contrast, time differences of more than 30 days between radio and X-ray observations result in discrepancies from 100% to more than a factor of 4.

The optical counterpart to IGR J06074+2205: a Be/X-ray binary showing disc loss and V/R variability
P. Reig, A. Zezas, L. Gkouvelis
Abstract: Present X-ray missions are regularly discovering new X/gamma-ray sources. The identification of the counterparts of these high-energy sources at other wavelengths is important to determine their nature. In particular, optical observations are an essential tool in the study of X-ray binary populations in our Galaxy. The main goal of this work is to determine the properties of the optical counterpart to the INTEGRAL source IGR J06074+2205, and study its long-term optical variability. Although its nature as a high-mass X-ray binary has been suggested, little is known about its physical parameters. We have been monitoring IGR J06074+2205 since 2006 in the optical band. We present optical photometric BVRI and spectroscopic observations covering the wavelength band 4000-7000 A. The blue spectra allow us to determine the spectral type and luminosity class of the optical companion; the red spectra, together with the photometric magnitudes, were used to derive the colour excess E(B-V) and estimate the distance. We have carried out the first detailed optical study of the massive component in the high-mass X-ray binary IGR J06074+2205. We find that the optical counterpart is a V=12.3 B0.5Ve star located at a distance of ~4.5 kpc. The monitoring of the Halpha line reveals V/R variability and an overall decline of its equivalent width. The Halpha line has been seen to revert from an emission to an absorption profile. We attribute this variability to global changes in the structure of the Be star's circumstellar disc which eventually led to the complete loss of the disc. The density perturbation that gives rise to the V/R variability vanishes when the disc becomes too small.

The quiescent state of the accreting X-ray pulsar SAX J2103.5+4545
P. Reig, V. Doroshenko, A. Zezas
Physics, 2014, DOI: 10.1093/mnras/stu1840
Abstract: We present an X-ray timing and spectral analysis of the Be/X-ray binary SAX J2103.5+4545 at a time when the Be star's circumstellar disk had disappeared and thus the main reservoir of material available for accretion had extinguished. In this very low optical state, pulsed X-ray emission was detected at a level of L_X~10^{33} erg/s. This is the lowest luminosity at which pulsations have ever been detected in an accreting pulsar. The derived spin period is 351.13 s, consistent with previous observations. The source continues its overall long-term spin-up, which reduced the spin period by 7.5 s since its discovery in 1997. The X-ray emission is consistent with a purely thermal spectrum, represented by a blackbody with kT=1 keV. We discuss possible scenarios to explain the observed quiescent luminosity and conclude that the most likely mechanism is direct emission resulting from the cooling of the polar caps, heated either during the most recent outburst or via intermittent accretion in quiescence.

A multi-wavelength study of Supernova Remnants in six nearby galaxies. II. New optically selected Supernova Remnants
I. Leonidaki, P. Boumis, A. Zezas
Physics, 2012, DOI: 10.1093/mnras/sts324
Abstract: We present results from a study of optically emitting Supernova Remnants (SNRs) in six nearby galaxies (NGC 2403, NGC 3077, NGC 4214, NGC 4395, NGC 4449 and NGC 5204) based on deep narrow band Halpha and [SII] images as well as spectroscopic observations. The SNR classification was based on the detected sources that fulfill the well-established emission line flux criterion of [SII]/Halpha > 0.4. This study revealed ~400 photometric SNRs down to a limiting Halpha flux of 10^(-15) erg sec^(-1) cm^(-2). Spectroscopic observations confirmed the shock-excited nature of 56 out of the 96 sources with ([SII]/Halpha)_phot > 0.3 (our limit for an SNR classification) for which we obtained spectra. 11 more sources were spectroscopically identified as SNRs although their photometric [SII]/Halpha ratio was below 0.3. We discuss the properties of the optically-detected SNRs in our sample for different types of galaxies and hence different environments, in order to address their connection with the surrounding interstellar medium. We find that there is a difference in [NII]/Halpha line ratios of the SNR populations between different types of galaxies, which indicates that this is due to metallicity. We cross-correlate parameters of the optically detected SNRs ([SII]/Halpha ratio, luminosity) with parameters of coincident X-ray emitting SNRs, which resulted from our previous studies in the same sample of galaxies, in order to understand their evolution and investigate possible selection effects. We do not find a correlation between their Halpha and X-ray luminosities, which we attribute to the presence of material in a wide range of temperatures. We also find evidence for a linear relation between the number of luminous optical SNRs (10^(37) erg sec^(-1)) and SFR in our sample of galaxies.
DermoNet: densely linked convolutional neural network for efficient skin lesion segmentation Saleh Baghersalimi ORCID: orcid.org/0000-0002-7440-63401, Behzad Bozorgtabar1, Philippe Schmid-Saugeon2, Hazım Kemal Ekenel3 & Jean-Philippe Thiran1 Recent state-of-the-art methods for skin lesion segmentation are based on convolutional neural networks (CNNs). Even though these CNN-based segmentation approaches are accurate, they are computationally expensive. In this paper, we address this problem and propose an efficient fully convolutional neural network, named DermoNet. In DermoNet, due to our densely connected convolutional blocks and skip connections, network layers can reuse information from their preceding layers and ensure high accuracy in later network layers. By doing so, we take advantage of the capability of high-level feature representations learned at intermediate layers with varying scales and resolutions for lesion segmentation. Quantitative evaluation is conducted on three well-established public benchmark datasets: the ISBI 2016, ISBI 2017, and PH2 datasets. The experimental results show that our proposed approach outperforms the state-of-the-art algorithms on these three datasets. We also compared the runtime performance of DermoNet with two other related architectures, fully convolutional networks and U-Net. The proposed approach is found to be faster and suitable for practical applications. Skin lesion segmentation is a key step in the computerized analysis of dermoscopic images. Inaccurate segmentation could adversely impact the subsequent steps of an automated computer-aided skin cancer diagnosis system. However, this task is not trivial for a number of reasons, such as the significant diversity among lesions; inconsistent pigmentation; the presence of various artifacts, e.g., air bubbles and fiducial markers; and low contrast between the lesion and the surrounding skin, as can be seen in Fig. 1. Sample dermoscopic images from the ISBI 2017 Challenge: Skin Lesion Analysis Toward Melanoma Detection. The presence of artifacts such as hairs on the skin and inconsistent pigmentation make accurate skin lesion segmentation difficult In recent years, we have witnessed major advances of convolutional neural networks (CNNs) in many image processing and computer vision tasks, such as object detection [1], image classification [2], and semantic image segmentation [3]. A well-known CNN-based segmentation approach, fully convolutional networks (FCNs) [3], tackles per-pixel prediction problems by replacing the fully connected layers with convolutions whose kernels can cover the entire input image region. In this way, FCNs can process any image size and output a pixel-wise labeled prediction map. However, the pooling layers in the down-sampling path cause a loss of image resolution and make it harder for the network to handle lesion boundary details, e.g., fuzzy boundaries. In addition, the fully convolutional layers contain a large number of parameters, which produce a computationally expensive network. Most CNN approaches developed for segmentation, such as SegNet [4] and DeconvNet [5], use the encoder-decoder structure as the core of their network architecture. Another effective segmentation network, U-Net [6], relies on skip connections. The encoder part is responsible for extracting the coarse features. It is followed by the decoder, which upsamples the features and is trained to recover the input image resolution at the network output.
These CNN architectures [4, 5] use a base network adopted from VGG architecture [7], which is already pre-trained based on millions of images. Having said that, they utilize the deconvolution or unpooling layers to recover fine-grained information from the downsampling layers. Inspired by the residual networks (ResNets) [2], recently, a CNN architecture called DenseNet was introduced in [8]. The core components of the DenseNet are the dense blocks, where each block performs iterative summation of features from the previous network layers. This characteristic enables DenseNet to be more efficient, since it needs fewer parameters. Moreover, each layer can easily access their preceding layers; therefore, it reuses features of all layers with varying scales. Even though deep convolutional neural networks have been a significant success for the image pixel-wise segmentation, their inefficiency in terms of computational time limits their capability for real-time and practical applications. The motivation for this work is to propose an efficient network architecture for skin lesion segmentation, while achieving the state-of-the-art results. Our contributions can be summarized as follows. Our main aim is to perform an efficient segmentation under limited computational resources, while achieving the state-of-the-art results on skin benchmark datasets. We transform the DenseNets into a fully convolutional network. In particular, our architecture is built from multiple dense blocks in the encoder part, and we add a decoder part to recover the full input image resolution. This helps the multi-scale feature maps from different layers to be penalized by a loss function. The multiple skip connections are arranged between encoder and decoder. In particular, we link the output of each dense block with its corresponding decoder at each feature resolution. Doing so will enable the network to process high-resolution features from early layers as well as high-semantic features of deeper layers. Since we only upsample the feature maps produced by the preceding dense block, the proposed network uses fewer parameters. This enables the network achieve the best accuracy within a limited computational budget. We have conducted extensive experiments on ISBI 2016, ISBI 2017, and PH2 datasets, and we have shown that the proposed approach is superior to the state-of-the-art skin lesion segmentation methods. The rest of this paper is organized as follows: Section 2 presents the related work. Section 3 describes the proposed network architecture in detail. Section 4 conveys and discusses the experimental results. Finally, section 5 concludes the paper. Recently, deep learning has ushered in a new era of computer vision and image analysis. It is even more remarkable that the trained models on big dataset seem to transfer to many other problems such as detection technology [1, 9, 10] and semantic segmentation [3]. In particular, recent works on applying CNNs to image segmentation demonstrate superior performance over classical methods in terms of accuracy. In particular, convolutional neural networks can be adapted to FCNs [3] and perform semantic segmentation by replacing the fully connected layer of a classification network with a convolutional layer. However, due to the resolution loss in the down-sampling steps, the predicted lesion segmentation lacks lesion boundary details. Recently, several alternatives have been presented in the literature to address this shortcoming in FCNs. 
SegNet [4] and DeconvNet [5] are two examples of these approaches built upon auto-encoder network. In encoder, they both use the convolutional network from VGG16 for image classification. DeconvNet keeps two fully connected layers from VGG16, but SegNet discards them to decrease the number of parameters. Different from FCN in which the segmentation mask is recovered with only one deconvolution layer, the decoder network is composed of multiple deconvolution and unpooling layers both in SegNet and DeconvNet, which identify pixel-wise class labels and predict segmentation masks. U-Nets [6] have shown to yield very good results in different segmentation benchmarks. In the U-Net architecture, there are skip connections from encoder layers to their corresponding decoder layers. These skip connections help the decoder layers to recover the image details from the encoder part. As a result, a faster convergence and a more efficient optimization process are obtained during the training. Farabet et al. [11] proposed a segmentation method, where the raw input image is decomposed through a multi-scale convolutional network and produces a set of coarse-to-fine feature maps. Bozorgtabar et al. [12] proposed a skin segmentation method, which integrates fine and coarse prediction scores of the intermediate network layers. Simon et al. [13] used DenseNets to deal with the problem of semantic segmentation, where they achieved state-of-the-art results on urban scene benchmark datasets such as CamVid [14]. In addition, post-processing techniques such as conditional random fields (CRF) have been a popular choice to enforce consistency in the structure of the segmentation outputs [15]. Zheng et al. [16] proposed an interpretation of dense CRFs as recurrent neural networks (RNN). In their segmentation method, CRF-based probabilistic graphical modeling is integrated with deep learning techniques. Our proposed DermoNet is based on fully convolutional neural network. Unlike the FCN, in the DermoNet architecture, the outputs of the encoders are linked into the corresponding decoder to recover lost spatial information. The main difference between DermoNet and U-Net is that the encoder in DermoNet consists of four dense blocks with each block having four layers, whereas the encoder of U-Net is a path followed by the typical architecture of a convolutional neural network as can be seen in Fig. 2. Encoder difference between U-Net and DermoNet In this section, we propose a CNN-based architecture to perform lesion segmentation. Our network, DermoNet, consists of an encoder and a decoder; the encoder starts with a block, which performs the convolution on an input image with a kernel size of 7×7 and a stride of 2, and followed by the max pooling with stride of 2. In DermoNet, the output feature dimension of each layer within a dense block has k feature maps, where they are concatenated to the input. This procedure is repeated four times for each dense block; the output of the dense block is the concatenation of the outputs of the previous layers as in Eq. 1. $$ x_{l}=F_{l}\left (\left [ x_{l-1},x_{l-2},\cdots,x_{0} \right ] \right) $$ where xl denotes the output feature of the lth layer. F(·) is a nonlinear function defined as a convolution followed by a rectifier non-linearity (ReLU), and [⋯ ] denotes the concatenation operator. By using dense blocks, we enable the network to process high-resolution features from early layers as well as high-semantic features of deeper layers. 
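To make the dense-block construction of Eq. (1) concrete, the following is a minimal TensorFlow/Keras sketch of one encoder stage, assuming the stem and block structure described above (7×7 stride-2 convolution, 2×2 max pooling, four layers per dense block). The growth rate k and the filter counts are illustrative assumptions, not the exact values used in DermoNet.

```python
# Minimal sketch of a dense block as in Eq. (1): each layer applies
# F(.) = Conv + ReLU to the concatenation [x_{l-1}, ..., x_0] of all
# preceding feature maps; the block output concatenates every layer output.
# growth_rate (k) and the stem's 48 filters are assumed for illustration.
import tensorflow as tf

def dense_block(x, num_layers=4, growth_rate=16):
    features = [x]  # x_0 is the block input
    for _ in range(num_layers):
        inp = features[0] if len(features) == 1 else tf.keras.layers.Concatenate()(features)
        x_l = tf.keras.layers.Conv2D(growth_rate, 3, padding="same", activation="relu")(inp)
        features.append(x_l)
    return tf.keras.layers.Concatenate()(features)

# Encoder stem described in the text: 7x7 conv with stride 2, then max pooling with stride 2
inputs = tf.keras.Input(shape=(384, 512, 3))
x = tf.keras.layers.Conv2D(48, 7, strides=2, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding="same")(x)
stage_1 = dense_block(x)
encoder_stage = tf.keras.Model(inputs, stage_1)
```

Because every layer only adds k new feature maps while reusing all earlier ones, a block of this form keeps the parameter count low compared with stacking wide convolutions.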
Similar to the encoder, the decoder consists of four blocks, with each block having three layers. Each decoder block is composed of a convolutional layer with a kernel of size 1 ×1, a full-convolution layer with a kernel of size 3 ×3 followed by an upsampling by a factor 2 and a convolutional layer with a kernel of size 1 ×1. The network ends with three last convolutional layers and two bilinear upsampling steps by a factor of 2 in order to generate a segmented image with the same size as the input. Table 1 presents the architectural details of the proposed DermoNet. Table 1 Architectural details of the proposed DermoNet Figure 3 illustrates an overview of the proposed architecture; the encoder could be found on the right side of the figure while the decoder is shown on the left side. DermoNet is composed of four blocks in the encoder and decoder, respectively. Black arrows show connectivity patterns in the network, while red horizontal arrows represent skip connections between the encoder and decoder Since FCNs perform the image pixel-wise classification, cross-entropy loss is usually used for the segmentation task. However, a skin lesion usually occupies a small portion of a skin image. Consequently, the segmentation network trained with cross-entropy loss function tends to be biased toward the background image rather than lesion itself. Different variants of the cross-entropy loss have been devised to address this problem, which focus on the class balancing [17]. However, this class balancing strategy brings additional computation cost during the training procedure. In this paper, we use a loss function based on Jaccard distance (LJ) [18], which is complementary to the Jaccard index: $$ L_{J} = 1- \frac{\sum_{i,j}^{} (t_{ij} p_{ij})}{\sum_{i,j}^{} t_{ij}^{2} + \sum_{i,j}^{} p_{ij}^{2} - \sum_{i,j}^{} (t_{ij} p_{ij})} $$ where tij and pij denote the target and prediction output at image pixel (i,j), respectively. The Jaccard index measures the intersection over the union of the labeled segments for each class and reports the average. It takes into account both the false alarms and the missed values for each class. Our experimental results disclose that this loss function is more robust compared to the classical cross-entropy loss function. In addition, it is well suited to the imbalanced classes of the foreground and background, respectively. The output of DermoNet model is binarized to a lesion and compared with the ground truth provided by clinicians. As the evaluation metrics, Jaccard coefficient (JC) and Dice similarity coefficient (DSC) are used, which measure the spatial overlap between the obtained segmentation mask and the ground truth, respectively. They are defined as follows: \(\text {JC} = \frac {\text {TP}}{\mathrm {TP + FN + FP}} \hspace {1cm} \text {DSC} = \frac {2 \times \text {TP}}{2 \times \mathrm {TP + FN + FP}}\) where TP, FP, and FN denote the number of true positives, false positives, and false negatives, respectively. For the experiments, we have used the following three datasets : ISBI 2017: This dataset [19] contains 2000 training dermoscopic images, while there are 600 test images with the ground truths provided by experts. The images sizes vary from 771×750 to 6748×4499.ISBI 2016: This dataset [20] contains dermoscopic images, where the image sizes vary from 1022×767 to 4288×2848 pixels. 
There are 900 training images and 379 test images. PH2: This dataset has been acquired at the Dermatology Service of Hospital Pedro Hispano, Matosinhos, Portugal [21] with the Tuebinger Mole Analyzer system. This dataset contains 200 dermoscopic test images with a resolution of 768×560 pixels. Table 2 gives a summary of all three datasets. Table 2 Datasets summary We have trained our network using resized RGB images of size 384×512 pixels. For augmentation, we flipped the training images horizontally and vertically and did shrinking via cropping. Then, we normalized each image so that the pixel values lie between 0 and 1. The initial weights of our network are sampled from the Xavier initialization. The Adam optimizer is used as the optimizer for DermoNet. The base learning rate for the network is set to 10^-4. The maximum number of iterations is 5540. The whole architecture is implemented in TensorFlow [22]. We used an Nvidia Tesla K40 GPU with 12 GB GDDR5 memory for the training. We apply a threshold value of 0.5 to the final pixel-wise score to generate the lesion mask. To verify the effectiveness of DermoNet in terms of test execution time, we compare it with two related architectures, namely FCN and U-Net. Table 3 presents the segmentation execution times per image using a system with an Intel Core i7-5820K CPU. Due to the densely connected convolutional blocks and fewer parameters, the proposed network is found to be faster. Table 3 Comparison of average runtime (s) per image Results on ISBI 2016 dataset For the experiments on the ISBI 2016 dataset, for training the models, we used either only the training dataset provided by the ISBI 2016 challenge or the augmented version of it, in which we add 6500 dermoscopic images from DermoSafe [23] to the ISBI 2016 training dataset, in order to introduce a wider variety of images. These trained models are then evaluated on the ISBI 2016 test dataset. Obtained results on the ISBI 2016 challenge dataset are given in Table 4. In this challenge, the participants are ranked only based on the JC. In addition, we also report the DSC results. The proposed DermoNet improved the segmentation performance both in terms of Jaccard coefficient and Dice similarity coefficient. As can be seen from the table, in terms of JC, absolute performance improvements of 9.9% and 2.2% have been achieved with respect to FCN and U-Net, respectively. In terms of DSC, the obtained absolute increase in performance with respect to FCN and U-Net is 7.8% and 1.8%, respectively. As can be seen from the last two rows of the table, DermoNet's performance improves with the use of the additional data provided by DermoSafe. However, even without using the additional DermoSafe data, it still outperforms the state-of-the-art methods. Figure 4 shows several examples of automatic segmentation results on the ISBI 2016 test set with different cases, such as hairy skin, irregular shape, and low contrast. We observe that the proposed DermoNet is able to separate the skin lesions from these artifacts and is robust to different image acquisition conditions. Sample segmentation results of ISBI 2016 challenging images.
Ground truth contours and our detected contours are marked in red and blue, respectively Table 4 Performance comparison between the proposed segmentation and other state-of-the-art methods on ISBI 2016 challenge test set Results on PH2 dataset In these experiments, we have used the trained models obtained in Section 4.4 and evaluated them on the 200 skin images from the PH2 dataset. We have also compared the performance of the proposed lesion segmentation method with superpixel-based saliency detection approaches [26–28] on the PH2 dataset. Attained results are given in Table 5. From the experimental results, it can be observed that DermoNet which is trained using DermoSafe data has outperformed the other skin lesion segmentation methods. Due to dense connectivity in DermoNet, each layer is connected with all subsequent layers and allows later layers to bypass features and to maintain the high accuracy of the final pixel classification layer in a deeper architecture with fewer parameters. As a result, this brings additional performance gains. Table 5 Performance comparison between the proposed segmentation and other state-of-the-art methods on PH2 dataset For the experiments on the ISBI 2017 dataset, for training the models, we used either only the training dataset provided by the ISBI 2017 challenge or the augmented version of it, in which we include 6500 dermoscopic images from DermoSafe [23] to the ISBI 2017 training dataset. These trained models are then evaluated on the ISBI 2017 test dataset. Table 6 compares the performance of DermoNet with the state-of-the-art algorithms on ISBI 2017 dataset. Many teams evaluated their segmentation algorithms during the ISBI 2017 challenge. Among them, the top two teams used different variations of a fully convolutional network in their segmentation methods. For example, Yuan et al. [18] proposed a method based on deep fully convolutional-deconvolutional neural networks (CDNN) to segment skin lesions in dermoscopic image. NLP LOGIX [33] used a U-Net architecture followed by a CRF as post-processing in their segmentation method. Here, we observed that the proposed DermoNet outperforms the other teams' approaches. Effect of loss function As described in Section 3, due to imbalanced classes, cross-entropy loss function would not be suitable for the skin lesion segmentation task. Therefore, we used Jaccard distance instead, which enabled the DermoNet's training to focus more on lesion pixels over background. To also empirically analyze the effect of the loss function, we compare the performance of DermoNet using Jaccard distance or cross-entropy on ISBI 2016, 2017 and PH2 dataset. As can be seen from Table 7, using Jaccard distance as the loss function improves the performance significantly compared to using cross-entropy as the loss function. Table 7 Performance comparison of the proposed segmentation on ISBI 2016 and 2017 and PH2 dataset when using Jaccard coefficient or cross-entropy loss for training Qualitative comparison In this section, we provide qualitative comparison between DermoNet, FCN, and U-Net. Figure 5 shows some tricky cases from ISBI 2017 challenge dataset. In this figure, from left to right, we have the original image, ground truth, the output of DermoNet, U-Net, and FCN, respectivly. As can be observed, DermoNet provides better results compared to FCN and U-Net and is able to separate the skin lesion from artifacts such as ink markings and air bubbles. 
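Since the comparison above hinges on the Jaccard-distance loss of Eq. (2) and on the JC/DSC metrics, a minimal sketch of both is given below, written with TensorFlow; the smoothing constant eps is an assumption added for numerical stability, while the 0.5 binarization threshold follows the text.

```python
# Sketch of the Jaccard-distance loss (Eq. 2) and the JC/DSC metrics.
# t: ground-truth mask, p: predicted probability map (same shape).
import tensorflow as tf

def jaccard_distance_loss(t, p, eps=1e-7):
    # L_J = 1 - sum(t*p) / (sum(t^2) + sum(p^2) - sum(t*p))
    intersection = tf.reduce_sum(t * p)
    denom = tf.reduce_sum(tf.square(t)) + tf.reduce_sum(tf.square(p)) - intersection
    return 1.0 - intersection / (denom + eps)

def jc_and_dsc(pred, gt, threshold=0.5):
    # Binarize at 0.5 (as in the text), then compute the TP/FP/FN-based metrics.
    pred_b = tf.cast(pred > threshold, tf.float32)
    gt_b = tf.cast(gt > threshold, tf.float32)
    tp = tf.reduce_sum(pred_b * gt_b)
    fp = tf.reduce_sum(pred_b * (1.0 - gt_b))
    fn = tf.reduce_sum((1.0 - pred_b) * gt_b)
    jc = tp / (tp + fn + fp)
    dsc = 2.0 * tp / (2.0 * tp + fn + fp)
    return jc, dsc
```

Unlike plain cross-entropy, the loss above is normalized by the size of the union, so a small lesion on a large background contributes on the same scale as a large one.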
Cases where our proposed DermoNet outperforms FCN and U-Net Figure 6 shows cases where the ground truth is wrongly labeled, and it leads to very low Jaccard coefficients (JC) even though the output of the segmentation is correct. In this figure, from left to right, we have the original image, ground truth, and DermoNet, U-Net, and FCN output. Cases where all segmentation methods failed (mis-segmentation) Finally, Fig. 7 shows some of the challenging cases among all the ISBI 2017 testing images where all three models (DermoNet, U-Net, and FCN) performed poorly. In these cases, the contrast between lesion and skin is very low. Cases where the results of all the networks are suboptimal Conclusion and future work In this paper, we have presented a new fully convolutional neural network architecture for automatic skin lesion segmentation. The idea behind DermoNet is sharing features across all encoder blocks and taking advantage of feature reuse, while remaining densely connected to provide the network with more flexibility in learning new features. The proposed network has fewer parameters compared to existing baseline segmentation methods, which have an order of magnitude larger memory requirement. Moreover, it improves state-of-the-art performance on challenging skin datasets, without using either additional post-processing or pre-training. We have achieved an average Jaccard coefficient of 82.5% on the ISBI 2016 Skin Lesion Challenge dataset, 85.3% on the PH2 dataset, and 78.3% on the ISBI 2017 Skin Lesion Challenge dataset. In our future work, we plan to apply the proposed segmentation, with some modifications in the network architecture, to standard semantic segmentation benchmarks, e.g., MSCOCO, to show the generalization capability of the proposed framework. CDNN: Convolutional-deconvolutional neural network CRF: Conditional random fields DSC: Dice similarity coefficient FCN: Fully convolutional networks FP: False positive ISBI: International symposium on biomedical imaging JC: Jaccard coefficient ReLU: Rectifier non-linearity ResNet: Residual network RNN: Recurrent neural networks TP: True positive J. Redmon, S. K. Divvala, R. B. Girshick, A. Farhadi, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). You only look once: unified, real-time object detection (Las Vegas, NV, 2016), pp. 779–788. K. He, X. Zhang, S. Ren, J. Sun, in Proceedings of the IEEE conference on CVPR. Deep residual learning for image recognition, (2016), pp. 770–778. J. Long, E. Shelhamer, T. Darrell, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Fully convolutional networks for semantic segmentation (Boston, MA, 2015), pp. 3431–3440. V. Badrinarayanan, A. Kendall, R. Cipolla, SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell. 39(12), 2481–2495 (2017). H. Noh, S. Hong, B. Han, in Proceedings of the IEEE ICCV. Learning deconvolution network for semantic segmentation, (2015), pp. 1520–1528. O. Ronneberger, P. Fischer, T. Brox, in Proc. Med. Image Comput. Comput.-Assisted Intervention. U-net: convolutional networks for biomedical image segmentation (Springer, 2015), pp. 234–241. K. Simonyan, A. Zisserman, in ICLR. Very deep convolutional networks for large-scale image recognition, (2015). G. Huang, Z. Liu, K. Q. Weinberger, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Densely connected convolutional networks (Honolulu, 2017), pp. 2261–2269. S. Ren, K. He, R. Girshick, J.
Sun, in Advances in neural information processing systems. Faster R-CNN: towards real-time object detection with region proposal networks, (2015), pp. 91–99. C. Yan, H. Xie, J. Chen, Z. Zha, X. Hao, Y. Zhang, Q. Dai, A fast uyghur text detector for complex background images. IEEE Trans. Multimed.20(12), 3389–3398 (2018). C. Farabet, C. Couprie, L. Najman, Y. LeCun, Learning hierarchical features for scene labeling. IEEE Trans. Pattern. Anal. Mach. Intell.35(8), 1915–1929 (2013). B. Bozorgtabar, Z. Ge, R. Chakravorty, M. Abedini, S. Demyanov, R. Garnavi, in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017). Investigating deep side layers for skin lesion segmentation (Melbourne, 2017), pp. 256–260. S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, Y. Bengio, in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). The one hundred layers tiramisu: fully convolutional densenets for semantic segmentation (Honolulu, 2017), pp. 1175–1183. G. J. Brostow, J. Fauqueur, R. Cipolla, Semantic object classes in video: a high-definition ground truth database. Pattern Recogn. Lett.30(2), 88–97 (2009). L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. L. Yuille, DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans Pattern Anal Mach Intell. 40(4), 834–848 (2016). S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, P. H. S. Torr, in Proceedings of the IEEE ICCV. Conditional random fields as recurrent neural networks, (2015), pp. 1529–1537. K. Maninis, J. Pont-Tuset, P. A. Arbeláez, L. J. V. Gool, in International Conference on MICCAI. Deep retinal image understanding (Springer, 2016), pp. 140–148. Y. Yuan, M. Chao, Y. Lo, in International Skin Imaging Collaboration (ISIC) 2017 Challenge at the International Symposium on Biomedical Imaging (ISBI). Automatic skin lesion segmentation with fully convolutional-deconvolutional networks, (2017). https://arxiv.org/pdf/1703.05165.pdf. N. C. F. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris, N. K. Mishra, H. Kittler, A. Halpern, in IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). Skin lesion analysis toward melanoma detection: A challenge at the 2017 International symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC) (Washington, 2017), pp. 168–172. D. Gutman, N. C. F. Codella, M. E. Celebi, B. Helba, M. A. Marchetti, N. K. Mishra, A. Halpern, Skin lesion analysis toward melanoma detection: a challenge at the ISBI 2016, hosted by the ISIC (2016). arXiv preprint arXiv:1605.01397. T. Mendonça, P. M. Ferreira, J. S. Marques, A. R. S. Marçal, J. Rozeira, in EMBC, 2013 35th Annual International Conference of the IEEE. Ph 2-a dermoscopic image database for research and benchmarking (IEEE, 2013), pp. 5437–5440. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous systems (2015). software available from tensorflow.org. [Online]. Available: https://www.tensorflow.org/. 
DermoSafe. https://www.dermosafe.com/en/. S. Xie, Z. Tu, in IEEE International Conference on Computer Vision (ICCV). Holistically-nested edge detection (Santiago, 2015), pp. 1395–1403. M. Zortea, S. O. Skrøvseth, T. R. Schopf, H. M. Kirchesch, F. Godtliebsen, Automatic segmentation of dermoscopic images by iterative classification. J. Biomed. Imaging. 2011:, 3 (2011). N. Tong, H. Lu, X. Ruan, M. Yang, in Proceedings of the IEEE Conference on CVPR. Salient object detection via bootstrap learning, (2015), pp. 1884–1892. X. Li, Y. Li, C. Shen, A. Dick, A. V. D. Hengel, in Proceedings of the IEEE ICCV. Contextual hypergraph modeling for salient object detection, (2013), pp. 3328–3335. B. Bozorgtabar, M. Abedini, R. Garnavi, in Proc. Int. Workshop Mach. Learn. Med. Imag.Sparse coding based skin lesion segmentation using dynamic rule-based refinement (Springer, 2016), pp. 254–261. M. Silveira, J. C. Nascimento, J. S. Marques, A. R. S. Marcal, T. Mendonca, S. Yamauchi, J. Maeda, J. Rozeira, Comparison of segmentation methods for melanoma diagnosis in dermoscopy images. IEEE J. Sel. Top. Sig. Process.3(1), 35–45 (2009). M. Sezgin, B. Sankur, Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging. 13(1), 146–168 (2004). C. Li, C. Kao, J. C. Gore, Z. Ding, Minimization of region-scalable fitting energy for image segmentation. IEEE Trans. Image Process.17(10), 1940–1949 (2008). M. Celebi, H. Kingravi, H. Iyatomi, Y. Aslandogan, W. Stoecker, R. Moss, J. Malters, J. Grichnik, A. Marghoob, H. Rabinovitz, S. Menzies, Border detection in dermoscopy images using statistical region merging. Skin Res. Technol.14(3), 347–353 (2008). M. Berseth, ISIC 2017-skin lesion analysis towards melanoma detection (2017). https://arxiv.org/abs/1703.00523. The authors would like to thank Mr. Philippe Held, CEO and Founder of DermoSafe, for his support and for giving us access to the DermoSafe's database of images of pigmented skin lesions, which helped us to achieve the mentioned results. This work was supported by the Swiss Commission for Technology and Innovation CTI fund no. 25515.2 PFLS-LS for the project entitled "DermoBrain: advanced computer vision algorithms and features for the early diagnosis of skin cancer." The ISBI 2016 [20] datasets analyzed during the current study are available in https://challenge.kitware.com/#phase/566744dccad3a56fac786787. The ISBI 2017 [19] datasets analyzed during the current study are available in https://challenge.kitware.com/#challenge/583f126bcad3a51cc66c8d9a. The PH2 [21] datasets analyzed during the current study are available in https://www.dropbox.com/s/k88qukc20ljnbuo/PH2Dataset.rar. The datasets of DermoSafe [23] that are analyzed during the current study are not publicly available due to the protection of patient privacy. Electrical Engineering Department, Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Station 11, Lausanne, 1015, Switzerland Saleh Baghersalimi , Behzad Bozorgtabar & Jean-Philippe Thiran DermoSafe SA, EPFL Innovation Park, Bâtiment D, Lausanne, 1015, Switzerland Philippe Schmid-Saugeon Department of Computer Engineering, Maslak, 34469, Istanbul, Turkey Hazım Kemal Ekenel Search for Saleh Baghersalimi in: Search for Behzad Bozorgtabar in: Search for Philippe Schmid-Saugeon in: Search for Hazım Kemal Ekenel in: Search for Jean-Philippe Thiran in: SBS and BB conceived and designed the methods. SBS performed the experiments. PSS, HKE, and JPT supervised the project. 
Correspondence to Saleh Baghersalimi. Baghersalimi, S., Bozorgtabar, B., Schmid-Saugeon, P. et al. DermoNet: densely linked convolutional neural network for efficient skin lesion segmentation. J Image Video Proc. 2019, 71 (2019) doi:10.1186/s13640-019-0467-y Accepted: 15 May 2019 DOI: https://doi.org/10.1186/s13640-019-0467-y Fully convolutional neural networks Lesion segmentation
n value of steel Cast Iron: Coated : 0.010: 0.013: 0.014: Uncoated: 0.011: 0.014: 0.016: 4. Of the similar books on the market today, none explain in detail why one steel is comparable to another. Perhaps the person was familiar with equations such as that used to calculate the carbon equivalent (CE), which is used to predict the relative difficulty of welding. Structural steel grades are designed with specific chemical compositions and mechanical properties formulated for particular applications. natural channels with stones and weeds: 0.035. very poor natural channels Use it. A/V measures the rate of temperature increase of a steel cross section by the ratio of the heated surface area to the volume. 24 in. Tensile / yield strengths and ductilities for some of the plain carbon and low alloy steels are given in the following mechanical properties of steel chart. References: Cast iron, ASTM A48; structural steel for bridges and structures, ASTM A7. The processes used to strengthen the metal also reduce the n-value. S355 steel is a structural steel with a specified minimum yield strength of 355 N/mm². Value of a 1943 Steel Penny. When metal alloys are cold worked, their yield strength increases. The N value (Standard Penetration Test) is widely used in geotechnical engineering designs, especially in India and nearby countries. • Some higher strength steel specifications require a spread of 10,000 or 20,000 PSI between the yield strength and tensile strength. Anisotropy is, more importantly, also caused by preferred crystallographic orientation or "texture" in the steel. The test-piece has an initial width of 10 mm, thickness of 1.4 mm and gauge length of 50 mm. Use it. (2) For lined corrugated metal pipe, a composite roughness coefficient may be computed using the procedures outlined in the HDS No. The total value, or equity value, is then the sum of the present value of the future cash flows, which in this case is AU$7.2b. Pipe-arches approximately have the same roughness characteristics as their equivalent round pipes.
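The coefficients tabulated above (cast iron, corrugated metal, and so on) are Manning roughness values for hydraulic design of pipes and channels; they are unrelated to the metallurgical strain-hardening exponent discussed in the rest of this article. As a minimal sketch of how such a coefficient is actually used, the following applies Manning's equation in SI units to a full circular pipe; the diameter and slope are assumed for illustration.

```python
# Manning's equation for a full circular pipe (SI units):
# V = (1/n) * R^(2/3) * S^(1/2), Q = A * V
import math

def full_pipe_discharge(diameter_m, slope, n):
    area = math.pi * diameter_m ** 2 / 4.0      # flow area A, m^2
    hydraulic_radius = diameter_m / 4.0         # R = A/P for a full circular pipe
    velocity = (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)
    return area * velocity                      # discharge, m^3/s

# Example: 0.6 m coated cast-iron pipe (n = 0.013, taken from the table above) on a 0.5% slope
print(round(full_pipe_discharge(0.6, 0.005, 0.013), 3), "m^3/s")
```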
Is there a of obtaining the n-value from only that single piece of data? The test-piece has an initial width of 10 mm, thickness of 1.4 mm and gauge length of 50 mm. For structural design it is standard practice to consider the unit weight of structural steel equal to γ = 78.5 kN/m 3 and the density of structural steel approximately ρ = 7850 kg/m 3. It is likely to gradually replace the use of Hp/A. N/mm2 is not SI. Location B indicates the maximum, n-value expected when the minimum yield strength is specified. A value of 0 means that a material is a perfectly plastic solid, while a value of 1 represents a 100% elastic solid. N/A: 11/27/2020: an hour ago . 2. It depends according to the deformation grade applied. It should be noted that, for submerged sands, the SPT-N value needs to be reduced (N red) using the following relationship for SPT-N values exceeding 15. The sheet thickness is retained as specimen thickness, while the parallel length is obtained by milling or punching operations. Strain is the "deformation of a solid due to stress" - change in dimension divided by the original value of the dimension - and can be expressed as. A number of these issues are reviewed this month. DENSITY OF STEEL. Most metals have a n value between 0.10 and 0.50. n-Value The strain hardening exponent n is a measure of the response of metallic materials after cold working. Events Events Our major market-leading conferences and events offer optimum networking opportunities to all participants while adding great value to their business. • Can you explain why specifying the minimum yield strength of higher strength steels limits the maximum n-value? ε = dL / L (1) where. Conversions - Above 480 °C (900 °F), the value of the modulus of elasticity decreases rapidly. Often the n-value is ignored because it seems too complex. While the yield strength may increase slightly, a major reduction in the n-value will take place. Get to know the n-value. For years the n-values were only good to two decimal places. The intersection is the maximum n-value. 15 in. Physical Requirement: All steels have same modulus.. 200 to 210 GPa, or (in your units) those numbers times 1000. Absolutely. For the last decade, a large number of steel producers have improved their process control procedures so the n-value throughout most of the coil varies in the third decimal place. 1) Mild Steel Bars: Mild steel bars can be supplied in two grades . It really does not care what caused the difference. 2.25 2.05 Er value 9.6 Yield Strength, Tensile Strength and Ductility Values for Steels at Room Temperature: Material: Yield Strength: Tensile Strength % Elong. roughness than for annually corrugated steel pipe. Modulus of Elasticity Young's Modulus Strength for Metals - Iron and Steel as the young's modulus or elastic modulus of steel varies by the alloying additions in the steel. Stress is force per unit area and can be expressed as. These conversion charts are provided for guidance only as each scales uses different methods of measuring hardness. This table shows approximate hardness of steel using Brinell, Rockwell B and C and Vickers scales. Using the r-values in the three directions, two other important parameters are calculated. Concrete: Easily access valuable industry resources now with full access to the digital edition of The WELDER. steel, which has higher ductility and is capable of being formed into a wide variety of shapes. • Sometimes a coil is received that fails in a severe stretch area, while other coils form successfully. 
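The "determine the K and n values" exercise quoted above can be worked through with a short script, assuming the Hollomon relation (true stress = K × true strain^n) and a straight-line fit on log-log axes. The test-piece geometry (10 mm × 1.4 mm cross section, 50 mm gauge length) is taken from the text; the load-elongation readings below are hypothetical placeholders, since the original data table does not survive here.

```python
# Fit K and n of the Hollomon relation from tensile-test readings.
# Geometry from the text; load/elongation values are hypothetical.
import numpy as np

width_mm, thickness_mm, gauge_mm = 10.0, 1.4, 50.0
area_mm2 = width_mm * thickness_mm

load_kN  = np.array([5.6, 6.1, 6.5, 6.8, 7.0])   # hypothetical readings
elong_mm = np.array([2.0, 3.5, 5.0, 6.5, 8.0])

eng_stress = load_kN * 1e3 / area_mm2            # N/mm^2 = MPa
eng_strain = elong_mm / gauge_mm
true_strain = np.log(1.0 + eng_strain)
true_stress = eng_stress * (1.0 + eng_strain)    # valid up to the onset of necking

# log(sigma) = log(K) + n*log(eps): the slope is n, the intercept gives K
n, logK = np.polyfit(np.log(true_strain), np.log(true_stress), 1)
print(f"n = {n:.3f}, K = {np.exp(logK):.0f} MPa")
```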
10 in. Source: Admet Inc. Ductility is defined as the ability of a material to deform plastically before fracturing. If you have any comments or questions about the glossary, please contact Dan Davis, editor-in-chief of The FABRICATOR. Mexico Metalforming Technology Conference. AHSS are complex, sophisticated materials, with carefully selected chemical compositions and multiphase microstructures resulting from precisely controlled heating and cooling processes. Likewise, the TS/YS ratio will increase with the increased spread. Steel Trade Graphs Steel Trade Graphs. It got great weldability and machinability, let us see more mechanical details of this steel. Reply. I think N/mm^2 is then MPa since (10^–3)^2, inverted is 10^6. Determine the K and n values. • Is n-value a super-property that combines different material properties to assess formability? The arrow B represents the minimum specified yield strength. Material Properties of S355 Steel - An Overview S355 is a non-alloy European standard (EN 10025-2) structural steel, most commonly used after S235 where more strength is needed. TEHRAN, Nov. 29 (MNA) – The export value of steel products of the country exceeded $1.5 billion in the first seven months of the current Iranian calendar year (from March 21 to Oct. 22). The following table lists the buy price (what you can expect to pay to a dealer to purchase the coin) and sell value (what you can expect a dealer to pay you if you sell the coin). P: 216-901-8800 | F: 216-901-9669 36 in. The higher the Hp/A, the faster the steel section heats up and so the greater the thickness of fire protection material required. of the steel exposed to the fire and the mass of the steel section. To obtain a higher n value, one would have to reduce the yield strength below the specification. Hope this helps. When a custom metal fabricator launches a product, Modern material handling in the structural fab shop, The 4 components of reliable systems in stamping, Q&A: How sensors and controls help stampers adapt to the new normal, 5 ways to handle stamping, die cutting waste streams automatically, Second-gen Michigan stamper proves out her mettle through prototyping, Consumables Corner: Diagnosing the obvious and not-so-obvious causes of porosity, Jim's Cover Pass: Face masks are the new necessary shop PPE, Jim's Cover Pass: How to help new welders decode blueprints, Consumables Corner: How to diagnose and prevent weld cracking, Minibike enthusiasts make the most of miscellaneous metals, Helping manufacturers maintain the momentum of innovation, Planning a way out of the pandemic chaos for business owners, Tube manufacturer invests in flexible, automated mill technology, 7 maneras de repensar el flujo de trabajo en la oficina, Perfeccionando una soldadura de proyección en acero de ultra-alta resistencia, La forma y manipulación del rayo lleva la soldadura láser a sus límites, Report: Environmental impacts of metal 3D printing, Satair 3D-prints first flightworthy spare part for Airbus A320ceo, Army looks to 3D-print spare parts to speed aging equipment's return to duty, Software Webinar Series Session 4: Digital Transformation, Predictive Analytics to Solve Scheduling, The FABRICATOR's Technology Summit & Tours, Hastings Air Energy Control, Inc. - Controlled Process Ventilation: Save Money, Save Lives. The mechanical properties of the coil ends may be different because they undergo different processing conditions compared to locations interior to the coil. 
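A quick numerical check of why a higher n-value buys more stretchability, assuming a Hollomon material (true stress = K × strain^n): the engineering stress-strain curve reaches its maximum, i.e., necking begins, at a true strain equal to n, so the uniform elongation grows directly with the work-hardening exponent. K and the sample n values below are illustrative only.

```python
# Engineering stress for a Hollomon material: sigma_eng = K * eps^n * exp(-eps).
# Its maximum (onset of necking) falls at true strain = n.
import numpy as np

K = 600.0                                   # MPa, assumed strength coefficient
eps = np.linspace(1e-4, 0.6, 60000)         # true strain grid
for n in (0.10, 0.22, 0.50):
    eng_stress = K * eps**n * np.exp(-eps)
    print(f"n = {n:.2f} -> engineering curve peaks at true strain {eps[np.argmax(eng_stress)]:.3f}")
```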
Cement: Neat Surface: 0.010: 0.011: 0.013: Mortar: 0.011: 0.013: 0.015: 7. The n-value is determined by tensile stress and strain values; while for the r-value calculation the specimen section measurement is also required. In essence, steel is composed of iron and carbon, although it is the amount of carbon, as well as the level of impurities and additional alloying elements that determine the properties of each steel grade. The limiting value is dependent on material, but is generally between 5 and 6. Modulus of Elasticity for Metals. and Larger This exposure to n-value generates a variety of questions about its importance, measurement, source, application and other sometimes fine details. φ in terms of 'N' or Dr φ= 20N +15o r φ=28o +15D Point resistance from SPT 4 N D 0.4 N L qp (tsf) = ≤ ⋅ ⋅ ⋅ N = avg value for 10D above and 4D below pile tip 10D 4D Q 1For a given initial φ unit point resistance for bored piles =1/3 to 1/2 of driven piles, and bulbous piles driven with great impact At ambient temperature between (160 to 200) MPa. They are worth about 10 to 13 cents each in circulated condition, and as much as 50 cents or more if uncirculated.. They simply appear together in a list of steels. Standard ASTM E8 Tensile Specimen. The n-value is measured over a strain range from 10 to 20 percent, 7 to 15 percent or 10 percent to UTS. 24 in. Steel prices saw some correction in the last week of November following a massive rally that pushed prices to an eleven-month high of 4,125 Yuan/MT as construction activity in China is expected to slow when the weather turns colder. steel (or more) that may stand alone in the structure or that is one of many pieces in an assembly or shipping piece. Definitions. Haihong International Trade (HK) CO., Limited. A good knowledge of the basic mechanical properties of these steels is needed for a satisfactory use of them. See Note 5. The Young's modulus of the steel is 19 × 10¹⁰ N/m². Long-term readers of The Science of Forming column and attendees at PMA metalforming seminars are acutely aware of the emphasis given to the workhardening of metals and the workhardening exponent or n-value. This usually indicates that the computer has been allowed to calculate a large number of decimal places that are not meaningful nor scientifically accurate. For low-carbon steels that have a single microstructure of ferrite, a historical data band can be constructed (See graph). Your test locations most likely are from blanks within the coil. electroslag, forged ring/ block,etc. Internal … Steel: Lockbar and welded: 0.010: 0.012: 0.014: Riveted and spiral: 0.013: 0.016: 0.017: 3. A higher ratio means more stretchability. Average n values range from 0.035 (steel) or 0.036 (aluminum) for 5-foot-diameter pipes to 0.033 for pipes of 18 feet or greater diameter. 36 in. a)Mild steel bars grade-I designated as Fe 410-S or Grade 60 b) Mild steel bars grade-II designated as Fe-410-o or Grade 40 . The n-value will print with the rest of the data if the n-value switch is turned on. Is the third decimal place accurate? Comparing steel standards is not an exact science, so the biggest challenge of preparing such a book was deciding on the "rules of comparison." Get your online quote today! Inflation Fear and Green Hope Drive Investors Into Copper. Many translated example sentences containing "value of steel" – German-English dictionary and search engine for German translations. n-value, also known as the strain hardening exponent, is the measure of a metal's response to cold working. 
Young's modulus of carbon steels (mild, medium and high), alloy steels, stainless steels and tool steels are given in the following table in GPA and ksi. 2-2/3" x 1/2" Annular 8 in. "exact" uses full enumeration of all sample splits with resulting standardized Steel statistics to obtain the exact \(P\)-value. Would this be a low n-value problem? It will become your most powerful trouble-shooter for sheetmetal stretch failures. 2) Medium Tensile Steel Bars designated as Fe- 540-w-ht or Grade 75 . Supply Condition : As Rolled, Normalized Rolling, Furnace Normalizing, … mighthaveverydifferentvalues.Nevertheless,intheabsenceof the stress-strain data for thematerial itself, the assumption, that the data derived from the testsof theshort columns may logically Stainless Steel, Special Steel, Compressor Blading, Turbine Blading, Superalloy Supplier. If the property data are acquired automatically during a tensile test and processed by computer software, most of the new property calculation codes automatically compute n-value. Corrugated Steel Pipe—Manning's "n" Value 60 in. The time dependent deformation due to heavy load over time is known as creep.. Delta r (∆ r) is called the "planar anisotropy parameter" in the formula: ∆ r = (r 0 + r 90 - 2r 45)/2. Corrugated Steel Pipe—Manning's "n" Value 60 in. Structural steel is a standard construction material made from specific grades of steel and formed in a range of industry-standard cross-sectional shapes (or 'Sections'). The n-Value, i.e. In cold-formed steel construction, the use of a range of thin, high strength steels (0.35 mm thickness and 550 MPa yield stress) has increased significantly in recent times. but low carbon steel varies a range of 20.59*104 Mpa and pre hardened steels 19.14*104 Mpa ; Creep. It is the ratio of the stress and the strain of a material. Design values of structural steel material properties Nominal values of structural steel yield strength and ultimate strength For structural design according to Eurocode 3 (EN1993-1-1), the nominal values of the yield strength f y and the ultimate strength f u for structural steel are obtained as a simplification from EN1993-1-1 Table 3.1 , which is reproduced above in tabular format. The n-value is the amount of strengthening for each increment of straining. According to the World Steel Association, there are over 3,500 different grades of steel, encompassing unique physical, chemical, and environmental properties. A typical value of the modulus of elasticity: - at 200 °C (400 °F) is about 193 GPa (28 × 10 6 psi); - at 360 °C (680 °F), 179 GPa (26 × 10 6 psi); - at 445 °C (830 °F), 165 GPa (24 × 10 6 psi); - and at 490 °C (910 °F), 152 GPa (22 × 10 6 psi). The Lankford coefficient (also called Lankford value, R-value, or plastic strain ratio) is a measure of the plastic anisotropy of a rolled sheet metal.This scalar quantity is used extensively as an indicator of the formability of recrystallized low-carbon steel sheets. • The n-values for a coil of steel often are given to three decimal places. TENSILE - YIELD STRENGTH OF STEEL CHART. 2-2/3" x 1/2" Annular 8 in. Strength [] Yield strengtYield strength is the most common property that the designer will need as it is the basis used for most of the rules given in design codes.In European Standards for structural carbon steels (including weathering steel), the primary designation relates to the yield strength, e.g. Other possibilities include a different chemistry, different processing, or even shipment of the wrong coil. 
For the last decade, a large number of steel producers have improved their process control procedures so the n-value throughout most of the coil varies in the third decimal place. C. MECHANICAL PROPERTIES OF CARBON AND STAINLESS STEEL 360 Table C.1: Nominal values of yield strength f y and ultimate tensile strength f u for hot rolled structural steel Standard and steel grade Nominal thickness of the element t [mm] t 40mm 40mm < t 80mm f y [N/mm 2] f u [N/mm ] f y [N/mm 2] f u [N/mm 2] EN 10025-2 S235 235 360 215 360 S275 275 430 255 410 6363 Oak Tree Blvd. Metric is Pascals, N/m^2. 4140 Steel Uses. If the steel is C15, with the material at 900°C 144 MPa, while at 1200°C 65 MPa. There is a complex equation based on the tensile strength/yield strength ratio. The n-value is the one property of sheetmetal that helps the most in evaluating its relative stretchability. Could the 2020s turn out like the 2010s for metal fabricators? ε = strain (m/m, in/in) dL = elongation or compression (offset) of object (m, in) L = length of object (m, in) Stress - σ. 1 MPa = 10 6 Pa = 1 N/mm 2 = 145.0 psi (lbf/in 2); Fatigue limit, endurance limit, and fatigue strength are used to describe the amplitude (or range) of cyclic stress that can be applied to the material without causing fatigue failure. Pipe, Angles, Expanded Metal, Flat bars, Rounds,Channels and Rebar products. The n-value is determined by tensile stress and strain values; while for the r-value calculation the specimen section measurement is also required. A typical S-N curve corresponding to this type of material is shown Curve A in Figure 1. FINISHLINE All-In-One Deburring & Finishing Solution. The work-hardening exponent of sheet steel which indicates how much the material strengthens when the material is deformed. 18 in. Accoding to The ASHRAE Handbook - Fundamentals 2005 Chapter 39 Table 3. σ = F / A (2) where. A value of 0 means that a … This function uses pairwise Wilcoxon tests, comparing a common control sample with each of several treatment samples, in a multiple comparison fashion. The steel has a 6-inch pitch and a 2-inch rise; aluminum has a 9-inch pitch and a 2.5-inch rise. Study it. Steel traded at USD 711 per metric ton on 6 November, which was up 12.7% from the same day in the prior month. Easily access valuable industry resources now with full access to the digital edition of The Tube & Pipe Journal. Get to know the n-value. Enjoy full access to the digital edition of STAMPING Journal, which serves the metal stamping market with the latest technology advancements, best practices, and industry news. However, a fourth or fifth decimal place n-values are … 12 in. A reduction in impurities such as C and N and addition of stabi- ... n-value 0.19 0.18 0.46 Table 3 r-value and forming properties NSSC 180 SUS304 r-value average 1.4 1.0 L.D.R. Steel Hardness Conversion Table. S355 J2+N Steel Plate Grade Specification : Material: Material En-10025-2 Steel Plate , S355 J2+N Steel Plate Standard: EN 10025-2:2004 Item: Offshore & Structural Steel Plate Width: 1000mm-4500mm Thickness: 5mm-150mm Length: 3000mm-18000mm Surface treatment: Bare, galvanized coated or as customer's requirements. Steel's Multiple Comparison Wilcoxon Tests. Is there some other of estimating n-value? At Value Steel & Pipe we're all about offering high-quality product to our customers at some of the industry's most competitive prices.We specialize in sourcing and selling excess stock materials such as Mild Steel Plate, H.S.S. SAE AISI 4140 Alloy Steel. 
The value of the strain hardening exponent lies between 0 and 1. Various strengthening mechanisms are employed to achieve a range of strength, ductility, toughness, and fatigue properties. ... S355J2+N/M is the direct EN equivalent of ASTM A572 GR 50. Checking the n-value of the problem coil would be a good first step. a - Minimum specified value of the American Society of Testing Materials. Easily access valuable industry resources now with full access to the digital edition of The Fabricator en Español. TEL:+86-816-3646575 FAX: +86-816-3639422 Study it. 48 in. Why is there a difference? Value of a 1943 Steel Penny . "simulated" uses Nsim simulated standardized Steel statistics based on random splits of the pooled samples into samples of sizes \(n_1, \ldots, n_k\), to estimate the \(P\)-value. An example might be: I have never heard this type of question asked before last month. AHSS are complex, sophisticated materials, with carefully selected chemical compositions and multiphase microstructures resulting from precisely controlled heating and cooling processes. Enjoy full access to the digital edition of The Additive Report to learn how to use additive manufacturing technology to increase operational efficiency and improve the bottom line. Gage marks spaced at 2 inches are applied with a punch. • The n-values for a coil of steel often are given to three decimal places. Tests show somewhat higher n values for this metal and type of construction than for riveted construction. Your steel supplier must run its tests from the coil ends. For this reason, some of the new advanced higher strength steels have complex microstructures that allow higher n-values for the same strength. N = 0.5(N 1 + N 2) Where N 1 is the smallest SPT-N values over the two effective diameters below the toe level N 2 is the average SPT-N value over 10 effective diameters below the pile toe. Oil Retreats on Signs of OPEC+ Discord Ahead of Key Meeting. | Independence | Ohio 44131-2500 1.3 Forming Properties of Steel 9 1.3.1 Stretchability Index: n-values 11 1.3.2 Drawability index: r-values 14 1.4 Ultrasonic Measurements of r-values: Ultra-Form 19 1.4.1 Review 19 1.4.2 Ultra-Form's Description 22 1.4.3 Other Recent Advances 24 1.5 The Present Research 24 CHAPTER 2. Specimens are obtained from the strip or sheet at set angles with respect to the lamination direction which affects the r-value. A steel mill or service center sometimes will correct poor shape with an extra heavy temper pass. Title: Microsoft Word - IB 4001 Mannings n Value Research.doc Author: SCMcKeen Created Date: 12/28/2005 3:43:44 PM Cold working is the plastic deformation of metal below its recrystallization temperature and this is used in many manufacturing processes, such as wire drawing, forging and rolling. This problem is highlighted as point A in the graph. The N value (standard Penetration test) value is widely used in Geotechnical Engineering designs especially in India and nearby countries. Pipe-arches approx-imately have the same roughness characteristics as their equivalent round pipes. • Our tensile test machine does not compute the n-value. n-value noun. LOW CARBON STEELS 28 We have to convert it into dyne/cm². MF, © Copyright 2020 - PMA Services, Inc. It should be noted that in European design standards, the section factor is presented as A/V which has the same numerical value … and Larger Young's modulus of steel at room temperature is typically between 190 GPa (27500 ksi) and 215 GPa (31200 ksi). 
In the final step we divide the equity value by the number of shares outstanding. The density of steel is in the range of 7.75 and 8.05 g/cm 3 (7750 and 8050 kg/m 3 or 0.280 and 0.291 lb/in 3).The theoretical density of mild steel (low-carbon steel) is about 7.87 g/cm 3 (0.284 lb/in 3).. Density of carbon steels, alloy steels, tool steels and stainless steels are shown below in g/cm 3, kg/m 3 and lb/in 3. EU Quota Tracking EU Quota Tracking. 12 in. roughness than for annually corrugated steel pipe. Fortunately, the n-value is not that complicated, but is a primary metal property measured directly during a tensile test. They are worth about 10 to 13 cents each in circulated condition, and as much as 50 cents or more if uncirculated.. The unit weight of structural steel is specified in the design standard EN 1991-1-1 Table A.4 between 77.0 kN/m 3 and 78.5 kN/m 3. Steel has little or no resistance to heat transfer. However, a fourth or fifth decimal place n-values are sometimes published. Excessive stretching leads to local necking and tearing of the stamping. The following table lists the buy price (what you can expect to pay to a dealer to purchase the coin) and sell value (what you can expect a dealer to pay you if you sell the coin). Steel Pipe, Ungalvanized: 0.015 — Cast Iron Pipe: 0.015 — Clay Sewer Pipe: 0.013 — Polymer Concrete Grated Line Drain: 0.011: 0.010 – 0.013: Notes: (1) Tabulated n-values apply to circular pipes flowing full except for the grated line drain. Guaranteed. For years the n-values were only good to two decimal places. characteristic of steel and titanium in benign environmental conditions. • How much additional tensile testing is required to obtain the n-value? 15 in. We know that the relationship between dyne and Newton is, 1 Newton = 10⁵ dyne. • We run n-value tests on some of our blanks, but the results are different from those provided by our steel supplier. 0 0. measurement of a material's capacity to resist heat flow from one side 18 in. Steel (USA) Price Outlook Prices for hot-rolled coil U.S. steel continued to soar in recent weeks, hitting a one-year high, amid strong manufacturing activity data in China and the U.S in October. The upper curve is the maximum expected n-value as a function of yield strength. Corrugated Metal: Subdrain: 0.017: 0.019: 0.021: Stormdrain: 0.021: 0.024: 0.030: 6. Steel - Coal-tar enamel: 0.010: Steel - smooth: 0.012: Steel - New unlined: 0.011: Steel - Riveted: 0.019: Vitrified clay sewer pipe: 0.013 - 0.015: Wood - planed: 0.012: Wood - unplaned: 0.013: Wood stave pipe, small diameter: 0.011 - 0.012: Wood stave pipe, large diameter: 0.012 - 0.013 Determine the K and n values. A more truthful reason is a lack of understanding. Is this related to the n-value? Highest price of steel per pound in all of Florida. 48 in. PRESS HARDENABLE STEELS (PHS) Press Hardenable Steel (PHS), commonly referred to as Mn22B5 or 15B22, is available as Cold Rolled full hard or annealed and tempered. 12:00 AM . Usually none. And also 1 m² = 10⁴ cm². SAE 4140 (AISI 4140 steel) is a Cr-Mo series (Chrome molybdenum series) low alloy steel, this material has high strength and hardenability, good toughness, small deformation during quenching, high creep strength and long-lasting strength at high temperature. A more truthful reason is a lack of understanding. And stainless steel is about 4% lower. This shows the steel with the 20,000 PSI spread will have more stretchability than the steel with only a 10,000 PSI spread. 
The work-hardening exponent of sheet steel which indicates how much the material strengthens when the material is deformed. Structural steel is a standard construction material made from specific grades of steel and formed in a range of industry-standard cross-sectional shapes (or 'Sections'). Detailer – a person or entity that is charged with the production of the advanced bill of materials, final bill of materials, and the production of all shop drawings necessary to purchase, fabricate and erect structural steel. It will become your most powerful trouble-shooter for sheetmetal stretch failures. The graph shows why this relationship happens. The n-value decreases with increasing yield strength. They provide a quick checklist to those very familiar with the n-value, and an introduction for those less active in the field. S355JR can be supplied as steel plate/ sheet, round steel bar, steel tube/pipe, steel stripe, steel billet, steel ingot, steel wire rods. If the exact value of n is not needed, two or more material samples can be compared using just the TS/YS ratio if none of the samples have any yield point elongation (YPE). Point A shows a steel sample with an n-value below the expected value. Various strengthening mechanisms are employed to achieve a range of strength, ductility, toughness, and fatigue properties. Structural rivet steel , ASTM A141; high-strength structural rivet steel, ASTM A195 . The die reacts to the properties of the incoming blank. Is the third decimal place accurate? Specimens are obtained from the strip or sheet at set angles with respect to the lamination direction which affects the r-value. N umerous research programmes show that some types of fully stressed steel sections can achieve a 30 minute fire resistance without ... factor is presented as A/V which has the same numerical value as Hp/A. strain hardening exponent, has been shown to correlate with stretch forming behavior, while the r m is a measure of deep-drawing capability. Often the n-value is ignored because it seems too complex.
Publications of the Astronomical Society of Australia

Gridded and direct Epoch of Reionisation bispectrum estimates using the Murchison Widefield Array

Published online by Cambridge University Press: 18 July 2019

Cathryn M. Trott, Catherine A. Watkinson, Christopher H. Jordan, Shintaro Yoshiura, Suman Majumdar, N. Barry, R. Byrne, B. J. Hazelton, K. Hasegawa, T. Kaneuji, K. Kubota, W. Li, J. Line, C. Lynch, B. McKinley, D. A. Mitchell, M. F. Morales, B. Pindor, J. C. Pober, M. Rahimi, J. Riding, K. Takahashi, S. J. Tingay, R. B. Wayth, R. L. Webster, M. Wilensky, J. S. B. Wyithe, Q. Zheng, David Emrich, A. P. Beardsley, T. Booler, B. Crosse, T. M. O. Franzen, L. Horsley, M. Johnston-Hollitt, D. L. Kaplan, D. Kenney, D. Pallot, G. Sleap, K. Steele, M. Walker and C. Wu

Author affiliations: International Centre for Radio Astronomy Research (ICRAR), Curtin University, Bentley WA, Australia; ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Bentley 6845, Australia; Blackett Laboratory, Department of Physics, Imperial College, London SW7 2AZ, UK; Faculty of Science, Kumamoto University, Kumamoto 860-8555, Japan; Centre of Astronomy, Indian Institute of Technology Indore, Simrol, Indore 453552, India; School of Physics, The University of Melbourne, Parkville, VIC 3010, Australia; Department of Physics, University of Washington, Seattle, WA 98195, USA; University of Washington, eScience Institute, Seattle, WA 98195, USA; Department of Physics, Brown University, Providence, RI 02912, USA; CSIRO Astronomy & Space Science, Australia Telescope National Facility, P.O. Box 76, Epping, NSW 1710, Australia; School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA; Shanghai Astronomical Observatory, China; Department of Physics, University of Wisconsin–Milwaukee, Milwaukee, WI 53201, USA; International Centre for Radio Astronomy Research (ICRAR), University of Western Australia, Crawley, WA 6009, Australia

Author for correspondence: Cathryn M. Trott, E-mail: [email protected]

We apply two methods to estimate the 21-cm bispectrum from data taken within the Epoch of Reionisation (EoR) project of the Murchison Widefield Array (MWA). Using data acquired with the Phase II compact array allows a direct bispectrum estimate to be undertaken on the multiple redundantly spaced triangles of antenna tiles, as well as an estimate based on data gridded to the uv-plane. The direct and gridded bispectrum estimators are applied to 21 h of high-band (167–197 MHz; z = 6.2–7.5) data from the 2016 and 2017 observing seasons. Analytic predictions for the bispectrum bias and variance for point-source foregrounds are derived. We compare the output of these approaches, the foreground contribution to the signal, and future prospects for measuring the bispectra with redundant and non-redundant arrays. We find that some triangle configurations yield bispectrum estimates that are consistent with the expected noise level after 10 h, while equilateral configurations are strongly foreground-dominated. Careful choice of triangle configurations may be made to reduce foreground bias that hinders power spectrum estimators, and the 21-cm bispectrum may be accessible in less time than the 21-cm power spectrum for some wave modes, with detections in hundreds of hours.

Keywords: cosmology, early universe, instrumentation, methods: statistical

Publications of the Astronomical Society of Australia, Volume 36, 2019, e023. DOI: https://doi.org/10.1017/pasa.2019.15. Copyright © Astronomical Society of Australia 2019

1. Introduction

Exploration of the growth of structure in the first billion years of the Universe is a key observational driver for many experiments. One tracer of the conditions within the early Universe is the 21-cm spectral line of neutral hydrogen, which encodes in its brightness temperature distribution details of the radiation field and gas properties in the intergalactic medium permeating the cosmos (Furlanetto, Oh, & Briggs Reference Furlanetto, Oh and Briggs2006; Pritchard & Loeb Reference Pritchard and Loeb2008). Redshifted to low frequencies, the 21-cm line is accessible with radio telescopes (υ < 300 MHz), including current and future instruments. These include the Murchison Widefield Array, MWAFootnote a (Bowman et al. Reference Bowman, Cairns, Kaplan, Murphy and Oberoi2013; Tingay et al. Reference Tingay2013; Jacobs et al. Reference Jacobs2016); the Precision Array for Probing the Epoch of Reionisation, PAPERFootnote b (Parsons et al. Reference Parsons2010); the LOw-Frequency ARray, LOFARFootnote c (van Haarlem et al. Reference van Haarlem2013; Patil et al. Reference Patil2016); the Long Wavelength Array, LWAFootnote d (Ellingson et al. Reference Ellingson, Clarke, Cohen, Craig, Kassim, Pihlstrom, Rickard and Taylor2009), and the future HERA (DeBoer et al. Reference DeBoer2016) and SKA-Low (Koopmans et al. Reference Koopmans2015). The weakness of the signal, combined with the expectation that most of its information content is contained in the second moment (Wyithe & Morales Reference Wyithe and Morales2007), which is uncorrelated across spatial Fourier wave mode, motivates the use of the power spectrum as a statistical tool for detecting and characterising the cosmological signal.
Despite the ease with which the power spectrum can be computed from radio interferometric data, the presence of strong, spectrally structured residual foreground sources (Trott, Wayth, & Tingay Reference Trott, Wayth and Tingay2012; Datta, Bowman, & Carilli Reference Datta, Bowman and Carilli2010; Vedantham, Udaya Shankar, & Subrahmanyan Reference Vedantham, Udaya Shankar and Subrahmanyan2012; Thyagarajan et al. Reference Thyagarajan2015), complex instrumentation (Trott & Wayth Reference Trott and Wayth2016), and imperfect calibration (Patil et al. Reference Patil2014; Barry et al. Reference Barry, Hazelton, Sullivan, Morales and Pober2016), yield power spectra that are dominated by systematics. Thus far, a detection of signal from the Epoch of Reionisation (EoR) has not been achieved (Patil et al. Reference Patil2016; Beardsley et al. Reference Beardsley2016; Trott et al. Reference Trott and Wayth2016; Cheng et al. Reference Cheng2018). These systematics, combined with the expectation that non-Gaussian information can be extracted usefully from cosmological data, lead the discussion for other statistics. The bispectrum, as a measure of signal non-Gaussianity, is one such statistic that contains cosmologically relevant information (Bharadwaj & Pandey Reference Bharadwaj and Pandey2005; Majumdar et al. Reference Majumdar, Pritchard, Mondal, Watkinson, Bharadwaj and Mellema2018; Watkinson et al. Reference Watkinson, Giri, Ross, Dixon, Iliev, Mellama and Pritchard2018), while being relatively straightforward to compute with interferometric data (Shimabukuro et al. Reference Shimabukuro, Yoshiura, Takahashi, Yokoyama and Ichiki2017). The bispectrum is the Fourier Transform of the three-point correlation function, and extracts higher-order correlations between different spatial scales. Its spatial and redshift evolution can be used to place different constraints on the underlying processes that set the 21-cm brightness temperature, and therefore it provides complementary information to the power spectrum. In an early paper exploring the use of the bispectrum for a model EoR signal, and radio interferometers, Bharadwaj & Pandey (Reference Bharadwaj and Pandey2005) demonstrated that a strong non-Gaussian signal is produced by the presence of ionised regions, and discussed the behaviour of the power spectrum and bispectrum signals as a function of frequency channel separation, although they only consider non-Gaussianity due to the ionisation field modelled as non-overlapping randomly placed spherical ionised regions. Some recent work has explored the combination of bispectrum with other tracers (CII spectral features) to extract clean cosmological information (Beane & Lidz Reference Beane and Lidz2018). The bispectrum has also been used in the single-frequency (angular) case in the CMB community, where non-Gaussianities can be contaminated by structured foregrounds (Jung, Racine, & van Tent Reference Jung, Racine and van Tent2018). Majumdar et al. (Reference Majumdar, Pritchard, Mondal, Watkinson, Bharadwaj and Mellema2018) explore the ability of the bispectrum to discriminate fluctuations in the matter density distribution from those of the hydrogen neutral fraction, reporting that for some triangle configurations the sign of the bispectrum is a marker for which of these processes is dominating the bispectrum. 
They show output bispectra for equilateral and isosceles configurations over a range of wavemodes and redshifts, including parameters of relevance to current low-frequency 21-cm experiments (z < 9, 0.1 < k < 1.0). For modes relevant to the MWA, the bispectrum amplitude fluctuates in sign with wavenumber and triangle geometry (stretched → equilateral → squeezed) with a range spanning $10^3 - 10^9$ mK$^3$ h$^{-6}$ Mpc$^6$. This range of potential signs and amplitudes in measurable modes and redshifts motivates us to study this signal in MWA data. Watkinson et al. (Reference Watkinson, Giri, Ross, Dixon, Iliev, Mellama and Pritchard2018) provide a useful tool for visualising the correspondence of real-space structures and bispectrum. They highlight that equilateral k-vector configurations probe above-average signal concentrated in filaments with a circular cross-section (their Figure 1). Stretched (flattened) k-vector triangle configurations (with one k-mode larger than the other two), by extension, probe above-average signal concentrated in filaments with ellipsoidal cross-sections (at the extreme these filaments tend towards planes). Finally, squeezed k-vector triangle configurations (with one k-mode smaller than the other two) correspond to a modulation of a large-scale mode over small-scale plane-wave concentrations of above-average signal, and therefore measure the correlation of the small-scale power spectrum with large-scale modes. Notably, they introduce and explore other bispectrum normalisations that are found to be more stable to parameter fluctuations. In this work, we discuss the relative merits of different bispectrum statistics for use with real data in the presence of real systematics. Crucially, the switch to positive bispectrum at the end of reionisation occurs as we reach regimes/scales at which the concentration of above-average signal drives the non-Gaussianity. This will occur before the EoR (on scales where the density field is the dominant driver of the temperature fluctuations, or, if the spin temperature is not yet saturated during this phase, when heated regions are driving the non-Gaussianity) and towards the end of reionisation (when islands of 21-cm signal drive the non-Gaussianity). Conversely, a negative-valued bispectrum will be unique to the phase when ionised regions drive the non-Gaussianity. In general, foreground astrophysical processes are not expected to produce a negative bispectrum, because they are associated with overdensities in the brightness temperature distribution (Lewis Reference Lewis2011; Watkinson & Pritchard Reference Watkinson and Pritchard2014). These factors may play a future important role in discriminating real cosmological non-Gaussianity from contaminants. Despite some work studying the sensitivity of current and future experiments for measuring the bispectrum (Shimabukuro et al. Reference Shimabukuro, Yoshiura, Takahashi, Yokoyama and Ichiki2017; Yoshiura et al. Reference Yoshiura, Shimabukuro, Takahashi, Momose, Nakanishi and Imai2015), these have used idealised scenarios that omit any residual foreground signal and systematics introduced by the instrument. Bharadwaj & Pandey (Reference Bharadwaj and Pandey2005) discuss foreground fitting tools using frequency separation to study the bispectrum over visibility correlations across frequency, but this method breaks down for large field-of-view instruments where the interferometric response affects the foreground smoothness (Morales et al. Reference Morales, Hazelton, Sullivan and Beardsley2012).
Further, no 21-cm interferometric data have been used to estimate the bispectrum. In this work, we address both of these by presenting bispectrum estimators that can use real datasets, computing the expected impact of foregrounds measured by the instrument, and applying the estimators to 21 h of MWA EoR data. 2. MWA Phase II Array The MWA is a 256-tile low-frequency radio interferometer located in the Western Australian desert, on the future site of the Square Kilometre Array (SKA) (Tingay et al. Reference Tingay2013; Bowman et al. Reference Bowman, Cairns, Kaplan, Murphy and Oberoi2013). The telescope operates from 80 to 300 MHz with antennas spread over a 5-km diameter. Its primary science areas include exploration of the EoR, radio transients, solar and heliospheric studies, study of pulsars and fast transients, and the production of a full-sky low-frequency extragalactic catalogue. In 2016 it underwent an upgrade from 128 to 256 antenna tiles (Wayth et al. Reference Wayth2018). At any time, 128 of the tiles can be connected to the signal processing system. The array operates in a 'compact' configuration, utilising redundant spacings and short baselines for EoR science, or an 'extended' configuration, maximising angular resolution and instantaneous uv-coverage. The compact configuration is employed in this work. Figure 1. Zoomed MWA compact configuration layout showing the two hexagonal sub-arrays of 36 tiles each, with redundant tile spacings. These short redundant baselines are used in this work to form equilateral and isosceles triangle bispectra with high sensitivity. Some of the longer baseline tiles of the MWA are not shown. The compact configuration has a maximum baseline of 500 m and is optimised for EoR science. Figure 1 shows the tile layout, including the two 36-tile hexagonal subarrays of redundantly spaced tiles. The minimum redundant spacing is 14 m. The primary motivations for the hexagons are twofold: (1) to increase the sensitivity to angular scales of relevance for the EoR, allowing coherent addition of measurements from redundant baselines, and (2) enabling additional methods for calibrating the array (redundant calibration, Li et al. Reference Li2018, Joseph et al. Reference Joseph, Trott and Wayth2018). For the bispectrum, there is an additional advantage of multiple, redundant equilateral triangle baselines being formed from the short spacings. These can be added coherently to study the bispectrum signal on particular scales, and allows for a direct bispectrum measurement (perfectly defined triangles formed from discrete baselines). These direct bispectrum results can be compared to a more general gridded bispectrum, whereby all baselines formed by an irregularly spaced array (such as MWA Phase I, or the non-hexagon tiles of Phase II compact) can be gridded onto the Fourier (uv-) plane, using a gridding kernel that represents the Fourier response function of the telescope (in this case, the Fourier Transform of the primary beam response to the sky). These estimators will both be explored in this work. 3. Power spectrum We briefly review the power spectrum as the primary estimator for studying the EoR with 21-cm observations. The power spectrum is typically used to describe radio interferometer observations from the EoR, and contains all of the Gaussian-distributed fluctuation information. The power spectrum is the power spectral density of the spatial fluctuations in the 21-cm brightness temperature field. 
It is used because it encodes the fluctuation variance (where most of the EoR signal is expected to reside), and sums signal from across the observing volume to increase sensitivity. It is defined as (1) $$P(\vec k) = {\delta _D}(\vec k - {\vec k^{'}}){1 \over {{\Omega _V}}}\langle {V^{ *} }(\vec k)V({\vec k^{'}})\rangle ,$$ where $V(\vec k) = V(u,v,\eta ) = {\cal F}{\cal T}\left( {V(u,v,\nu )} \right)$ is the measured interferometric visibility (Jansky), Fourier transformed along frequency (υ) to map frequency to line-of-sight spatial scales (Jy Hz) at a given point in the Fourier (uv-) angular plane (u, v); 〈〉 encode an ensemble average over different realisations of the Universe, and the δD-function ensures that we are expecting to measure a Gaussian random field where the different modes are uncorrelatedFootnote e. Further assuming spatial isotropy allows us to average incoherently in spherical shells, where $\vec k = k$. ΩV provides the volume normalisation, where ΩV = (BW) Ω is the product of the observing bandwidth and angular field of view. Converting from measured to physical units maps Jy2 Hz2 to mK2 h −6 Mpc6. After volume normalisation this becomes, mK2 h −3 Mpc3. 3.1. Power spectra with radio interferometric data The power spectrum can be produced naturally with interferometric data. Unlike optical telescopes that produce images of the sky, or single-dish radio telescopes that acquire a single sky power, a radio interferometer visibility (Jy) directly measures Fourier representations of the sky brightness distribution at the projected baseline location (u = Δx/λ; v = Δy/λ). In the flat-sky approximationFootnote f: (4) $$V(u,v,\nu ) = \int_\Omega A(l,m,\nu )S(l,m,\nu )\exp ( - 2\pi i(ul + vm))dldm,$$ where A(l, m, υ) is the instrument primary beam response to the sky at position (l, m) from the phase centre and frequency υ, S(l, m, υ) is the corresponding sky brightness (Jy/sr, which is proportional to temperature), and the exponential encodes the Fourier kernel. The physical correspondence of sky projected on to the tile locations yields a fixed set of discrete but incomplete Fourier modes to be measured. This incompleteness leads to parts of the Fourier plane where there is no information. The line-of-sight spatial scales are obtained by Fourier Transform of visibilities measured at different frequencies, along frequency to map υ to η: (5) $$V(\eta (k)) = {\cal F}{\cal T}(V(\nu )) = {{\Delta \nu } \over {{N_{{\rm{c}}h}}}}\sum\limits_{j = 1}^{{N_{{\rm{c}}h}}} V(\nu )\exp \left( { - {{2\pi ijk} \over {{N_{{\rm{c}}h}}}}} \right),$$ where N ch is the number of spectral channels, Δυ is the spectral resolution, and j and k index frequency and spatial mode (Hz−1, or seconds). The attenuation of the sky due to the primary beam (and general sky finiteness) alters the complete continuous Fourier Transform to a windowed transform, whereby the primary beam response leaks signal into adjacent Fourier modes, as can be seen using the convolution theorem: (6) \begin{equation} V(u,v,\eta) = {\kern2pt\tilde{\kern-2pt A}}(u,v,\eta) \circledast \tilde{S}(u,v,\eta), \end{equation} where the true sky brightness distribution is convolved with the Fourier Transform of the primary beam response. This leakage implies that the visibility measured by a discrete baseline actually contains signal from a region of the Fourier plane, as described by the Fourier beam kernel, Ã (u, v, η). 
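As a concrete illustration of the frequency-to-η transform of Equation (5), the following minimal Python/numpy sketch applies the transform to a synthetic single-baseline visibility spectrum. The channel count and width (384 channels of 80 kHz) are taken from the observation description later in the text; the spectral shape, noise level, and exact normalisation convention are illustrative assumptions rather than the actual analysis pipeline.

```python
import numpy as np

# Synthetic single-baseline visibility spectrum: a smooth, foreground-like
# power law plus complex Gaussian noise (illustrative values only).
n_ch = 384                  # number of spectral channels (80 kHz each)
d_nu = 80e3                 # channel width [Hz]
nu = 167.035e6 + d_nu * np.arange(n_ch)   # channel frequencies [Hz]

rng = np.random.default_rng(42)
vis = (10.0 * (nu / nu[0]) ** -0.8
       + 0.05 * (rng.standard_normal(n_ch) + 1j * rng.standard_normal(n_ch)))

# Equation (5): discrete Fourier transform along frequency, nu -> eta,
# using the (Delta nu / N_ch) prefactor as written in the text.
vis_eta = (d_nu / n_ch) * np.fft.fftshift(np.fft.fft(vis))
eta = np.fft.fftshift(np.fft.fftfreq(n_ch, d=d_nu))   # line-of-sight modes [s]

print(vis_eta.shape, eta.min(), eta.max())
```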
In general, to compute the power spectrum from a large amount of data, we are motivated by the signal weakness to add the data coherently; i.e., we sum complex visibilities directly that contribute signal to the same point in the Fourier uv-plane. To do this, the measurement from each baseline is convolved with the Fourier beam kernel and 'gridded' (added with a weight) onto a common two-dimensional plane. Signal will add coherently, while noise adds as the square-root (because the thermal noise is uncorrelated between measurements). The weights for each measurement are also gridded with the kernel onto a similar plane. After addition of all the data, the signal uv-plane is divided by the weights to yield the optimal-weighted average signal at each point. The resulting cube resides in (u, v, υ) space, and can be Fourier transformed along frequency to obtain a cube in (u, v, η) space. The power spectrum can then be formed by squaring and normalising the cube, and averaging incoherently (in power) in spherical shells: (7) $$P(|\vec k|) = {{\sum\limits_{i \in k} V_i^ * (\vec k){V_i}(\vec k){W_i}(\vec k)} \over {\sum\limits_{i \in k} {W_i}(\vec k)}},$$ where W are the weights and $|\vec k| = |({k_u},{k_v},{k_\eta })| = \sqrt {k_u^2 + k_v^2 + k_\eta ^2} $. As an intermediate step, the cylindrically averaged power spectrum can be formed (e.g., Datta et al. Reference Datta, Bowman and Carilli2010): (8) $$P({k_ \bot },{k_\parallel }) = {{\sum\limits_{i \in {k_ \bot }} {V^ * }(\vec k)V(\vec k)W(\vec k)} \over {\sum\limits_{i \in {k_ \bot }} W(\vec k)}},$$ and ${k_ \bot } = \sqrt {k_u^2 + k_v^2}, \ k_\parallel=k_\eta$. This is a useful estimator for discriminating contaminating foregrounds (continuum sources with power concentrated at small k ||) from 21-cm signal. Herein we will refer to this power spectrum, and its bispectrum analog, as the 'gridded power spectrum' and 'gridded bispectrum', respectively. Alternatively, one can take the baselines themselves, and their visibilities measured along frequency, and take the Fourier Transform directly along the frequency axis. This 'delay spectrum' approach is utilised by some experiments with short baselines (Parsons et al. Reference Parsons, Pober, McQuinn, Jacobs and Aguirre2012; Ali et al. Reference Ali2015; Thyagarajan et al. Reference Thyagarajan2015), both to increase sensitivity when there are redundant spacings, and to work as a diagnostic. The frequency and η axes are not parallel, except at zero-length baseline. Because an interferometer is formed instantaneously from antennas with a fixed spatial offset, the baseline length in Fourier space (e.g., u) evolves with frequency as u = Δxυ/c, and this evolution is therefore increased for larger bandwidths and for longer baselines. For the short spacings of interest to the EoR, the correspondence is good, and the delay transform can be used to mimic the direct k || transform of gridded data (see Figure 1 of Morales et al. Reference Morales, Hazelton, Sullivan and Beardsley2012, for a visual explanation). In general, 'imaging' arrays with many non-redundant spacings are suited to gridded power spectra, whereas redundant arrays, with a lesser number of multiply-sampled modes, are suited to delay power spectra. For the Phase II compact MWA, the two hexagonal subarrays have these short-spaced redundant baselines, and the 'delay power spectrum' and its bispectrum analog can also be used effectively. 
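The incoherent shell averaging of Equations (7) and (8) can be sketched in a few lines of numpy. The function below assumes that a gridded (u, v, η) visibility cube and a matching weights cube already exist, i.e. the coherent gridding step has been done; the cube size, k-coordinates, and bin edges are illustrative placeholders rather than the values used in the actual analysis. A cylindrical (k⊥, k||) average follows the same pattern, binning sqrt(ku^2 + kv^2) separately from k_eta.

```python
import numpy as np

def spherical_power_spectrum(vis_cube, weights, ku, kv, keta, n_bins=20):
    """Average |V|^2 into spherical |k| shells following Equation (7).

    vis_cube : complex gridded visibilities, shape (Nu, Nv, Neta)
    weights  : gridded weights, same shape
    ku, kv, keta : 1-D arrays of k-coordinates along each axis [h Mpc^-1]
    """
    # |k| for every cell of the cube
    kmag = np.sqrt(ku[:, None, None] ** 2 + kv[None, :, None] ** 2
                   + keta[None, None, :] ** 2)
    bins = np.linspace(kmag.min(), kmag.max(), n_bins + 1)
    which = np.digitize(kmag.ravel(), bins) - 1

    power = (np.abs(vis_cube) ** 2 * weights).ravel()
    wsum = weights.ravel()

    # Weighted average of |V|^2 in each shell (Equation 7, before unit conversion)
    num = np.bincount(which, weights=power, minlength=n_bins)
    den = np.bincount(which, weights=wsum, minlength=n_bins)
    pk = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    k_centres = 0.5 * (bins[:-1] + bins[1:])
    return k_centres[:n_bins], pk[:n_bins]

# Illustrative call with a random cube standing in for real gridded data
rng = np.random.default_rng(0)
shape = (32, 32, 16)
cube = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
w = np.ones(shape)
k_u = np.linspace(-0.05, 0.05, shape[0])
k_v = np.linspace(-0.05, 0.05, shape[1])
k_eta = np.linspace(0.0, 1.0, shape[2])
k, pk = spherical_power_spectrum(cube, w, k_u, k_v, k_eta)
print(k[:3], pk[:3])
```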
In general, we would not suggest use of the delay spectrum to undertake EoR science, because of the limitations discussed, but it is the appropriate analogue for the direct bispectrum estimator, and is therefore pertinent for the normalised bispectrum analysis. 4. Bispectrum The bispectrum is the Fourier Transform of the three-point correlation function. Akin to the two-point correlation function (the Fourier dual of which is the power spectrum), the three-point correlation function measures the excess signal over that of a Gaussian random field distribution measured at three spatial locations, averaged over the volume. For a field with Fourier Transform denoted by $\Delta (\vec k)$, the bispectrum is formed over closed triangles of k vectors in Fourier space: (9) $$\langle \Delta ({\vec k_1})\Delta ({\vec k_2})\Delta ({\vec k_3})\rangle = {\delta _D}({\vec k_1},{\vec k_2},{\vec k_3}){\rm{B}}\,({{\rm{\tilde k}}_1},{{\rm{\tilde k}}_2},{{\rm{\tilde k}}_3}).$$ Here the δD-function ensures closure in Fourier space. It has units of mK3 h −6Mpc6 after volume normalisation. The bispectrum is often applied to matter density fields, where $\Delta (\vec k)$ is the Fourier Transform of matter overdensity, $\delta (\vec x) = {{\rho (\vec x)} \over {\overline \rho }} - 1$. In radio interferometric measurements, the coherence of the wavefront (the visibilities obtained by cross-correlating voltages from individual antennas) represents the Fourier Transform of the sky brightness temperature distribution, measured in Jansky. As discussed earlier, this bispectrum estimator can be unstable, with cosmological simulations showing rapid fluctuations between positive and negative values as non-Gaussianity becomes negligible but the amplitude is still large. As such, Watkinson et al. (Reference Watkinson, Giri, Ross, Dixon, Iliev, Mellama and Pritchard2018) suggest the normalised bispectrum as a more stable statistic: (10) $${\cal B}({\vec k_1},{\vec k_2},{\vec k_3}) = {{{\rm{B}}\,({{{\rm{\tilde k}}}_1},{{{\rm{\tilde k}}}_2},{{{\rm{\tilde k}}}_3})\sqrt {{{\rm{k}}_1}{{\rm{k}}_2}{{\rm{k}}_3}} } \over {\sqrt {P({{\vec k}_1})P({{\vec k}_2})P({{\vec k}_3})} }},$$ where $P(|\vec k|)$ is the three-dimensional power spectrum, which describes the volume-normalised variance on a given spatial scale, and is the Fourier Transform of the two-point correlation function (Eggemeier & Smith Reference Eggemeier and Smith2017; Brillinger & Rosenblatt Reference Brillinger, Rosenblatt and Bernard1967). This normalisation isolates the contribution from the non-Gaussianity to the bispectrum, by normalising out the amplitude part of the statistic. It is akin to normalising the third central moment by σ 3 to calculate the skewness. 4.1. Bispectrum with radio interferometric data Because the bispectrum is formed from the triple product of a triangle of wavespace measurements, it can be formed directly through the product of three interferometric visibilities. In the limit where the array has perfect (complete) uv-sampling, individual measurements of signal on triangles of baselines can be multiplied to form the bispectrum estimate. 
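At the level of a single closed triangle of Fourier-space measurements, the bispectrum of Equation (9) and the normalised bispectrum of Equation (10) reduce to a few arithmetic operations, as the minimal sketch below shows. The numbers are hypothetical; taking the real part of the triple product and the trivial volume normalisation are conventions assumed here for illustration only.

```python
import numpy as np

def bispectrum(v1, v2, v3, volume):
    """Triple product of Fourier-space values on a closed k-triangle (Eq. 9),
    with a simple volume normalisation; the real part is taken by convention."""
    return (v1 * v2 * v3).real / volume

def normalised_bispectrum(b, k1, k2, k3, p1, p2, p3):
    """Equation (10): B * sqrt(k1 k2 k3) / sqrt(P1 P2 P3)."""
    return b * np.sqrt(k1 * k2 * k3) / np.sqrt(p1 * p2 * p3)

# Hypothetical single-triangle example (arbitrary units)
v1, v2, v3 = 3.0 + 1.0j, 2.0 - 0.5j, -1.0 + 2.0j
B = bispectrum(v1, v2, v3, volume=1.0)
k1 = k2 = 0.01; k3 = 0.1          # stretched isosceles configuration [h Mpc^-1]
P1, P2, P3 = 1e4, 1e4, 5e3        # power spectrum values at those modes
print(B, normalised_bispectrum(B, k1, k2, k3, P1, P2, P3))
```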
In the more general case, where an interferometer has instantaneously incomplete, but well-sampled baselines, there are two options for extracting the triangles of signal measurements: direct (via multiplication of measurements from three tiles forming a triangle of baselines), or gridded, where each uv-measurement is gridded onto the uv-plane (with its corresponding Fourier beam kernel and weights), and the final bispectra are computed from the fully-integrated and gridded data. Direct bispectrum estimators can be applied to specific triangles according to the array layout, but these are usually unique, with irregular configurations (all three internal angles are distinct), leading to difficult cosmological interpretation and poor sensitivity. These issues arise for imaging-like arrays with pseudo-random layouts, but are alleviated for redundant arrays, where regular triangles (isosceles and equilateral) exist and are instantaneously available in many copies in the array. These features make interpretation more straightforward and increase sensitivity to these bispectrum modes. Gridded bispectrum estimators can be applied to any array, yield improved sensitivity by coherent gridding of data, and may allow for a wider range of triangles to be probed. Nonetheless, they suffer from the increased difficulty of extracting robust estimates that correctly account for the correlation of data in uv-space. With the benefit of having a redundant array, we will apply both sets of estimators to our data. 5. The gridded estimator Each measured visibility encodes information about a small range of Fourier modes of the sky brightness distribution. Although each baseline is usually reported as a single number representing the antenna separation measured between antenna centres, the baselines actually measure a range of separations when accounting for the actual physical size.Footnote g This translates to a range of Fourier modes being measured by a given baseline, and is equivalent to the statement that a finite primary beam response to the sky mixes Fourier modes through spectral leakage (effectively a taper on the continuous Fourier transform). Thus, when measurements from different baselines are combined coherently (with phase information) onto a uv-plane, they can be gridded with a kernel that is the Fourier Transform of the primary beam response to the sky. Such a gridding kernel captures the degree of spectral leakage introduced by the antenna response, and means that baselines of similar length and orientation have some shared information. The gridding kernel is represented by à in Equation (6). With a single defined visibility phase centre, all visibility measurements can be added with this beam kernel onto a single plane (for each frequency channel), along with their associated weights, to form a coherently averaged estimate for the Fourier representation of the sky brightness temperature: (11) $${\hat V_{uv}} = {{\sum\limits_i V({u_i},{v_i})\tilde A({u_i},{v_i})W({u_i},{v_i})} \over {\sum\limits_i \tilde A({u_i},{v_i})W({u_i},{v_i})}},$$ where i indexes measurement and W is the weight associated with each. The bispectrum is then estimated as the sum over the beam-weighted gridded visibilities: (12) $${\hat B_{123}} = {{\sum\limits_{j \in \tilde A} {{\hat V}_{j1}}{{\hat V}_{j2}}{{\hat V}_{j3}}{W_{j1}}{W_{j2}}{W_{j3}}} \over {\sum\limits_j {W_{j1}}{W_{j2}}{W_{j3}}}},$$ (13) $${W_{1j}} = {W_1}{\tilde A_{1j}},$$ are the beam-gridded measurement weights. 5.1. 
Gridded estimator noise The gridded bispectrum estimator is formed from coherent addition of visibilities over all observations. As such, if a given visibility has thermal noise level σ therm (Jy Hz)Footnote h, the uncertainty on the bispectrum is (14) \begin{equation} \Delta\hat{B}_{\rm TOT} = \frac{\sqrt{3}\sigma_{\rm therm}^3}{\sqrt{\displaystyle\sum_{{\kern2pt\tilde{\kern-2pt A}}} W_{1}W_{2}W_{3}}}, \end{equation} where the denominator is the sum over the gridding kernel of the weights triplets. The uncertainty on the normalised bispectrum is then: (15) $$\Delta {\hat {\cal B}_{123,{\rm{TOT}}}} = {\cal B}\sqrt {{{\Delta {B^2}} \over {{B^2}}} + {{\Delta P_1^2} \over {4P_1^2}} + {{\Delta P_2^2} \over {4P_2^2}} + {{\Delta P_3^2} \over {4P_3^2}}} ,$$ where the uncertainties can contain both thermal noise and noise-like uncertainty from residual foregrounds. For 300 2-min observations, and 24 triangles per 28 m baseline triad group, the expected thermal noise level for a complete dataset is (16) $$\Delta B = 4.2 \times {10^{10}}{\rm{m}}{K^3}{h^{ - 6}}{\rm{M}}p{c^6}.$$ The presence of residual foregrounds will be studied in Section 10.2. 6. The direct estimator As an alternate approach to the gridded estimator, visibilities are Fourier-transformed along the frequency direction to compute the delay transform, and closed bispectrum triangles formed from the closed redundant triads of antennas. This approach does not use the primary beam and ignores the local spatial correlations generated by the primary beam spatial taper. It also transforms along a dimension that changes angle with respect to k || as a function of baseline length, but approximates a k || Fourier Transform for small u (small angle). The bispectrum for a given observation is the weighted average over all triads: (17) $${\hat B_{123}} = {{(\sum\limits_i {V_{1i}}{W_{1i}})(\sum\limits_i {V_{2i}}{W_{2i}})(\sum\limits_i {V_{3i}}{W_{3i}})} \over {(\sum\limits_i {W_{1i}})(\sum\limits_i {W_{2i}})(\sum\limits_i {W_{3i}})}},$$ where i indexes over redundant triangles (triads). The final bispectrum estimate then performs a weighted average over observations, such that: (18) $${\hat B_{123,{\rm{TOT}}}} = {{\sum\limits_j {{\hat B}_{123,j}}{W_j}} \over {\sum\limits_j {W_j}}},$$ (19) $${W_j} = (\sum\limits_i {W_{1i}})(\sum\limits_i {W_{2i}})(\sum\limits_i {W_{3i}}).$$ Figure 2. Schematic of how a stretched isosceles triangle configuration is extracted from redundant angularly-equilateral triangle baselines of the MWA Phase II hexagons. 6.1. Direct Estimator noise The direct bispectrum estimator is formed from coherent addition of baseline triplets for a given observation, which are then averaged with relative weights to the final estimate. As such, if a given visibility has thermal noise level σ therm (Jy Hz), the uncertainty on the bispectrum is (20) $$\Delta {\hat B_{{\rm{TOT}}}} = {{\sqrt 3 \sigma _{{\rm{therm}}}^3} \over {\sqrt {\sum\limits_j {W_j}} }}.$$ The uncertainty on the normalised bispectrum is then given by the same expression as for the Gridded Estimator [Equation (15)]. For 300 observations, and 24 triangles per 28 m baseline triad group, the expected noise level is 7. Triangles considered for estimation Unlike bispectrum estimates that can be obtained from Phase I data, where the array is in an imaging configuration with no redundant triangles, we aim to take advantage of the 72 redundant tiles in the hexagonal sub-arrays, afforded by the Phase II layout. 
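The redundant triads referred to here can be identified geometrically from the tile positions. The sketch below builds a synthetic triangular-lattice layout standing in for one of the 36-tile hexagons (it is not the actual MWA tile list) and searches for closed triads whose three side lengths all equal the 14 m spacing quoted in the text; the tolerance and lattice extent are arbitrary choices of this illustration.

```python
import numpy as np
from itertools import combinations

def hex_grid(spacing=14.0, rings=3):
    """Synthetic triangular-lattice layout standing in for one MWA hexagon."""
    pts = []
    for i in range(-rings, rings + 1):
        for j in range(-rings, rings + 1):
            x = spacing * (i + 0.5 * j)
            y = spacing * (np.sqrt(3) / 2.0) * j
            if np.hypot(x, y) <= spacing * rings + 1e-6:
                pts.append((x, y))
    return np.array(pts)

def equilateral_triads(positions, side, tol=0.1):
    """Return index triples whose three pairwise separations all equal `side`."""
    triads = []
    for a, b, c in combinations(range(len(positions)), 3):
        d1 = np.linalg.norm(positions[a] - positions[b])
        d2 = np.linalg.norm(positions[b] - positions[c])
        d3 = np.linalg.norm(positions[c] - positions[a])
        if all(abs(d - side) < tol for d in (d1, d2, d3)):
            triads.append((a, b, c))
    return triads

tiles = hex_grid()
print(len(tiles), "tiles,", len(equilateral_triads(tiles, 14.0)), "equilateral 14 m triads")
```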
This allows for both direct and gridded bispectrum estimators to be applied to matched observations with matched data calibration. The most numerous (highest sensitivity) groups of redundant triangles are the angularly-equilateral configurations of the 14 m and 28 m baselines (48 and 24 sets, respectively). For these triangles, the equilateral configurations exist only for the η = 0 (k || = 0) line-of-sight mode. Other configurations of these closed angular triangles are isosceles or irregular triangles, depending on the η values chosen; however, the closed triangle requirement of the bispectrum demands that: (22) $${\eta _1} + {\eta _2} + {\eta _3} = 0,$$ in addition to the angular components of the vectors summing to zero (as is enforced by choosing the closed triangle baselines). For comparison with theoretical predictions, we will focus on equilateral and isosceles triangles. The 14 m and 28 m baselines are very short, corresponding to cosmological scales of k ⊥ ≃ 0.01hMpc−1 at z = 9. Thus, although the equilateral configuration is cosmologically relevant and the easiest to interpret, these modes are expected to be heavily foreground dominated (i.e., they correspond to the line-of-sight DC mode, and the large angular scales of diffuse and point-source foreground emission). We consider them for completeness, but will show them to be cosmologically irrelevant from an observational perspective when computed this way. These same angularly-equilateral triangle configurations will, however, be used to form relevant isosceles configurations with η 1 = η 2 and η 3 = −2 η 1. Given that we aim to sample modes where foregrounds are not dominant in our power spectra, these isosceles configurations form 'stretched' (also referred to 'flattened' in Watkinson et al. Reference Watkinson, Giri, Ross, Dixon, Iliev, Mellama and Pritchard2018) configurations (k || > > k ⊥). Figure 2 shows how the stretched isosceles configurations are extracted from the data with a redundant baseline triad. Figure 3 then shows schematically the approximate vectors for two of the four isosceles configurations considered here. The direct and gridded estimators are applied to 21.0 h of Phase II high-band zenith-pointed data, comprising 10.7 h (320 observations) on the EoR0 field (RA = 0 h, Dec. = −27°) and 10.3 h (309 observations) on the EoR1 field (RA = 4 h, Dec. = −30°). We observe 30.72 MHz in 384 contiguous 80 kHz channels, with a base frequency of 167.035 MHz. Approximately 15% of the observations were obtained from drift-scan data, where the telescope remains pointed at zenith for many hours and the sky drifts through. For consistency with the drift-n-shift data, we chose drift scan data observed with the field phase centres within 3° of zenith. The data were observed over 5 weeks from 2016 October 15 to November 28, and 1 week in 2017 July. Because the delay spectrum is used as part of the power spectrum estimator for the direct bispectrum, each observation was individually inspected for poor calibration or data quality, and bad observations excised from the dataset. The excised observations comprised ∼ 5% of the dataset, and primarily were due to poor calibration solutions over sets of data contiguous in time due to poor instrument conditions (e.g., many flagged tiles or spectral channels). Figure 3. Schematic of how isosceles triangle vectors are extracted, overlaid on a power spectrum. We aim to choose triangles with vectors that reside in noise-like regions of the delay spectrum. 
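Combining the closed angular triads described in this section with the direct estimator of Equations (17)–(19) gives the per-observation stretched isosceles bispectrum, sketched below. The delay spectra are random placeholders standing in for calibrated data (so the printed value is noise-like), the closure condition of Equation (22) is imposed by choosing η3 = −2η1, and baseline orientation and conjugation bookkeeping is deliberately glossed over. Averaging these per-observation estimates with the weights of Equations (18) and (19) then gives the final measurement.

```python
import numpy as np

def stretched_isosceles_bispectrum(v_delay, eta, eta1, weights=None):
    """Equation (17) for one observation and one stretched isosceles configuration.

    v_delay : complex array (n_triads, 3, n_eta); delay spectra of the three
              baselines in each redundant closed triad
    eta     : 1-D array of delay modes matching the last axis [s]
    eta1    : the repeated leg of the triangle; closure (Eq. 22) then fixes
              eta3 = -2 * eta1
    """
    n_triads = v_delay.shape[0]
    if weights is None:
        weights = np.ones((n_triads, 3))
    j1 = np.argmin(np.abs(eta - eta1))
    j3 = np.argmin(np.abs(eta + 2.0 * eta1))

    # Weighted average of each leg over the redundant triads, then the triple product
    avg1 = np.sum(v_delay[:, 0, j1] * weights[:, 0]) / np.sum(weights[:, 0])
    avg2 = np.sum(v_delay[:, 1, j1] * weights[:, 1]) / np.sum(weights[:, 1])
    avg3 = np.sum(v_delay[:, 2, j3] * weights[:, 2]) / np.sum(weights[:, 2])
    return avg1 * avg2 * avg3

# Illustrative call with noise-like placeholder data: 24 redundant triads,
# 384 delay channels from 80 kHz spectral channels
rng = np.random.default_rng(7)
n_eta = 384
eta = np.fft.fftshift(np.fft.fftfreq(n_eta, d=80e3))
v = rng.standard_normal((24, 3, n_eta)) + 1j * rng.standard_normal((24, 3, n_eta))
print(stretched_isosceles_bispectrum(v, eta, eta1=1.0e-6))
```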
The 2-min observations were each calibrated through the MWA Real-Time System (RTS; Mitchell et al. 2008), as is routinely performed for MWA EoR data, and 1 000 of the brightest (apparent) sources peeled from the dataset (Jacobs et al. Reference Jacobs2016). These 629 calibrated and peeled observations were used for bispectrum estimation. We begin by reporting the bispectrum estimates for the two methods and fields, and then report the normalised bispectra, which incorporate the power spectrum estimates. Table 1 shows the bispectrum estimates and their one-sigma uncertainties (thermal noise) for the direct and gridded estimators, both observing fields and for different triangle configurations. Bold-faced results indicate bispectrum estimates that are consistent with thermal noise. These tend to be those that are extremely stretched isosceles configurations (cos θ ∼ 1), with estimates that sit well outside the primary foreground contamination parts of parameter space. Conversely, the equilateral triangle configurations that use the k || = 0 mode exclusively show extremely large detections. There is no suggestion that these are 21-cm cosmological bispectrum detections, but rather are foreground contaminants. This will be explored more fully in Section 10. Note also that the thermal noise levels reported here are a factor of a few larger than the theoretical expectation derived in Sections 5 and 6. This is due to the fraction of data with weights that are less than unity, indicating flagged baselines and spectral channels. Also listed in Table 1 is the expected bispectrum values from simulations that assume either bright or faint galaxies drive reionisation (Greig & Mesinger Reference Greig and Mesinger2017). The largest amplitudes are for the smallest k-modes, which also tend to be more foreground dominated. The normalised bispectrum, $\cal {B}$, is normalised by the power spectra at each of the k modes forming the triangles. Figure 4 shows the power spectra for the EoR1 and EoR0 fields for the full datasets as used in the gridded estimator. These have been processed through the CHIPS power spectrum estimator (Trott et al. Reference Trott and Wayth2016). Figures 5 and 6 show the corresponding delay spectra, as used in the direct bispectrum. There are small differences between the two power spectrum estimators, as is expected given that delay spectra do not grid with primary beams, and Fourier Transform along frequency, yielding different results for longer baselines. The signature of Galactic emission from close to the horizon is evident in the EoR0 power spectra, while it is less structured in EoR1, where the Galactic Centre has set. Most notably, the delay spectra show large foreground leakage into the EoR window (k || < 0.4), yielding large power spectrum denominator values for the normalised bispectrum. Table 1. Bispectrum estimates and one-sigma uncertainties for the direct and gridded bispectra for each observing field and triangle type. Bold-faced values indicate bispectrum estimates that are consistent with thermal noise. The right-hand column lists expected bispectrum values from simulation for faint and bright galaxies driving reionisation. k modes are comoving and measured in h Mpc−1. Figure 4. Gridded power spectra for the 21 h of observations on two fields used in this work, as processed through the CHIPS estimator (Trott et al. Reference Trott and Wayth2016). Figure 5. Delay transform power spectra for the EoR1 field for the data used in this analysis. 
Note the large leakage into the EoR window, which yields large denominators for the normalised bispectrum. Figure 6. Delay transform power spectra for the EoR0 field for the data used in this analysis. Using these data, Table 2 describes the normalised bispectrum. Bold-faced results are broadly consistent with thermal noise (< 5σ), again reflecting the modes that are least affected by foregrounds. The difference between the dimensional and reduced bispectrum results is due to the different power spectral estimators. Also notable is the difference in amplitude of the gridded and direct normalised bispectrum estimates. Due to the division by the power spectrum, the normalised bispectrum is heavily dependent on the details of the power spectrum estimates, which fluctuate substantially in foreground-affected regions. The delay-space power spectra show increased foreground power in the EoR window, and this is reflected in a larger power spectrum estimate, and therefore a lower normalised bispectrum. This reliance highlights the complexity for interpreting the normalised bispectrum with foreground-affected data. 10. Bispectrum signature of foregrounds Estimates of bispectrum sensitivity for operational and future 21-cm experiments are incomplete without a treatment of foregrounds. Despite the expectation that point source, continuum foregrounds only impact a region of the three dimensional EoR parameter space (kx, ky, k ||), in reality the details of the instruments, complexity of extragalactic and Galactic emission, limited bandwidth and calibration errors leave residual contaminating signal throughout the full parameter space. Although these methods perform very well to remove such signal, the extreme dynamic range demanded by this experiment translate to bias that exceeds the expected cosmological signal strength. The results presented here are clearly foreground-dominated, particularly for the equilateral triangle configuration. As such, the bispectrum signature of foregrounds can be computed for a simple point-source foreground model. We first consider the expected foreground bispectrum, which quantifies the bias in the measurement, and then turn to the variance of the foreground bispectrum, which quantifies the additional noise term. We employ a model where the sky is populated with a random distribution of unresolved extragalactic point sources that follow a low-frequency number counts distribution (Intema et al. Reference Intema, van Weeren, Röttgering and Lal2011; Franzen et al. Reference Franzen2016): (23) $${{dN} \over {dS}} = \alpha {S^\beta }{\kern 1pt} {\kern 1pt} {\rm{J}}{y^{ - 1}}{\rm{s}}{{\rm{r}}^{ - 1}},$$ where α ≃ 3900 and β = −1.59 for sources with flux density at 150 MHz of less than 1 Jansky. We assume there is no source clustering and spectral dependence, yielding a Poisson-distributed number of sources in each differential sky area. The clustering of point sources in the power spectrum has been studied by Murray, Trott, & Jordan (Reference Murray, Trott and Jordan2017). They find that source clustering will be unimportant for the MWA (unless the clustering is extreme, which is not measured), but may be important for the SKA, which can clean to deeper source levels. Nonetheless, the structure due to clustered point-source foregrounds only changes the amplitude of the foreground structure in the EoR wedge as a function of angular scale (k ⊥). 
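Before moving on to the beam model, the point-source counts of Equation (23) can be checked numerically, since they enter the foreground expectation values below through moments of the source counts. The sketch evaluates the third moment, the integral of S^3 (dN/dS) up to S_max = 1 Jy, both analytically and by direct quadrature; the lower flux cutoff is an assumption needed only to keep the numerical integral finite-ranged.

```python
import numpy as np

alpha, beta = 3900.0, -1.59        # Equation (23) parameters
s_max = 1.0                        # maximum (un-peeled) source flux density [Jy]

# Analytic third moment of the source counts (per steradian), as used later
# in the expected foreground bispectrum.
analytic = alpha / (4.0 + beta) * s_max ** (4.0 + beta)

# Direct numerical integration of S^3 dN/dS from a small cutoff to S_max
s = np.logspace(-6, 0, 100_000)
numeric = np.trapz(s ** 3 * alpha * s ** beta, s)

print(f"analytic = {analytic:.2f}, numeric = {numeric:.2f}  [Jy^3 sr^-1]")
```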
Because the line-of-sight spectral component is unaffected, the signature of clustered foregrounds in the EoR Window is mostly unchanged. These more realistic point-source foregrounds will be considered in the simulations of Watkinson & Trott (Reference Watkinson and Trott2019) and here we retain the analytic signature of the Poisson foregrounds. We further assume that the primary beam can be approximated by a frequency-dependent Gaussian: (24) $$A(l,m,{\nu _0}) = \exp \left( { - {{({l^2} + {m^2})\nu _0^2{{\rm{A}}_{\rm{eff}}}} \over {2{c^2}{\epsilon^2}}}} \right),$$ Table 2 Normalised bispectrum estimates, ${\cal B}$, and one-sigma uncertainties for the direct and gridded bispectra for each observing field and triangle type. Bold-faced values indicate bispectrum estimates that are consistent with thermal noise. where Aeff is the tile effective area and ϵ encodes the conversion from an Airy disc to a Gaussian. The visibility is given by Equation (4) for frequency υ. To compute the line-of-sight component to the visibility, we Fourier Transform over frequency channels, after employing a frequency taper (window function) to reduce spectral leakage from the finite bandwidth: (25) \begin{array}{*{20}{l}} {V(u,v,\eta )}& = &{\int d ldmS(l,m,{\nu _0})A(l,m,{\nu _0})}\\ {}&{}&{ \times \int d \nu \Upsilon (\nu )\exp \left( { - 2\pi i\frac{{\nu (xl + ym)}}{c}} \right)\exp \left( { - 2\pi i\nu \eta } \right)}\\ {}& = &{\int d ldmS(l,m,{\nu _0})A(l,m,{\nu _0})}\\ {}&{}&{ \times \int d \nu \Upsilon (\nu )\exp \left( { - 2\pi i\nu (xl/c + ym/c + \eta )} \right)} \end{array} (26) \begin{array}{*{20}{l}} = &{\int d ldmS(l,m)A(l,m)}\\ {}&{ \times \;\tilde \Upsilon (xl/c + ym/c + \eta )\,{\rm{J}}yHz,} \end{array} whee γ(υ) is the spectral taper, Δυ. is the channel resolution, and we have performed the Fourier Transform over frequency. For analytic tractability, in this work we use a Gaussian taper, with a characteristic width, Σ ≃BW/7, such that the edges of the band are consistent with zero and it is well-matched to a Blackman–Harris taper: (27) $$\Upsilon (\nu ) = {1 \over {\sqrt {2\pi {\Sigma ^2}} }}\exp - {{{\nu ^2}} \over {2{\Sigma ^2}}},$$ with corresponding Fourier Transform, (28) $$\tilde \Upsilon (\eta ) = \exp - 2{\pi ^2}{\Sigma ^2}{\eta ^2}.$$ The bispectrum is formed from the triple product of visibilities. Accounting for the fact that the point sources are only correlated locally (δD(l 1 + l 2 + l 3 = 0)), its expected value with respect to foregrounds is (29) $$\langle {V_1}{V_2}{V_3}\rangle = \int dldm\langle {S^3}(l,m)\rangle {A^3}(l,m)\exp \left( { - 2{\pi ^2}{\Sigma ^2}{T^2}} \right)$$ (30) $$\matrix{ {{T^2} = {{\left( {{{{x_1}l} \over c} + {{{y_1}m} \over c} - {\eta _1}} \right)}^2}} \hfill \cr { + {{\left( {{{{x_2}l} \over c} + {{{y_2}m} \over c} - {\eta _2}} \right)}^2} + {{\left( {{{{x_3}l} \over c} + {{{y_3}m} \over c} - {\eta _3}} \right)}^2}.} \hfill \cr } $$ Here, the source counts have been separated from the spatial integral. This is a general expression for a triplet of baselines. We can now simplify this for triangles, particularly those with isosceles configurations (where the equilateral is a single case of an isosceles). 
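Before carrying out that simplification, a small consistency check of the Gaussian spectral taper is useful: the taper of Equation (27), with Σ ≈ BW/7 as stated in the text, is transformed numerically below and compared against the analytic form of Equation (28). The frequency grid and the comparison point are arbitrary choices of this sketch.

```python
import numpy as np

bw = 30.72e6                 # bandwidth [Hz]
sigma = bw / 7.0             # characteristic width of the Gaussian taper

# Sample the taper symmetrically about the band centre
n_ch = 4096
nu = np.linspace(-bw, bw, n_ch)       # offset from band centre [Hz]
d_nu = nu[1] - nu[0]
taper = np.exp(-nu ** 2 / (2.0 * sigma ** 2)) / np.sqrt(2.0 * np.pi * sigma ** 2)

# Numerical Fourier transform (continuous-FT approximation: multiply by d_nu);
# abs() discards the linear phase from the half-sample grid offset.
eta = np.fft.fftshift(np.fft.fftfreq(n_ch, d=d_nu))
numeric = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(taper)))) * d_nu

# Analytic transform, Equation (28)
analytic = np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * eta ** 2)

idx = np.argmin(np.abs(eta - 1.0e-7))   # compare at eta of about 0.1 microseconds
print(numeric[idx], analytic[idx])
```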
Closed triangles follow the relations: (31) $${x_1} + {x_2} = - {x_3}$$ (32) $${y_1} + {y_2} = - {y_3}$$ (33) $${\eta_1} + {\eta_2} = - {\eta_3}$$ and we define, without loss of generality, the following relations for the isosceles configurations considered in this work: (34) \begin{array}{*{20}{l}} {{x_1}}& = &{ - 2{x_2}}\\ {{x_2}}& = &{{x_3}}\\ {{y_1}}& = &0\\ {{y_2}}& = &{ - {y_3}}\\ {{y_2}}& = &{{x_1}\cos \pi /6 = 2{x_2}\cos \pi /6 = \sqrt 3 {x_2}}\\ {2{\eta _2}}& = &{2{\eta _3} = - {\eta _1}.} \end{array} Making these substitutions in Equation (29), completing the squares and collecting terms, we find: (35) \begin{array}{*{20}{c}} {\langle {V_1}{V_2}{V_3}\rangle = \int d ldm\langle {S^3}(l,m)\rangle {A^3}(l)}\\ { \times \exp \left( { - 12{\pi ^2}{\Sigma ^2}\left( {x_2^2/{c^2}({l^2} + {m^2}) + \eta _2^2} \right)} \right).} \end{array} The source count expectation value uses the source number counts distribution and the fact that the number of sources at any sky location is Poisson-distributed to find: (36) \begin{array}{*{20}{l}} {\langle {S^3}(l,m)\rangle }& = &{\int_S {{S^3}} ({\nu _0})\frac{{dN}}{{dS}}dS}\\ {}& = &{\frac{\alpha }{{4 + \beta }}S_{{\rm{m}}ax}^{4 + \beta }\,{\rm{J}}{y^3}s{r^{ - 1}}} \end{array} Incorporating the primary beam from Equation (24), moving to polar coordinates, and performing the integral over (l, m), we find for the expected foreground bispectrum bias: (37) \begin{array}{*{20}{c}} {\langle {V_1}{V_2}{V_3}\rangle = \frac{\alpha }{{4 + \beta }}S_{{\rm{m}}ax}^{4 + \beta }}\\ { \times \frac{\pi }{\theta }\exp \left( { - \frac{{{\pi ^2}{\rm{B}}{{\rm{W}}^2}\eta _2^2}}{{25}}} \right)\,{\rm{J}}{y^3}H{z^3}} \end{array} (38) $$\theta = {{3{{\rm{A}}_{\rm{eff}}}\nu _0^2} \over {{c^2}}} + {{{\pi ^2}{\rm{B}}{W^2}u_2^2} \over {25\nu _0^2}},$$ and BW is the experiment bandwidth. This factor combines the primary beam (spatial taper) and spectral taper components into a single factor. The equilateral configuration can be derived from this expression with η 2 = 0. For the 28-m baselines, a maximum source flux density of 1 Jy and Aeff = 21 m2, and after performing the cosmological conversions, we expect a bispectrum estimate of: (39) $$B(x = 28) \simeq 8.6 \times {10^{19}}{\rm{m}}{K^3}{h^{ - 6}}{\rm{M}}p{c^6},$$ which is comparable to the estimates found in Section 9. For 14-m baselines, B(x = 14) ≃ 1.0 ×1020mK3h −6Mpc6. The isosceles configurations incorporate the η term. For k || > 0.1, this term decays to below the noise, which is consistent with that observed in the data. The signature of this isosceles foreground dimensional bispectrum in k ⊥ - k || space is shown in Figure 7. For the k 1 = 0.1 h Mpc−1 stretched configuration, we expect for 28 m (14 m): (40) $$B \simeq 1.7 \times {10^{12}}{\kern 1pt} {\kern 1pt} (1.0 \times {10^{12}}){\rm{m}}{K^3}{h^{ - 6}}{\rm{M}}p{c^6}.$$ The squeezed configurations of large k ⊥ combined with small k || might be interesting for future studies, depending on the expected cosmological signal on these scales. Given that the power spectrum is expected to be small on this combination of line-of-sight and angular scales, most EoR experiments are not designed for high sensitivity here (k ⊥ = 0.1 corresponds to 200-m baselines). Figure 7. Point-source-foreground dimensional bispectrum signature of isosceles triangle vectors in k ⊥ − k ||-space (note the stretched logarithmic colour bar). In this model, the expected foreground signal has fallen to below the expected cosmological signal value by k || ≥ 0.12.
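As a quick numerical cross-check of the closed form in Equation (36), the third flux moment of the assumed source population can be integrated directly against the number counts of Equation (23). The short sketch below is not from the original paper; it assumes Python with SciPy is available and the variable names are ours. It confirms that the integral of S^3 (dN/dS) dS from 0 to Smax reproduces α/(4 + β) Smax^(4+β); the same moment structure gives the Smax^(7+β)/(7 + β) factor that appears later in the foreground variance.

# Cross-check of Equation (36): third flux moment of the Poisson source counts,
# integral_0^Smax S^3 (dN/dS) dS = alpha/(4+beta) * Smax^(4+beta).
from scipy.integrate import quad

alpha, beta = 3900.0, -1.59    # number-counts normalisation and slope (Equation 23)
s_max = 1.0                    # maximum source flux density in Jy

numeric, _ = quad(lambda s: s**3 * alpha * s**beta, 0.0, s_max)
closed_form = alpha / (4.0 + beta) * s_max**(4.0 + beta)
print(numeric, closed_form)    # both ~1.6e3 Jy^3 sr^-1, agreeing to quadrature precision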
Interestingly, the expected foreground bispectrum signal is positive, due to its constituent astrophysical sources being associated with overdensities. Conversely, the stretched isosceles 21-cm bispectrum from the cosmological signal will be negative on many scales during reionisation (Majumdar et al. Reference Majumdar, Pritchard, Mondal, Watkinson, Bharadwaj and Mellema2018). 10.1. Normalised foreground bispectrum The normalised bispectrum also contains the expected power spectrum values for a foreground model. In line with the methodology developed in the previous section, we can write the expected power spectrum at (u, v, η) as (41) \begin{array}{*{20}{l}} {P(u,v,\eta )}& = &{\langle {V^*}(u,v,\eta )V(u,v,\eta )\rangle }\\ {}& = &{\frac{\alpha }{{3 + \beta }}S_{{\rm{m}}ax}^{3 + \beta }} \end{array} (42) $${ \times \left( {{\rm{erf}}\left( {\frac{{b + 2a}}{{\sqrt 2 a}}} \right) - {\rm{erf}}\left( {\frac{{b - 2a}}{{\sqrt 2 a}}} \right)} \right)}$$ (43) $${ \times \exp - 4{\pi ^2}{\Sigma ^2}{\eta ^2}\sqrt {\frac{\pi }{{4a}}} \exp \frac{{{b^2}}}{{4a}},}$$ (44) $$a = \frac{{2\pi {c^2}}}{{\nu _0^2{{\rm{A}}_{\rm{eff}}}/{\varepsilon ^2}}} + \frac{{4{\Sigma ^2}|x{|^2}}}{{{c^2}}}$$ (45) $$b = \frac{{8{\Sigma ^2}|x|\eta }}{c},$$ encode the spatial and spectral tapers, and |x|2 = x 2 + y 2 (without loss of generality). This expression is derived from the Fourier Transform over Gaussians, and then the integral over dldm.Footnote i When η = 0 and for the 28-m baseline triangles, the expected bispectrum normalisation is (46) $$\sqrt {P{{(u,v,\eta )}^3}} /V = 2.8 \times {10^{21}}{\kern 1pt} {\kern 1pt} {\rm{m}}{\rm{K}^3}{h^{ - 6}}{\rm{M}}p{c^6}.$$ For the 14-m triangles, we find $\sqrt {P{{(u,v,\eta )}^3}} /V = 3.3 \times {10^{21}}{\kern 1pt} {\kern 1pt} {\rm{m}}{\rm{K}^3}{h^{ - 6}}{\rm{M}}p{c^6}$. When compared with the expected bispectrum value, we find that (28 m): (47) $$\langle {\cal B}\rangle = 1.7,$$ and $\left\langle \cal B \right\rangle = 0.6$ for the 14-m baselines, which exceed the equilateral triangle configuration estimates from the MWA data. As with the bispectrum estimate, the isosceles configurations have expected power values that fall rapidly with η, and are less comparable to the data in these idealised scenarios. However, for the k 1 = 0.1h Mpc −1 stretched configuration, we expect for 28 m (14 m): (48) $$\langle {\cal B}\rangle = 4.0{\kern 1pt} {\kern 1pt} (240,000).$$ These values are ratios of very small numbers, and therefore are highly dependent on numerical details and are not representative. However, they may lend support to the idea that the normalised bispectrum is difficult to interpret because it relies on foreground details in both the bispectrum and power spectrum. Alternatively, the combination of power spectrum, dimensional bispectrum and normalised bispectrum may help to shed additional light on whether data are really foreground free. Given the different behaviour of foregrounds in these statistics, this information may be used to discriminate cosmological information from foregrounds, or to help to design some iterative foreground cleaning algorithm, taking into consideration their behaviour in cosmological simulations. In this scenario, the normalised bispectrum may provide useful information. Figure 8 displays the normalised foreground bispectrum for isosceles configurations in k ⊥ − k || space. For the point-source foregrounds, the power spectrum denominator dominates over the expected bispectrum signal, yielding values < 10−3 across all of parameter space. 
This presents an interesting divergence from the usual expectation of foreground bias in the power spectrum, where foregrounds add overall signal. In this case, a large measurement that exceeds the thermal noise is consistent with a cosmological origin, and not with residual foregrounds. 10.2. Foreground bispectrum error We now turn our attention to consideration of the signal variance due to residual foregrounds, $\langle B_{{\rm{F}}G}^2\rangle $, such that [cf., Equation (14)]: (49) $$\Delta {B^2} = {{3\sigma _{{\rm{therm}}}^6} \over {\sum\limits_j {W_j}}} + \langle B_{{\rm{F}}G}^2\rangle ,$$ (50) $$\langle B_{{\rm{F}}G}^2\rangle = \langle V_1^ * V_2^ * V_3^ * {V_1}{V_2}{V_3}\rangle .$$ This reduces to a relatively simple expression for the simple point-source case, due to the cancelling of complex components (this is not generally true for the covariance). Using the same formalism as earlier, and again considering the Poisson-distributed nature of the flux density of the sources, we find: (51) \begin{array}{*{20}{l}} {\langle B_{{\rm{F}}G}^2\rangle }& = &{\int {{S^6}} \frac{{dN}}{{dS}}dS\int {{A^6}} (l,m)dldm}\\ {}&{}&{ \times \int {\vec \Upsilon } {{\rm{e}}^{( - 2\pi i({\eta _1}\Delta {\nu _{12}} + {\eta _2}\Delta {\nu _{34}} + {\eta _3}\Delta {\nu _{56}})}}d\vec \nu }\\ {}& = &{\alpha \frac{{S_{{\rm{m}}ax}^{7 + \beta }}}{{7 + \beta }}\left( {12\pi {\rm{ }}\frac{{{c^2}{\varepsilon ^2}}}{{\nu _0^2{A_{{\rm{eff}}}}}}} \right){{\rm{e}}^{ - 4{\pi ^2}{\Sigma ^2}(\eta _1^2 + \eta _2^2 + \eta _3^2)}},}\\ {}& = &{\alpha \frac{{S_{{\rm{m}}ax}^{7 + \beta }}}{{7 + \beta }}\left( {12\pi \frac{{{c^2}{\varepsilon ^2}}}{{\nu _0^2{A_{{\rm{eff}}}}}}} \right){{\rm{e}}^{ - 6{\pi ^2}{\Sigma ^2}\eta _1^2}},} \end{array} where $\vec \Upsilon \equiv \Upsilon ({\nu _1})\Upsilon ({\nu _2})\Upsilon ({\nu _3})\Upsilon ({\nu _4})\Upsilon ({\nu _5})\Upsilon ({\nu _6})$. This expression is flat in angular modes, and decays rapidly in line-of-sight modes. Comparing this with the expected value of the foreground bispectrum, Equation (37), we can form the foreground bispectrum signal-to-error ratio (Figure 9). The signal bias exceeds the uncertainty for small scales, but on the larger scales of interest for EoR, the uncertainty dominates. Nonetheless, there is no line-of-sight dependence, demonstrating that the foreground bias and uncertainty both drop rapidly and are negligible for k || > 0.12hMpc−1, implying that for larger k || scales, point-source foregrounds are not significant in the signal or noise budget. Figure 8. Point-source-normalised foreground bispectrum signature of isosceles triangle vectors in k ⊥ − k || -space (note the stretched logarithmic colour bar). In this model, the expected foreground signal has fallen to below the expected cosmological signal value by k || ≥ 0.12. One can also now compare the foreground uncertainty to the expected thermal noise level. For the EoR1 field data, the measured uncertainty for the direct bispectrum estimator was 6.7 × 1012 mK3 Mpc6. Figure 10 shows this level (green line) compared with the foreground bispectrum error (red line) as a function of line-of-sight scale. (The gridded estimator has slightly lower noise level, but the distinction is not significant when compared to the large gradient of the foreground contribution.) As with the foreground bias, the error induced by residual foregrounds drops steeply beyond k || = 0.12hMpc−1, and falls below the thermal noise (even in this case with a small dataset for the EoR1 field). Figure 9. 
Ratio of point-source foreground bispectrum bias to uncertainty, for isosceles triangle vectors in k ⊥ − k || -space (note linear plot). The bias exceeds the uncertainty at large angular modes, but rapidly falls below for larger k ⊥, with no dependence on line-of-sight scale. Figure 10. Errors for point-source-normalised foreground bispectrum (red) and thermal noise (green), for isosceles triangle vectors in k ⊥ − k || -space and 300 observations used in this work for the EoR1 field. As a final step to assessing the advantages of the bispectrum compared with the power spectrum to detect the cosmological signal, we divide the expected 21-cm bispectrum values for the faint and bright galaxies, presented in Table 1 by the foreground error for these modes, and compare with that for the power spectrum. The power spectrum values for faint and bright galaxies are taken from the same underlying dataset generated by 21-cm FAST (Mesinger, Furlanetto, & Cen Reference Mesinger, Furlanetto and Cen2011). For all but the k 1 = 0.1hMpc-1 mode, the foreground uncertainty is negligible, and the ratio is uninteresting. For k 1 = 0.1hMpc-1, k 2 = k 3 = 0.2hMpc-1: (52) $$\matrix{ {{P_{21}}} \hfill & = \hfill & {1.3 \times {{10}^4}m{K^2}Mp{c^3}} \hfill \cr {\Delta {P_{FG}}} \hfill & = \hfill & {0.2 \times {{10}^{ - 1}}m{K^2}Mp{c^3}} \hfill \cr {{B_{21}}} \hfill & = \hfill & {4.4 \times {{10}^9}m{K^3}Mp{c^6}} \hfill \cr {\Delta {B_{FG}}} \hfill & = \hfill & {4.6 \times {{10}^{10}}m{K^3}Mp{c^6},} \hfill \cr } $$ yielding better performance for the power spectrum, within the foreground dominated region. However, outside of the foreground 'wedge', which exists in both power spectrum and bispectrum space, the data uncertainty is limited by the thermal noise, and here the bispectrum achieves higher signal-to-noise ratio for a set observation time: (53) $$\matrix{ {{P_{21}}} \hfill & = \hfill & {1.3 \times {{10}^4}m{K^2}Mp{c^3}} \hfill \cr {\Delta {P_{{\rm{therm}}}}} \hfill & = \hfill & {5.7 \times {{10}^6}m{K^2}Mp{c^3}} \hfill \cr {{B_{21}}} \hfill & = \hfill & {4.4 \times {{10}^9}m{K^3}Mp{c^6}} \hfill \cr {\Delta {B_{{\rm{therm}}}}} \hfill & = \hfill & {5.3 \times {{10}^{11}}m{K^3}Mp{c^6}.} \hfill \cr } $$ Taking the ratios we find, $$\matrix{ {{P_{21}}/\Delta {P_{{\rm{t}}herm}}} \hfill & = \hfill & {0.002} \hfill \cr {{B_{21}}/\Delta {B_{{\rm{t}}herm}}} \hfill & = \hfill & {0.008.} \hfill \cr } $$ Accounting for the fact that the gridded bispectrum averages down with t 1.5 while the gridded power spectrum averages with t, the observing time multiple (above 10 h) for a detection (SNR = 1) is $$\matrix{ {{t_P}} \hfill & = \hfill & {500 \times } \hfill \cr {{t_B}} \hfill & = \hfill & {25 \times .} \hfill \cr } $$ Therefore, the bispectrum detection can theoretically be achieved in a fraction of the time of the power spectrum detection, for thermal noise-limited modes close to the EoR wedge. An SNR = 1 detection level could potentially be reached in 250 h, for this wave mode. This conclusion is relevant for the MWA, where the excellent instantaneous uv-coverage allows for rapid observation of triangle configurations. [We note that the power spectrum SNR shown here is not inconsistent with previous expectations for the performance of the MWA, because it applies only to this single mode (Beardsley et al. Reference Beardsley2016; Wayth et al. Reference Wayth2018)]. 
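The observing-time multiples quoted above follow from simple arithmetic on the two signal-to-noise ratios and the stated noise scalings (power-spectrum noise averaging down as t, gridded bispectrum noise as t^1.5, relative to the 10-h reference dataset). A minimal Python sketch of that arithmetic, using only numbers quoted in the text, is given below.

# Observing-time multiples implied by the SNRs quoted above for the 10-h dataset.
# Power-spectrum noise is taken to average down as t, gridded bispectrum noise as t^1.5.
snr_p, snr_b = 0.002, 0.008                # P_21/Delta_P_therm and B_21/Delta_B_therm at 10 h

t_mult_p = (1.0 / snr_p) ** (1.0 / 1.0)    # 500x multiple for an SNR = 1 power-spectrum detection
t_mult_b = (1.0 / snr_b) ** (1.0 / 1.5)    # 25x multiple for an SNR = 1 bispectrum detection

print(t_mult_p, t_mult_b)                  # 500.0, 25.0
print(10.0 * t_mult_b, "hours")            # ~250 h, as stated for this wave mode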
Future work presented in Watkinson & Trott (Reference Watkinson and Trott2019) will explore a fuller range of triangle configurations and foreground bias and error. 11. Discussion and conclusions As discussed, the model used for the foreground bispectrum signal predicts that isosceles configurations have amplitudes that fall rapidly with non-zero k ||. Nevertheless, we find that the numerator and denominator of the normalised bispectrum scale such that its amplitude in this model increases as a function of η. For the 28-m baselines, the ratio doubles by k || > 0.014h Mpc −1. Over the same k range, the dimensional bispectrum is rapidly decaying, losing seven orders of magnitude from the k || = 0 mode. In this model, the expected foreground signal has fallen to below the expected cosmological signal value by k || ≥ 0.12. The results from the MWA datasets have some modes thermal noise-limited at 10 h, and only the k || = 0 mode is clearly foreground dominated for all experiments. In line with the discussion of Bharadwaj & Pandey (Reference Bharadwaj and Pandey2005), it is possible that the dimensional bispectrum is less affected by foregrounds than the power spectrum. However, the normalised bispectrum is more difficult to interpret, given the different observational foreground effects on the bispectrum numerator and the power spectrum denominator. Reduction of foreground contamination is an active field of research in 21-cm EoR experiments, and a primary motivator for testing statistics other than the power spectrum. Despite the normalised bispectrum providing a cosmologically stable and robust estimate of non-Gaussianity compared with the dimensional bispectrum, the expected foreground value is difficult to discriminate from the expected signal value (Watkinson et al. Reference Watkinson, Giri, Ross, Dixon, Iliev, Mellama and Pritchard2018). Conversely, the dimensional bispectrum yields values that are highly significant detections, showing clear foreground contamination. Thus, the normalised bispectrum may not be the best discriminant in real EoR experiments. For future experiments, with higher sensitivity, exploration of modes with negative bispectra may help discrimination from foreground contamination, where the foreground bispectrum is expected to be positive. This is explored further in Watkinson & Trott (Reference Watkinson and Trott2019) and demonstrated previously in Lewis (Reference Lewis2011). It would also be interesting to study the signature of calibration errors in radio data on bispectrum estimates, to explore whether they have an imprint that can be discriminated from the cosmological signal. The thermal noise levels, as achieved in these 10 h datasets for large k isosceles configurations, are 3–4 orders of magnitude larger than the expected bispectrum value for these configurations at low redshifts (Majumdar et al. Reference Majumdar, Pritchard, Mondal, Watkinson, Bharadwaj and Mellema2018). The gridded bispectrum noise scales with observation time to the power of 1.5, requiring a 1 000 h observation with the MWA to achieve a cosmological detection. This estimate is in line with predictions from Yoshiura et al. (Reference Yoshiura, Shimabukuro, Takahashi, Momose, Nakanishi and Imai2015). Further advantage may be gained from incoherent addition of isosceles triangle configurations with similar vector lengths, where the bispectrum is expected to vary slowly with changing parameters.
An initial test of this for the k 1 = 0.1h Mpc−1 mode shows an improvement in sensitivity by a factor of ten for the gridded estimator, yielding a theoretical detection of the signal with 150 h of data. The direct estimator scales incoherently with time (t 0.75), due to the incoherent addition of triangles from different observations, but does utilise coherent addition of instantaneously-redundant triads. We have presented the first effort to estimate the cosmological bispectrum from the EoR with 21 h of MWA data, and have shown the parts of parameter space that are consistent with thermal noise at this level, using two types of bispectrum estimator. These two approaches are presented in order to demonstrate the practicalities of estimation of the bispectrum with real radio interferometer data. We have also derived a form for the expected bispectrum signature of point-source foregrounds for equilateral and isosceles configurations, and demonstrated broad consistency between the analytic model and the estimates obtained from the data. By considering the foreground bispectrum variance in the noise estimation, we have demonstrated that both the foreground bias and variance are insignificant for k || < 0.12h Mpc−1, allowing these regions of parameter space to be probed with dominant thermal noise. Due to the ability of the gridded bispectrum estimator to reduce thermal noise proportional to t 1.5, unlike the power spectrum which reduces with t, the 21-cm cosmological bispectrum may be detectable with fewer observing hours than the power spectrum for arrays with excellent instantaneous uv- coverage (i.e., with well-sampled baselines). This insight makes observational pursuit of the bispectrum worthwhile for some current instruments. In a companion paper (Watkinson & Trott Reference Watkinson and Trott2019), we explore optimal triangles to study from a signal and foreground contamination ratio perspective. Future work can also study the signature of calibration errors on the bispectrum. This work helps to define the optimal observational strategy and approach to bispectrum studies. This research was supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. CMT is supported by an ARC Future Fellowship under grant FT180100196. CAW and SM acknowledge financial support from the European Research Council under ERC grant number 638743-FIRSTDAWN (held by Jonathan R. Pritchard). KT's work is partially supported by Grand-in-Aid from the Ministry of Education, Culture, Sports, and Science and Technology (MEXT) of Japan, No. 15H05896, 16H05999 and 17H01110, and Bilateral Joint Research Projects of JSPS. The International Centre for Radio Astronomy Research (ICRAR) is a Joint Venture of Curtin University and The University of Western Australia, funded by the Western Australian State government. The MWA Phase II upgrade project was supported by Australian Research Council LIEF grant LE160100031 and the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto. This scientific work makes use of the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. 
We acknowledge the Pawsey Supercomputing Centre which is supported by the Western Australian and Australian Governments.
a http://www.mwatelescope.org
b http://eor.berkeley.edu
c http://www.lofar.org
d http://lwa.unm.edu
e The mapping from observed to cosmological dimensions is given by (2) $${k_ \bot } = {{2\pi |u|} \over {{D_M}(z)}},$$ (3) $${k_\parallel } = {{2\pi {H_0}{\kern 1pt} {f_{21}}E(z)} \over {c{{(1 + z)}^2}}}\eta ,$$ where D M is the transverse comoving distance, and f 21 is the rest frequency of the neutral hydrogen emission.
f This is appropriate for this work where the data used are all from zenith-pointed snapshots, where the w-terms are small.
g Due to the physical size of the collecting antenna element, some parts of the antenna have a smaller effective baseline length (closer to the other antenna), and some have a longer one (further from the other antenna).
h ${\sigma _{{\rm{therm}}}} = {{2kT} \over {{\lambda ^2}}}\Omega {{\Delta \nu } \over {\sqrt {{\rm{BW}}\Delta t} }}$ for bandwidth BW, spectral resolution Δν and observation time interval Δt.
i This can also be derived as a covariance between u modes and η modes, which encodes the spectral leakage that stems from the spatial and spectral tapers. This covariance is that used to understand power spectrum uncertainties in EoR work, where correlations between k-cells must be correctly treated.
References
Ali, Z. S., et al. 2015, ApJ, 809, 61
Barry, N., Hazelton, B., Sullivan, I., Morales, M. F., & Pober, J. C. 2016, preprint, arXiv:1603.00607
Beane, A., & Lidz, A. 2018, preprint, arXiv:1806.02796
Beardsley, A. P., et al. 2016, preprint, arXiv:1608.06281
Bharadwaj, S., & Pandey, S. K. 2005, MNRAS, 358, 968
Bowman, J. D., Cairns, I., Kaplan, D. L., Murphy, T., Oberoi, D., et al. 2013, PASA, 30, 31
Brillinger, D., & Rosenblatt, M. 1967, in Spectr. Anal. time Ser., ed. Bernard, Harris (New York: Wiley), 189
Cheng, C., et al. 2018, preprint, arXiv:1810.05175
Datta, A., Bowman, J. D., & Carilli, C. L. 2010, ApJ, 724, 526
DeBoer, D. R., et al. 2016, preprint, arXiv:1606.07473
Eggemeier, A., & Smith, R. E. 2017, MNRAS, 466, 2496
Ellingson, S. W., Clarke, T. E., Cohen, A., Craig, J., Kassim, N. E., Pihlstrom, Y., Rickard, L. J., & Taylor, G. B. 2009, Proc. IEEE, 97, 1421
Franzen, T. M. O., et al. 2016, MNRAS, 459, 3314
Furlanetto, S. R., Oh, S. P., & Briggs, F. H. 2006, PhR, 433, 181
Greig, B., & Mesinger, A. 2017, MNRAS, 472, 2651
Intema, H. T., van Weeren, R. J., Röttgering, H. J. A., & Lal, D. V. 2011, A&A, 535, A38
Jacobs, D. C., et al. 2016, ApJ, 825, 114
Joseph, R. C., Trott, C. M., & Wayth, R. B. 2018, ApJ, 156, 285
Jung, G., Racine, B., & van Tent, B. 2018, Journal of Cosmology and Astroparticle Physics, 11, 047
Koopmans, L., et al. 2015, Advancing astrophysics with the square kilometre array (AASKA14), p. 1
Lewis, A. 2011, Journal of Cosmology and Astroparticle Physics, 10, 026
Li, W., et al. 2018, ApJ, 863, 170
Majumdar, S., Pritchard, J. R., Mondal, R., Watkinson, C. A., Bharadwaj, S., & Mellema, G. 2018, MNRAS, 476, 4007
Mesinger, A., Furlanetto, S., & Cen, R. 2011, MNRAS, 411, 955
Morales, M. F., Hazelton, B., Sullivan, I., & Beardsley, A. 2012, ApJ, 752, 137
Murray, S. G., Trott, C. M., & Jordan, C. H. 2017, ApJ, 845, 7
Parsons, A. R., et al. 2010, AJ, 139, 1468
Parsons, A., Pober, J., McQuinn, M., Jacobs, D., & Aguirre, J. 2012, ApJ, 753, 81
Patil, A. H., et al. 2014, MNRAS, 443, 1113
Patil, A. H., et al. 2016, MNRAS, 436, 4317
Pritchard, J. R., & Loeb, A. 2008, PhRvD, 78, 103511
Shimabukuro, H., Yoshiura, S., Takahashi, K., Yokoyama, S., & Ichiki, K. 2017, MNRAS, 468, 1542
Thyagarajan, N., et al. 2015, ApJ, 804, 14
Tingay, S. J., et al. 2013, PASA, 30, 7
Trott, C. M., & Wayth, R. B. 2016, PASA, 33, e019
Trott, C. M., Wayth, R. B., & Tingay, S. J. 2012, ApJ, 757, 101
Trott, C. M., et al. 2016, ApJ, 818, 139
Vedantham, H., Udaya Shankar, N., & Subrahmanyan, R. 2012, ApJ, 745, 176
Watkinson, C. A., & Pritchard, J. R. 2014, MNRAS, 443, 3090
Watkinson, C. A., & Trott, C. M. 2019, PASA, in prep.
Watkinson, C. A., Giri, S. K., Ross, H. E., Dixon, K. L., Iliev, I. T., Mellama, G., & Pritchard, J. R. 2018, preprint, arXiv:1808.02372
Wayth, R. B., et al. 2018, preprint, arXiv:1809.06466
Wyithe, J. S. B., & Morales, M. F. 2007, MNRAS, 379, 1647
Yoshiura, S., Shimabukuro, H., Takahashi, K., Momose, R., Nakanishi, H., & Imai, H. 2015, MNRAS, 451, 266
van Haarlem, M. P., et al. 2013, A&A, 556, A2
Cathryn M. Trott, Catherine A. Watkinson, Christopher H. Jordan, Shintaro Yoshiura, Suman Majumdar, N. Barry, R. Byrne, B. J. Hazelton, K. Hasegawa, R. Joseph, T. Kaneuji, K. Kubota, W. Li, J. Line, C. Lynch, B. McKinley, D. A. Mitchell, M. F. Morales, S. Murray, B. Pindor, J. C. Pober, M. Rahimi, J. Riding, K. Takahashi, S. J. Tingay, R. B. Wayth, R. L. Webster, M. Wilensky, J. S. B. Wyithe, Q. Zheng, David Emrich, A. P. Beardsley, T. Booler, B. Crosse, T. M. O. Franzen, L. Horsley, M. Johnston-Hollitt, D. L. Kaplan, D. Kenney, D. Pallot, G. Sleap, K. Steele, M. Walker, A. Williams and C. Wu. DOI: https://doi.org/10.1017/pasa.2019.15
Session B8: Focus Session: Spin Transfer Torque Sponsoring Units: GMAG Chair: Olle Heinonen, Argonne National Laboratory B8.00001: Mode coupling in spin-torque oscillators: first-principle derivation Olle Heinonen, Yan Zhou, Dong Li A number of recent experimental works have shown that the dynamics of a single spin torque oscillator can exhibit complex behavior that stems from interactions between two or more modes of the oscillator. Examples are observed mode-hopping or mode coexistence$^{\mathrm{1-3}}$. There has been some initial work indicating how the theory for a single-mode (macro-spin) spin torque oscillator should be generalized to include several modes and the interactions between them. In the present work, we derive such a theory starting with the Landau-Lifshitz-Gilbert equation for magnetization dynamics. We compare our results with the single-mode theory, and show how the coupled-mode theory is a natural extension of the single-mode. Argonne National Laboratory is a US DOE Science Laboratory operated under Contract No. DE-AC02-06CH11357 by UChicago Argonne, LLC. References: [1] S. Bonetti, V. Tiberkevich, G. Consolo et al., \textit{Phys. Rev. Lett. }\textbf{105}, 217204 (2010). [2] P. Muduli et al., \textit{Phys. Rev. Lett}. \textbf{108}, 207203 (2012). [3] R. Dumas, E. Iacocca, S. Bonetti et al., \textit{Phys. Rev. Lett.} \textbf{110}, 257202 (2013). [Preview Abstract] B8.00002: Nanowire Spin Torque Oscillator Driven by Spin Orbit Torques Andrew Smith, Zheng Duan, Liu Yang, Brian Youngblood, Ilya Krivorotov We report microwave signal emission from a spin torque oscillator driven by spin orbit torques in a 17 um long Py(5 nm)/Pt(5 nm) ferromagnetic nanowire with an 1.8 um long active region. The emitted signal arises from excitation of the bulk and edge spin wave eigenmodes of the nanowire and detected with anisotropic magnetoresistance. This type of self-oscillatory dynamics is qualitatively different from the previously reported self-localized nonlinear bullet mode excited by spin orbit torques in extended ferromagnetic films. The eigenmode self-oscillations in the nanowire geometry are enabled by geometric confinement suppressing nonlinear magnon scattering. Our work demonstrates feasibility of spin torque oscillators with a micrometer-scale active region. [Preview Abstract] B8.00003: Magnetization reversal in orthogonal spin transfer magnetic devices Georg Wolf, Andrew D. Kent, Bartek Kardasz, Mustafa Pinarbasi Orthogonal spin transfer (OST) magnetic devices have distinct magnetization dynamics and switching characteristics compared to conventional collinearly magnetized devices. A perpendicular magnetized layer provides a large initial spin torque on the free layer magnetization and thus initiates magnetization dynamics. In order to read out the information stored in the OST device, the free layer forms a magnetic tunnel junction with an in plane magnetized reference layer, which also exerts a spin torque on the free layer. The combination of those two spin torques leads to different switching dynamics of the free layer. Quasistatic and fast pulsed measurements have been conducted to explore the state diagram and magnetization dynamics of such devices. The absolute value of the switching current I$_s$ is in general smaller for the antiparallel (AP) to parallel (P) transition, due to the angular dependence of the reference layer torque. I$_s$ also has a weak field dependence for this transition, indicating that the reference layer torque governs this transition. 
On the other hand, the P to AP transition shows a stronger field dependence of I$_s$ and occurs for both current polarities. Both these features denote the influence of the spin-torque generated from the perpendicular polarizer. [Preview Abstract] B8.00004: Spin wave mode coexistence on the nanoscale: A consequence of the Oersted field induced asymmetric energy landscape Invited Speaker: Randy Dumas The emerging field of magnonics relies on the systematic generation, manipulation, and detection of spin waves (SWs). Nanocontact spin torque oscillators (NC-STOs) provide an ideal platform to study spin transfer torque induced SW emission [1, 2]. In analogy to two species competing for the same food supply it has been argued that only one SW mode can survive in the steady state [3]. However, as evidenced in many experiments clear signatures of mode-hopping are often observed [1, 4]. Here, we present a third possibility, namely that under the correct experimental conditions, mode \textit{coexistence }can be realized in NC-STOs [5]. Micromagnetic simulations reveal that the SW modes are spatially separated under the NC. Mode coexistence is facilitated by the local field asymmetries induced by the spatially inhomogeneous Oersted field in the vicinity of the NC and further promoted by SW localization. Finally, both simulation and experiment reveal a weak low frequency signal exactly at the difference of the mode frequencies, consistent with inter-modulation of two coexistent modes. \\[4pt] [1] S. Bonetti, V. Tiberkevich, G. Consolo, G. Finocchio, P. Muduli, F. Mancoff, A. Slavin, and J. {\AA}kerman, Phys. Rev. Lett. \textbf{105}, 217204 (2010). \\[0pt] [2] M. Madami, S. Bonetti, G. Consolo, S. Tacchi, G. Carlotti, G. Gubbiotti, F.B. Mancoff, M.A. Yar, and J. {\AA}kerman, Nature Nanotechnology \textbf{6}, 635 (2011). \\[0pt] [3] F. M. de Aguiar, A. Azevedo, and S. M. Rezende, Phys. Rev. B \textbf{75}, 132404 (2007).\\[0pt] [4] P. K. Muduli, O. G. Heinonen, and J. {\AA}kerman, Phys. Rev. Lett. \textbf{108}, 207203 (2012).\\[0pt] [5] R.K. Dumas, E. Iacocca, S. Bonetti, S.R. Sani, S.M. Mohseni, A. Eklund, J. Persson, O. Heinonen, and Johan {\AA}kerman, Phys. Rev. Lett. \textbf{110}, 257202 (2013). [Preview Abstract] B8.00005: Spin-Torque Ferromagnetic Resonance in PMA Thin Film Structures Luis Vilela-Le\~ao, Chi-Feng Pai, Yongxi Ou, Yun Li, Daniel Ralph, Robert Buhrman Thin film systems with strong perpendicular magnetic anisotropy (PMA) are important for many spintronic device applications. For example for magnetic memory, strong PMA can enable ultra-high density storage, in combination with enhanced non-volatility and low write energy. Recently, some interesting phenomena, including the spin Hall effect and spin orbit fields, have been reported in normal metal/ferromagnetic (NM/FM) structures with PMA. These effects, which arise from spin-orbit coupling in the structure, convert charge current into torques (a field-like torque and/or a damping-like torque) on the magnetization of the FM that can be strong enough to switch the magnetization, generate persistent magnetic oscillation, and efficiently move domain walls. Here we show that spin-torque ferromagnetic resonance (ST-FMR) can be effectively employed to characterize the anisotropy, the spin-orbit torques, and the magnetic damping of a PMA structure. 
We will report on ST-FMR results from several PMA systems, including W/Hf(t)/CoFeB/MgO and W/CoFeB where we have found large values for both the first order and second order terms of the magnetic anisotropy and where we have measured intrinsic and extrinsic contributions to the effective magnetic damping. [Preview Abstract] B8.00006: Field-modulated spin torque ferromagnetic resonance: Characterization of spin wave eigenmodes in magnetic tunnel junctions Yu-Jin Chen, Alexandre Goncalves, Igor Barsukov, Liu Yang, Jordan Katine, Ilya Krivorotov A common technique for measurements of magnetic parameters in nanoscale magnetic tunnel junctions (MTJs) is spin torque-driven ferromagnetic resonance (ST-FMR) based on an amplitude-modulated microwave drive [1,2]. We demonstrate a technique of broadband ST-FMR based on magnetic field modulation for measurements of spin wave properties in magnetic nanostructures [3]. Application of the field-modulated ST-FMR technique to MTJs gives reliable information on magnetization dynamics for arbitrary magnetic state of the MTJ, including the case of collinear magnetizations. This configuration is difficult to measure in conventional ST-FMR due to the weak spin torque drive. The improved signal-to-noise ratio and improved sensitivity of field-modulated ST-FMR allow us to measure the entire spectrum of low-frequency standing spin waves. We find the magnetic field dependence of the measured spin wave eigenmodes to be in good agreement with micromagnetic simulation results, which allow us to identify the observed modes as the free layer eigenmodes and to determine their spatial profiles [3]. \\ 1. A. A. Tulapurkar et al., Nature 438, 339-342 (2005)\\ 2. J. C. Sankey et al., Phys. Rev. Lett. 96, 227601 (2006)\\ 3. A. M. Goncalves et al., Appl. Phys. Lett. 103, 172406 (2013) [Preview Abstract] B8.00007: Control of Propagating Spin Waves via Spin Transfer Torque in a Metallic Bilayer Waveguide Kyongmo An, Daniel Birt, Chi-Feng Pai, Kevin Olsson, Daniel Ralph, Robert Buhrman, Xiaoqin Li We investigate the effect of a direct current on propagating spin waves in a CoFeB/Ta bilayer structure. Using the micro-Brillouin light scattering technique, we observe that the spin wave amplitude may be attenuated or amplified depending on the direction of the current and the applied magnetic field. Our work suggests an effective approach for electrically controlling the propagation of spin waves in a magnetic waveguide and may be useful in a number of applications such as phase locked nano-oscillators and hybrid information processing devices. [Preview Abstract] B8.00008: Experimental observation of exchange-mode spin-wave via domain wall annihilation Seonghoon Woo, Tristan Delaney, Geoffrey Beach Spin waves (SWs) in magnetic nanostructures have generated great interest recently, motivated by the possibility of high-speed, low-power magnonic devices applications. A number of micromagnetic researches, therefore, have been conducted, revealing the particular behaviors of SWs in nanostructured ferromagnets. However, SWs' short attenuation length prevents them from being observed and used experimentally. Generating large-amplitude exchange-mode SWs, which is thus indispensable for real device applications, are still challenging because their very short wavelengths cannot be directly excited. Here, we present the first experimental evidence of the exchange-mode SWs. 
Using micromagnetics, we firstly show that the annihilation of two DWs releases their exchange energy by a mean of localized SW burst, which has broad range band and intense amplitude. Another micromagnetic result also shows that the collision-induced SWs inside a nanowire can cause the depinning of a DW with an assisting magnetic field. By taking advantage of an anisotropic magneto-resistance (AMR) effect and relative electrical measurements, we observe the generation/annihilation of DWs and the contribution of generated SWs to the DW depinning process experimentally. The additional depinning field of $\sim$ 8 Oe caused by SWs can be readily achieved, enough to propagate a standstill DW in a well-defined pinning-free nanostructure. This work shows the first experimental observation of exchange-mode SWs and highlights a new route towards SW-integrated spintronic devices. [Preview Abstract] B8.00009: Onset and annihilation of droplet solitons in Spin Torque Nano-Oscillators with perpendicular magnetized free layers Ferran Macia, Dirk Backes, Andrew Kent Nanometer scale electrical contacts to ferromagnetic thin films (STNOs) can provide sufficient current densities to excite magnetization dynamics resulting in either localized or propagating short wavelength spin waves. These oscillations can be detected through the magnetoresistance because of the change in the relative orientation between the current polarization and the free layer magnetization. We have fabricated point contacts to continuous magnetic bilayers where the polarizer magnetic film has in-plane magnetic anisotropy and with free layers with different magnetic anisotropies ranging from in-plane to perpendicular magnetic anisotropy (PMA). Our measurements on STNOs with perpendicularly magnetized free layers indicate that over a region of magnetic field and current there is an onset of an excitation with characteristics consistent with the formation of a \textit{droplet soliton}. We have systematically studied the state diagram of these excitations that shows both their onset and annihilation. We also studied the onset and annihilation of droplet solitons in arrays of STNOs. [Preview Abstract] B8.00010: Macrospin model of spin-transfer oscillators: an energy space approach Daniele Pinna, Andrew Kent, Daniel Stein A direct current applied to a nanomagnet produces a spin-transfer torque that drives the magnetization out of equilibrium. In this talk we discuss an effective theory to characterize the magnetization dynamics by focusing on its diffusive evolution over the energy landscape. The procedure allows us to model macrospin behavior with a one dimensional stochastic differential equation. We model spin-transfer oscillators (STOs) with a spin-current at an angle to the easy plane of a biaxial magnet (i.e. have a component along the magnet's hard axis). We trace the properties of stable out-of-plane precessional states and discuss their hysteretic behavior on applied current. We discuss the structure of the expected linewidth and phase noise, along with how the oscillator frequency is expected to depend on applied current. Finally, contributions due to thermal noise will be outlined and some thermally activated properties described. D. Pinna, A. D. Kent, D. L. Stein, Phys. Rev. B 88, 104405 (2013). 
[Preview Abstract] B8.00011: State Diagram of Orthogonal Spin-Transfer Spin-Valve Devices Li Ye, Georg Wolf, Daniele Pinna, Andrew Kent Orthogonal spin transfer (OST) devices that incorporate an out-of-plane magnetized polarizing layer with an in-plane collinear spin valve are expected to exhibit ultrafast magnetization switching as well as large amplitude precessional modes. The current and field dependence of the switching thresholds are also distinct from the collinear spin-valves because of the combined effect from in-plane reference layer (RL) and polarizing layer (PL). Here we present an experimental investigation of complete current-field state diagrams, demonstrating reversal between parallel (P) and anti-parallel (AP) states and dynamic states of the free layer in both OST pseudo spin valve and spin valve devices, where in the latter a synthetic anti-ferromagnetic layer (SAF) is used as reference layer. Switching from AP (P) to P (AP) states is observed at both positive and negative current with a different field dependence of the critical current, reflecting spin polarization asymmetry between AP and P states and different RL and PL spin torque efficiencies. High frequency noise spectra have also been acquired providing evidence of out-of-plane precessional modes, where an intermediate resistance is seen in quasistatic measurements. Modeling of this orthogonal spin transfer system is also discussed. [Preview Abstract] B8.00012: Distribution of domain wall spin torque in magnetic metals and ferromagnetic semiconductors Elizabeth Golovatski The design and implementation of many spintronic devices[1] will be dependent on good models of spin torque and domain wall motion caused by coherent carrier transport[2]. We model spin torque in N\'eel walls[3] using a piecewise linear transfer-matrix method[4], and calculate the spin torque distribution[5] throughout the system. We examine the differences in spin torque for ferromagnetic semiconductors (where the Fermi energy is much less than the spin splitting) and magnetic metals (where the Fermi energy is much greater than the spin splitting). We find that the torque distribution is more asymmetric for adiabatic torques and more symmetric for non-adiabatic torques in a magnetic metal vs. a ferromagnetic semiconductor, leading to differences in velocities for two domain walls in close proximity. [1] S. Parkin et al., Science 320, 190 (2008) [2] M. Yamanouchi et al., Nature 428, 539 (2004) [3] G. Vignale and M. Flatt\'e, Phys. Rev. Lett. 89, 098302 (2002) [4] E. Golovatski and M. Flatt\'e, Phys. Rev. B, 84, 115210 (2011) [5] J. Xiao et al., Phys. Rev. B, 73, 054428 (2006) [Preview Abstract] B8.00013: Current-Induced Domain-Wall Motion in Perpendicularly Magnetized Magnetic Nanowires with Unflatted Surfaces Hirofumi Morise, Tsuyoshi Kondo, Shiho Nakamura We study the current-induced domain wall motion in perpendicularly magnetized magnetic nanowires with unflatted surfaces numerically. There, the axis of the anisotropy is assumed to be normal to the surfaces, that is, it is inclined continuously along the extended direction of the nanowire. The relationship between the current density and the velocities of domain walls are investigated by use of micromagnetics simulations. Comparing with the motion in a flat nanowire, the existence of an additional exchange energy due to the curvature significantly affects the motion of the domain walls. [Preview Abstract]
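For readers who want a concrete feel for the single-mode (macrospin) dynamics that several of the abstracts above build on (e.g. B8.00001 and B8.00010), the sketch below integrates a Landau-Lifshitz-Gilbert equation with a Slonczewski-type spin-transfer term. It is only an illustrative toy model: all parameter values are assumptions chosen for readability, sign and normalisation conventions differ between papers, and it does not reproduce any of the specific devices or theories reported in the session.

# Illustrative macrospin sketch: LLG dynamics with a Slonczewski-type spin-transfer
# term, in the single-mode spirit of the abstracts above. All values are assumed,
# not taken from the reported experiments; conventions vary in the literature.
import numpy as np

gamma = 1.76e11                         # gyromagnetic ratio (rad s^-1 T^-1)
alpha = 0.01                            # Gilbert damping constant
H_k   = 0.05                            # uniaxial anisotropy field along z (T), assumed
a_j   = 0.002                           # spin-torque amplitude in field units (T), assumed
p_hat = np.array([0.0, 0.0, 1.0])       # spin-polarisation (polarizer) direction

def dm_dt(m):
    h_eff = np.array([0.0, 0.0, H_k * m[2]])                        # anisotropy field
    precession  = -gamma * np.cross(m, h_eff)
    damping     = -gamma * alpha * np.cross(m, np.cross(m, h_eff))  # relaxes m toward h_eff
    spin_torque = -gamma * a_j * np.cross(m, np.cross(m, p_hat))    # anti-damping near m = -z
    return (precession + damping + spin_torque) / (1.0 + alpha**2)

m = np.array([0.1, 0.0, -1.0])
m /= np.linalg.norm(m)
dt = 1e-13                              # time step (s)
for _ in range(500_000):                # ~50 ns of dynamics
    m = m + dt * dm_dt(m)
    m /= np.linalg.norm(m)              # keep |m| = 1 (simple projected Euler)
print(m)                                # for these assumed values the torque reverses m toward +z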
Multiquartic functional equations Abasalt Bodaghi, Choonkil Park (ORCID: orcid.org/0000-0001-6329-8228) & Oluwatosin T. Mewomo In this paper, we study n-variable mappings that are quartic in each variable. We show that the conditions defining such mappings can be unified in a single functional equation. Furthermore, we apply an alternative fixed point method to prove the Hyers–Ulam stability for the multiquartic functional equations in normed spaces. We also prove that under some mild conditions, every approximately multiquartic mapping is a multiquartic mapping. A fundamental question in the theory of functional equations is as follows: When is it true that a function that approximately satisfies a functional equation is close to an exact solution of the equation? If this is the case, then we say that the equation is stable. The stability problem for group homomorphisms was introduced by Ulam [1] in 1940. The first partial answer to Ulam's question in the case of Cauchy's equation or additive equation \(A(x+y)=A(x)+A(y)\) in Banach spaces was given by Hyers [2] (stability involving a positive constant). Later the result of Hyers was significantly generalized by Aoki [3], T.M. Rassias [4] (stability incorporated with sum of powers of norms), Găvruţa [5] (stability controlled by a general control function) and J.M. Rassias [6] (stability including mixed product-sum of powers of norms). Let V be a commutative group, let W be a linear space, and let \(n\geq 2\) be an integer. Recall from [7] that a mapping \(f: V^{n}\longrightarrow W\) is called multiadditive if it is additive (i.e., it satisfies Cauchy's functional equation) in each variable. Furthermore, f is said to be multiquadratic if it is quadratic (i.e., it satisfies the quadratic functional equation \(Q(x+y)+Q(x-y)=2Q(x)+2Q(y)\)) in each variable [8]. Zhao et al. [9] showed that the system of functional equations defining a multiquadratic mapping can be unified in a single equation. Indeed, they proved that the mentioned mapping f is multiquadratic if and only if the following relation holds: $$\begin{aligned} \sum_{s\in \{-1,1\}^{n}} f(x_{1}+sx_{2})=2^{n} \sum_{j_{1},j_{2},\ldots,j_{n}\in \{1,2\}}f(x_{1j_{1}},x_{2j_{2}}, \ldots,x_{nj_{n}}), \end{aligned}$$ where \(x_{j}=(x_{1j},x_{2j},\ldots,x_{nj})\in V^{n}\) and \(j\in \{1,2 \}\). Ciepliński [7, 8] studied the Hyers–Ulam stability of multiadditive and multiquadratic mappings in Banach spaces (see also [9]). For more remarks on the Hyers–Ulam stability of some systems of functional equations, we refer to [10]. A mapping \(f: V^{n}\longrightarrow W\) is called multicubic if it is cubic (i.e., it satisfies the cubic functional equation \(C(2x+y)+C(2x-y)=2C(x+y)+2C(x-y)+12C(x)\)) in each variable [11]. In [12], the first author and Shojaee studied the Hyers–Ulam stability for multicubic mappings on normed spaces and also proved that a multicubic functional equation can be hyperstable, that is, every approximately multicubic mapping is multicubic. For other forms of cubic functional equations and their stabilities, we refer to [13,14,15,16,17,18]. The quartic functional equation $$\begin{aligned} \mathcal{Q}(x+2y)+\mathcal{Q}(x-2y)=4\mathcal{Q}(x+y)+4 \mathcal{Q}(x-y)-6 \mathcal{Q}(x)+24\mathcal{Q}(y) \end{aligned}$$ (1.2) was introduced for the first time by Rassias [19]. It is easy to see that the function \(\mathcal{Q}(x)=ax^{4}\) satisfies (1.2). Thus, every solution of the quartic functional equation (1.2) is said to be a quartic function.
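As a quick symbolic confirmation of the last remark, the following sketch (assuming SymPy is available; it is not part of the original paper) expands both sides of (1.2) for \(\mathcal{Q}(x)=ax^{4}\):

# Check that Q(x) = a*x**4 satisfies the quartic functional equation (1.2).
import sympy as sp

x, y, a = sp.symbols('x y a')
Q = lambda t: a * t**4
lhs = Q(x + 2*y) + Q(x - 2*y)
rhs = 4*Q(x + y) + 4*Q(x - y) - 6*Q(x) + 24*Q(y)
print(sp.expand(lhs - rhs))   # prints 0, so Q is a quartic function in the above sense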
The functional equation (1.2) was generalized by the first author and Kang in [20] and [21], respectively. Motivated by definitions of multiadditive, multiquadratic, and multicubic mappings, we define multiquartic mappings and provide their characterization. In fact, we prove that every multiquartic mapping can be characterized by a single functional equation and vice versa. In addition, we investigate the Hyers–Ulam stability for multiquartic functional equations by applying the fixed point method, which was used for the first time by Baker in [22]. For more applications of this approach to the stability of multiadditive-quadratic mappings and multi-Cauchy–Jensen mappings in non-Archimedean spaces and Banach spaces, see [23,24,25]. Characterization of multiquartic mappings Throughout this paper, \(\mathbb{N}\) stands for the set of all positive integers, \(\mathbb{N}_{0}:=\mathbb{N} \cup \{0\}\), \(\mathbb{R}_{+}:=[0, \infty )\), \(n\in \mathbb{N}\). For any \(l\in \mathbb{N}_{0}\), \(m\in \mathbb{N}\), \(t=(t_{1},\ldots,t_{m})\in \{-2,2\}^{m}\), and \(x=(x_{1},\ldots,x_{m})\in V^{m}\), we write \(lx:=(lx_{1},\ldots,lx _{m})\) and \(tx:=(t_{1}x_{1},\ldots,t_{m}x_{m})\), where ra stands, as usual, for the rth power of an element a of the commutative group V. Let \(n\in \mathbb{N}\) with \(n\geq 2\), and let \(x_{i}^{n}=(x_{i1},x _{i2},\ldots,x_{in})\in V^{n}\), \(i\in \{1,2\}\). We denote \(x_{i}^{n}\) by \(x_{i}\) when there is no risk of ambiguity. For \(x_{1},x_{2}\in V ^{n}\) and \(p_{i}\in \mathbb{N}_{0}\) with \(0\leq p_{i}\leq n\), put \(\mathcal{N}= \{(N_{1},N_{2},\ldots,N_{n})\mid N_{j}\in \{x _{1j}\pm x_{2j},x_{1j},x_{2j}\} \}\), where \(j\in \{1,\ldots,n \}\) and \(i\in \{1,2\}\). Consider the following subset of \(\mathcal{N}\): $$ \mathcal{N}_{(p_{1},p_{2})}^{n} := \bigl\{ \mathfrak{N}_{n}=(N_{1},N _{2},\ldots,N_{n})\in \mathcal{N}\mid \operatorname{Card}\{N_{j}: N_{j}=x _{ij} \}=p_{i}\ \bigl(i\in \{1,2\}\bigr) \bigr\} . $$ For \(r\in \mathbb{R}\), we put \(r\mathcal{N}_{(p_{1},p_{2})}^{n}= \{r \mathfrak{N}_{n}: \mathfrak{N}_{n}\in \mathcal{N}_{(p_{1},p_{2})}^{n} \}\). In this section, we assume that V and W are vector spaces over the rationals. We say a mapping \(f:V^{n}\longrightarrow W\) is n-multiquartic or multiquartic if f is quartic in each variable (see equation (1.2)). For such mappings, we use the following notations: $$\begin{aligned}& \begin{aligned}[t]& f \bigl(\mathcal{N}_{(p_{1},p_{2})}^{n} \bigr):= \sum_{\mathfrak{N}_{n}\in \mathcal{N}_{(p_{1},p_{2})}^{n}}f( \mathfrak{N}_{n}), \\ &f \bigl(\mathcal{N}_{(p_{1},p_{2})}^{n},z \bigr):= \sum _{\mathfrak{N}_{n}\in \mathcal{N}_{(p_{1},p_{2})}^{n}}f( \mathfrak{N}_{n},z) \quad (z\in V). \end{aligned} \end{aligned}$$ For all \(x_{1},x_{2}\in V^{n}\), we consider the equation $$\begin{aligned} \sum_{t\in \{-2,2\}^{n}} f(x_{1}+tx_{2})= \sum_{p_{2}=0}^{n}\sum _{p _{1}=0}^{n-p_{2}}4^{n-p_{1}-p_{2}}(-6)^{p_{1}}24^{p_{2}}f \bigl(\mathcal{N} _{(p_{1},p_{2})}^{n} \bigr). \end{aligned}$$ (2.2) By a mathematical computation we can check that the mapping \(f:\mathbb{R}^{n}\longrightarrow \mathbb{R}\) defined as \(f(z_{1},\ldots,z_{n})=\prod_{j=1}^{n}a_{j}z_{j}^{4}\) satisfies (2.2). Thus this equation is said to be the multiquartic functional equation. We denote \(\binom{n}{k}=n!/(k!(n-k)!)\) (the binomial coefficients) for all \(n, k\in \mathbb{N}\) with \(n\geq k\). Let \(0\leq k\leq n-1\). Put \(\mathcal{K}_{k}=\{_{k}x:=(0,\ldots,0,x _{j_{1}},0,\ldots,0,x_{j_{k}},0,\ldots,0)\in V^{n}\}\), where \(1\leq j_{1}<\cdots < j_{k}\leq n\).
In other words, \(\mathcal{K}_{k}\) is the set of all vectors in \(V^{n}\) whose exactly k components are nonzero. We will show that a mapping \(f: V^{n}\longrightarrow W\) satisfies the functional equation (2.2) if and only if it is multiquartic. For this, we need the following lemma. Lemma 2.1 If a mapping \(f: V^{n}\longrightarrow W\) satisfies equation (2.2), then \(f(x)=0\) for any \(x\in V^{n}\) with at least one component equal to zero. We argue by induction on k that for each \({}_{k}x\in \mathcal{K}_{k}\), \(f(_{k}x)=0\) for \(0\leq k\leq n-1\). For \(k=0\), by putting \(x_{1}=x _{2}=(0,\ldots,0)\) in (2.2) we have $$\begin{aligned}& \begin{aligned}[b] & 2^{n}f(0,\ldots,0) \\ &\quad =\sum_{p_{2}=0}^{n}\sum _{p_{1}=0}^{n-p_{2}}4^{n-p_{1}-p_{2}}(-6)^{p _{1}}24^{p_{2}} \begin{pmatrix} n \\ n-p_{1}-p_{2} \end{pmatrix} \begin{pmatrix} p_{1}+p_{2} \\ p_{1} \end{pmatrix}2^{n-p_{1}-p_{2}}f(0, \ldots,0). \end{aligned} \end{aligned}$$ It is easily verified that $$\begin{aligned} \begin{pmatrix} n-k \\ n-k-p_{1}-p_{2} \end{pmatrix} \begin{pmatrix} p_{1}+p_{2} \\ p_{1} \end{pmatrix}= \begin{pmatrix} n-k \\ p_{2} \end{pmatrix} \begin{pmatrix} n-k-p_{2} \\ p_{1} \end{pmatrix} \end{aligned}$$ for \(0\leq k\leq n-1\). Using (2.4) for \(k=0\), we compute the right-hand side of (2.3) as follows: $$\begin{aligned} &\sum_{p_{2}=0}^{n}\sum _{p_{1}=0}^{n-p_{2}}4^{n-p_{1}-p_{2}}(-6)^{p _{1}}24^{p_{2}} \begin{pmatrix} n \\ n-p_{1}-p_{2} \end{pmatrix} \begin{pmatrix} p_{1}+p_{2} \\ p_{1} \end{pmatrix}2^{n-p_{1}-p_{2}}f(0, \ldots,0) \\ &\quad = 2^{n}\left [\sum_{p_{2}=0}^{n} \begin{pmatrix} n \\ p_{2} \end{pmatrix}12^{p_{2}}\sum _{p_{1}=0}^{n-p_{2}} \begin{pmatrix} n-p_{2} \\ p_{1} \end{pmatrix}4^{n-p_{1}-p_{2}}(-3)^{p_{1}} \right ]f(0,\ldots,0) \\ &\quad = 2^{n}\left [\sum_{p_{2}=0}^{n} \begin{pmatrix} n \\ p_{2} \end{pmatrix}12^{p_{2}}(4-3)^{n-p_{2}} \right ]f(0,\ldots,0) \\ &\quad =2^{n}(12+1)^{n}f(0,\ldots,0)=26^{n}f(0, \ldots,0). \end{aligned}$$ From relations (2.3) an (2.5) it follows that \(f(0,\ldots,0)=0\). Assume that for each \({}_{k-1}x\in \mathcal{K}_{k-1}\), \(f(_{k-1}x)=0\). We show that if \({}_{k}x\in \mathcal{K}_{k}\), then \(f(_{k}x)=0\). By a suitable replacement in (2.2) we get $$\begin{aligned} 2^{n}f(_{k}x) &=\sum _{p_{2}=0}^{n-k}\sum_{p_{1}=0}^{n-k-p_{2}}4^{n-p _{1}-p_{2}}(-6)^{p_{1}}24^{p_{2}} \begin{pmatrix} n-k \\ n-k-p_{1}-p_{2} \end{pmatrix} \begin{pmatrix} p_{1}+p_{2} \\ p_{1} \end{pmatrix}2^{n-p_{1}-p_{2}}f(_{k}x) \\ &= 2^{n}4^{k}\left [\sum _{p_{2}=0}^{n-k} \begin{pmatrix} n-k \\ p_{2} \end{pmatrix}12^{p_{2}} \sum_{p_{1}=0}^{n-k-p_{2}} \begin{pmatrix} n-k-p_{2} \\ p_{1} \end{pmatrix}4^{n-k-p_{1}-p_{2}}(-3)^{p_{1}}\right ]f(_{k}x) \\ &=2^{n}4^{k}\left [\sum _{p_{2}=0}^{n-k} \begin{pmatrix} n-k \\ p_{2} \end{pmatrix}12^{p_{2}}(4-3)^{n-k-p_{2}} \right ]f(_{k}x) \\ &=2^{n}4^{k}(12+1)^{n-k}f(_{k}x)=2^{n+2k}13^{n-k}f(_{k}x). \end{aligned}$$ Hence \(f(_{k}x)=0\). This shows that \(f(x)=0\) for any \(x\in V^{n}\) with at least one component equal to zero. □ We now prove the main result of this section. Theorem 2.2 A mapping \(f: V^{n}\longrightarrow W\) is multiquartic if and only if it satisfies the functional equation (2.2). Let f be multiquartic. We prove that f satisfies the functional equation (2.2) by induction on n. For \(n=1\), it is trivial that f satisfies the functional equation (1.2). 
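To make the base case explicit: for \(n=1\), writing \(x_{1}=x_{11}\) and \(x_{2}=x_{21}\), the only admissible index pairs in (2.2) are \((p_{1},p_{2})\in \{(0,0),(1,0),(0,1)\}\), and (2.2) reduces to
$$\begin{aligned} f(x_{11}+2x_{21})+f(x_{11}-2x_{21})=4\bigl[f(x_{11}+x_{21})+f(x_{11}-x_{21})\bigr]-6f(x_{11})+24f(x_{21}), \end{aligned}$$
which is exactly the quartic functional equation (1.2).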
If (2.2) is valid for some positive integer \(n>1\), then $$\begin{aligned} &\sum_{t\in \{-2,2\}^{n+1}} f\bigl(x_{1}^{n+1}+tx_{2}^{n+1} \bigr) \\ &\quad =4\sum_{t\in \{-2,2\}^{n}} f\bigl(x_{1}^{n}+tx_{2}^{n},x_{1n+1}+x_{2n+1} \bigr)+4 \sum_{t\in \{-2,2\}^{n}} f\bigl(x_{1}^{n}+tx_{2}^{n},x_{1n+1}-x_{2n+1} \bigr) \\ &\quad\quad {} -6\sum_{t\in \{-2,2\}^{n}} f\bigl(x_{1}^{n}+tx_{2}^{n},x_{1n+1} \bigr)+24 \sum_{t\in \{-2,2\}^{n}} f\bigl(x_{1}^{n}+tx_{2}^{n},x_{2n+1} \bigr) \\ &\quad =4\sum_{p_{2}=0}^{n}\sum _{p_{1}=0}^{n-p_{2}}\sum_{t\in \{-2,2\}}4^{n-p _{1}-p_{2}}(-6)^{p_{1}}24^{p_{2}}f \bigl(\mathcal{N}_{(p_{1},p_{2})} ^{n},x_{1n+1}+tx_{2n+1} \bigr) \\ &\quad\quad {} -6\sum_{p_{2}=0}^{n}\sum _{p_{1}=0}^{n-p_{2}}4^{n-p_{1}-p_{2}}(-6)^{p _{1}}24^{p_{2}}f \bigl(\mathcal{N}_{(p_{1},p_{2})}^{n},x_{1n+1} \bigr) \\ & \quad\quad {} +24\sum_{p_{2}=0}^{n}\sum _{p_{1}=0}^{n-p_{2}}4^{n-p_{1}-p_{2}}(-6)^{p _{1}}24^{p_{2}}f \bigl(\mathcal{N}_{(p_{1},p_{2})}^{n},x_{2n+1} \bigr) \\ &\quad =\sum_{p_{2}=0}^{n+1}\sum _{p_{1}=0}^{n+1-p_{2}}4^{n+1-p_{1}-p_{2}}(-6)^{p _{1}}24^{p_{2}}f \bigl(\mathcal{N}_{(p_{1},p_{2})}^{n+1} \bigr). \end{aligned}$$ This means that (2.2) holds for \(n+1\). Conversely, suppose that f satisfies the functional equation (2.2). Fix \(j\in \{1,\ldots,n\}\). Set $$\begin{aligned} \begin{aligned} f^{*}(x_{1j},x_{2j}): ={}&f (x_{11},\ldots,x_{1j-1},x_{1j}+x_{2j},x _{1j+1},\ldots,x_{1n} ) \\ & {} +f (x_{11},\ldots,x_{1j-1},x_{1j}-x_{2j},x_{1j+1}, \ldots,x _{1n} )\end{aligned} \end{aligned}$$ $$\begin{aligned} f^{*}(x_{2j}): &=f (x_{11}, \ldots,x_{1j-1},x_{2j},x_{1j+1}, \ldots,x_{1n} ). \end{aligned}$$ Putting \(x_{2k}=0\) for all \(k\in \{1,\ldots,n\}\backslash \{j\}\) in (2.2) and using Lemma 2.1, we get $$\begin{aligned} &2^{n-1}\bigl[f (x_{11},\ldots,x_{1j-1},x_{1j}+2x_{2j},x_{1j+1}, \ldots,x_{1n} ) \\ &\quad \quad {} +f (x_{11},\ldots,x_{1j-1},x_{1j}-2x_{2j},x_{1j+1}, \ldots,x _{1n} )\bigr] \\ &\quad =\sum_{p_{1}=0}^{n-1} \begin{pmatrix} n-1 \\ p_{1} \end{pmatrix}4^{n-p_{1}}(-6)^{p_{1}}2^{n-p_{1}-1}f^{*}(x_{1j},x_{2j}) \\ &\quad \quad {} +\sum_{p_{1}=1}^{n} \begin{pmatrix} n-1 \\ p_{1}-1 \end{pmatrix}4^{n-p_{1}}(-6)^{p_{1}}2^{n-p_{1}}f (x_{11},\ldots,x _{1n} ) \\ &\quad \quad {} +\sum_{p_{1}=1}^{n} \begin{pmatrix} n-1 \\ p_{1}-1 \end{pmatrix}4^{n-p_{1}}(-6)^{p_{1}-1}2^{n-p_{1}}f^{*}(x_{2j}) \\ &\quad =4\times 2^{n-1}\sum_{p_{1}=0}^{n-1} \begin{pmatrix} n-1 \\ p_{1} \end{pmatrix}4^{n-1-p_{1}}(-3)^{p_{1}}f^{*}(x_{1j},x_{2j}) \\ &\quad \quad {} -6\times 2^{n-1}\sum_{p_{1}=0}^{n-1} \begin{pmatrix} n-1 \\ p_{1} \end{pmatrix}4^{n-1-p_{1}}(-3)^{p_{1}}f (x_{11},\ldots,x_{1n} ) \\ &\quad \quad {} +24\times 2^{n-1}\sum_{p_{1}=0}^{n-1} \begin{pmatrix} n-1 \\ p_{1} \end{pmatrix}4^{n-1-p_{1}}(-3)^{p_{1}}f^{*}(x_{2j}) \\ &\quad =4\times 2^{n-1}f^{*}(x_{1j},x_{2j})-6 \times 2^{n-1}f (x_{11},\ldots,x_{1n} ) +24 \times 2^{n-1}f^{*}(x_{2j}). \end{aligned}$$ Note that we have used the fact that \(\sum_{p_{1}=0}^{n-1}\binom{n-1}{p _{1}} 4^{n-1-p_{1}}(-3)^{p_{1}}=(4-3)^{n-1}=1\) in the above computations. So this relation implies that f is quartic in the jth variable. Since j is arbitrary, we obtain the desired result. □ Stability results for the functional equation (2.2) For two sets X and Y, we denote by \(Y^{X}\) the set of all mappings from X to Y, In this section, we wish to prove the Hyers–Ulam stability of the functional equation (2.2) in normed spaces. The proof is based on a fixed point result that can be derived from [26, Theorem 1]. 
To state it, we introduce three hypotheses:

(A1) Y is a Banach space, \(\mathcal{S}\) is a nonempty set, \(j\in \mathbb{N}\), \(g_{1},\ldots,g_{j}:\mathcal{S}\longrightarrow \mathcal{S}\), and \(L_{1},\ldots,L_{j}:\mathcal{S}\longrightarrow \mathbb{R}_{+}\);

(A2) \(\mathcal{T}:Y^{\mathcal{S}}\longrightarrow Y^{\mathcal{S}}\) is an operator satisfying the inequality
$$ \bigl\Vert \mathcal{T}\lambda (x)-\mathcal{T}\mu (x) \bigr\Vert \leq \sum_{i=1}^{j}L_{i}(x) \bigl\Vert \lambda \bigl(g_{i}(x)\bigr)-\mu \bigl(g_{i}(x)\bigr) \bigr\Vert , \quad \lambda ,\mu \in Y^{\mathcal{S}}, x\in \mathcal{S}; $$

(A3) \(\varLambda :\mathbb{R}_{+}^{\mathcal{S}}\longrightarrow \mathbb{R}_{+}^{\mathcal{S}}\) is an operator defined as
$$ \varLambda \delta (x):=\sum_{i=1}^{j}L_{i}(x) \delta \bigl(g_{i}(x)\bigr), \quad \delta \in \mathbb{R}_{+}^{\mathcal{S}}, x\in \mathcal{S}. $$

Here we highlight the following theorem, which is a fundamental result in fixed point theory [26, Theorem 1]. This result is the key tool for obtaining our objective in this paper.

Theorem 3.1 Let (A1)–(A3) hold, and suppose that a function \(\theta : \mathcal{S}\longrightarrow \mathbb{R}_{+}\) and a mapping \(\phi : \mathcal{S}\longrightarrow Y\) fulfill the following two conditions:
$$ \bigl\Vert \mathcal{T}\phi (x)-\phi (x) \bigr\Vert \leq \theta (x),\quad \quad \theta ^{*}(x):=\sum_{l=0}^{\infty } \varLambda ^{l}\theta (x)< \infty \quad (x\in \mathcal{S}). $$
Then there exists a unique fixed point ψ of \(\mathcal{T}\) such that
$$ \bigl\Vert \phi (x)-\psi (x) \bigr\Vert \leq \theta ^{*}(x) \quad (x\in \mathcal{S}). $$
Moreover, \(\psi (x)=\lim_{l\rightarrow \infty }\mathcal{T}^{l}\phi (x)\) for all \(x\in \mathcal{S}\).

For a given mapping \(f:V^{n} \longrightarrow W\), we define the difference operator \(\varGamma f:V^{n}\times V^{n} \longrightarrow W\) by
$$\begin{aligned} \varGamma f(x_{1},x_{2}) &:=\sum _{t\in \{-2,2\}^{n}} f(x_{1}+tx_{2})- \sum _{p_{2}=0}^{n}\sum _{p_{1}=0}^{n-p_{2}}4^{n-p_{1}-p_{2}}(-6)^{p_{1}}24^{p_{2}}f \bigl(\mathcal{N}_{(p_{1},p_{2})}^{n} \bigr) \end{aligned}$$
for all \(x_{1},x_{2}\in V^{n}\), where \(f (\mathcal{N}_{(p_{1},p_{2})}^{n} )\) is defined in (2.1).

Definition 3.2 Let V be a vector space, let W be a normed space, and let \(\varphi :V^{n}\times V^{n} \longrightarrow \mathbb{R}_{+}\) be a function. We say that a mapping \(f:V^{n} \longrightarrow W\) is approximately \(\varphi (x_{1},x_{2})\)-multiquartic, or briefly approximately φ-multiquartic, if
$$\begin{aligned} \bigl\Vert \varGamma f(x_{1},x_{2}) \bigr\Vert \leq \varphi (x_{1},x_{2}) \quad \bigl(x_{1},x_{2}\in V^{n}\bigr). \end{aligned}$$
In addition, the mapping \(f: V^{n}\longrightarrow W\) is called even in the jth variable if
$$ f(z_{1},\ldots,z_{j-1},-z_{j},z_{j+1}, \ldots, z_{n})=f(z_{1},\ldots,z_{j-1},z_{j},z_{j+1}, \ldots, z_{n}), \quad z_{1},\ldots,z_{n}\in V. $$
We say that a mapping \(f: V^{n}\longrightarrow W\) satisfies the approximately φ-even-zero conditions if
(i) f is approximately φ-multiquartic;
(ii) f is even in each variable;
(iii) \(f(x)=0\) for any \(x\in V^{n}\) with at least one component equal to 0.

Remark 3.3 We note that the approximately φ-even-zero conditions for the mapping \(f: V^{n}\longrightarrow W\) do not imply that f is multiquartic. Indeed, there are plenty of examples of f with the mentioned conditions that are not multiquartic. Here we give a concrete example for \(n=2\). Let \((\mathcal{A}, \Vert \cdot \Vert )\) be a Banach algebra. Fix a unit vector \(a_{0}\) in \(\mathcal{A}\).
Define the mapping \(h:\mathcal{A}\times \mathcal{A} \longrightarrow \mathcal{A}\) by \(h(x,y)= \Vert x \Vert \Vert y \Vert a_{0}\) for \(x,y\in \mathcal{A}\). Clearly, h satisfies conditions (ii) and (iii). Define \(\varphi :\mathcal{A}^{2} \times \mathcal{A}^{2} \longrightarrow \mathbb{R}_{+}\) by $$ \varphi \bigl((a_{1},b_{1}),(a_{2},b_{2}) \bigr)=c\bigl( \Vert a_{1} \Vert + \Vert a_{2} \Vert \bigr) \bigl( \Vert b_{1} \Vert + \Vert b_{2} \Vert \bigr), \quad (a_{1},b_{1}),(a_{2},b_{2}) \in \mathcal{A}^{2}, $$ where \(c\geq 1172\). A computation shows that $$\begin{aligned} \bigl\Vert \varGamma h\bigl((a_{1},b_{1}),(a_{2},b_{2}) \bigr) \bigr\Vert &\leq \varphi \bigl((a_{1},b_{1}),(a _{2},b_{2})\bigr). \end{aligned}$$ Hence h satisfies the approximately φ-even-zero conditions, but it is not a 2-multiquartic mapping. In the next theorem, we prove the Hyers–Ulam stability of the functional equation (2.2). Let \(s\in \{-1,1\}\), let V be a linear space, and let W be a Banach space. Suppose that \(f: V^{n}\longrightarrow W\) satisfies approximately φ-even-zero conditions and $$\begin{aligned} \lim_{l\rightarrow \infty } \biggl(\frac{1}{2^{4ns}} \biggr)^{l} \varphi \bigl(2^{sl}x_{1},2^{sl}x_{2} \bigr)=0 \end{aligned}$$ for all \(x_{1},x_{2}\in V^{n}\). If $$\begin{aligned} \varPhi (x)=\frac{1}{2^{n+2n(s+1)}}\sum _{l=0}^{\infty } \biggl(\frac{1}{2^{4ns}} \biggr) ^{l}\varphi \bigl(0,2^{sl+\frac{s-1}{2}}x \bigr)< \infty \end{aligned}$$ for all \(x\in V^{n}\), then there exists a unique multiquartic mapping \(\mathfrak{Q}:V^{n} \longrightarrow W\) such that $$ \bigl\Vert f(x)-\mathfrak{Q}(x) \bigr\Vert \leq \varPhi (x) $$ for all \(x\in V^{n}\). Replacing \((x_{1},x_{2})\) by \((0,x)\) in (3.1) and using the assumptions, we have $$\begin{aligned} \left \Vert 2^{n}f(2x)-\sum _{p_{2}=0}^{n} \begin{pmatrix} n \\ p_{2} \end{pmatrix}4^{n-p_{2}}24^{p_{2}}2^{n-p_{2}}f(x) \right \Vert \leq \varphi (0,x) \end{aligned}$$ for all \(x\in V^{n}\). We note that \(\sum_{p_{2}=0}^{n} \binom{n}{p_{2}}4^{n-p_{2}}24^{p_{2}}2^{n-p_{2}}=(8+24)^{n}=32^{n}\). Inequality (3.5) implies that $$\begin{aligned} \bigl\Vert f(2x)-2^{4n}f(x) \bigr\Vert \leq \frac{1}{2^{n}}\varphi (0,x) \quad \bigl(x\in V^{n}\bigr). \end{aligned}$$ For each \(x\in V^{n}\), set $$ \xi (x):=\frac{1}{2^{n+2n(s+1)}}\varphi \bigl(0,2^{\frac{s-1}{2}}x \bigr), \quad \text{and} \quad \mathcal{T}\xi (x):=\frac{1}{2^{4ns}}\xi \bigl(2^{s}x\bigr) \quad \bigl(\xi \in W^{V^{n}}\bigr). $$ Then relation (3.6) can be written as $$\begin{aligned} \bigl\Vert f(x)-\mathcal{T}f(x) \bigr\Vert \leq \xi (x) \quad \bigl(x\in V^{n}\bigr). \end{aligned}$$ Define \(\varLambda \eta (x):=\frac{1}{2^{4ns}}\eta (2^{s}x)\) for \(\eta \in \mathbb{R}_{+}^{V^{n}}\) and \(x\in V^{n}\). We now see that Λ has the form described in (A3) with \(\mathcal{S}=V^{n}\), \(g_{1}(x)=2^{s}x\), and \(L_{1}(x)=\frac{1}{2^{4ns}}\) for \(x\in V^{n}\). Furthermore, for all \(\lambda ,\mu \in W^{V^{n}}\) and \(x\in V^{n}\), we get $$\begin{aligned} \bigl\Vert \mathcal{T}\lambda (x)-\mathcal{T}\mu (x) \bigr\Vert = \biggl\Vert \frac{1}{2^{4ns}} \bigl[\lambda \bigl(2^{s}x\bigr)-\mu \bigl(2^{s}x\bigr) \bigr] \biggr\Vert \leq L_{1}(x) \bigl\Vert \lambda \bigl(g_{1}(x)\bigr)-\mu \bigl(g_{1}(x) \bigr) \bigr\Vert . \end{aligned}$$ This relation shows that hypothesis (A2) holds. 
By induction on l we can check for any \(l\in \mathbb{N}_{0}\) and \(x\in V^{n}\) that $$\begin{aligned} \varLambda ^{l}\xi (x):= \biggl(\frac{1}{2^{4ns}} \biggr)^{l}\xi \bigl(2^{sl}x\bigr)= \frac{1}{2^{n+2n(s+1)}} \biggl(\frac{1}{2^{4ns}} \biggr)^{l}\varphi \bigl(0,2^{sl+\frac{s-1}{2}}x \bigr) \end{aligned}$$ for all \(x\in V^{n}\). Relations (3.3) and (3.8) ensure that all assumptions of Theorem 3.1 are satisfied. Hence there exists a unique mapping \(\mathfrak{Q}:V^{n} \longrightarrow W\) such that $$ \mathfrak{Q}(x)=\lim_{l\rightarrow \infty }\bigl(\mathcal{T}^{l}f \bigr) (x)= \frac{1}{2^{4ns}}\mathfrak{Q}\bigl(2^{s}x\bigr) \quad \bigl(x\in V^{n}\bigr) $$ and (3.4) holds. We will show that $$\begin{aligned} \bigl\Vert \varGamma \bigl(\mathcal{T}^{l}f\bigr) (x_{1},x_{2}) \bigr\Vert \leq \biggl( \frac{1}{2^{4ns}} \biggr) ^{l} \varphi \bigl(2^{sl}x_{1},2^{sl}x_{2} \bigr) \end{aligned}$$ for all \(x_{1},x_{2}\in V^{n}\) and \(l\in \mathbb{N}_{0}\). We argue by induction on l. Inequality (3.9) is valid for \(l=0\) by (3.1). Assume that (3.9) is true for \(l\in \mathbb{N}_{0}\). Then $$\begin{aligned} & \bigl\Vert \varGamma \bigl(\mathcal{T}^{l+1}f\bigr) (x_{1},x_{2}) \bigr\Vert \\ &\quad = \Biggl\Vert \sum_{t\in \{-2,2\}^{n}} \bigl( \mathcal{T}^{l+1}f\bigr) (x_{1}+tx_{2})- \sum _{p_{2}=0}^{n}\sum _{p_{1}=0}^{n-p_{2}}4^{n-p_{1}-p_{2}}(-6)^{p _{1}}24^{p_{2}} \bigl(\mathcal{T}^{l+1}f\bigr) \bigl(\mathcal{N}_{(p_{1},p_{2})} ^{n} \bigr) \Biggr\Vert \\ &\quad =\frac{1}{2^{4ns}} \Biggl\Vert \sum_{t\in \{-2,2\}^{n}} \bigl(\mathcal{T}^{l}f\bigr) \bigl(2^{s}(x _{1}+tx_{2})\bigr)-\sum_{p_{2}=0}^{n} \sum_{p_{1}=0}^{n-p_{2}}4^{n-p_{1}-p _{2}}(-6)^{p_{1}}24^{p_{2}} \bigl(\mathcal{T}^{l}f\bigr) \bigl(2^{s}\mathcal{N} _{(p_{1},p_{2})}^{n} \bigr) \Biggr\Vert \\ &\quad =\frac{1}{2^{4ns}} \bigl\Vert \varGamma \bigl(\mathcal{T}^{l}f \bigr) \bigl(2^{s}x_{1},2^{s}x _{2}\bigr) \bigr\Vert \leq \biggl(\frac{1}{2^{4ns}} \biggr)^{l+1} \varphi \bigl(2^{s(l+1)}x _{1},2^{s(l+1)}x_{2} \bigr) \end{aligned}$$ for all \(x_{1},x_{2}\in V^{n}\). Letting \(l\rightarrow \infty \) in (3.9) and applying (3.2), we obtain that \(\varGamma \mathfrak{Q}(x_{1},x_{2})=0\) for all \(x_{1},x_{2}\in V^{n}\). So the mapping \(\mathfrak{Q}\) satisfies (2.2) and thus is multiquartic. This finishes the proof. □ Let A be a nonempty set, let \((X,d)\) bea metric space, let \(\psi \in \mathbb{R}_{+}^{A^{n}}\), and let \(\mathcal{F}_{1}\), \(\mathcal{F}_{2}\) be operators mapping a nonempty set \(D\subset X^{A}\) into \(X^{A^{n}}\). We say that the operator equation $$\begin{aligned} \mathcal{F}_{1}\varphi (a_{1}, \ldots,a_{n})=\mathcal{F}_{2}\varphi (a _{1},\ldots,a_{n}) \end{aligned}$$ is ψ-hyperstable if every \(\varphi _{0}\in D\) satisfying the inequality $$ d\bigl(\mathcal{F}_{1}\varphi _{0}(a_{1}, \ldots,a_{n}),\mathcal{F}_{2} \varphi _{0}(a_{1},\ldots,a_{n})\bigr)\leq \psi (a_{1},\ldots,a_{n}), \quad a_{1}, \ldots,a_{n}\in A, $$ fulfils (3.10); this definition is introduced in [27]. Under some conditions, the functional equation (2.2) can be hyperstable as the following corollary shows. Corollary 3.5 Let \(\delta >0\). Suppose that \(\chi _{kj}>0\) for \(k\in \{1,2\}\) and \(j\in \{1,\ldots,n\}\) fulfill \(\sum_{k=1}^{2}\sum_{j=1}^{n}\chi _{kj} \neq 4n\). Let V be a normed space, and let W be a Banach space. If \(f:V^{n} \longrightarrow W\) satisfies approximately \(\prod_{k=1}^{2} \prod_{j=1}^{n} \Vert x_{kj} \Vert ^{\chi _{kj}}\delta \)-even-zero conditions, then it is multiquartic. 
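Let us sketch why Corollary 3.5 follows from Theorem 3.4 under the stated assumption \(\sum_{k=1}^{2}\sum_{j=1}^{n}\chi _{kj}\neq 4n\). With \(\varphi (x_{1},x_{2})=\delta \prod_{k=1}^{2}\prod_{j=1}^{n} \Vert x_{kj} \Vert ^{\chi _{kj}}\), we have \(\varphi (0,x)=0\) for all \(x\in V^{n}\), since each factor \(\Vert x_{1j} \Vert ^{\chi _{1j}}\) is evaluated at zero; hence the control function (3.3) satisfies
$$ \varPhi (x)=\frac{1}{2^{n+2n(s+1)}}\sum_{l=0}^{\infty } \biggl(\frac{1}{2^{4ns}} \biggr)^{l}\varphi \bigl(0,2^{sl+\frac{s-1}{2}}x \bigr)=0 \quad \bigl(x\in V^{n}\bigr), $$
while condition (3.2) holds for a suitable choice of \(s\in \{-1,1\}\) (namely \(s=1\) if \(\sum_{k,j}\chi _{kj}<4n\) and \(s=-1\) if \(\sum_{k,j}\chi _{kj}>4n\)). Then (3.4) forces \(f=\mathfrak{Q}\), so f itself is multiquartic.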
In the following corollaries, which are direct consequences of Theorem 3.4, we show that the functional equation (2.2) is stable. Since the proofs are routine, we include them without proofs. Let \(\lambda \in \mathbb{R}\) with \(\lambda \neq 4n\). Let V be a normed space, and let W be a Banach space. If \(f:V^{n} \longrightarrow W\) satisfies approximately \(\sum_{k=1}^{2}\sum_{j=1}^{n} \Vert x_{kj} \Vert ^{ \lambda }\)-even-zero conditions, then there exists a unique multiquartic mapping \(\mathfrak{Q}:V^{n} \longrightarrow W\) such that $$ \bigl\Vert f(x)-\mathfrak{Q}(x) \bigr\Vert \leq \textstyle\begin{cases} \frac{2^{4n}}{2^{5n}(2^{4n}-2^{\lambda })}\sum_{j=1}^{n} \Vert x_{1j} \Vert ^{\lambda }, & \lambda < 4n, \\ \frac{1}{2^{n}(2^{\lambda}-2^{4n})}\sum_{j=1}^{n} \Vert x_{1j} \Vert ^{\lambda }, & \lambda > 4n, \end{cases} $$ for all \(x=x_{1}\in V^{n}\). Let \(\delta >0\). Let V be a normed space, and let W be a Banach space. If \(f:V^{n} \longrightarrow W\) satisfies approximately δ-even-zero conditions, then there exists a unique multiquartic mapping \(\mathfrak{Q}:V^{n} \longrightarrow W\) such that $$ \bigl\Vert f(x)-\mathfrak{Q}(x) \bigr\Vert \leq \frac{2^{4n}}{2^{5n}(2^{4n}-1)}\delta $$ We have applied an alternative fixed point method to prove the Hyers–Ulam stability for the multiquartic functional equations in the normed spaces, and we have proved that under some mild conditions, every approximately multiquartic mapping is a multiquartic mapping. Ulam, S.M.: Problems in Modern Mathematics. Science Editions. Wiley, New York (1964) MATH Google Scholar Hyers, D.H.: On the stability of the linear functional equation. Proc. Natl. Acad. Sci. USA 27, 222–224 (1941) Aoki, T.: On the stability of the linear transformation in Banach spaces. J. Math. Soc. Jpn. 2, 64–66 (1950) Rassias, T.M.: On the stability of the linear mapping in Banach spaces. Proc. Am. Math. Soc. 72, 297–300 (1978) Găvruţa, P.: A generalization of the Hyers–Ulam–Rassias stability of approximately additive mappings. J. Math. Anal. Appl. 184, 431–436 (1994) Rassias, J.M.: On approximately of approximately linear mappings by linear mappings. J. Funct. Anal. 46, 126–130 (1982) Ciepliński, K.: Generalized stability of multi-additive mappings. Appl. Math. Lett. 23, 1291–1294 (2010) Ciepliński, K.: On the generalized Hyers–Ulam stability of multi-quadratic mappings. Comput. Math. Appl. 62, 3418–3426 (2011) Zhao, X., Yang, X., Pang, C.T.: Solution and stability of the multiquadratic functional equation. Abstr. Appl. Anal. 2013, Article ID 415053 (2013) MathSciNet MATH Google Scholar Brzdȩk, J., Ciepliński, K.: Remarks on the Hyers–Ulam stability of some systems of functional equations. Appl. Math. Comput. 219, 4096–4105 (2012) Jun, K., Kim, H.: The generalized Hyers–Ulam–Rassias stability of a cubic functional equation. J. Math. Anal. Appl. 274, 267–278 (2002) Bodaghi, A., Shojaee, B.: On an equation characterizing multi-cubic mappings and its stability and hyperstability. arXiv:1907.09378v1 Bodaghi, A.: Intuitionistic fuzzy stability of the generalized forms of cubic and quartic functional equations. J. Intell. Fuzzy Syst. 30, 2309–2317 (2016) Bodaghi, A., Moosavi, S.M., Rahimi, H.: The generalized cubic functional equation and the stability of cubic Jordan ∗-derivations. Ann. Univ. Ferrara 59, 235–250 (2013) Eghbali, N., Rassias, J.M., Taheri, M.: On the stability of a k-cubic functional equation in intuitionistic fuzzy n-normed spaces. Results Math. 
70, 233–248 (2016) Eskandani, Z., Rassias, J.M.: Stability of general A-cubic functional equations in modular spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 112, 425–435 (2018). https://doi.org/10.1007/s13398-017-0388-5 MathSciNet Article MATH Google Scholar Jun, K., Kim, H.: On the Hyers–Ulam–Rassias stability of a general cubic functional equation. Math. Inequal. Appl. 6, 289–302 (2003) Rassias, J.M.: Solution of the Ulam stability problem for cubic mappings. Glas. Mat. Ser. III 36, 63–72 (2001) Rassias, J.M.: Solution of the Ulam stability problem for quartic mappings. Glas. Mat. Ser. III 34, 243–252 (1999) Bodaghi, A.: Stability of a quartic functional equation. Sci. World J. 2014, Article ID 752146 (2014). https://doi.org/10.1155/2014/752146 Kang, D.: On the stability of generalized quartic mappings in quasi-β-normed spaces. J. Inequal. Appl. 2010, Article ID 198098 (2010). https://doi.org/10.1155/2010/198098 Baker, J.A.: The stability of certain functional equations. Proc. Am. Math. Soc. 112, 729–732 (1991) Bahyrycz, A., Ciepliński, K., Olko, J.: On an equation characterizing multi-additive-quadratic mappings and its Hyers–Ulam stability. Appl. Math. Comput. 265, 448–455 (2015) Bahyrycz, A., Ciepliński, K., Olko, J.: On an equation characterizing multi-Cauchy–Jensen mappings and its Hyers–Ulam stability. Acta Math. Sci. Ser. B Engl. Ed. 35, 1349–1358 (2015) Bahyrycz, A., Ciepliński, K., Olko, J.: On Hyers–Ulam stability of two functional equations in non-Archimedean spaces. J. Fixed Point Theory Appl. 18, 433–444 (2016) Brzdȩk, J., Chudziak, J., Páles, Z.: A fixed point approach to stability of functional equations. Nonlinear Anal. 74, 6728–6732 (2011) Brzdȩk, J., Ciepliński, K.: Hyperstability and superstability. Abstr. Appl. Anal. 2013, Article ID 401756 (2013) The authors sincerely thank the anonymous reviewer for her/his careful reading, constructive comments, and fruitful suggestions that substantially improved the manuscript. Department of Mathematics, Garmsar Branch, Islamic Azad University, Garmsar, Iran Abasalt Bodaghi Research Institute for Natural Sciences, Hanyang University, Seoul, Korea Choonkil Park School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa Oluwatosin T. Mewomo All authors conceived of the study, participated in its design and coordination, drafted the manuscript, participated in the sequence alignment, and read and approved the final manuscript. Correspondence to Choonkil Park. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Bodaghi, A., Park, C. & Mewomo, O.T. Multiquartic functional equations. Adv Differ Equ 2019, 312 (2019). https://doi.org/10.1186/s13662-019-2255-5 Banach space Multiquartic mapping Hyers–Ulam stability Fixed point method
Solving the electron and muon \(g-2\) anomalies in \(Z'\) models Arushi Bodas1, Rupert Coy ORCID: orcid.org/0000-0002-6789-44382 & Simon J. D. King3 The European Physical Journal C volume 81, Article number: 1065 (2021) Cite this article A preprint version of the article is available at arXiv. We consider simultaneous explanations of the electron and muon \(g-2\) anomalies through a single \(Z'\) of a \(U(1)'\) extension to the Standard Model (SM). We first perform a model-independent analysis of the viable flavour-dependent \(Z'\) couplings to leptons, which are subject to various strict experimental constraints. We show that only a narrow region of parameter space with an MeV-scale \(Z'\) can account for the two anomalies. Following the conclusions of this analysis, we then explore the ability of different classes of \(Z'\) models to realise these couplings, including the SM\(+U(1)'\), the N-Higgs Doublet Model\(+U(1)'\), and a Froggatt–Nielsen style scenario. In each case, the necessary combination of couplings cannot be obtained, owing to additional relations between the \(Z'\) couplings to charged leptons and neutrinos induced by the gauge structure, and to the stringency of neutrino scattering bounds. Hence, we conclude that no \(U(1)'\) extension can resolve both anomalies unless other new fields are also introduced. While most of our study assumes the Caesium \((g-2)_e\) measurement, our findings in fact also hold in the case of the Rubidium measurement, despite the tension between the two. The excellent agreement between the Standard Model (SM) and experimental observations makes the persisting anomalies all the more interesting. One long-standing discrepancy between theory and experiment is that of the anomalous magnetic dipole moment of the muon, \(a_{\mu } \equiv (g-2)_{\mu }/2\), which has recently been updated to a \(4.2\sigma \) tension with the SM [1,2,3],Footnote 1 $$\begin{aligned} \Delta a_{\mu } \equiv a_\mu ^\mathrm{exp} - a_\mu ^\mathrm{SM} = (2.51 \pm 0.59) \times 10^{-9} \, . \end{aligned}$$ Further data from the ongoing Muon g-2 experiment at Fermilab is expected to reduce the uncertainty by a factor of four [5], and the future J-PARC experiment forecasts similar precision [6], both of which should clarify the status of this disagreement. To add to the puzzle, an anomaly emerged in the electron sector due to (a) an improved measurement of fine-structure constant, \(\alpha _{\mathrm{em}}\), using Caesium atoms [7], from which the value of \((g-2)_e\) may be extracted, and (b) an updated theoretical calculation [8]. This yielded a discrepancy in the electron anomalous magnetic moment of $$\begin{aligned} \Delta a_e^{\text {Cs}} \equiv a_e^\mathrm{exp}(\mathrm{Cs}) - a_e^\text {SM} = (-8.7 \pm 3.6) \times 10^{-13} \, , \end{aligned}$$ which constitutes a \(2.4 \sigma \) tension with the SM [9]. Notably, this has the opposite sign to the muon anomaly, Eq. (1). Recently, however, a new measurement of the fine-structure constant using Rubidium atoms gave [10] $$\begin{aligned} \Delta a_e^{\text {Rb}} \equiv a_e^\mathrm{exp }(\mathrm{Rb}) - a_e^\text {SM} = (4.8 \pm 3.0) \times 10^{-13} \, . \end{aligned}$$ This is a milder anomaly, the discrepancy between experiment and SM being only 1.6\(\sigma \), and it is in the same direction as the muon anomaly. Remarkably, the Caesium and Rubidium measurements of \(\alpha _{\mathrm{em}}\) disagree by more than \(5\sigma \), therefore it is difficult to obtain a consistent picture of \(a_e^\text {exp}\). 
Given this uncertain status quo, in this paper we choose to focus predominantly on the earlier Caesium result, Eq. (2), and only discuss the Rubidium result in Sect. 5 (which, however, is the first \(Z'\) analysis of this new experimental situation, to the best of our knowledge). The presence of dual anomalies in the electron and muon sectors motivates an exploration of new physics models that could simultaneously explain both. Moreover, the relative size and sign of these anomalies poses an interesting theoretical challenge. Let us consider these issues. Firstly, the opposite signs of \(\Delta a_\mu \) and \(\Delta a_e^{\text {Cs}}\) (from now on we will drop the superscript) immediately excludes all new physics models whose contribution to the magnetic dipole moment of charged leptons has a fixed sign. The dark photon [11], for instance, generates \(\Delta a_{e,\mu } > 0\), and therefore cannot satisfy the dual anomalies. Secondly, the contribution from flavour-universal new physics to \((g-2)\) is generally expected to be proportional to the mass or mass squared of the lepton (see e.g. [12, 13]), whereas from Eqs. (1) and (2) we find $$\begin{aligned} \frac{m_e^2}{m_\mu ^2} \ll \left| \frac{\Delta a_e}{\Delta a_{\mu }} \right| \sim 3.5 \times 10^{-4} \ll \frac{m_e}{m_\mu } \, . \end{aligned}$$ These considerations, along with numerous low scale constraints discussed below, lead to significant model-building obstacles. So far, various attempts have been made to explain the anomalies, with different solutions relying on the introduction of new scalars, SUSY, leptoquarks, vector-like fermions, or other BSM mechanisms, see e.g. [9, 14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46]. In this paper, we study a rather unexplored possibility that a (light) \(Z'\) boson with flavour-dependent lepton couplings accounts for both anomalies. A new gauge boson of a \(U'(1)\) symmetry is a well-motivated candidate for many BSM models. It has long been considered a possible explanation of the \((g-2)_\mu \) anomaly [47] (see also e.g. [48,49,50,51,52]), thus it seems important to investigate if a \(U(1)'\) extension of the SM can at the same time also resolve the \((g-2)_e\) anomaly. One immediate advantage of \(Z'\) models is that it is possible to generate positive or negative contributions to the magnetic moment simply by adjusting the relative size of its vector and axial couplings to fermions, as will be shown below. We focus on the \(Z'\) in mass range \(m_e< m_{Z'} < m_\mu \), which is a natural consequence of various experimental bounds (more on this in Sects. 2.2 and 3). A \(Z'\) in the MeV mass range has been of interest (see e.g. [53,54,55,56,57,58]) due to hints of a new 17 MeV boson to explain anomalies in nuclear transitions observed by the Atomki collaboration, both in Beryllium [59], and more recently Helium [60]. Models with MeV-scale \(Z'\) also have the capacity to generate \(\Delta N_\text {eff} \simeq 0.2\) in the early Universe [61], thereby somewhat ameliorating the Hubble tension [62]. The question then is whether the scenario survives the wealth of sensitive experiments, in particular for \(m_{Z'} \sim {\mathcal {O}}(\text {MeV})\). To answer this, we first perform a model-independent analysis to identify regions in the parameter space of \(Z'\) models that can successfully explain both the \((g-2)\) anomalies. 
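As a quick numerical illustration of Eq. (4), the hierarchy can be checked with a few lines; the sketch below uses only the central values of Eqs. (1) and (2) and approximate lepton masses.

```python
# Rough check of the hierarchy in Eq. (4), using central values only.
m_e, m_mu = 0.511, 105.66          # lepton masses in MeV (approximate)
delta_a_mu = 2.51e-9               # Eq. (1), central value
delta_a_e = -8.7e-13               # Eq. (2), Caesium, central value

print(f"m_e^2/m_mu^2  = {(m_e / m_mu) ** 2:.2e}")             # ~2.3e-5
print(f"|da_e/da_mu|  = {abs(delta_a_e / delta_a_mu):.2e}")   # ~3.5e-4
print(f"m_e/m_mu      = {m_e / m_mu:.2e}")                    # ~4.8e-3
```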
This to our knowledge is the first study of this scenario in such a general and model-independent way, although a specific \(Z'\) model was previously studied in the context of the dual \((g-2)\) anomalies and found not to work [63]. Note that we are focusing on the minimal scenario where the additional contribution to the anomalous magnetic moments comes solely from the \(Z'\), which is different from some of the other models studied in literature that include a \(Z'\) plus other new fields (e.g. [39, 44, 63]). The conclusions from our model-independent analysis serve as a powerful tool in checking the viability of various specific \(Z'\) models, and we hope that it will be useful for more complex model-building. The layout is as follows: Sect. 2 introduces our conventions for the effective \(Z'\) couplings and potential origins of these couplings. We study experimental constraints on these couplings in Sect. 3, summarising our findings in Figs. 2 and 3. In light of the array of experiments probing light vector bosons in the near future, we discuss the discovery potential of such a \(Z'\) in Sect. 3.3. Equipped with the model-independent analysis, in Sect. 4 we consider several models and the challenges they face. We demonstrate that some of the simplest and most common classes of \(U(1)'\) extensions of the SM cannot explain the two anomalies simultaneously. Finally, in Sect. 5 we address the Rubidium \((g-2)_e\) anomaly and study the capacity of a \(Z'\) model to explain it in conjunction with the \((g-2)_\mu \) anomaly. Formalism of a light \(Z'\) Effective \(Z'\) couplings In the most general framework, a new \(Z'\) with family-dependent charged lepton couplings leads to flavour violation. However, in this paper we assume that the charged lepton Yukawa matrix and the matrix of charged lepton \(Z'\) couplings are simultaneously diagonalisable and therefore the \(Z'\) has only lepton-flavour conserving couplings. Various flavour models predict such scenarios (see, for instance, [64]) and in this way we avoid stringent limits on flavour-violation, such as from \(\mu \rightarrow e \gamma \) [65]. Flavour-conserving couplings of fermions to the \(Z'\) can be described through \({\mathcal {L}} = - Z'_\mu J^\mu _{Z'}\), with gauge current, $$\begin{aligned} J_{Z'} ^\mu = \sum _f {\bar{\psi }}_f \gamma ^\mu ( C_{L f}P_L + C_{R f} P_R) \psi _f \, . \end{aligned}$$ Rewriting the charged lepton interactions in terms of vector and axial couplings, \(C_{V(A)~f} = (C_{R f}\pm C_{L f})/2\), gives $$\begin{aligned} {\mathcal {L}} \supset- & {} \sum \limits _{\alpha = e,\mu ,\tau } \left[ \overline{\ell _\alpha } \gamma ^\mu \left( C_{V\alpha } + C_{A \alpha } \gamma _5 \right) \ell _\alpha \nonumber \right. \\+ & {} \left. C_{\nu \alpha } \overline{\nu _\alpha } \gamma ^\mu P_L \nu _\alpha \right] Z'_\mu ~. \end{aligned}$$ It is typically a simple exercise to derive these effective couplings for a given model. For now we assume that the different effective couplings are unrelated. In models with no extra fermions, there are three different contributions to the couplings of SM fermions to the \(Z'\) arising from a \(U(1)'\) gauge group. These are: Charge assignment of the fermion under the \(U(1)'\) (flavour dependent). 
Gauge-Kinetic Mixing (GKM) arising from the Lagrangian term \({\mathcal {L}} ^\text {GKM} = -\frac{\varepsilon }{2} B'_{\mu \nu } X'^{\mu \nu }\), where \(B'_{\mu \nu }\) and \(X'_{\mu \nu }\) are the field strength tensors of hypercharge and \(U(1)'\), respectively (flavour universal). \(Z-Z'\) mass mixing, which is generated if the SM Higgs sector is charged under the \(U(1)'\) (flavour universal). The combination of these three contributions can generate variety of vector and axial couplings. As explained in the introduction, in this work we are concerned with exploring the possibility that a single \(Z'\) accounts for the \((g-2)_{e,\mu }\) discrepancies. We will firstly survey the parameter space in a model-independent way in terms of the effective lepton-\(Z'\) couplings defined in Eg. (6). The conclusions from this analysis are then used in Sects. 4 and 5 to study whether these couplings can be realised in a few specific classes of \(Z'\) models. The one-loop \(Z'\) contribution to the anomalous magnetic moment of a charged lepton Contribution to the charged lepton anomalous magnetic moment The \(Z'\) modifies the magnetic moment of a charged lepton via the one-loop diagram in Fig. 1. In the notation of Eq. (6), the contribution for a charged lepton of flavour \(\alpha \) is [66] $$\begin{aligned} \Delta a_\alpha= & {} \frac{m_\alpha ^2}{4\pi ^2 m_{Z'}^2} \left( C_{V\alpha }^2 \int _0^1 \frac{x^2 (1-x)}{1 - x + x^2 m_\alpha ^2/m_{Z'}^2} dx\nonumber \right. \\&\left. - C_{A\alpha }^2 \int _0^1 \frac{x (1-x) (4 - x) + 2 x^3 m_\alpha ^2/ m_{Z'}^2}{1 - x + x^2 m_\alpha ^2/m_{Z'}^2} dx \right) \, . \nonumber \\ \end{aligned}$$ In the limits \(m_\alpha \ll m_{Z'}\) and \(m_\alpha \gg m_{Z'}\), this simplifies to $$\begin{aligned} \Delta a_\alpha \simeq {\left\{ \begin{array}{ll} m_\alpha ^2 \left( C_{V\alpha }^2 - 5 C_{A\alpha }^2 \right) /(12\pi ^2 m_{Z'}^2)~, &{} m_\alpha \ll m_{Z'} \, , \\ (m_{Z'}^2 C_{V\alpha }^2 - 2m_\alpha ^2 C_{A\alpha }^2)/(8\pi ^2 m_{Z'}^2) ~, &{} m_\alpha \gg m_{Z'} \, . \end{array}\right. } \nonumber \\ \end{aligned}$$ We see that the way to achieve correct signs for the contributions to muon and electron anomalies (\(\Delta a_e < 0\) and \(\Delta a_\mu >0\)) is with a non-zero axial coupling for the electron (\(C_{Ae}\)) and vector coupling for the muon (\(C_{V\mu }\)). We remark that it is impossible to satisfy both the anomalies simultaneously if we demand flavour universality, i.e. \(C_{Ve} = C_{V \mu }\) and \(C_{Ae} = C_{A \mu }\). This is straightforward to see from Eq. (8) when \(m_{Z'} < m_e\) or \(m_{Z'} > m_\mu \). For the remaining case of \(m_e< m_{Z'} < m_{\mu }\), solving the anomalies demands \(|C_{Ae}| > rsim 0.45 \, |C_{Ve}|\) and \(|C_{V\mu }| \gg |C_{A\mu }|\), which is inconsistent with flavour universality.Footnote 2 This is precisely why we consider models with flavour-dependent \(Z'\) couplings in this paper. We may now make some broad arguments about preferred \(m_{Z'}\) values. In the case of a light \(Z'\) with \(m_{Z'}\ll m_e\), even the smallest effective couplings required to explain the anomalies, accomplished by setting \(C_{Ve}=C_{A \mu }=0\), lead to orders of magnitude between \(C_{V \mu }\) and \(C_{A e}\), which could only be accounted for by either an orders of magnitude difference in their charges under the \(U(1)'\) or a very fine-tuned cancellation of the flavour-dependent part of \(C_{Ae}\) against the flavour-universal contribution. We will see in Sect. 
3 that such a light \(Z'\) with couplings sufficiently large that it satisfies the anomalies is in any case excluded by cosmological constraints. Therefore, we will focus on \(m_{Z'} > m_e\). For the heavy regime, i.e. \(m_{Z'} \gg m_\mu \), two arguments follow. Firstly, considering the muon sector, in the region \(2m_{\mu } < m_{Z'} \le 10 \text { GeV}\), the new vector boson is excluded by BaBar from its decay into two muons [67], while for 5 GeV \(\le m_{Z'} \le 70\) GeV it is similarly excluded by CMS [68]. Turning to the electron sector, we note that for \(m_{Z'} > rsim 10 \) GeV, the axial coupling to electrons required to satisfy the anomaly in electron sector is \(|C_{Ae}| > rsim 0.1\). With such large couplings to electrons, any GeV-scale object would have likely showed a signal at previous colliders, such as SLAC RF linac, LEP and LHC runs. A heavy \(Z'\) solution to the anomalies therefore seems improbable given these considerations. Finally, in the intermediate range, \(m_e< m_{Z'}< m_\mu \), the values of \(C_{Ae}\) and \(C_{V\mu }\) required to explain the two anomalies are of a similar order of magnitude, which lends this mass range to potentially more natural, i.e. less fine-tuned, solutions and so we will focus on this regime in the remainder of this paper. Model-independent analysis of constraints on \(Z'\) couplings The effective couplings introduced in Eq. (6) are subject to a wide variety of constraints, which we shall now discuss. In general, the \(Z'\) could couple to all SM fermions, and indeed there are some rather stringent bounds on \(Z'\) couplings to quarks. However, we will focus on \(Z'\) interactions with electrons and muons, those being the critical ones for the explanation of the \((g-2)_{e,\mu }\) anomalies. Since lepton doublets contain both charged leptons and neutrinos, non-zero effective couplings to charged leptons generally imply effective couplings to neutrinos, which have their own experimental constraints. This will be borne out in the example models considered in Sect. 4.Footnote 3 For a given explicit model, there may be many additional constraints. These can arise in several different ways. Firstly, as mentioned just above, the \(Z'\) may also couple to the tau or to quarks. Bounds on \(Z'\) couplings to light quarks are discussed for instance in [53, 54, 56,57,58]. Secondly, \(Z-Z'\) mixing leads to a shift in Z boson couplings, which have been very precisely measured at LEP [69], as well as other electroweak-scale parameters. While there may be such model-dependent bounds, the goal of this section is to study the viability or otherwise of a \(Z'\) solution to the two anomalies based on leptonic \(Z'\) couplings alone. The plethora of experimental constraints are described below, with our results summarised in Figs. 2 and 3 . Couplings to electrons We first outline the most important limits on the effective couplings of the \(Z'\) to electrons (\(C_{Ve}\), \(C_{Ae}\)) and electron neutrinos (\(C_{\nu e}\)). Cosmological and astrophysical bounds MeV-scale states with even very small interactions with electrons or neutrinos (effective couplings as tiny as \(|C| \sim 10^{-9}\)) can remain in thermal contact with the SM plasma during Big Bang Nucleosynthesis (BBN) and thereby significantly alter early universe cosmology. Bounds on the masses of electrophilic and neutrinophilic vector bosons from various cosmological probes were calculated in [70]. Combining BBN and Planck data, they found at 95.4% CL that an electrophilic \(Z'\), i.e. 
\(\sqrt{C_{Ve}^2 + C_{Ae}^2} \gg |C_{\nu e}|\), is constrained to have a mass of at least 9.8 MeV. From Eqs. (2) and (8), we see that for \(m_{Z'} > rsim \) MeV, the effective electron-\(Z'\) coupling should be \(|C_{Ae}| > 10^{-6}\), so the BBN bounds do apply here. The limit is slightly weakened for larger \(|C_{\nu e}|\), therefore we take \(m_{Z'} \ge 9.8\) MeV as a conservative lower bound on our \(Z'\) mass.Footnote 4 The \(Z'\) also affects various aspects of stellar evolution. The most critical of these for a MeV-scale \(Z'\) is white dwarf cooling [71]. The \(Z'\) mediates an additional source of cooling, via \(e^+ e^- \rightarrow Z' \rightarrow \nu {\bar{\nu }}\). Since the \(Z'\) mass under consideration is much larger than white dwarf temperatures, \(T_{WD} \sim 5\) keV, this can be treated as an effective four-fermion interaction at the scale \(T_{WD}\) with the \(Z'\) integrated out. Motivated by the good agreement between predictions and observations of white dwarf cooling, the benchmark set by [71] is that new sources of cooling should not exceed SM ones. We therefore impose $$\begin{aligned} \frac{\sqrt{(C_{Ve}^2 + C_{Ae}^2) (C^2_{\nu e}+C^2_{\nu \mu }+C^2_{\nu \tau })}}{m_{Z'}^2} \le G_F \, , \end{aligned}$$ as an approximate bound. When plotting this constraint in Fig. 2, we assume that only \(C_{Ve}\), \(C_{Ae}\) and \(C_{\nu e}\) are non-zero. Finally, we note that a \(Z'\) which couples to neutrinos can also be an additional source of energy loss for supernovae, if it is able to escape the supernova core. We followed the formalism in Appendix B of [61] and enforced that the additional energy loss due to the \(Z'\) is no greater than the energy loss in the SM during the first ten seconds of the supernova explosion. However, for a roughly MeV-scale \(Z'\) this observation only constrains a band of effective couplings \(10^{-12} \lesssim C_{\nu \alpha } \lesssim 10^{-7}\), which is much too small to be relevant for the anomalies. Collider and beam dump bounds A stringent limit on \(Z'\) interactions with electrons comes from the BaBar experiment, which searched for a dark photon, \(A'\), via \(e^+ e^- \rightarrow \gamma A'\) with \(A' \rightarrow e^+ e^-\). The results are reported in [72] and probe masses from 20 MeV up to 10.2 GeV. The bound on \(\varepsilon \), the kinetic mixing parameter in the dark photon model arising from the gauge-kinetic term \({\mathcal {L}}^\text {GKM} \supset -\frac{\varepsilon }{2}B'_{\mu \nu } X'^{\mu \nu }\) can be converted into a limit on \(\sqrt{C_{Ve}^2 + C_{Ae}^2}\). We neglect the statistical fluctuations in the BaBar bound (cf. Fig. 4 of [72]), opting conservatively to extrapolate from the most constraining points of the 90% confidence exclusion region and obtain our bound by interpolating between these. This constraint becomes mildly stronger with \(Z'\) mass, with for instance $$\begin{aligned} \sqrt{(C_{Ve}^2 + C_{Ae}^2) \text {BR}(Z' \rightarrow e^+ e^-)} \lesssim 3 (6) \times 10^{-4} \, , \end{aligned}$$ for \(m_{Z'} > rsim 40(20)\) MeV. For \(m_{Z'} < 2m_\mu \), the \(Z'\) is sufficiently light that it decays only to electrons and neutrinos. We will see that the couplings to electrons should generically be much larger than couplings to neutrinos, thus \(\text {BR}(Z' \rightarrow e^+ e^-) \approx 1\). The BaBar result alone rules out a vast region of parameter space. 
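A short numerical sketch of this interplay is given below. It is illustrative only: it uses the \(m_{e}\ll m_{Z'}\) limit of Eq. (8) with \(C_{Ve}=0\) (the least constrained choice) together with the central value of Eq. (2), and takes the flat \(3\times 10^{-4}\) BaBar bound of Eq. (10) to apply for \(m_{Z'}\gtrsim 40\) MeV.

```python
import math

# Axial coupling needed to reproduce the central (g-2)_e shift of Eq. (2)
# in the m_e << m_Z' limit of Eq. (8) with C_Ve = 0:
#   |Delta a_e| ~= 5 m_e^2 C_Ae^2 / (12 pi^2 m_Z'^2)
m_e = 0.511                # electron mass in MeV
delta_a_e = 8.7e-13        # |central value| of Eq. (2)
babar_limit = 3e-4         # approximate Eq. (10) bound for m_Z' >~ 40 MeV

for m_zp in (16, 20, 38, 40, 100):   # Z' masses in MeV
    c_ae = math.sqrt(12 * math.pi**2 * m_zp**2 * delta_a_e / (5 * m_e**2))
    tag = " <- in tension with BaBar" if (m_zp >= 40 and c_ae > babar_limit) else ""
    print(f"m_Z' = {m_zp:3d} MeV : |C_Ae| ~ {c_ae:.2e}{tag}")
```

The output reproduces the scaling \(|C_{Ae}|\simeq 9\times 10^{-6}\,(m_{Z'}/\text{MeV})\) quoted below and shows why masses above roughly 40 MeV run into the BaBar limit.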
The smallest axial \(Z'-e\) coupling required to satisfy the \((g-2)_e\) discrepancy is given by \(|C_{Ae}| \simeq 9\times 10^{-6} (m_{Z'}/\text {MeV})\), as can be seen from Eq. (8) by setting \(C_{Ve} = 0\). Then the BaBar bound \(|C_{Ae}| \lesssim 3\times 10^{-4}\) for \(m_{Z'} > rsim 40\) MeV rules out all solutions (notwithstanding some statistical fluctuations) with a \(Z'\) heavier than 40 MeV up to the largest mass probed by the experiment, 10.2 GeV. This limit only strengthens for \(C_{Ve} \ne 0\), since a larger \(C_{Ae}\) is then required to explain \((g-2)_e\), while at the same time \(C_{Ae}\) is more constrained because BaBar bounds the combination \(\sqrt{C_{Ve}^2 + C_{Ae}^2}\). The KLOE experiment also constrains the \(Z'\) coupling to electrons [73]. Although generally weaker than BaBar's limit, its exclusion region covers additional parameter space since the experiment probes masses as small as 5 MeV. For these low masses, the bound is around $$\begin{aligned} \sqrt{(C_{Ve}^2 + C_{Ae}^2) \text {BR}(Z' \rightarrow e^+ e^-)} \lesssim 6 \times 10^{-4} \, . \end{aligned}$$ Beam dump experiments probe the \(Z'\) couplings to electrons, since the \(Z'\) may be produced and detected via \(e^- + Z \rightarrow e^- + Z'[\rightarrow e^+ e^-]\), see e.g. [74]. The produced \(Z'\)s should therefore decay in the dump before they reach the detector. The best bound comes from NA64 [75], which sets limits on a \(Z'\) with masses between 1 MeV and 24 MeV. A further stringent bound on the parameter space comes from the precise measurement of parity-violating Møller scattering at SLAC [76]. For \(Z'\) masses below around 100 MeV, the bound is independent of \(m_{Z'}\) and yields [77] $$\begin{aligned} |C_{Ve} C_{Ae}| \lesssim 10^{-8} \, . \end{aligned}$$ As indicated above, a tiny \(C_{Ve}\) is ideal for explaining the \((g-2)_e\) anomaly while avoiding collider constraints with as small a value of \(\sqrt{C_{Ve}^2 + C_{Ae}^2}\) as possible. Taking \(C_{Ve}\) close to zero is clearly also an efficient way to evade this Møller scattering limit. Neutrino scattering bounds Very strong restrictions on the effective couplings come from measurements of neutrino-charged lepton scattering [55]. There have been many experiments testing neutrino interactions. Here we study the most relevant ones: TEXONO [78] Borexino [79], and CHARM-II [80]. These experiments are known to be among the most constraining in general (see e.g. [53, 55, 56, 58]), they cover a range of energies and different neutrino flavours. Let us first consider TEXONO. The typical energy transfer in a scattering event is \(\sim \sqrt{m_e T}\), where \(3 \text {~MeV} \le T \le 8 \) MeV is the electron recoil energy. With \(m_{Z'} > rsim 10\) MeV (as enforced by the limits from cosmology), we may safely make the assumption \(m_{Z'} \gg \sqrt{m_e T}\). The correction to the SM cross-section of anti-neutrino scattering is then $$\begin{aligned}&\frac{\sigma (\overline{\nu _e} e^- \rightarrow \overline{\nu _e} e^-)}{\sigma (\overline{\nu _e} e^- \rightarrow \overline{\nu _e} e^-)^\text {SM}} \simeq 1 \nonumber \\&\quad + \left( 2.07 C_{Ve} + 1.39 C_{Ae} \right) 10^{11} C_{\nu e} \left( \frac{\text {MeV}}{m_{Z'}} \right) ^2 \nonumber \\&\quad + \left( 1.37 C_{Ve}^2 + 2.62 C_{Ve} C_{Ae} + 1.64 C_{Ae}^2 \right) \left( 10^{11} C_{\nu e} \right) ^2\nonumber \\&\qquad \times \left( \frac{\text {MeV}}{m_{Z'}} \right) ^4 ~, \end{aligned}$$ following Ref. [55]. 
Comparing this with the TEXONO measurement, \(\sigma (\overline{\nu _e} e^- \rightarrow \overline{\nu _e} e^-)^\text {exp} = (1.08 \pm 0.26) \times \sigma (\overline{\nu _e} e^- \rightarrow \overline{\nu _e} e^-)^\text {SM}\) [78] puts extremely stringent bounds on the \(Z'\) effective couplings. Borexino measures the scattering of solar neutrinos. The electron neutrino survival probability is measured as \((51 \pm 7)\%\), while the experiment cannot distinguish muon and tau neutrinos. For simplicity, we therefore assume that 50% of the scattered neutrinos are electron neutrinos, with 25% each of muon and tau neutrinos.Footnote 5 Then the scattering rate induced by the \(Z'\) is $$\begin{aligned}&\frac{\sigma (\nu e^- \rightarrow \nu e^-)}{\sigma (\nu e^- \rightarrow \nu e^-)^\text {SM}} \simeq 1 + 10^{10} \left( \frac{\text {MeV}}{m_{Z'}} \right) ^2\nonumber \\&\qquad \times \Big [ C_{Ve} \left( 6.86 C_{\nu e} - 1.16 C_{\nu \mu } - 1.16 C_{\nu \tau } \right) \nonumber \\&\qquad + C_{Ae} \left( - 8.27 C_{\nu e} + 2.22 C_{\nu \mu } + 2.22 C_{\nu \tau } \right) \Big ] \nonumber \\&\qquad + 10^{21} \left( \frac{\text {MeV}}{m_{Z'}} \right) ^4 \left( 1.38 C_{Ae}^2 + 0.81 C_{Ve }^2 - 1.38 C_{Ve} C_{Ae} \right) \nonumber \\&\qquad \times \left( 2 C_{\nu e}^2 + C_{\nu \mu }^2 + C_{\nu \tau }^2 \right) ~. \end{aligned}$$ The cross-section including new physics should not deviate from the SM cross-section by more than about 10\(\%\) [79, 81], and this restriction sets a strong limit on the parameter space. Note from Eqs. (13) and (14) that \(C_{Ve}\) and \(C_{Ae}\) can both be large as long as the \(C_{\nu e,\mu ,\tau }\) are sufficiently small. Constraints on the mass and effective couplings of the \(Z'\) to the electron sector. In the upper plots, we have set \(C_{\nu e} = 0\) and taken \(|C_{Ve}|\) so that the contribution of the \(Z'\) loop induces a value of \(a_e\) which is a \(1\sigma \) below, and b \(1 \sigma \) above the experimental value. The shaded regions are excluded and in each plot a thin white strip of allowed parameter space remains, indicated by the red arrow. In the lower plots the allowed neutrino coupling is shown, with zero vector coupling \(C_{Ve}=0\), and axial coupling \(C_{Ae}>0\) such that the contribution of the \(Z'\) loop induces a value of \(a_e\) which is c \(1\sigma \) below, and d \(1 \sigma \) above the experimental value Analysis of constraints in the electron sector We now combine all the constraints discussed above to analyse viable parameter space for the explanation of \(\Delta a_{e}\). Our results are summarised in Fig. 2. In the plots, effective couplings to muons and taus are set to zero, which is relevant for the bounds from White Dwarfs and Borexino, cf. Eqs. (9) and (14) respectively. We first set the neutrino coupling, \(C_{\nu e}\), to zero in Fig. 2a, b to analyse limits solely on the electron couplings, and plot constraints on the axial electron coupling, \(C_{Ae}\), against the vector boson mass, \(m_{Z'}\). We focus on the axial coupling for two reasons. Firstly, \(C_{Ae}\) generates the \(\Delta a_e < 0\) required by experiment. Secondly, axial couplings provide various model-building challenges, see Sect. 4. For each value of \(C_{Ae}\) and \(m_{Z'}\), in Fig. 2a, b we choose \(|C_{Ve}|\) such that the \(Z'\) loop induces a correction to \(a_e\) that is respectively \(1\sigma \) less than and \(1\sigma \) greater than the discrepancy of Eq. (2), i.e. \(\Delta a_e = -12.3 \times 10^{-13}\) in Fig. 
2a and \(\Delta a_e = -5.1 \times 10^{-13}\) in Fig. 2b. This therefore displays the full range of \(Z'\) masses and axial couplings which can reduce the \(a_e\) anomaly to less than \(1\sigma \). Note that the signs of \(C_{Ae}\) and \(C_{Ve}\) are irrelevant for Fig. 2a, b since all constraints in these plots bound only their absolute values. The blue triangular regions in the lower right half of Fig. 2a, b correspond to values of \(C_{Ae}\) and \(m_{Z'}\) such that it is impossible to generate the desired deviation in \(a_e\), regardless of the value of \(C_{Ve}\). The other shaded regions are excluded by the experimental constraints discussed above. In both plots there is a thin white strip, bounded between the blue \(\Delta a_e\) and yellow Møller exclusion regions, which represents the allowed parameter space. The smallness of these allowed regions shows that even before any model-building considerations are taken into account, it is rather difficult to satisfy the \((g-2)_e\) anomaly while obeying the copious experimental constraints we have mentioned. In the white strips, \(|C_{Ve}| < |C_{Ae}|\). Indeed, since the parity-violating Møller scattering bound is \(|C_{Ve} C_{Ae}| \lesssim 10^{-8}\), we find that since \(|C_{Ae}| > rsim 1.3 \times 10^{-4}\) is needed to explain the anomalies, we therefore have \(C_{Ve} \lesssim 7.7 \times 10^{-5}\). As the vector coupling of \(Z'\) to electron is required to be smaller than the axial coupling, Eq. 8 for electron is well approximated by $$\begin{aligned} \Delta a_e \simeq - 5 \, m_e^2 \,C_{Ae}^2\,/ (12\pi ^2 \,m_{Z'}^2) \, . \end{aligned}$$ Following this conclusion, we set \(C_{Ve} = 0\) in Fig. 2c, d to explore the maximum allowed parameter space for the neutrino coupling, \(C_{\nu _e}\), against the mass \(m_{Z'}\). Similar to before, in the left plot, Fig. 2c, we set \(C_{Ae}\) such that \(a_e\) is \(1 \sigma \) below its experimental value, while in the right plot, Fig. 2d, we set \(a_e\) to \(1\sigma \) above it. These values of \(C_{Ae}\) can be taken from Eq. (15). We take \(C_{Ae} >0\) here: if instead \(C_{Ae}<0\), Fig. 2c, d look the same but reflected about the x-axis, since all bounds are invariant under \(C_{\nu e} \rightarrow - C_{\nu e}\) and \(C_{Ae} \rightarrow -C_{Ae}\) when \(C_{Ve} = 0\). The Texono and White Dwarf bounds become apparent in these two plots, however we note that both Møller scattering and KLOE constraints are satisfied when \(C_{Ve} = 0\) and therefore do not appear. A key conclusion from Fig. 2c, d is that NA64 and BaBar effectively restrict the mass range of a \(Z'\) which can satisfy the \(a_e\) anomaly to within \(1\sigma \) to be \(16 \, \mathrm {MeV} \, \lesssim m_{Z'} \lesssim 38 \, \mathrm {MeV}\). In both cases, Borexino gives the strongest constraint on neutrino coupling, \(|C_{\nu e}| \lesssim 10^{-5}\), more than an order of magnitude smaller than the required axial coupling. In summary, it is clear that a \(Z'\) solution to just the \((g-2)_e\) anomaly alone requires some specific ingredients. In particular, \(|C_{Ae}| \sim O(10^{-4})\) should be larger than \(|C_{Ve}|\) and at least an order of magnitude larger than \(|C_{\nu e}|\), while the mass of \(Z'\) is constrained to a small window 16 MeV \(\lesssim m_{Z'} \lesssim \) 38 MeV. Couplings to muons Now we turn to the bounds on the effective couplings of the \(Z'\) to muons and the muon neutrino, namely \(C_{V\mu }\), \(C_{A\mu }\), and \(C_{\nu \mu }\). 
There are fewer bounds on these than on the couplings to electrons for a few reasons. One is that electrons, being stable, are far easier to handle experimentally. Another reason is that we are led to probe \(Z'\) masses sufficiently light that they don't decay into muons. Then, as we have seen, various experiments constrain \(C_{Ve,Ae}\) from the absence of \(Z' \rightarrow e^+ e^-\) but cannot similarly constrain \(C_{V\mu ,A \mu }\) from the absence of the \(Z' \rightarrow \mu ^+ \mu ^-\) decays as these are already kinematically forbidden. Despite this, there remain various strict limits on \(Z'\) interactions with muons and muon neutrinos. When \(|C_{\nu \mu }| > rsim 10^{-9}\), bounds from BBN and Planck studied by [70] set a lower limit on the \(Z'\) mass, \(m_{Z'} > rsim 8.3\) MeV. This is similar to the limit on new electrophilic species outlined at the start of Sect. 3.1. For \(|C_{\nu \mu }| \lesssim 10^{-9}\), however, it may seem that lower \(m_{Z'}\) masses are in principle allowed. For a light \(Z'\) (\(m_{Z'} \ll m_\mu \)), a minimum vector coupling to muons of \(|C_{V\mu }| > rsim 4\times 10^{-4}\) is required to reduce the \((g-2)_\mu \) tension to within \(1\sigma \). This coupling ensures that the \(Z'\) was in thermal equilibrium with the SM at earlier times. After decoupling (at temperature \(T \sim m_\mu /10\)), a very light \(Z'\) would constitute an extra relativistic species contributing to the expansion rate of the Universe during neutrino decoupling and BBN, which took place between \(10 \,\mathrm{keV} \lesssim T \lesssim 2 \, \mathrm{MeV}\). In general, we therefore consider \(m_{Z'} > rsim 2\) MeV to avoid constraints from measurements of \(N_{\mathrm{eff}}\) and primordial element abundances. Additionally, we note that a study of energy loss in supernovae due to \(Z'-\mu \) interactions by [82] rules out a \(Z'\) with coupling \(|C_{V\mu }| > rsim 4\times 10^{-4}\) for masses less than \({\mathcal {O}}(100)\) eV.Footnote 6 Recall, however, that for \(m_{Z'} > rsim 100\) eV, the effective coupling required to explain the \((g-2)_{e}\) anomaly must be greater than \(10^{-9}\). With an interaction of this size, the BBN bound on a new electrophilic species dictates that \(m_{Z'}\) must be at least in the MeV range. We can therefore rule out the possibility of an extremely light \(Z'\) (i.e. \(m_{Z'} \ll \) MeV) being able to explain the two \(g-2\) anomalies. Its mass must consequently be at least 16 MeV, as we showed from the analysis of constraints on \(Z'\) couplings to the electron sector in the previous section. Several neutrino scattering experiments bound couplings to muons and muon neutrinos. The most stringent of these are Borexino and CHARM-II, introduced above. The Borexino result was given in Eq. (14). The mean (anti)neutrino energy in the CHARM-II experiment is much larger than the \(Z'\) masses we consider, with \(\langle E_\nu \rangle = 23.7\) GeV and \(\langle E_{{\bar{\nu }}} \rangle = 19.1\) GeV [80], therefore the approximation \(m_{Z'} \gg \sqrt{m_e T}\) which we used to obtain Eqs. (13) and (14) cannot be used. We apply the formalism in [55, 81] to obtain numerical results, which enter into Fig. 3 by enforcing that the shift in the neutrino scattering cross-section induced by the \(Z'\) is no greater than \(6\%\) [55]. We mention that some doubts on the CHARM-II analysis were presented in [56], however we do not enter into this discussion. 
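For reference, the minimum muon coupling quoted above can be reproduced with a one-line estimate. This is a sketch under the same approximations used in the text: the \(m_{\mu }\gg m_{Z'}\) limit of Eq. (8) with \(C_{A\mu }=0\), requiring the shift to reach the lower \(1\sigma \) edge of Eq. (1).

```python
import math

# Minimum |C_Vmu| to bring (g-2)_mu within 1 sigma of Eq. (1), using the
# m_mu >> m_Z' limit of Eq. (8) with C_Amu = 0:  Delta a_mu ~= C_Vmu^2 / (8 pi^2)
delta_a_mu_min = 2.51e-9 - 0.59e-9          # smallest shift still within 1 sigma
c_vmu_min = math.sqrt(8 * math.pi**2 * delta_a_mu_min)
print(f"|C_Vmu| >~ {c_vmu_min:.1e}")        # ~4e-4, as quoted in the text
```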
A \(Z'\) with couplings to muons and muon neutrinos also modifies the neutrino trident process, \(\nu _\mu N \rightarrow \nu _\mu \mu ^+ \mu ^- N\) [83]. Neglecting the coupling \(C_{A \mu }\), since \(|C_{A \mu }| \ll |C_{V \mu }|\) is necessary to explain the \((g-2)_\mu \) anomaly when \(m_{Z'} \lesssim m_\mu \) (see Eq. (8)), the trident cross-section including the \(Z'\) contribution is [83] $$\begin{aligned} \frac{\sigma _\text {Trident}}{\sigma _\text {Trident}^\text {SM}} \simeq 1 + 5.6 \times 10^5 \, C_{V\mu } C_{\nu \mu } + 1.3 \times 10^{11} \, C_{V \mu }^2 C_{\nu \mu }^2 \log \frac{m_\mu ^2}{m_{Z'}^2} ~. \end{aligned}$$ This can be compared with the CCFR measurement, \(\sigma ^\text {CCFR}/\sigma ^\text {SM} = 0.82 \pm 0.28\) [84], to give a constraint.
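To illustrate how this measurement turns into the bound shown later in Fig. 3a, here is a rough numerical sketch of our own (not the paper's code), assuming the logarithm in the formula above is a natural logarithm and taking the CCFR range at \(2\sigma \):

import math

C_numu, m_Zp, m_mu = 1e-5, 0.025, 0.10566  # example point: C_numu as in Fig. 3a, masses in GeV
log_term = math.log(m_mu**2 / m_Zp**2)

def trident_ratio(C_Vmu):
    return 1 + 5.6e5 * C_Vmu * C_numu + 1.3e11 * C_Vmu**2 * C_numu**2 * log_term

# Scan upwards until the ratio exceeds the CCFR measurement 0.82 + 2*0.28.
C_Vmu = 1e-4
while trident_ratio(C_Vmu) < 0.82 + 2 * 0.28:
    C_Vmu *= 1.01
print(f"CCFR (2 sigma) starts to exclude |C_Vmu| above ~{C_Vmu:.2g}")

# This lands at roughly 0.05, matching the upper edge of the allowed region quoted for Fig. 3a.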
Fig. 3 Constraints on the mass and effective couplings of the \(Z'\) in the muon sector. In (a) we have set \(C_{\nu \mu } = 10^{-5}\) and fixed \(C_{A\mu }\) so that the \(a_\mu \) anomaly is exactly satisfied, while in (b) we show contours of the values of \(|C_{A\mu }|\) this corresponds to, as a function of \(|C_{V\mu }|\) and \(m_{Z'}\). In the bottom two plots we focus on the neutrino couplings, setting \(C_{Ae} >0\) and \(C_{Ve} = 0\) such that the contribution of the \(Z'\) to \(a_e\) is (c) \(1\sigma \) below, and (d) \(1\sigma \) above the experimental value. See text for more details. Analysis of constraints in the muon sector We combine the results of the above constraints in Fig. 3. In Fig. 3a we plot the allowed regions for the effective vector coupling of the \(Z'\) to the muon, \(C_{V \mu }\), against the \(Z'\) mass, with relevant constraints overlaid, while setting \(C_{\nu \mu } = 10^{-5}\) and fixing \(C_{A\mu }\) such that \((g-2)_{\mu }\) is satisfied. The large white space shows the parameter space which can explain the \((g-2)_\mu \) anomaly and is not excluded. It reflects the fact that when a light \(Z'\) couples only to muons, there are few relevant constraints. \(|C_{V\mu }|\) can be as large as 0.05 in the mass region of interest, 16 MeV \(\lesssim m_{Z'} \lesssim 38\) MeV. In purple on the left of Fig. 3a, c, d is the mass bound \(m_{Z'} \gtrsim 8.3\) MeV from BBN and Planck data. The pink bounds at the top and bottom of (a) come from the CCFR neutrino trident measurement, see Eq. (16). The thin blue line for \(|C_{V\mu }| \lesssim 5\times 10^{-4}\) gives the region for which \(|C_{V\mu }|\) is too small to reduce the \((g-2)_\mu \) tension to within \(1\sigma \), even if \(C_{A\mu } = 0\). Then (b) shows contours of the necessary values of \(|C_{A\mu }|\) to exactly satisfy the anomaly, given \(m_{Z'}\) and \(C_{V\mu }\). This can be as large as \({\mathcal {O}}(10^{-3})\), but must always be at least a factor of a few smaller than \(|C_{V\mu }|\). The bounds on the \(Z'\) interaction with muon neutrinos are significantly stronger. Figure 3c and d show bounds on neutrino couplings from various experiments (we take \(C_{\nu e} = C_{\nu \tau } = 0\)). We must invoke couplings to electrons, since modifications to both neutrino scattering on electrons and white dwarf cooling necessarily depend on the \(Z'\) coupling to electrons, as does \(Z'\) detection at beam dumps. To be as minimal as possible, we take only \(C_{Ae}\) non-zero, assuming \(C_{Ve} = 0\). In (c) \(C_{Ae} > 0\) is set (as a function of \(m_{Z'}\)) such that \(a_e\) is \(1\sigma \) below its experimental value, while in (d) it is set such that \(a_e\) is instead \(1\sigma \) above. This allows us to see the full range of allowed \(C_{\nu \mu }\). Clearly, its absolute value cannot be much larger than \(\sim 2 \times 10^{-5}\), which justifies the choice of \(C_{\nu \mu }\) in plot (a). Taking \(C_{Ae} < 0\) instead would only flip (c) and (d) about the x-axis, since the neutrino scattering and white dwarf constraints are invariant under \(C_{Ae} \rightarrow -C_{Ae}\) and \(C_{\nu \mu } \rightarrow -C_{\nu \mu }\) when those are the only non-zero couplings. Future discovery potential Having surveyed the current limits, in this section we will discuss future experiments which could discover (or preclude) the low scale \(Z'\) explanation of \((g-2)_{e,\mu }\) by closing the allowed parameter space given in Figs. 2 and 3 (keeping in mind that these were generated assuming the Caesium \(a_e\) result). The place to start is with the magnetic dipole moment anomalies themselves. The two highly inconsistent measurements of \(\alpha _\text {em}\) (from which the value of \((g-2)_e\) is derived) made in Caesium [7] and Rubidium [10] atoms demand a third independent experiment to resolve the situation. It is indeed not even clear whether an anomaly exists. On top of this, the Muon g-2 and J-PARC experiments [5, 6] are expected to provide improved measurements of \(a_\mu \), which is particularly important given the recent debate about the SM prediction [4]. Beyond this, there are several future experiments which are expected to test the allowed \(Z'\) couplings to charged leptons. We note first of all that an improved measurement of parity-violating Møller scattering can never close the parameter space, as this bounds the combination \(|C_{Ve} C_{Ae}|\), which can always be satisfied by taking one of \(C_{Ve}\) or \(C_{Ae}\) to zero while the other (depending on the sign of the \(a_e\) anomaly) explains the discrepancy. Thus, we will not discuss future experiments in this area. To fully probe the available space, we require other bounds to be strengthened. Currently, the lower bound on the \(Z'\) mass, \(m_{Z'} \gtrsim 16\) MeV, is fixed from NA64's visible decay limits. Future NA64 results after the LHC's Long Shutdown 2 (LS2) should probe masses up to around 20 MeV [85]. For higher \(Z'\) masses, the most sensitive future experiment will be Belle-II [86], from the visible dark photon search mode \(A' \rightarrow \ell \ell \). With 50 ab\({}^{-1}\) luminosity (expected in 2025), the projected sensitivity is [87] $$\begin{aligned} \sqrt{(C_{Ve}^2 + C_{Ae} ^2) \text {BR}(Z' \rightarrow e^+ e^-)} \lesssim 9 \times 10^{-5} . \end{aligned}$$ An alternative experiment with similar sensitivity is MAGIX at MESA [88], which is also currently under construction and expecting results in the next few years. The combination of NA64 and Belle-II (or MAGIX) could entirely rule out or discover low scale \(Z'\) explanations of the current Caesium \((g-2)_e\) result. Beam dumps (e.g. FASER [89] and SHiP [90]) are also expected to play a role. This provides hope that a firm conclusion could be reached within the next few years. The MUonE experiment [91] will probe the product of couplings to electrons and to muons. In this way it is a unique test of a \(Z'\) which explains both anomalies, because it is required to have significant couplings to both leptons. The experiment is expected to cover a significant portion of the parameter space which remains open, see [92, 93]. Finally, we point out that while there are many dark photon experiments beyond those listed above, many do not directly test our framework. There are two reasons for this.
Firstly, we are concerned with the lepton-\(Z'\) couplings only, so experiments which involve production of the \(Z'\) through quarks are not applicable. This includes electron-proton scattering (such as DarkLight [94]), proton-proton scattering and pion decays (e.g. NA62 [95]). Secondly, we require visible (\(Z' \rightarrow ee\)) decays of the \(Z'\), which excludes the invisible-only experiments such as PADME [96], VEPP-3 [97], BDX [98] and LDMX [99]. Consequently, the available parameter space in Figs. 2 and 3, and hence the discovery potential, may only be fully probed by the small number of experiments which focus on vector bosons produced by leptons and which decay to \(e^+ e^-\). Viability of specific \(Z'\) models Having completed our model-independent analysis in Sect. 3, we now turn to specific realisations of \(Z'\) models. The ingredients for the simultaneous explanation of the \((g-2)\) anomalies with a single \(Z'\) are:
1. A light \(Z'\) in the mass range \(16 \; \mathrm{MeV} \lesssim m_{Z'} \lesssim 38 \; \mathrm{MeV}\).
2. Axial coupling of the \(Z'\) to electrons larger than the vector coupling: \(|C_{Ae}| \sim [1-3.2] \times 10^{-4}, \, |C_{Ve}| \lesssim 7.7\times 10^{-5}\).
3. Large vector coupling to muons, \(5 \times 10^{-4} < |C_{V\mu }| \lesssim 0.05\), and an axial coupling \(C_{A\mu }\) that is smaller by at least a factor of a few.
4. Tiny \(Z'\) couplings to neutrinos: \(|C_{\nu _e,\, \nu _\mu }| \lesssim 10^{-5}\).
We now attempt to realise this hierarchy of couplings in various classes of \(Z'\) models, each of which inevitably introduces additional relations between effective couplings. We will begin with the simplest case of just the SM extended by a \(U(1)'\). We will then move on to a scenario with additional Higgs doublets, and finally discuss the viability of a Froggatt–Nielsen style model, in which the gauge invariance of the charged lepton Yukawa interactions is relaxed. Note that in each case the dominant contribution to the shift in \((g-2)_{e,\mu }\) comes solely from the \(Z'\). Before commencing, we also remark that the cancellation of gauge anomalies is crucial for constructing a consistent theory. The \(U(1)'^3\) and \(U(1)'\text {grav}^2\) anomalies can always be satisfied by introducing additional chiral fermions which are charged under the \(U(1)'\) but sterile with respect to the SM (in fact one needs at most five [100]). The anomaly cancellation conditions involving SM groups are typically more challenging to satisfy. However, this section addresses the primary question of whether it is possible to generate the desired effective couplings, without delving into how to do so in an anomaly-free way. SM+\(U(1)'\) First consider a minimal \(Z'\) model, in which the SM is extended by a gauged \(U(1)'\) together with a scalar, S, charged under the \(U(1)'\), whose non-zero VEV, \(\langle S \rangle = v_S/\sqrt{2}\), spontaneously breaks the \(U(1)'\) symmetry. We note here that this unspecified \(U(1)'\) covers in particular the case of gauging combinations of electron, muon and tau number, i.e. \(U(1)_{x e + y\mu + z\tau }\) for some x, y, z. Let us establish the formalism, which will also be useful for the subsequent models.
In general there is mixing between \(U(1)_Y\) and \(U(1)'\), and the kinetic terms for the pair of U(1)s can be written as $$\begin{aligned} {\mathcal {L}}_\text {kin} \supset - \frac{1}{4} B'_{\mu \nu } B'^{\mu \nu } - \frac{1}{4} X'_{\mu \nu } X'^{\mu \nu } - \frac{\varepsilon }{2} B'_{\mu \nu } X'^{\mu \nu } ~, \end{aligned}$$ where \(X'_\mu \) is the gauge field associated with \(U(1)'\) and \(X'_{\mu \nu }\) is the corresponding field strength tensor. An appropriate rotation and rescaling of fields removes the mixing (see e.g. [101]), $$\begin{aligned} \begin{pmatrix} B'_{\mu }\\ X'_{\mu } \end{pmatrix} = \begin{pmatrix} 1 & \frac{-\varepsilon }{\sqrt{1-\varepsilon ^2}}\\ 0 & \frac{1}{\sqrt{1-\varepsilon ^2}} \end{pmatrix} \begin{pmatrix} B_{\mu }\\ X_{\mu } \end{pmatrix} \, , \end{aligned}$$ and leaves the couplings in the covariant derivative in the form, $$\begin{aligned} D_\mu = \partial _\mu + i g_1 Y B_\mu + i ({\tilde{g}} Y + g' z) X_\mu ~, \end{aligned}$$ where \(g'\) and \(g_1\) are the respective \(U(1)'\) and \(U(1)_Y\) gauge couplings, and Y and z are the respective charges of the field under \(U(1)_Y\) and \(U(1)'\). In the above, we have only kept terms that are leading order in the kinetic mixing parameter \(\varepsilon \), which is taken to be small. This gives \({\tilde{g}} \simeq -g_1 \varepsilon \). Breaking the EW and \(U(1)'\) symmetries and diagonalising the gauge boson mass matrix, we move into the basis of mass eigenstates, \(A_\mu \), \(Z_\mu \), and \(Z'_\mu \), using $$\begin{aligned} \begin{pmatrix} B_{\mu }\\ W_{\mu }^{3}\\ X_{\mu } \end{pmatrix} = \begin{pmatrix} c_w & -s_w c_{\phi } & s_w s_{\phi }\\ s_w & c_w c_{\phi } & -c_w s_{\phi }\\ 0 & s_{\phi } & c_{\phi } \end{pmatrix} \begin{pmatrix} A_{\mu }\\ Z_{\mu }\\ Z'_{\mu } \end{pmatrix} \, , \end{aligned}$$ where \(w\) is the weak-mixing angle, \(\phi \) is the \(Z-Z'\) mixing angle, and s (c) denotes sine (cosine). This gauge boson mixing is given by $$\begin{aligned} \tan 2\phi \simeq \frac{2z'_H g' e/(s_w c_w)}{z'^2_H g'^2 + (2 z_S g' v_S/v)^2 - e^2/(s_w^2 c_w^2)} ~, \end{aligned}$$ where \(z'_H \simeq {\tilde{g}}/g' + 2 z_H\), and \(z_H\) (\(z_S\)) is the \(U(1)'\) charge of the Higgs (S). Finally, after outlining this procedure, we can write the effective couplings of SM fermions to the gauge boson mass eigenstates. We find that the effective couplings for charged leptons at leading order in \(g', {\tilde{g}}\) are $$\begin{aligned} C_{V\alpha } \simeq -{\tilde{g}} c_w^2 + \frac{g'}{2} ( z_H (4 s_w^2 - 1) + z_{L\alpha } + z_{R\alpha }) ~, \end{aligned}$$ $$\begin{aligned} C_{A\alpha } \simeq \frac{g'}{2} ( z_H + z_{R\alpha } - z_{L\alpha }) ~, \end{aligned}$$ where \(z_{L\alpha }\) (\(z_{R\alpha }\)) is the \(U(1)'\) charge of the lepton doublet (singlet), \(l_{L\alpha }\) (\(e_{R\alpha }\)). Here, from the \(U(1)'\) invariance of the SM charged lepton Yukawa couplings, \({\mathcal {L}} \supset - \overline{l_L} Y_e H e_R + h.c.\), we have that \(z_{L\alpha } = z_{R \alpha } + z_H\). Consequently, \(C_{A\alpha } = 0\) at leading order in \(g', {\tilde{g}}\), and therefore \(|C_{V\alpha }| \gg |C_{A\alpha }|\). Under this condition, the \(Z'\) with \(m_{Z'} > m_e\) always induces a positive shift in \(a_e\), cf. Eq. (8), which is the wrong direction for explaining the Caesium anomaly. The simplest \(U(1)'\) extension of the SM can therefore be ruled out as a possibility of resolving both \((g-2)_e\) and \((g-2)_\mu \) discrepancies.
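The step that kills the axial coupling can be checked mechanically. The following short SymPy sketch is our own illustration (the symbol names are ours, not the paper's): it substitutes the Yukawa gauge-invariance condition \(z_{L\alpha } = z_{R\alpha } + z_H\) into the leading-order expressions for \(C_{V\alpha }\) and \(C_{A\alpha }\) above.

import sympy as sp

# Symbols: gp = g', gt = g-tilde, sw = sin(theta_w); the z's are U(1)' charges.
gp, gt, sw, zH, zL, zR = sp.symbols("gp gt sw zH zL zR", real=True)

C_V = -gt * (1 - sw**2) + gp / 2 * (zH * (4 * sw**2 - 1) + zL + zR)  # Eq. (23), with c_w^2 = 1 - s_w^2
C_A = gp / 2 * (zH + zR - zL)                                        # Eq. (24)

yukawa_invariance = {zL: zR + zH}
print(sp.simplify(C_A.subs(yukawa_invariance)))  # prints 0: no axial coupling at this order
print(sp.simplify(C_V.subs(yukawa_invariance)))  # generically non-zero vector coupling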
NHDM\(+U(1)'\) We have seen that extending the SM by just a gauge \(U(1)'\) and a scalar does not give us enough freedom to arrange \(|C_{Ae}| \gtrsim |C_{Ve}|\). There are several options to circumvent this problem, including a) introducing new fermions which mix with the SM ones, b) extending the Higgs sector, or c) removing the gauge invariance of the Yukawas via a Froggatt–Nielsen [102] type set-up. In the case of option (a), our analysis is not valid because loops involving the new fermions could also contribute to \((g-2)_{e,\mu }\).Footnote 7 Here we consider option (b). This was previously explored e.g. in the context of the Atomki anomaly [103]. In Sect. 4.3 we will consider option (c). Let us take the type-I 2HDM, wherein all SM fermions couple to the same Higgs doublet, \(H_2\). This choice will not be important for the following discussion, since we are concerned only with the lepton couplings, thus our discussion is general. We can also generalise to the case of many Higgs doublets, see for instance Appendix A of [55]. The key point is that this set-up modifies Eq. (24) and therefore permits non-negligible axial couplings. The kinetic mixing between \(U(1)_Y\) and \(U(1)'\) and the subsequent modification of covariant derivatives is as described in Eqs. (18)–(20). The neutral gauge boson mass mixing is modified by the presence of two Higgs fields, \(H_{1,2}\), with \(U(1)'\) charges \(z_{1,2}\) and VEVs \(\langle H_{1,2} \rangle = (0~,~ v_{1,2}/\sqrt{2})^T\), where \(v_1 = v \cos \beta \) and \(v_2 = v \sin \beta \). Then the mixing angle is given by $$\begin{aligned} \tan 2 \phi \simeq \frac{2z_H g' e/(s_w c_w)}{z_{H^2} g'^2 + (2 z_S g' v_S/v)^2 - e^2/(s_w^2 c_w^2)} ~, \end{aligned}$$ $$\begin{aligned} z_H = z_1' c_\beta ^2 + z_2' s_\beta ^2 ~, \qquad z_{H^2} = z_1'^2 c_\beta ^2 + z_2'^2 s_\beta ^2 ~, \end{aligned}$$ with \(z_j' = {\tilde{g}}/g' + 2 z_j\) for \(j = 1,2\). Note that in the limit \(\beta \rightarrow 0 ~(\pi /2)\), i.e. when only \(v_1\) (\(v_2\)) is non-zero, we recover the result of Eq. (22) up to \(z_H \rightarrow z_1 ~(z_2)\). Accounting for the kinetic and mass mixing, the effective couplings for charged leptons and neutrinos at leading order in \(g', {\tilde{g}}\) are $$\begin{aligned} C_{V\alpha } \simeq z_{L \alpha } g' - c_w^2 {\tilde{g}} - \frac{g'}{2} \left[ (1 - 4s_w^2) c_\beta ^2 z_1 + ( 1 + s_\beta ^2 - 4s_w^2 s_\beta ^2 ) z_2 \right] \end{aligned}$$ $$\begin{aligned} C_{A \alpha } \simeq \frac{(z_1 - z_2)}{2} c_\beta ^2 g' \end{aligned}$$ $$\begin{aligned} C_{\nu \alpha } \simeq - \frac{{\tilde{g}}}{2} + g' \left( z_{L \alpha } + \frac{z_H}{2}\right) ~, \end{aligned}$$ using that the \(U(1)'\)-invariance of the charged lepton Yukawa couplings demands \(z_{R\alpha } = z_{L\alpha } - z_2\). We see that \(C_{A \alpha }\) can be non-zero when \(z_1 \ne z_2\), and that it is flavour-universal. \(C_{V\alpha }\) and \(C_{\nu \alpha }\), on the other hand, are flavour-dependent. However, both depend linearly on \(z_{L \alpha }\), so that $$\begin{aligned} C_{Ve} - C_{V \mu } = g' (z_{Le} - z_{L\mu }) = C_{\nu e} - C_{\nu \mu } ~. \end{aligned}$$ Consequently, there are not six independent effective couplings \(C_{V\alpha }, C_{A \alpha }, C_{\nu \alpha }\) for \(\alpha = e,\mu \), but rather only four are independent. Given this, it is in fact simple to argue that this class of models cannot simultaneously explain the \((g-2)_{e,\mu }\) anomalies. Our model-independent analysis in Sect.
3 established that due to the stringency of the bounds from neutrino scattering experiments, the effective neutrino couplings must be tiny: \(C_{\nu e}, C_{\nu \mu } \lesssim 10^{-5}\), cf. Figs. 2 and 3. From Eq. (30), this implies that we need \(|C_{Ve} - C_{V\mu }| \lesssim 10^{-5}\). However, it is apparent from points 2 and 3 of the summary list at the beginning of this section that \(|C_{Ve} - C_{V\mu }| \gtrsim 4 \times 10^{-4}\). Clearly, this framework is not successful. In the simplest \(U(1)'\) extension of the SM, only the \((g-2)_\mu \) anomaly could be resolved as it was impossible to generate significant axial couplings of the \(Z'\). Introducing additional Higgs fields enables large axial couplings, so that either the \((g-2)_e\) or the \((g-2)_\mu \) anomaly may be explained. However, the correlations between different effective couplings and the strength of the bounds on neutrino couplings conspire to preclude an explanation of both anomalies at the same time. Froggatt–Nielsen model A second way to generate sizeable axial couplings, as is necessary to explain the Caesium \((g-2)_e\) anomaly, is by considering a Froggatt–Nielsen type model [102]. In this set-up, we modify the charged lepton Yukawa interactions to some effective interactions of the form, $$\begin{aligned} {\mathcal {L}} \supset - \frac{\lambda _{\alpha \beta }}{\Lambda ^{n_{\alpha \beta }}} \overline{l_{L\alpha }} H e_{R\beta } \varphi ^{n_{\alpha \beta }} + h.c. \end{aligned}$$ Here \(\lambda _{\alpha \beta } = \lambda _\alpha \delta _{\alpha \beta }\) is a diagonal matrix of couplings (in the charged lepton mass basis), \(\varphi \) is a flavon, \(n_{\alpha \beta } = n_\alpha \delta _{\alpha \beta }\) is a diagonal matrix whose entries are determined by the \(U(1)'\) charges of the flavon and the SM leptons, and \(\Lambda \) is the scale of some unspecified UV physics. Then the SM charged lepton Yukawa couplings are recovered at the non-zero VEV of the flavon, i.e. \(y_\alpha = \lambda _{\alpha } (\langle \varphi \rangle /\Lambda )^{n_{\alpha }}\). More complicated set-ups can also be written down (e.g. the clockwork model of [58]), and there may be more than one flavon. The introduction of flavons removes the relation between the \(U(1)'\) charges of the SM leptons and the Higgs. This permits non-vanishing axial \(Z'\) couplings at leading order in \(g'\), unlike in the standard SM\(+U(1)'\) scenario (recall Eq. (24)). From Eq. (31) we have \(z_H + z_{R\alpha } - z_{L\alpha } = -n_\alpha \), and we have the freedom to treat \(n_\alpha \) as a free, family-dependent parameter.Footnote 8 In all other respects, the formalism of this model follows that of the \(U(1)'\) extension outlined in Sect. 4.1. Equations (22)–(24) still hold, while the effective neutrino couplings are given by $$\begin{aligned} C_{\nu \alpha } \simeq g' (z_H + z_{L\alpha }) \, , \end{aligned}$$ at leading order in \(g',{\tilde{g}}\). This model was previously studied in [57] to explain the Atomki Beryllium anomaly [59], another instance in which unsuppressed \(C_{Ae}\) is required. Combining Eqs. (23), (24) and (32) gives $$\begin{aligned} C_{Ve} - C_{Ae} - C_{\nu e} = C_{V\mu } - C_{A\mu } - C_{\nu \mu } \, . \end{aligned}$$ This is a generalisation of Eq. (30) to the case of non-universal \(C_{A}\). However, we see from Fig. 2a and Eq.
(7) that in order to reduce the \((g-2)_{e,\mu }\) anomalies to \(< 1 \sigma \) while satisfying all experimental constraints, we require \(|C_{Ae}| \lesssim 3.2 \times 10^{-4}\) and \(|C_{V\mu }| \gtrsim 4.8 \times 10^{-4}\), given \(m_{Z'} \gtrsim 16\) MeV, the lower bound on the \(Z'\) mass obtained in Sect. 3. However, we have established that \(|C_{\nu e}|,|C_{\nu \mu }| \ll 10^{-4}\), while the Møller scattering bound gives \(|C_{Ve}| \lesssim 3 \times 10^{-5}\) for \(|C_{Ae}| \sim 3 \times 10^{-4}\). Finally, a sizeable \(|C_{A\mu }|\) demands an even larger \(|C_{V\mu }|\), since \(\Delta a_\mu \propto (m_{Z'}^2 C_{V\mu }^2 - m_\mu ^2 C_{A \mu }^2)\) to a good approximation, see Eqs. (7) and (8). Thus, \(|C_{Ve} - C_{Ae} - C_{\nu e}| < 3.5 \times 10^{-4}\), while \(|C_{V\mu } - C_{A\mu } - C_{\nu \mu }| > 4.6\times 10^{-4}\) in the mass range of interest, and hence there is no combination of effective couplings fulfilling Eq. (33) such that both anomalies are explained to within \(1\sigma \) while all experimental constraints are satisfied. It is notable that even in such a general theoretical setting, the \(Z'\) explanation is unsuccessful. \(Z'\) solutions considering the Rubidium measurement We have thus far considered only the \((g-2)_e\) anomaly from the Caesium measurement, Eq. (2). Significantly, this has the opposite sign to the muon anomaly. In Sect. 4, it was shown that the combination of the different signs and sizes of the anomalies, along with the copious experimental constraints, makes it impossible to construct a model which can satisfy both at the same time. One might suppose that it is easier to explain two anomalies which have the same sign, which is exactly the situation if one considers instead the recent Rubidium result for \(a_e\), cf. Eq. (3). Here we consider this possibility. This has not, to the best of our knowledge, previously been studied. Our model-independent analysis of muon sector constraints in Sect. 3.2 still applies. The conclusions of the study of bounds on electron and electron neutrino couplings in Sect. 3.1 are no longer valid; however, Eqs. (9)–(14) all still hold. Let us immediately turn to the most general class of models considered in the previous section, the Froggatt–Nielsen scenario. The SM\(+U(1)'\) and NHDM\(+U(1)'\) models are indeed specific cases of this set-up. The key feature of this model is the relation between electron and muon couplings given in Eq. (33), which is itself a consequence of gauge invariance. We note that the magnitude of the Rubidium anomaly is similar to that of the Caesium anomaly, with \(|\Delta a_e^\text {Rb}/\Delta a_e^\text {Cs}| = 0.55\), and therefore the former demands \(C_{Ve} \sim {\mathcal {O}}(10^{-4})\), just as the latter had required \(C_{Ae} \sim {\mathcal {O}}(10^{-4})\). Moreover, the electron neutrino couplings are still constrained to be \(\lesssim {\mathcal {O}}(10^{-5})\), with the bounds of Fig. 2c, d modified by an order-one factor because the relevant bounds are similar or identical under \(C_{Ae} \rightarrow C_{Ve}\), see Eqs. (9), (13) and (14). Fig. 4 Constraints on the mass and effective coupling of the \(Z'\), given the Rubidium measurement of \(a_e\). The bounds are on \(C_{Ve}\), where we have taken \(C_{Ae} = -10^{-8}/C_{Ve}\), saturating the Møller scattering constraint (12), and fixed \(C_{V\mu }\) by Eq. (33), while setting all other effective couplings to zero. The blue (purple) stripe corresponds to where \(a_e\) (\(a_\mu \)) is satisfied to within \(1\sigma \).
The pink region in the bottom left, green region in top left, and orange region in top right are ruled out by NA64, KLOE and BaBar, respectively. As can be seen, the region simultaneously satisfying \(a_e\) and \(a_\mu \) is excluded by BaBar. The most minimal case is non-zero \(C_{Ve}\) and \(C_{V\mu }\) only, in which case Eq. (33) dictates \(C_{Ve} = C_{V\mu }\). Such a \(Z'\) only satisfies both anomalies to within \(1\sigma \) for \(m_{Z'} \gtrsim 25\) MeV and \(C_{Ve,\mu } \gtrsim 5 \times 10^{-4}\), which however is excluded by BaBar.Footnote 9 For smaller \(m_{Z'}\), keeping the same effective coupling \(C_V\) causes either too small a shift in \(a_\mu \) or too great a shift in \(a_e\). Generalising to include \(C_{Ae}\) and \(C_{A\mu }\), the former is restricted by the Møller scattering bound, \(|C_{Ve} C_{Ae}| \lesssim 10^{-8}\). In Fig. 4, we plot 1\(\sigma \) regions which explain the two anomalies individually along with the various constraints, setting \(C_{Ae} = -10^{-8}/C_{Ve}\) to saturate the Møller scattering limit, and \(C_{A\mu } = C_{\nu e} = C_{\nu \mu } = 0\). Equation (33) dictates that \(C_{V\mu } = C_{Ve} + 10^{-8}/C_{Ve}\). As can be seen, while either anomaly can be satisfied by itself, the pair cannot simultaneously be explained. Various alternatives do not ameliorate the problem. Smaller \(|C_{Ae}|\) would in turn require that \(|C_{Ve}|\) is smaller in order to satisfy \(\Delta a_e^\text {Rb}\), thereby lowering the blue band in Fig. 4. Making \(C_{Ae} > 0\) would decrease \(C_{V\mu }\) as a function of \(C_{Ve}\), thus raising the purple \(a_\mu \) band. Finally, larger \(|C_{A\mu }|\) would mean a larger \(C_{V\mu }\) is needed to explain \(\Delta a_\mu \), which also raises the purple band. For these reasons, the general Froggatt–Nielsen scenario cannot solve the anomalies. Since this set-up covers the SM\(+U(1)'\) and NHDM\(+U(1)'\) models, those scenarios are similarly unsuccessful. We see that three main challenges in explaining both \(\Delta a_e^\text {Cs}\) and \(\Delta a_\mu \) – namely (i) the relative magnitudes of the anomalies, (ii) the stringent experimental limits on the different effective couplings, particularly \(C_{\nu _e}\) and \(C_{\nu _\mu }\), and (iii) the relations between the effective couplings due to gauge invariance – are also present in the attempt to explain \(\Delta a_e^\text {Rb}\) and \(\Delta a_\mu \) simultaneously. Thus, although the different signs of the muon and Caesium electron anomalies are an interesting feature, it seems that this is not the main obstacle for \(Z'\) model-building. Since the sizes of the anomalies are fixed by experiment and the limits on effective couplings will only get stronger with time (see the summary in Sect. 3.3), in order to solve both anomalies one must find ways to get around Eq. (33) in particular. Possible ways to do this, such as introducing extra fermions, are beyond the scope of this paper. Conclusions There is a mixed experimental picture for the anomalous magnetic moment of charged leptons. While the status of \((g-2)_\mu \) has been solidified by the recent Fermilab measurement, there is considerably more uncertainty surrounding \((g-2)_e\). We have explored in detail the possibility of simultaneously explaining both the (Caesium) \((g-2)_e\) and \((g-2)_\mu \) anomalies with a single low scale \(Z'\). After introducing the formalism in Sect. 2, in Sect. 3 we found the experimentally allowed region which can explain the anomalies to within \(1\sigma \).
The permitted \(Z'\) mass range is \(16\text { MeV} \lesssim m_{Z'} \lesssim 38 \text { MeV}\), and one requires some sizeable effective couplings, \(\{ 5\times 10^{-4} \lesssim |C_{V \mu }| \,\lesssim 0.05 ;~ 1.3 \times 10^{-4} \lesssim |C_{A e}| \lesssim 3.2 \times 10^{-4} \}\), and some smaller ones, \(\{ |C_{V e}| \lesssim 7.7 \times 10^{-5};~ |C_{\nu e}, C_{\nu \mu }| \lesssim 10^{-5} \}\), while \(|C_{A \mu }|\) can be anywhere between 0 and \(8 \times 10^{-3}\) depending on the size of \(|C_{V\mu }|\). The key findings are summarised in Figs. 2 and 3. Our survey of the parameter space was very general, in particular allowing for both vector and axial \(Z'\) couplings and for flavour non-universality. Turning to the range of experiments planned for the near future, we argued in Sect. 3.3 that the entirety of the allowed parameter space for solving the \((g-2)_e\) anomaly could be tested soon, in particular by NA64, Belle-II and MAGIX. This analysis provides a very specific target for model-building. In Sect. 4 we explored three classes of models of increasing complexity with the aim of generating a combination of couplings which lies within the allowed parameter space. In the simplest extension, a SM+\(U(1)'\) model, the gauge invariance of the Yukawa couplings prevented the significant \(C_{Ae}\) required to resolve the \((g-2)_e\) anomaly. Going further to a 2HDM+\(U(1)'\) (which can be generalised to a NHDM+\(U(1)'\) scenario), we showed that the smallness of the neutrino couplings required to evade constraints from neutrino scattering experiments demands nearly universal effective vector couplings, and that this, in addition to the universal effective axial couplings of the model, does not permit an explanation of both the anomalies at the same time. Finally, we turned to a Froggatt–Nielsen inspired scenario which permitted greater freedom by removing the gauge invariance of the Yukawas. The relation between the couplings to left-handed charged leptons and their respective neutrinos imposed by the gauge structure, in conjunction with the very stringent bounds on the neutrino couplings in particular, again conspired to forbid a solution to the two anomalies. We then demonstrated in Sect. 5 that such models also cannot simultaneously satisfy the \((g-2)_\mu \) and Rubidium \((g-2)_e\) anomalies. This was notable since those two anomalies have the same sign. Thus, factors such as the strong individual limits on \(Z'\) couplings (studied in Sect. 3) and the relative size of the two anomalies are more challenging to overcome in \(Z'\) models than their relative sign. To our knowledge, this was the first study of a \(Z'\) explanation for the muon anomaly with the newest \((g-2)_e\) result. The conclusion of our analysis is that \(Z'\)-only explanations of the dual \((g-2)_e\) and \((g-2)_\mu \) anomalies are ruled out. Additional new fields must be introduced in order to explain the two discrepancies. This is true both for the Caesium and Rubidium values of \(a_e\). If the \((g-2)_\mu \) anomaly, measured both at Brookhaven and Fermilab, is borne out by the future J-PARC experiment, and (either) \((g-2)_e\) discrepancy persists, the SM will be faced by two disagreements between theory and experiment of a similar nature but a different magnitude and possibly sign. In principle, a MeV-scale vector boson can have couplings to leptons which resolve both while satisfying the plethora of existing experimental constraints. 
It appears, however, that additional fields contributing to leptonic magnetic moment(s) are also required. Given the promising experimental outlook over the next decade, we should know soon whether or not there does exist such a \(Z'\), and associated dark sector, with the ability to resolve the \((g-2)_{e,\mu }\) anomalies. This manuscript has no associated data or the data will not be deposited. [Authors' comment: The article has no associated data, it is self-contained.] We note that the significance of this anomaly has been questioned by a lattice QCD calculation of the leading-order hadronic vacuum polarisation contribution to \(a_\mu ^\text {SM}\) [4]. It is interesting to note that if the anomalies had the opposite sign, i.e. had the experimental data required \(\Delta a_e > 0\) and \(\Delta a_\mu < 0\), then \(C_{Ve} = C_{V \mu }\) and \(C_{Ae} = C_{A \mu }\) could have given a viable solution. Thus, neither the different sign nor the unusual ratio of the anomalies necessarily implies that flavour non-universal physics must be present. The dark photon is a notable counter-example, with interactions solely generated through gauge-kinetic mixing, where \(C_{V\alpha } \ne 0\) while \(C_{A\alpha } = C_{\nu \alpha } = 0\). However, the dark photon does not successfully explain the \((g-2)_{e,\mu }\) anomalies because, as is easily seen from Eq. (7), \(C_{Ae} = 0\) implies \(\Delta a_e \ge 0\). This bound can in principle be avoided by sufficiently light \(m_{Z'}\). When \(m_{Z'} \lesssim 100\) eV, the \((g-2)_e\) anomaly can be explained with \(|C_{Ve}|, |C_{Ae}|, |C_{\nu e}| < 10^{-9}\). However, we will see in Sect. 3.2 that such a light \(Z'\) solution to the \((g-2)_\mu \) discrepancy is ruled out by similar cosmological considerations. This assumption has negligible bearing on our main results since \(C_{\nu \mu }\) is specifically probed by CHARM-II, as outlined below, and we are not concerned with \(C_{\nu \tau }\). In the models studied in [82], the \(Z'\) has interactions with both muons and muon neutrinos. However, at low masses \((\ll \) MeV) it is the \(Z'-\mu \) interactions with dominate the bounds, while the \(Z'-\nu _\mu \) interaction plays a negligible role. An attempt to explain both anomalies by introducing a heavy vector-like fourth family of leptons was made in [63] but was ultimately unsuccessful. In this framework we will not attempt to generate the observed charged lepton Yukawa couplings, but rather focus on whether the \((g-2)_{e,\mu }\) anomalies can be simultaneously explained The BaBar and NA64 bounds on \(|C_{Ae}|\) in Fig. 2a, b for \(C_{Ve} \simeq 0\) (i.e. along the diagonal) can be reinterpreted here as a bound on \(|C_{Ve}|\), since the experiments bound the combination \(C_{Ve}^2 + C_{Ae}^2\). Muon g-2 Collaboration, G.W. Bennett et al., Final report of the muon E821 anomalous magnetic moment measurement at BNL. Phys. Rev. D 73, 072003 (2006). arXiv:hep-ex/0602035 T. Aoyama et al., The anomalous magnetic moment of the muon in the Standard Model. Phys. Rep. 887, 1–166 (2020). arXiv:2006.04822 ADS Google Scholar Muon g-2 Collaboration, B. Abi et al., Measurement of the positive muon anomalous magnetic moment to 0.46 ppm. Phys. Rev. Lett. 126(14), 141801 (2021). arXiv:2104.03281 S. Borsanyi et al., Leading-order hadronic vacuum polarization contribution to the muon magnetic moment from lattice QCD. arXiv:2002.12347 Muon g-2 Collaboration, J. Grange et al., Muon (g-2) Technical Design Report. arXiv:1501.06858 M. 
Abe et al., A new approach for measuring the muon anomalous magnetic moment and electric dipole moment. PTEP 2019(5), 053C02 (2019). arXiv:1901.03047 R.H. Parker, C. Yu, W. Zhong, B. Estey, H. Müller, Measurement of the fine-structure constant as a test of the Standard Model. Science 360, 191 (2018). arXiv:1812.04130 ADS MathSciNet MATH Google Scholar T. Aoyama, T. Kinoshita, M. Nio, Revised and improved value of the QED tenth-order electron anomalous magnetic moment. Phys. Rev. D 97(3), 036001 (2018). arXiv:1712.06060 H. Davoudiasl, W.J. Marciano, Tale of two anomalies. Phys. Rev. D 98(7), 075011 (2018). arXiv:1806.10252 L. Morel, Z. Yao, P. Cladé, S. Guellati-Khélifa, Determination of the fine-structure constant with an accuracy of 81 parts per trillion. Nature 588(7836), 61–65 (2020) B. Holdom, Two U(1)'s and epsilon charge shifts. Phys. Lett. B 166, 196–198 (1986) A. Freitas, J. Lykken, S. Kell, S. Westhoff, Testing the muon g-2 anomaly at the LHC. JHEP 05, 145 (2014). arXiv:1402.7065 (Erratum: JHEP 09, 155 (2014)) I. Doršner, S. Fajfer, A. Greljo, J.F. Kamenik, N. Košnik, Physics of leptoquarks in precision experiments and at particle colliders. Phys. Rep. 641, 1–68 (2016). arXiv:1603.04993 ADS MathSciNet Google Scholar A. Crivellin, M. Hoferichter, P. Schmidt-Wellenburg, Combined explanations of \((g-2)_{\mu , e}\) and implications for a large muon EDM. Phys. Rev. D 98(11), 113002 (2018). arXiv:1807.11484 M. Endo, W. Yin, Explaining electron and muon \(g-2\) anomaly in SUSY without lepton-flavor mixings. JHEP 08, 122 (2019). arXiv:1906.08768 M. Badziak, K. Sakurai, Explanation of electron and muon g-2 anomalies in the MSSM. JHEP 10, 024 (2019). arXiv:1908.03607 G. Hiller, C. Hormigos-Feliu, D.F. Litim, T. Steudtner, Anomalous magnetic moments from asymptotic safety. Phys. Rev. D 102(7), 071901 (2020). arXiv:1910.14062 M. Bauer, M. Neubert, S. Renner, M. Schnubel, A. Thamm, Axionlike particles, lepton-flavor violation, and a new explanation of \(a_\mu \) and \(a_e\). Phys. Rev. Lett. 124(21), 211803 (2020). arXiv:1908.00008 M. Endo, S. Iguro, T. Kitahara, Probing \(e\mu \) flavor-violating ALP at Belle II. JHEP 06, 040 (2020). arXiv:2002.05948 G.F. Giudice, P. Paradisi, M. Passera, Testing new physics with the electron g-2. JHEP 11, 113 (2012). arXiv:1208.6583 J. Liu, C.E.M. Wagner, X.-P. Wang, A light complex scalar for the electron and muon anomalous magnetic moments. JHEP 03, 008 (2019). arXiv:1810.11028 B. Dutta, Y. Mimura, Electron \(g-2\) with flavor violation in MSSM. Phys. Lett. B 790, 563–567 (2019). arXiv:1811.10209 S. Gardner, X. Yan, Light scalars with lepton number to solve the \((g-2)_e\) anomaly. Phys. Rev. D 102(7), 075016 (2020). arXiv:1907.12571 C. Cornella, P. Paradisi, O. Sumensari, Hunting for ALPs with lepton flavor violation. JHEP 01, 158 (2020). arXiv:1911.06279 B. Dutta, S. Ghosh, T. Li, Explaining \((g-2)_{\mu , e}\), the KOTO anomaly and the MiniBooNE excess in an extended Higgs model with sterile neutrinos. Phys. Rev. D 102(5), 055017 (2020). arXiv:2006.01319 J.-L. Yang, T.-F. Feng, H.-B. Zhang, Electron and muon \((g-2)\) in the B-LSSM. J. Phys. G 47(5), 055004 (2020). arXiv:2003.09781 A. Crivellin, M. Hoferichter, Combined explanations of \((g-2)_\mu \), \((g-2)_e\) and implications for a large muon EDM, in an alpine lhc physics summit 2019 (SISSA, 2019), pp. 29–34. arXiv:1905.03789 I. Bigaran, R.R. Volkas, Getting chirality right: single scalar leptoquark solutions to the \((g-2)_{e,\mu }\) puzzle. Phys. Rev. D 102(7), 075037 (2020). arXiv:2002.12544 I. 
Doršner, S. Fajfer, S. Saad, \(\mu \rightarrow e \gamma \) selecting scalar leptoquark solutions for the \((g-2)_{e,\mu }\) puzzles. Phys. Rev. D 102(7), 075007 (2020). arXiv:2006.11624 F.J. Botella, F. Cornet-Gomez, M. Nebot, Electron and muon \(g-2\) anomalies in general flavour conserving two Higgs doublets models. Phys. Rev. D 102(3), 035023 (2020). arXiv:2006.01934 S. Jana, V.P. K, S. Saad, Resolving electron and muon \(g-2\) within the 2HDM. Phys. Rev. D 101(11), 115037 (2020). arXiv:2003.03386 X.-F. Han, T. Li, L. Wang, Y. Zhang, Simple interpretations of lepton anomalies in the lepton-specific inert two-Higgs-doublet model. Phys. Rev. D 99(9), 095034 (2019). arXiv:1812.02449 M. Abdullah, B. Dutta, S. Ghosh, T. Li, \((g-2)_{\mu , e}\) and the ANITA anomalous events in a three-loop neutrino mass model. Phys. Rev. D 100(11), 115006 (2019). arXiv:1907.08109 A.E. Cárcamo Hernández, Y. Hidalgo Velásquez, S. Kovalenko, H.N. Long, N.A. Pérez-Julve, V.V. Vien, Fermion spectrum and \(g-2\) anomalies in a low scale 3-3-1 model. arXiv:2002.07347 N. Haba, Y. Shimizu, T. Yamada, Muon and electron \(g - 2\) and the origin of the fermion mass hierarchy. PTEP 2020(9), 093B05 (2020). arXiv:2002.10230 MATH Google Scholar L. Calibbi, M.L. López-Ibáñez, A. Melis, O. Vives, Muon and electron \(g - 2\) and lepton masses in flavor models. JHEP 06, 087 (2020). arXiv:2003.06633 C. Arbeláez, R. Cepedello, R.M. Fonseca, M. Hirsch, \((g-2)\) anomalies and neutrino mass. Phys. Rev. D 102(7), 075005 (2020). arXiv:2007.11007 C.-H. Chen, T. Nomura, Electron and muon \(g-2\), radiative neutrino mass, and \(\ell ^{\prime } \rightarrow \ell \gamma \) in a \(U(1)_{e-\mu }\) model. Nucl. Phys. B 964, 115314 (2021). arXiv:2003.07638 C. Hati, J. Kriewald, J. Orloff, A.M. Teixeira, Anomalies in \(^8\)Be nuclear transitions and \((g-2)_{e,\mu }\): towards a minimal combined explanation. JHEP 07, 235 (2020). arXiv:2005.00028 S. Jana, P.K. Vishnu, W. Rodejohann, S. Saad, Dark matter assisted lepton anomalous magnetic moments and neutrino masses. Phys. Rev. D 102(7), 075003 (2020). arXiv:2008.02377 K.-F. Chen, C.-W. Chiang, K. Yagyu, An explanation for the muon and electron \(g - 2\) anomalies and dark matter. JHEP 09, 119 (2020). arXiv:2006.07929 E.J. Chun, T. Mondal, Explaining \(g-2\) anomalies in two Higgs doublet model with vector-like leptons. JHEP 11, 077 (2020). arXiv:2009.08314 S.-P. Li, X.-Q. Li, Y.-Y. Li, Y.-D. Yang, X. Zhang, Power-aligned 2HDM: a correlative perspective on \((g-2)_{e,\mu }\). JHEP 01, 034 (2021). arXiv:2010.02799 H. Banerjee, B. Dutta, S. Roy, Supersymmetric gauged U(1)\(_{L_{\mu }-L_{\tau }}\) model for electron and muon \((g-2)\) anomaly (2020). arXiv preprint arXiv:2011.05083 A.E.C. Hernández, S.F. King, H. Lee, Fermion mass hierarchies from vector-like families with an extended 2HDM and a possible explanation for the electron and muon anomalous magnetic moments. arXiv:2101.05819 L. Darmé, F. Giacchino, E. Nardi, M. Raggi, Invisible decays of axion-like particles: constraints and prospects (2020). arXiv preprint arXiv:2012.07894 M. Pospelov, Secluded U(1) below the weak scale. Phys. Rev. D 80, 095002 (2009). arXiv:0811.1030 H. Davoudiasl, H.-S. Lee, W.J. Marciano, Muon \(g-2\), rare kaon decays, and parity violation from dark bosons. Phys. Rev. D 89(9), 095006 (2014). arXiv:1402.3620 B. Allanach, F.S. Queiroz, A. Strumia, S. Sun, \(Z^{\prime }\) models for the LHCb and \(g-2\) muon anomalies. Phys. Rev. D 93(5), 055045 (2016). arXiv:1511.07447 (Erratum: Phys. Rev. D 95, 119902 (2017)) S. 
Raby, A. Trautner, Vectorlike chiral fourth family to explain muon anomalies. Phys. Rev. D 97(9), 095006 (2018). arXiv:1712.09360 A.E. Cárcamo Hernández, S. Kovalenko, R. Pasechnik, I. Schmidt, Phenomenology of an extended IDM with loop-generated fermion mass hierarchies. Eur. Phys. J. C 79(7), 610 (2019). arXiv:1901.09552 J. Kawamura, S. Raby, A. Trautner, Complete vectorlike fourth family and new U(1)' for muon anomalies. Phys. Rev. D 100(5), 055030 (2019). arXiv:1906.11297 J.L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T.M.P. Tait, P. Tanedo, Particle physics models for the 17 MeV anomaly in beryllium nuclear decays. Phys. Rev. D 95(3), 035017 (2017). arXiv:1608.03591 J. Kozaczuk, D.E. Morrissey, S.R. Stroberg, Light axial vector bosons, nuclear transitions, and the \(^8\)Be anomaly. Phys. Rev. D 95(11), 115024 (2017). arXiv:1612.01525 M. Lindner, F.S. Queiroz, W. Rodejohann, X.-J. Xu, Neutrino-electron scattering: general constraints on Z\(^{^{\prime }}\) and dark photon models. JHEP 05, 098 (2018). arXiv:1803.00060 M. Bauer, P. Foldenauer, J. Jaeckel, Hunting all the hidden photons. JHEP 18, 094 (2020). arXiv:1803.05466 L. Delle Rose, S. Khalil, S.J.D. King, S. Moretti, A.M. Thabt, Atomki anomaly in family-dependent \(U(1)^{\prime }\) extension of the standard model. Phys. Rev. D 99(5), 055022 (2019). arXiv:1811.07953 A. Smolkovič, M. Tammaro, J. Zupan, Anomaly free Froggatt–Nielsen models of flavor. JHEP 10, 188 (2019). arXiv:1907.10063 A.J. Krasznahorkay et al., Observation of anomalous internal pair creation in Be8: a possible indication of a light. Neutral Boson. Phys. Rev. Lett. 116(4), 042501 (2016). arXiv:1504.01527 A.J. Krasznahorkay et al., New evidence supporting the existence of the hypothetic X17 particle. arXiv:1910.10459 M. Escudero, D. Hooper, G. Krnjaic, M. Pierre, Cosmology with a very light L\(_{\mu }\) - L\(_{\tau }\) gauge boson. JHEP 03, 071 (2019). arXiv:1901.02010 MathSciNet Google Scholar J.L. Bernal, L. Verde, A.G. Riess, The trouble with \(H_0\). JCAP 10, 019 (2016). arXiv:1607.05617 A.E. Cárcamo Hernández, S.F. King, H. Lee, S.J. Rowley, Is it possible to explain the muon and electron \(g-2\) in a \(Z^{\prime }\) model? Phys. Rev. D 101(11), 115016 (2020). arXiv:1910.10734 J.C. Criado, F. Feruglio, S.J.D. King, Modular invariant models of lepton masses at levels 4 and 5. JHEP 02, 001 (2020). arXiv:1908.11867 MEG Collaboration, A.M. Baldini et al., Search for the lepton flavour violating decay \(\mu ^+ \rightarrow \rm e^+ \gamma \) with the full dataset of the MEG experiment. Eur. Phys. J. C 76(8), 434 (2016). arXiv:1605.05081 J.P. Leveille, The second order weak correction to (G-2) of the muon in arbitrary gauge models. Nucl. Phys. B 137, 63–76 (1978) BaBar Collaboration, J.P. Lees et al., Search for a muonic dark force at BABAR. Phys. Rev. D 94(1), 011102 (2016). arXiv:1606.03501 CMS Collaboration, A.M. Sirunyan et al., Search for an \(L_{\mu }-L_{\tau }\) gauge boson using Z\(\rightarrow 4\mu \) events in proton-proton collisions at \(\sqrt{s} =\) 13 TeV. Phys. Lett. B 792, 345–368 (2019). arXiv:1808.03684 ALEPH, DELPHI, L3, OPAL, SLD, LEP Electroweak Working Group, SLD Electroweak Group, SLD Heavy Flavour Group Collaboration, S. Schael et al., Precision electroweak measurements on the \(Z\) resonance. Phys. Rep. 427, 257–454 (2006). arXiv:hep-ex/0509008 N. Sabti, J. Alvey, M. Escudero, M. Fairbairn, D. Blas, Refined bounds on MeV-scale thermal dark sectors from BBN and the CMB. JCAP 01, 004 (2020). arXiv:1910.01649 H.K. Dreiner, J.-F. 
Fortin, J. Isern, L. Ubaldi, White dwarfs constrain dark forces. Phys. Rev. D 88, 043517 (2013). arXiv:1303.7232 BaBar Collaboration, J.P. Lees et al., Search for a Dark Photon in \(e^+e^-\) Collisions at BaBar. Phys. Rev. Lett. 113(20), 201801 (2014). arXiv:1406.2980 A. Anastasi et al., Limit on the production of a low-mass vector boson in \({\rm e}^{+}{\rm e}^{-} \rightarrow {\rm U}\gamma \), \({\rm U} \rightarrow {\rm e}^{+}{\rm e}^{-}\) with the KLOE experiment. Phys. Lett. B 750, 633–637 (2015). arXiv:1509.00740 J.D. Bjorken, R. Essig, P. Schuster, N. Toro, New fixed-target experiments to search for dark gauge forces. Phys. Rev. D 80, 075018 (2009). arXiv:0906.0580 NA64 Collaboration, D. Banerjee et al., Improved limits on a hypothetical X(16.7) boson and a dark photon decaying into \(e^+e^-\) pairs. Phys. Rev. D 101(7), 071101 (2020). arXiv:1912.11389 SLAC E158 Collaboration, P.L. Anthony et al., Precision measurement of the weak mixing angle in Moller scattering. Phys. Rev. Lett. 95, 081601 (2005). arXiv:hep-ex/0504049 Y. Kahn, G. Krnjaic, S. Mishra-Sharma, T.M.P. Tait, Light weakly coupled axial forces: models, constraints, and projections. JHEP 05, 002 (2017). arXiv:1609.09072 ADS MATH Google Scholar TEXONO Collaboration, M. Deniz et al., Measurement of Nu(e)-bar-electron scattering cross-section with a CsI(Tl) scintillating crystal array at the Kuo-Sheng nuclear power reactor. Phys. Rev. D 81, 072001 (2010). arXiv:0911.1597 G. Bellini et al., Precision measurement of the 7Be solar neutrino interaction rate in Borexino. Phys. Rev. Lett. 107, 141302 (2011). arXiv:1104.1816 CHARM-II Collaboration, P. Vilain et al., Measurement of differential cross-sections for muon-neutrino electron scattering. Phys. Lett. B 302, 351–355 (1993) R. Harnik, J. Kopp, P.A.N. Machado, Exploring nu signals in dark matter detectors. JCAP 07, 026 (2012). arXiv:1202.6073 D. Croon, G. Elor, R.K. Leane, S.D. McDermott, Supernova muons: new constraints on \(Z\)' bosons. Axions and ALPs. JHEP 01, 107 (2021). arXiv:2006.13942 W. Altmannshofer, S. Gori, M. Pospelov, I. Yavin, Neutrino trident production: a powerful probe of new physics with neutrino beams. Phys. Rev. Lett. 113, 091801 (2014). arXiv:1406.2332 CCFR Collaboration, S.R. Mishra et al., Neutrino tridents and W Z interference. Phys. Rev. Lett. 66, 3117–3120 (1991) S.N. Gninenko, N.V. Krasnikov, V.A. Matveev, Search for dark sector physics with NA64. Phys. Part. Nucl. 51(5), 829–858 (2020). arXiv:2003.07257 Belle-II Collaboration, W. Altmannshofer et al., The Belle II physics book. PTEP 2019(12), 123C01 (2019). arXiv:1808.10567 (Erratum: PTEP 2020, 029201 (2020)) T. Ferber, Towards First Physics at Belle II. Acta Phys. Pol. B 46(11), 2285 (2015) L. Doria, P. Achenbach, M. Christmann, A. Denig, P. Gülker, H. Merkel, Search for light dark matter with the MESA accelerator, in 13th Conference on the Intersections of Particle and Nuclear Physics, 9 (2018). arXiv:1809.07168 FASER Collaboration, A. Ariga et al., FASER's physics reach for long-lived particles. Phys. Rev. D 99(9), 095011 (2019). arXiv:1811.12522 S. Alekhin et al., A facility to search for hidden particles at the CERN SPS: the SHiP physics case. Rep. Prog. Phys. 79(12), 124201 (2016). arXiv:1504.04855 P. Banerjee et al., Theory for muon-electron scattering @ 10 ppm: a report of the MUonE theory initiative. Eur. Phys. J. C 80(6), 591 (2020). arXiv:2004.13663 P.S.B. Dev, W. Rodejohann, X.-J. Xu, Y. Zhang, MUonE sensitivity to new physics explanations of the muon anomalous magnetic moment. 
JHEP 05, 053 (2020). arXiv:2002.04822 A. Masiero, P. Paradisi, M. Passera, New physics at the MUonE experiment at CERN. Phys. Rev. D 102(7), 075013 (2020). arXiv:2002.05418 J. Balewski et al., The DarkLight experiment: a precision search for new physics at low energies 12 (2014). arXiv:1412.4717 NA62 Collaboration, E. Cortina Gil et al., Search for production of an invisible dark photon in \(\pi ^0\) decays. JHEP 05, 182 (2019). arXiv:1903.08767 PADME Collaboration, V. Kozhuharov, PADME: searching for dark mediator at the Frascati BTF. Nuovo Cim. C 40(5), 192 (2017) B. Wojtsekhowski et al., Searching for a dark photon: project of the experiment at VEPP-3. JINST 13(02), P02021 (2018). arXiv:1708.07901 BDX Collaboration, M. Battaglieri et al., Dark Matter Search in a Beam-Dump EXperiment (BDX) at Jefferson Lab-2018 Update to PR12-16-001. arXiv:1910.03532 LDMX Collaboration, T. Åkesson et al., Light dark matter eXperiment (LDMX). arXiv:1808.05219 B.C. Allanach, B. Gripaios, J. Tooby-Smith, Solving local anomaly equations in gauge-rank extensions of the standard model. Phys. Rev. D 101(7), 075015 (2020). arXiv:1912.10022 C. Coriano, L. Delle Rose, C. Marzo, Constraints on abelian extensions of the standard model from two-loop vacuum stability and \(U(1)_{B-L}\). JHEP 02, 135 (2016). arXiv:1510.02379 C.D. Froggatt, H.B. Nielsen, Hierarchy of quark masses, Cabibbo angles and CP violation. Nucl. Phys. B 147, 277–298 (1979) L. Delle Rose, S. Khalil, S. Moretti, Explanation of the 17 MeV Atomki anomaly in a U(1)'-extended two Higgs doublet model. Phys. Rev. D 96(11), 115024 (2017). arXiv:1704.03436 We thank Stefano Moretti and Raman Sundrum for their very helpful comments on the manuscript. A.B. and R.C. thank the organisers of the 2019 SLAC Summer Institute, 'Menu of Flavors', where this project was conceived and initiated. In particular, we thank Thomas G. Rizzo for the encouragement to pursue this work, and also Felix Kress, Elisabeth Niel, Peilong Wang and Jennifer Rittenhouse West for interesting discussions. A.B. is supported by the NSF grant PHY-1914731 and by the Maryland Center for Fundamental Physics (MCFP). R.C. is supported by the IISN convention 4.4503.15. R.C. thanks the UNSW School of Physics, where he is a Visiting Fellow, for their hospitality during part of this project. Maryland Center for Fundamental Physics, University of Maryland, College Park, MD, 20742, USA Arushi Bodas Service de Physique Théorique, Université Libre de Bruxelles, Boulevard du Triomphe, CP225, 1050, Brussels, Belgium Rupert Coy School of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ, UK Simon J. D. King Correspondence to Rupert Coy. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. 
Funded by SCOAP3. Bodas, A., Coy, R. & King, S.J.D. Solving the electron and muon \(g-2\) anomalies in \(Z'\) models. Eur. Phys. J. C 81, 1065 (2021). DOI: https://doi.org/10.1140/epjc/s10052-021-09850-x
Category Archives: Calculus Consider the curve $y=\frac{1}{x}$, $1\leq x<\infty$. The Graph of y=1/x, x=1…50 Gabriel's horn is the surface that is obtained by rotating the above curve about the x-axis. What's interesting about this Gabriel's horn is that its surface area is infinite while the volume of its interior is finite. Let us first calculate the volume. Using the disk method, the volume is given by \begin{align*}V&=\pi\int_1^\infty\left(\frac{1}{x}\right)^2dx\\&=\pi\lim_{a\to\infty}\int_1^a\frac{1}{x^2}dx\\&=\pi\lim_{a\to\infty}\left[-\frac{1}{x}\right]_1^a\\&=\pi\lim_{a\to\infty}\left[1-\frac{1}{a}\right]\\&=\pi\end{align*}Its surface area is obtained by calculating the integral \begin{align*}A&=2\pi\int_1^\infty\frac{1}{x}\sqrt{1+\left(-\frac{1}{x^2}\right)^2}dx\\&=2\pi\lim_{a\to\infty}\int_1^a\frac{1}{x}\sqrt{1+\frac{1}{x^4}}dx\end{align*} We don't actually have to evaluate this integral to see the area is infinite. Since $\sqrt{1+\frac{1}{x^4}}>1$, $$\int_1^a\frac{1}{x}\sqrt{1+\frac{1}{x^4}}dx\geq \int_1^a\frac{1}{x}dx=\ln a$$ Hence, $A=\infty$. The integral $\int \frac{1}{x}\sqrt{1+\frac{1}{x^4}}dx$ can be evaluated exactly. Using first the substitution $u=x^2$ and then the trigonometric substitution $u=\tan\theta$, \begin{align*}\int\frac{1}{x}\sqrt{1+\frac{1}{x^4}}dx&=\frac{1}{2}\int\frac{\sqrt{1+u^2}}{u^2}du\\&=\frac{1}{2}\int\frac{\sec^2\theta\sec\theta}{\tan^2\theta}d\theta\\&=\frac{1}{2}\int\frac{(1+\tan^2\theta)\sec\theta}{\tan^2\theta}d\theta\\&=\frac{1}{2}\left[\int\cot^2\theta\sec\theta d\theta+\int\sec\theta d\theta\right]\\&=\frac{1}{2}\left[\int\frac{\cos\theta}{\sin^2\theta}d\theta+\int\sec\theta d\theta\right]\\&=\frac{1}{2}\left[-\csc\theta+\ln|\sec\theta+\tan\theta|\right]\\&=\frac{1}{2}\left[-\frac{\sqrt{x^4+1}}{x^2}+\ln(\sqrt{x^4+1}+x^2)\right]\end{align*} Another horn-shaped surface can be obtained by rotating the curve $y=e^{-x}$, $0\leq x<\infty$: The Graph of y=exp(-x), x=0…5 Surface of revolution of y=exp(-x) about the x-axis The volume of the interior is finite and \begin{align*}V&=\pi\int_0^\infty e^{-2x}dx\\&=\pi\lim_{a\to\infty}\int_0^\infty e^{-2x}dx\\&=-\frac{\pi}{2}\lim_{a\to\infty}[e^{-2x}]_0^a\\&=\frac{\pi}{2}\end{align*} Unlike Gabriel's horn, the surface area is also finite. To see that, it is given by the improper integral \begin{align*}A&=\int_0^\infty 2\pi e^{-x}\sqrt{1+(-e^{-x})^2}dx\\&=2\pi\lim_{a\to\infty}\int_0^a e^{-x}\sqrt{1+e^{-2x}}dx\end{align*} Using the substitution $u=e^{-x}$ and then the trigonometric substitution $u=\tan\theta$ the integral $\int e^{-x}\sqrt{1+e^{-2x}}dx$ is evaluated to be \begin{align*}\int e^{-x}\sqrt{1+e^{-2x}}dx&=-\int\sqrt{1+u^2}du\\&=-\int\sec^3\theta d\theta\\&=-\frac{1}{2}[\sec\theta\tan\theta+\ln|\sec\theta+\tan\theta|]\\&=-\frac{1}{2}[u\sqrt{1+u^2}+\ln|\sqrt{u^2+1}+u|]\\&=-\frac{1}{2}[e^{-x}\sqrt{e^{-2x}+1}+\ln(\sqrt{e^{-2x}+1}+e^{-x})]\end{align*} (For the details on how to evaluate the integral $\int\sec^3\theta d\theta$, see here.) Hence, $A$ is computed to be $\pi[\sqrt{2}+\ln(\sqrt{2}+1)]$. There is a particularly interesting horn-shaped surface (actually a shape of two identical horns put together as shown in the figure below) although it has both a finite volume and a finite surface area. It is called a pseudosphere. A pseudosphere is a surface with constant negative Gaussian curvature. The pseudosphere of radius 1 is obtained by revolving the tractrix $$t\mapsto (t-\tanh t,\mathrm{sech}\ t),\ -\infty<t<\infty$$ about its asymptote (the $x$-axis). 
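Before moving on to the pseudosphere, here is a quick numerical cross-check of the two surfaces of revolution computed above. This is an illustrative SciPy sketch of our own, not part of the original post.

import math
from scipy.integrate import quad

# Gabriel's horn (y = 1/x, x >= 1): truncate at a large x to see the trend.
vol_gabriel, _ = quad(lambda x: math.pi / x**2, 1, 1e4)
area_gabriel, _ = quad(lambda x: 2*math.pi/x * math.sqrt(1 + 1/x**4), 1, 1e4)
print(vol_gabriel)   # ~3.1413, approaching pi as the cutoff grows
print(area_gabriel)  # ~58.6 at this cutoff; it grows like 2*pi*ln(a), so the area diverges

# Horn from y = exp(-x), x >= 0: both integrals converge.
vol_exp, _ = quad(lambda x: math.pi * math.exp(-2*x), 0, math.inf)
area_exp, _ = quad(lambda x: 2*math.pi*math.exp(-x) * math.sqrt(1 + math.exp(-2*x)), 0, math.inf)
print(vol_exp, math.pi / 2)                                            # both ~1.5708
print(area_exp, math.pi * (math.sqrt(2) + math.log(1 + math.sqrt(2)))) # both ~7.2118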
The Tractrix The resulting surface of revolution, the pseudosphere of radius 1, is seen in the following figure. The pseudosphere of radius 1 The volume of the interior of the pseudosphere is \begin{align*}V&=\pi\int_{-\infty}^\infty y^2dx\\&=2\pi\int_0^\infty y^2dx\\&=2\pi\int_0^\infty\mathrm{sech}^2\ t (1-\mathrm{sech}^2\ t)dt\\&=2\pi\int_0^\infty \mathrm{sech}^2\ t\tanh^2 tdt\\&=2\pi\int_0^1 u^2du\ (u=\tanh t)\\&=2\pi\left[\frac{u^3}{3}\right]_0^1\\&=\frac{2\pi}{3}\end{align*} The area of the pseudosphere is \begin{align*}A&=\int_{-\infty}^\infty 2\pi y ds\\&=2\int_0^\infty 2\pi y\sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2}dt\\&=4\pi\int_0^\infty\mathrm{sech}\ t\sqrt{(1-\mathrm{sech}^2\ t)^2+(-\mathrm{sech}\ t\tanh t)^2}dt\\&=4\pi\int_0^\infty\mathrm{sech}\ t\sqrt{\tanh^4 t+\mathrm{sech}^2 t\tanh^2 t}dt\\&=4\pi\int_0^\infty\mathrm{sech}\ t\tanh t dt\\&=4\pi\int_0^\infty\frac{\sinh t}{\cosh^2 t}dt\\&=4\pi\int_1^\infty\frac{1}{u^2}du\ (u=\cosh t)\\&=4\pi\end{align*} Notice that its volume is half of the volume of the unit sphere and its area is the same as the area of the unit sphere. Such volume and area relationships are still true for the pseudosphere of radius $r$, i.e. the volume and the area of the pseudosphere of radius $r$ are, respectively, $\frac{2}{3}\pi r^3$ and $4\pi r^2$ (we do not discuss it here, but the radius of a pseudosphere is defined to be the radius of its equator) as noted by the Dutch physicist Christiaan Huygens. The Gaussian curvature of a regular surface can be computed by Gauss' formula. The pseudosphere of radius 1 as a parametric surface is represented by the equation $$\varphi(t,s)=(t-\tanh t,\mathrm{sech}\ t\cos s,\mathrm{sech}\ t\sin s)$$ As seen in the figure above, the pseudosphere is not regular along the equator (at $t=0$). It has a constant negative Gaussian curvature $K=-1$ everywhere else. A common misunderstanding (usually by non-mathematicians) is that a surface with constant negative Gaussian curvature is a hyperbolic surface. In order for a surface to be hyperbolic, in addition to having a constant negative curvature, it is required to be complete and regular, so the pseudosphere is not hyperbolic although it was introduced by Eugenio Beltrami as a model of hyperbolic geometry. In fact, Hilbert's theorem states that there exists no complete regular surface of constant negative Gaussian curvature immersed in $\mathbb{R}^3$. This means that one cannot obtain a model of two-dimensional hyperbolic geometry in $\mathbb{R}^3$. However, one can obtain a model of two-dimensional hyperbolic geometry in $\mathbb{R}^{2+1}$, a 3-dimensional Minkowski space-time. There is another interesting surface with constant negative Gaussian curvature called Dini's surface. It is described by the parametric equations \begin{align*}x&=a\cos u\sin v\\y&=a\sin u\sin v\\z&=a\left(\cos v+\ln\tan\frac{v}{2}\right)+bu\end{align*} Dini's surface with a=1 and b=1/2. The Gaussian curvature of Dini's surface is computed to be $K=-\frac{1}{a^2+b^2}$. This entry was posted in Calculus on August 19, 2022 by Sung Lee. Related rates problems often involve (context-wise) real-life applications of the chain rule/implicit differentiation. Here are some of the examples that are commonly seen in calculus textbooks. Example. Car A is traveling west at 50 mi/h and car B is traveling north at 60 mi/h. Both are headed for the intersection of the two roads. At what rate are the cars approaching each other when car A is 0.3 mi and car B is 0.4 mi from the intersection?
This entry was posted in Calculus on August 19, 2022 by Sung Lee.

Related rates problems often involve (context-wise) real-life applications of the chain rule/implicit differentiation. Here are some of the examples that are commonly seen in calculus textbooks.

Example. Car A is traveling west at 50 mi/h and car B is traveling north at 60 mi/h. Both are headed for the intersection of the two roads. At what rate are the cars approaching each other when car A is 0.3 mi and car B is 0.4 mi from the intersection?

Solution. Denote by $x$ and $y$ the distances from the intersection to car A and to car B, respectively. Then we have $\frac{dx}{dt}=-50$ mi/h and $\frac{dy}{dt}=-60$ mi/h. Let us denote by $z$ the distance between $A$ and $B$. Then by the Pythagorean law we have $$z^2=x^2+y^2$$ Differentiating this with respect to $t$, we obtain $$z\frac{dz}{dt}=x\frac{dx}{dt}+y\frac{dy}{dt}$$ and thus \begin{align*}\frac{dz}{dt}&=\frac{1}{z}\left[x\frac{dx}{dt}+y\frac{dy}{dt}\right]\\&=\frac{1}{0.5}[0.3(-50)+0.4(-60)]=-78\ \mathrm{mi/h}\end{align*}

Example. Air is being pumped into a spherical balloon so that its volume increases at a rate of $100\ \mathrm{cm}^3/\mathrm{s}$. How fast is the radius of the balloon increasing when the diameter is 50 cm?

Solution. Let $V$ and $r$ denote the volume and the radius of the spherical balloon. Then $V=\frac{4}{3}\pi r^3$. Differentiating this with respect to $t$, we obtain $$\frac{dV}{dt}=4\pi r^2\frac{dr}{dt}$$ So, \begin{align*}\frac{dr}{dt}&=\frac{1}{4\pi r^2}\frac{dV}{dt}\\&=\frac{1}{4\pi(25)^2}100\\&=\frac{1}{25\pi}\ \mathrm{cm/s}\end{align*}

Example. Gravel is being dumped from a conveyor belt at a rate of $30\ \mathrm{ft}^3/\mathrm{min}$ and its coarseness is such that it forms a pile in the shape of a cone whose base diameter and height are the same. How fast is the height of the pile increasing when the pile is 10 ft high?

Solution. The cross section of the gravel pile is shown in the figure below. The amount of gravel dumped is the same as the volume of the cone. Let us denote the volume by $V$, its base radius by $r$, and its height by $h$. Then $V=\frac{1}{3}\pi r^2h$. Since $h=2r$, $V$ can be written as $$V=\frac{1}{12}\pi h^3$$ Differentiating this with respect to $t$, we obtain $$\frac{dV}{dt}=\frac{1}{4}\pi h^2\frac{dh}{dt}$$ So, we have \begin{align*}\frac{dh}{dt}&=\frac{4}{\pi h^2}\frac{dV}{dt}\\&=\frac{4}{\pi(10)^2}(30)=\frac{1.2}{\pi}\ \mathrm{ft/min}\approx 0.38\ \mathrm{ft/min}\end{align*}

Example. A ladder 10 ft long rests against a vertical wall. If the bottom of the ladder slides away from the wall at a rate of 1 ft/s, how fast is the top of the ladder sliding down the wall when the bottom of the ladder is 6 ft from the wall?

Solution. Let us denote by $x$ and $y$ the distance from the wall to the bottom of the ladder and the distance from the top of the ladder to the floor, respectively. By the Pythagorean law, we have $x^2+y^2=100$. Differentiating this with respect to $t$, we obtain $$x\frac{dx}{dt}+y\frac{dy}{dt}=0$$ Hence, we have \begin{align*}\frac{dy}{dt}&=-\frac{x}{y}\frac{dx}{dt}\\&=-\frac{6}{8}(1)=-\frac{3}{4}\ \mathrm{ft/s}\end{align*}

Example. A water tank has the shape of an inverted circular cone with base radius 2 m and height 4 m. If water is being pumped into the tank at a rate of $2\ \mathrm{m}^3/\mathrm{min}$, find the rate at which the water level is rising when the water is 3 m deep.

Solution. The cross section of the water tank is shown in the figure below. The amount of water $V$ when the water level is $h$ and the surface radius is $r$ is $V=\frac{1}{3}\pi r^2h$. From the above figure, we see that the ratio $$\frac{2}{4}=\frac{r}{h}$$ holds, i.e. $r=\frac{h}{2}$. So $V$ can be written as $$V=\frac{1}{12}\pi h^3$$ Differentiating this with respect to $t$, we obtain $$\frac{dV}{dt}=\frac{1}{4}\pi h^2\frac{dh}{dt}$$ Hence, \begin{align*}\frac{dh}{dt}&=\frac{4}{\pi h^2}\frac{dV}{dt}\\&=\frac{4}{\pi(3)^2}(2)\\&=\frac{8}{9\pi}\ \mathrm{m/min}\approx 0.28\ \mathrm{m/min}\end{align*}
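The implicit differentiation in these examples can also be automated symbolically. Below is a minimal SymPy sketch of mine (not part of the post) for the water-tank example; the variable names are illustrative.

```python
# A small SymPy sketch (not part of the original post) automating the implicit
# differentiation in the water-tank example: V = (1/12)*pi*h^3 with dV/dt = 2 m^3/min.
import sympy as sp

t = sp.symbols('t')
h = sp.Function('h')(t)          # water level as a function of time
V = sp.pi * h**3 / 12            # volume after substituting r = h/2

dVdt = sp.diff(V, t)             # (pi/4) * h^2 * dh/dt by the chain rule
dhdt = sp.symbols('dhdt')
eq = sp.Eq(dVdt.subs(sp.Derivative(h, t), dhdt), 2)   # dV/dt = 2 m^3/min

rate = sp.solve(eq, dhdt)[0].subs(h, 3)               # water level h = 3 m
print(rate, float(rate))         # 8/(9*pi) ≈ 0.28 m/min
```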
Example. The fish population, $N$, in a small pond depends on the amount of algae, $a$ (measured in pounds), in it. The equation modeling the fish population is given by $N=(3a^2-20a+26)^4$. If the amount of algae is increasing at a rate of 2 lb/week, at what rate is the fish population changing when the pond contains 5 lb of algae?

Solution. By the chain rule, we obtain \begin{align*}\frac{dN}{dt}&=4(3a^2-20a+26)^3\left(6a\frac{da}{dt}-20\frac{da}{dt}\right)\\&=4(3a^2-20a+26)^3(6a-20)\frac{da}{dt}\end{align*} $\frac{da}{dt}=2$ lb/week, so when $a=5$ lb, $\frac{dN}{dt}$ is $$\frac{dN}{dt}=4(3(5)^2-20(5)+26)^3(6(5)-20)(2)=80\ \mathrm{lb/week}$$ What this means is that the fish population is increasing by 80 lb/week at the instant when the pond contains 5 lb of algae.

Example. The retail price per gallon of gasoline is increasing at \$0.02 per week. The demand equation is given by $$10p-\sqrt{356-x^2}=0$$ where $p$ is the price per gallon (in dollars), when $x$ million gallons are demanded. At what rate is the revenue changing when 10 million gallons are demanded?

Solution. The total revenue $R$ is given by the equation $$R=xp$$ Differentiating this equation with respect to $t$, we obtain $$\frac{dR}{dt}=\frac{dx}{dt}p+x\frac{dp}{dt}$$ The only quantity we still need in order to calculate $\frac{dR}{dt}$ is $\frac{dx}{dt}$. To find it, let us differentiate the demand equation with respect to $t$. By the chain rule, we obtain $$10\frac{dp}{dt}+\frac{x}{\sqrt{356-x^2}}\frac{dx}{dt}=0$$ When $x=10$ million gallons, with $\frac{dp}{dt}=0.02$, we find from this equation that $$\frac{dx}{dt}=-\frac{10\sqrt{356-10^2}}{10}(0.02)=-0.32\ \mathrm{million\ gallons/week}$$ When $x=10$, from the demand equation, we find $p$ as $$p=\frac{\sqrt{356-10^2}}{10}=1.6$$ Therefore, the rate of change of revenue when 10 million gallons of gasoline is demanded is $$\frac{dR}{dt}=-0.32(1.6)+10(0.02)=-0.312$$ What this means is that the revenue is decreasing by about 0.31 million dollars per week when the price increases 0.02 dollars per week (consequently the demand decreases by 0.32 million gallons per week as we saw earlier).

This entry was posted in Calculus on June 29, 2020 by Sung Lee.

Arc Length and Reparametrization

We have already discussed the length of a plane curve represented by the parametric equation ${\bf r}(t)=(x(t),y(t))$, $a\leq t\leq b$ here. The same goes for a space curve. Namely, if ${\bf r}(t)=(x(t),y(t),z(t))$, $a\leq t\leq b$, then its arc length $L$ is given by \begin{equation}\begin{aligned}L&=\int_a^b|{\bf r}'(t)|dt\\&=\int_a^b\sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2+\left(\frac{dz}{dt}\right)^2}dt\end{aligned}\label{eq:spacearclength}\end{equation}

Example. Find the length of the arc of the circular helix $${\bf r}(t)=\cos t{\bf i}+\sin t{\bf j}+t{\bf k}$$ from the point $(1,0,0)$ to the point $(1,0,2\pi)$.

Solution. ${\bf r}'(t)=-\sin t{\bf i}+\cos t{\bf j}+{\bf k}$ so we have $$|{\bf r}'(t)|=\sqrt{(-\sin t)^2+(\cos t)^2+1^2}=\sqrt{2}$$ The arc is going from $(1,0,0)$ to $(1,0,2\pi)$ and the $z$-component of ${\bf r}(t)$ is $t$, so $0\leq t\leq 2\pi$. Now, using \eqref{eq:spacearclength}, we obtain $$L=\int_0^{2\pi}|{\bf r}'(t)|dt=\int_0^{2\pi}\sqrt{2}dt=2\sqrt{2}\pi$$ Figure 1 shows the circular helix from $t=0$ to $t=2\pi$.

Figure 1, A circular helix
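As a quick check of the helix arc-length example (my own addition, not part of the post), the integral can be evaluated numerically:

```python
# Quick numerical check (mine) of the helix arc-length example above:
# r(t) = (cos t, sin t, t) has constant speed sqrt(2), so L = 2*sqrt(2)*pi on [0, 2*pi].
import numpy as np
from scipy.integrate import quad

speed = lambda t: np.sqrt(np.sin(t)**2 + np.cos(t)**2 + 1.0)  # |r'(t)|
L, _ = quad(speed, 0, 2*np.pi)
print(L, 2*np.sqrt(2)*np.pi)  # both ~8.8858
```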
Given a curve ${\bf r}(t)$, $a\leq t\leq b$, sometimes we need to reparametrize it by another parameter $s$ for various reasons. Imagine that the curve represents the path of a particle moving in space. A reparametrization does not change the path of the particle (and hence not the distance it traveled), but it changes the particle's speed! To see this, let $t=t(s)$, $\alpha\leq s\leq\beta$, $a=t(\alpha)$, $b=t(\beta)$ be an increasing and differentiable function. Since $t=t(s)$ is one-to-one and onto, ${\bf r}(t)$ and ${\bf r}(t(s))$, its reparametrization by the parameter $s$, represent the same path. By the chain rule, \begin{equation}\label{eq:reparametrization}\frac{d}{ds}{\bf r}(t(s))=\frac{d}{dt}{\bf r}(t)\frac{dt}{ds}\end{equation} Thus we see that the speed of the reparametrization $\left|\frac{d}{ds}{\bf r}(t(s))\right|$ differs from that of ${\bf r}(t)$ by a factor of $\left|\frac{dt}{ds}\right|=\frac{dt}{ds}$ (since $\frac{dt}{ds}>0$). However, the arc length of the reparametrization is \begin{align*}\int_{\alpha}^{\beta}\left|\frac{d}{ds}{\bf r}(t(s))\right|ds&=\int_{\alpha}^{\beta}\left|\frac{d}{dt}{\bf r}(t)\right|\frac{dt}{ds}ds\\&=\int_a^b\left|\frac{d}{dt}{\bf r}(t)\right|dt=L\end{align*} That is, the distance does not change.

There is a particular reparametrization in which we are interested. To discuss it, let ${\bf r}(t)$, $a\leq t\leq b$, be a differentiable curve in space such that ${\bf r}'(t)\ne 0$ for all $t$. Such a curve is said to be regular or smooth. Let us now define the arc length function \begin{equation}\label{eq:arclengthfunction}s(t)=\int_a^t|{\bf r}'(u)|du\end{equation} By the Fundamental Theorem of Calculus, we have \begin{equation}\label{eq:arclengthfunction2}\frac{ds}{dt}=|{\bf r}'(t)|>0\end{equation} and so the arc length function $s=s(t)$ is increasing. This means that $s(t)$ is one-to-one and onto, so it is invertible. Its inverse function can be written as $t=t(s)$, and ${\bf r}(t(s))$ is called the reparametrization by arc length. The reason we are interested in this particular reparametrization is that it results in unit speed: from \eqref{eq:reparametrization} and \eqref{eq:arclengthfunction2}, $$\left|\frac{d}{ds}{\bf r}(t(s))\right|=|{\bf r}'(t)|\left|\frac{dt}{ds}\right|=|{\bf r}'(t)|\frac{1}{|{\bf r}'(t)|}=1$$ So it is also called the unit-speed reparametrization. The reparametrization by arc length plays an important role in defining the curvature of a curve. This will be discussed elsewhere.

Example. Reparametrize the helix ${\bf r}(t)=\cos t{\bf i}+\sin t{\bf j}+t{\bf k}$ by arc length measured from $(1,0,0)$ in the direction of increasing $t$.

Solution. The initial point $(1,0,0)$ corresponds to $t=0$. From the previous example, we know that the helix has the constant speed $|{\bf r}'(t)|=\sqrt{2}$. Thus, $$s(t)=\int_0^t|{\bf r}'(u)|du=\sqrt{2}t$$ Hence, we obtain $t=\frac{s}{\sqrt{2}}$. The reparametrization is then given by $${\bf r}(t(s))=\cos\left(\frac{s}{\sqrt{2}}\right){\bf i}+\sin\left(\frac{s}{\sqrt{2}}\right){\bf j}+\frac{s}{\sqrt{2}}{\bf k}$$
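A short symbolic check (my own addition) that this reparametrization indeed has unit speed:

```python
# SymPy check (mine) that the arc-length reparametrization of the helix has unit speed:
# r(t(s)) with t = s/sqrt(2).
import sympy as sp

s = sp.symbols('s', real=True)
r = sp.Matrix([sp.cos(s/sp.sqrt(2)), sp.sin(s/sp.sqrt(2)), s/sp.sqrt(2)])
speed = sp.simplify(sp.sqrt(r.diff(s).dot(r.diff(s))))
print(speed)  # 1
```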
Examples in this note have been taken from [1].

[1] Calculus, Early Transcendentals, James Stewart, 6th Edition, Thompson Brooks/Cole

This entry was posted in Calculus on April 29, 2020 by Sung Lee.

Derivatives and Integrals of Vector-Valued Functions

The derivative $\frac{d{\bf r}}{dt}={\bf r}'(t)$ of a vector-valued function ${\bf r}(t)=(x(t),y(t),z(t))$ is defined by \begin{equation}\label{eq:vectorderivative}\frac{d{\bf r}}{dt}=\lim_{\Delta t\to 0}\frac{{\bf r}(t+\Delta t)-{\bf r}(t)}{\Delta t}\end{equation} In the case of a scalar-valued (real-valued) function, the geometric meaning of the derivative is that it is the slope of the tangent line. In the case of a vector-valued function, the geometric meaning of the derivative is that it is a tangent vector. This can be easily seen from Figure 1. As $\Delta t$ gets smaller and smaller, $\frac{{\bf r}(t+\Delta t)-{\bf r}(t)}{\Delta t}$ gets closer to a line tangent to ${\bf r}(t)$.

Figure 1. The derivative of a vector-valued function

From the definition \eqref{eq:vectorderivative}, it is straightforward to show \begin{equation}\label{eq:vectorderivative2}{\bf r}'(t)=(x'(t),y'(t),z'(t))\end{equation} If ${\bf r}'(t)\ne 0$, the unit tangent vector ${\bf T}(t)$ of ${\bf r}(t)$ is given by \begin{equation}\label{eq:unittangent}{\bf T}(t)=\frac{{\bf r}'(t)}{|{\bf r}'(t)|}\end{equation}

Example. Find the derivative of ${\bf r}(t)=(1+t^3){\bf i}+te^{-t}{\bf j}+\sin 2t{\bf k}$. Find the unit tangent vector when $t=0$.

Solution. Using \eqref{eq:vectorderivative2}, we have $${\bf r}'(t)=3t^2{\bf i}+(1-t)e^{-t}{\bf j}+2\cos 2t{\bf k}$$ ${\bf r}'(0)={\bf j}+2{\bf k}$ and $|{\bf r}'(0)|=\sqrt{5}$. So by \eqref{eq:unittangent}, we have $${\bf T}(0)=\frac{{\bf r}'(0)}{|{\bf r}'(0)|}=\frac{1}{\sqrt{5}}{\bf j}+\frac{2}{\sqrt{5}}{\bf k}$$

Figure 2. The vector-valued function r(t) (in blue) and r'(0) (in red)

The following theorem is a summary of differentiation rules for vector-valued functions. We omit the proofs of these rules. They can be proved straightforwardly from differentiation rules for real-valued functions. Note that there are three different types of the product rule, or the Leibniz rule, for vector-valued functions (rules 3, 4, and 5).

Theorem. Let ${\bf u}(t)$ and ${\bf v}(t)$ be differentiable vector-valued functions, $f(t)$ a scalar function, and $c$ a scalar. Then
1. $\frac{d}{dt}[{\bf u}(t)+{\bf v}(t)]={\bf u}'(t)+{\bf v}'(t)$
2. $\frac{d}{dt}[c{\bf u}(t)]=c{\bf u}'(t)$
3. $\frac{d}{dt}[f(t){\bf u}(t)]=f'(t){\bf u}(t)+f(t){\bf u}'(t)$
4. $\frac{d}{dt}[{\bf u}(t)\cdot{\bf v}(t)]={\bf u}'(t)\cdot{\bf v}(t)+{\bf u}(t)\cdot{\bf v}'(t)$
5. $\frac{d}{dt}[{\bf u}(t)\times{\bf v}(t)]={\bf u}'(t)\times{\bf v}(t)+{\bf u}(t)\times{\bf v}'(t)$
6. $\frac{d}{dt}[{\bf u}(f(t))]=f'(t){\bf u}'(f(t))$ (Chain Rule)

Example. Show that if $|{\bf r}(t)|=c$ (a constant), then ${\bf r}'(t)$ is orthogonal to ${\bf r}(t)$ for all $t$.

Proof. Differentiating ${\bf r}(t)\cdot{\bf r}(t)=|{\bf r}(t)|^2=c^2$ using the Leibniz rule 4, we obtain $$0=\frac{d}{dt}[{\bf r}(t)\cdot{\bf r}(t)]={\bf r}'(t)\cdot{\bf r}(t)+{\bf r}(t)\cdot{\bf r}'(t)=2{\bf r}'(t)\cdot{\bf r}(t)$$ This implies that ${\bf r}'(t)$ is orthogonal to ${\bf r}(t)$.

As seen in \eqref{eq:vectorderivative2}, the derivative of a vector-valued function is obtained by differentiating component-wise. The indefinite and definite integrals of a vector-valued function are computed similarly by integrating component-wise, namely \begin{equation}\label{eq:vectorintegral}\int{\bf r}(t)dt=\left(\int x(t)dt\right){\bf i}+\left(\int y(t)dt\right){\bf j}+\left(\int z(t)dt\right){\bf k}\end{equation} and \begin{equation}\label{eq:vectorintegral2}\int_a^b{\bf r}(t)dt=\left(\int_a^b x(t)dt\right){\bf i}+\left(\int_a^b y(t)dt\right){\bf j}+\left(\int_a^b z(t)dt\right){\bf k}\end{equation} respectively. When evaluating the definite integral of a vector-valued function, one can use \eqref{eq:vectorintegral2}, but it is often easier to first find the indefinite integral using \eqref{eq:vectorintegral} and then evaluate the definite integral using the Fundamental Theorem of Calculus (and yes, the Fundamental Theorem of Calculus still works for vector-valued functions).
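A quick SymPy verification (my own addition) of the derivative and unit tangent computed in the example earlier in this section, together with the orthogonality fact for a constant-norm curve:

```python
# SymPy verification (mine) of the unit tangent example above:
# r(t) = (1 + t^3, t*exp(-t), sin(2t)), with T(0) = (0, 1/sqrt(5), 2/sqrt(5)).
import sympy as sp

t = sp.symbols('t', real=True)
r = sp.Matrix([1 + t**3, t * sp.exp(-t), sp.sin(2*t)])
rp = r.diff(t)                      # component-wise derivative r'(t)
print(rp.T)                         # 3*t**2, (1 - t)*exp(-t), 2*cos(2*t)
rp0 = rp.subs(t, 0)
print((rp0 / rp0.norm()).T)         # unit tangent T(0): 0, sqrt(5)/5, 2*sqrt(5)/5

# Orthogonality for a constant-norm curve, e.g. the circle (cos t, sin t, 0):
c = sp.Matrix([sp.cos(t), sp.sin(t), 0])
print(sp.simplify(c.diff(t).dot(c)))   # 0, so r'(t) is orthogonal to r(t)
```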
Example. Find $\int_0^{\frac{\pi}{2}}{\bf r}(t)dt$ if ${\bf r}(t)=2\cos t{\bf i}+\sin t{\bf j}+2t{\bf k}$.

Solution. \begin{align*}\int{\bf r}(t)dt&=\left(\int 2\cos tdt\right){\bf i}+\left(\int \sin tdt\right){\bf j}+\left(\int 2tdt\right){\bf k}\\&=2\sin t{\bf i}-\cos t{\bf j}+t^2{\bf k}+{\bf C}\end{align*} where ${\bf C}$ is a vector-valued constant of integration. Now, $$\int_0^{\frac{\pi}{2}}{\bf r}(t)dt=[2\sin t{\bf i}-\cos t{\bf j}+t^2{\bf k}]_0^{\frac{\pi}{2}}=2{\bf i}+{\bf j}+\frac{\pi^2}{4}{\bf k}$$

Lines and Planes in Space

You remember from algebra that a line in the plane can be determined by its slope and a point on the line, or by two points on the line (in which case those two points determine the slope). For a space line, slope is not a suitable ingredient. Its alternative ingredient is a vector parallel to the line. As shown in Figure 1, with a point ${\bf r}_0=(x_0,y_0,z_0)$ on the line $L$ and a vector ${\bf v}=(a,b,c)$ parallel to $L$, we can determine any point ${\bf r}=(x,y,z)$ on $L$ by vector addition of ${\bf r}_0$ and $t{\bf v}$, a dilation of ${\bf v}$: \begin{equation}\label{eq:spaceline}{\bf r}={\bf r}_0+t{\bf v}\end{equation} where $-\infty<t<\infty$. The equation \eqref{eq:spaceline} is called a vector equation of $L$.

Figure 1, A space line

In terms of the components, \eqref{eq:spaceline} can be written as \begin{equation}\begin{aligned}x&=x_0+at\\y&=y_0+bt\\z&=z_0+ct\end{aligned}\label{eq:spaceline2}\end{equation} where $-\infty<t<\infty$. The equations in \eqref{eq:spaceline2} are called parametric equations of $L$. Solving the parametric equations in \eqref{eq:spaceline2} for $t$, we also obtain \begin{equation}\label{eq:spaceline3}\frac{x-x_0}{a}=\frac{y-y_0}{b}=\frac{z-z_0}{c}\end{equation} The equations in \eqref{eq:spaceline3} are called symmetric equations of $L$.

Often we need to work with a line segment. The vector equation \eqref{eq:spaceline} can be used to figure out an equation of a line segment. Consider the line segment from ${\bf r}_0$ to ${\bf r}_1$. Then the difference ${\bf r}_1-{\bf r}_0$ is a vector parallel to the line segment, and by \eqref{eq:spaceline}, we have \begin{align*}{\bf r}(t)&={\bf r}_0+t({\bf r}_1-{\bf r}_0)\\&=(1-t){\bf r}_0+t{\bf r}_1\end{align*} This still represents an infinite line through ${\bf r}_0$ and ${\bf r}_1$. By limiting the range of $t$ to represent only the line segment from ${\bf r}_0$ to ${\bf r}_1$, we obtain an equation of the line segment \begin{equation}\label{eq:linesegment}{\bf r}(t)=(1-t){\bf r}_0+t{\bf r}_1\end{equation} where $0\leq t\leq 1$.

Example. Find a vector equation and parametric equations for the line that passes through the point $(5,1,3)$ and is parallel to the vector ${\bf i}+4{\bf j}-2{\bf k}$. Find two other points on the line.

Solution. ${\bf r}_0=(5,1,3)$ and ${\bf v}=(1,4,-2)$. Hence by \eqref{eq:spaceline}, we have $${\bf r}(t)=(5,1,3)+t(1,4,-2)=(5+t,1+4t,3-2t)$$ Parametric equations are $$x(t)=5+t,\ y(t)=1+4t,\ z(t)=3-2t$$ From the parametric equations, for example, $t=1$ gives $(6,5,1)$ and $t=-1$ gives $(4,-3,5)$.
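The vector and line-segment equations above translate directly into a few lines of NumPy; the following sketch (my own addition) reproduces the two points found in the example:

```python
# NumPy sketch (mine) of the vector equation r(t) = r0 + t*v for the line in the
# example above, and of the line-segment form (1 - t)*r0 + t*r1.
import numpy as np

r0 = np.array([5.0, 1.0, 3.0])
v = np.array([1.0, 4.0, -2.0])
line = lambda t: r0 + t * v                 # vector equation r(t) = r0 + t*v
print(line(1.0), line(-1.0))                # [6. 5. 1.] [ 4. -3.  5.]

# Line segment from r0 to r1 (here r1 is the point found at t = 1):
r1 = line(1.0)
segment = lambda t: (1 - t) * r0 + t * r1   # 0 <= t <= 1
print(segment(0.0), segment(0.5), segment(1.0))
```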
Example. Find parametric equations and symmetric equations of the line that passes through the points $A(2,4,-3)$ and $B(3,-1,1)$. At what point does this line intersect the $xy$-plane?

Solution. Note that the vector ${\bf v}=\overrightarrow{AB}=(1, -5, 4)$ is parallel to the line. So $a=1$, $b=-5$, and $c=4$. By taking ${\bf r}_0=(x_0,y_0,z_0)=(2,4,-3)$, we have the parametric equations $$x=2+t,\ y=4-5t,\ z=-3+4t$$ Symmetric equations are then $$\frac{x-2}{1}=\frac{y-4}{-5}=\frac{z+3}{4}$$ The line intersects the $xy$-plane when $z=0$. By setting $z=0$ in the symmetric equations, we get $$\frac{x-2}{1}=\frac{y-4}{-5}=\frac{3}{4}$$ Solving these equations for $x$ and $y$ respectively, we find $x=\frac{11}{4}$ and $y=\frac{1}{4}$.

As we have seen, a line is determined by a point on the line and a vector parallel to the line. A plane, on the other hand, can be determined by a point ${\bf r}_0$ on the plane and a vector ${\bf n}$ perpendicular to the plane (such a vector is called a normal vector to the plane). See Figure 2.

Figure 2. A plane

From Figure 2, we see that \begin{equation}\label{eq:plane}{\bf n}\cdot({\bf r}-{\bf r}_0)=0\end{equation} The equation \eqref{eq:plane} is called a vector equation of the plane. If ${\bf n}=(a,b,c)$, ${\bf r}=(x,y,z)$, and ${\bf r}_0=(x_0,y_0,z_0)$, then the equation \eqref{eq:plane} can be written as \begin{equation}\label{eq:plane2}a(x-x_0)+b(y-y_0)+c(z-z_0)=0\end{equation}

Example. Find an equation of the plane through the point $(2,4,-1)$ with normal vector ${\bf n}=(2,3,4)$. Find the intercepts and sketch the plane.

Solution. $a=2$, $b=3$, $c=4$, $x_0=2$, $y_0=4$, and $z_0=-1$. So by the equation \eqref{eq:plane2}, we have $$2(x-2)+3(y-4)+4(z+1)=0$$ or $$2x+3y+4z=12$$ To find the $x$-intercept, set $y=z=0$ in the equation and we get $x=6$. Similarly, we find the $y$- and $z$-intercepts $y=4$ and $z=3$, respectively. Figure 3 shows the plane.

Figure 3. Plane 2x+3y+4z=12

Example. Find an equation of the plane that passes through the points $P(1,3,2)$, $Q(3,-1,6)$, and $R(5,2,0)$.

Solution. The vectors $\overrightarrow{PQ}=(2,-4,4)$ and $\overrightarrow{PR}=(4,-1,-2)$ are on the plane, so the cross product $$\overrightarrow{PQ}\times\overrightarrow{PR}=\begin{vmatrix}{\bf i} & {\bf j} & {\bf k}\\2 & -4 & 4\\4 & -1 & -2\end{vmatrix}=12{\bf i}+20{\bf j}+14{\bf k}$$ is a normal vector to the plane. With $(x_0,y_0,z_0)=(1,3,2)$ and $(a,b,c)=(12,20,14)$, we find an equation of the plane $$12(x-1)+20(y-3)+14(z-2)=0$$ or $$6x+10y+7z=50$$

Using basic geometry, we see that the angle between two planes $P_1$ and $P_2$ is the same as the angle between their respective normal vectors ${\bf n}_1$ and ${\bf n}_2$. See Figure 4, where cross sections of two planes $P_1$ and $P_2$ are shown.

Figure 4. The angle between two planes P1 and P2

Example. Find the angle between the planes $x+y+z=1$ and $x-2y+3z=1$. Then find the symmetric equations for the line of intersection $L$ of these two planes.

Solution. The normal vectors of these planes are ${\bf n}_1=(1,1,1)$ and ${\bf n}_2=(1,-2,3)$. Let $\theta$ be the angle between ${\bf n}_1$ and ${\bf n}_2$. Then $$\cos\theta=\frac{{\bf n}_1\cdot{\bf n}_2}{|{\bf n}_1||{\bf n}_2|}=\frac{1(1)+1(-2)+1(3)}{\sqrt{1^2+1^2+1^2}\sqrt{1^2+(-2)^2+3^2}}=\frac{2}{\sqrt{42}}$$ Thus, $$\theta=\cos^{-1}\left(\frac{2}{\sqrt{42}}\right)\approx 72^\circ$$ For the line of intersection, let us first find a point on $L$. Set $z=0$. Then we have $x+y=1$ and $x-2y=1$. Solving these two equations simultaneously, we find $x=1$ and $y=0$. So $(1,0,0)$ is on the line $L$. Now we need a vector parallel to the line $L$. The cross product $${\bf n}_1\times{\bf n}_2=\begin{vmatrix}{\bf i} & {\bf j} & {\bf k}\\1 & 1 & 1\\1 & -2 & 3\end{vmatrix}=5{\bf i}-2{\bf j}-3{\bf k}$$ is perpendicular to both ${\bf n}_1$ and ${\bf n}_2$, hence it is parallel to $L$.
Therefore, the symmetric equations of $L$ are $$\frac{x-1}{5}=\frac{y}{-2}=\frac{z}{-3}$$

Let us find the distance $D$ from a point $Q(x_1,y_1,z_1)$ to the plane $ax+by+cz+d=0$. See Figure 5.

Figure 5. The distance from a point to a plane

Let $P(x_0,y_0,z_0)$ be a point in the plane and let ${\bf b}=\overrightarrow{PQ}=(x_1-x_0,y_1-y_0,z_1-z_0)$. Then from Figure 5, we see that the distance from $Q$ to the plane is given by the scalar projection of ${\bf b}$ onto the normal vector ${\bf n}$: \begin{align*}D&=|\mathrm{comp}_{\bf n}{\bf b}|=\frac{|{\bf b}\cdot{\bf n}|}{|{\bf n}|}\\&=\frac{|a(x_1-x_0)+b(y_1-y_0)+c(z_1-z_0)|}{\sqrt{a^2+b^2+c^2}}\\&=\frac{|ax_1+by_1+cz_1+d|}{\sqrt{a^2+b^2+c^2}}\end{align*} The last expression is obtained from the fact that $ax_0+by_0+cz_0=-d$. Therefore, the distance $D$ from a point $Q(x_1,y_1,z_1)$ to the plane $ax+by+cz+d=0$ is \begin{equation}\label{eq:distance2plane}D=\frac{|ax_1+by_1+cz_1+d|}{\sqrt{a^2+b^2+c^2}}\end{equation}

Example. Find the distance between the parallel planes $10x+2y-2z=5$ and $5x+y-z=1$.

Solution. One can easily see that the two planes are parallel because their respective normal vectors $(10,2,-2)$ and $(5,1,-1)$ are parallel. To find the distance between the planes, one first has to find a point in one plane and then use the formula \eqref{eq:distance2plane} to find the distance. Let us find a point in the plane $10x+2y-2z=5$. One can, for instance, use the $x$-intercept, so let $y=z=0$. Then $10x=5$, i.e. $x=\frac{1}{2}$. Writing the second plane as $5x+y-z-1=0$, the distance from $\left(\frac{1}{2},0,0\right)$ to the plane $5x+y-z=1$ is $$D=\frac{\left|5\left(\frac{1}{2}\right)+1(0)-(0)-1\right|}{\sqrt{5^2+1^2+(-1)^2}}=\frac{\sqrt{3}}{6}$$
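The plane computations in this section can be checked numerically. The sketch below (my own addition) verifies the normal vector, the angle between the two planes, and the point-to-plane distance from the examples:

```python
# NumPy sanity checks (mine) for the plane examples above: a normal vector via a
# cross product, the angle between two planes, and the point-to-plane distance.
import numpy as np

# Plane through P(1,3,2), Q(3,-1,6), R(5,2,0): normal = PQ x PR.
P, Q, R = np.array([1, 3, 2]), np.array([3, -1, 6]), np.array([5, 2, 0])
n = np.cross(Q - P, R - P)
print(n, np.dot(n, P))            # [12 20 14] 100  ->  12x+20y+14z = 100, i.e. 6x+10y+7z = 50

# Angle between the planes x+y+z=1 and x-2y+3z=1.
n1, n2 = np.array([1, 1, 1]), np.array([1, -2, 3])
cos_theta = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
print(np.degrees(np.arccos(cos_theta)))   # ~72.0 degrees
print(np.cross(n1, n2))                   # [ 5 -2 -3], direction of the intersection line

# Distance from (1/2, 0, 0) to the plane 5x + y - z - 1 = 0.
a, b, c, d = 5, 1, -1, -1
x1, y1, z1 = 0.5, 0.0, 0.0
D = abs(a*x1 + b*y1 + c*z1 + d) / np.sqrt(a**2 + b**2 + c**2)
print(D, np.sqrt(3)/6)            # ~0.2887 ~0.2887
```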
Parallels in the sequential organization of birdsong and human speech

Tim Sainburg (ORCID: orcid.org/0000-0003-4223-2689), Brad Theilman, Marvin Thielk (ORCID: orcid.org/0000-0002-0751-3664) & Timothy Q. Gentner

Nature Communications volume 10, Article number: 3636 (2019)

Human speech possesses a rich hierarchical structure that allows for meaning to be altered by words spaced far apart in time. Conversely, the sequential structure of nonhuman communication is thought to follow non-hierarchical Markovian dynamics operating over only short distances. Here, we show that human speech and birdsong share a similar sequential structure indicative of both hierarchical and Markovian organization. We analyze the sequential dynamics of song from multiple songbird species and speech from multiple languages by modeling the information content of signals as a function of the sequential distance between vocal elements. Across short sequence-distances, an exponential decay dominates the information in speech and birdsong, consistent with underlying Markovian processes. At longer sequence-distances, the decay in information follows a power law, consistent with underlying hierarchical processes. Thus, the sequential organization of acoustic elements in two learned vocal communication signals (speech and birdsong) shows functionally equivalent dynamics, governed by similar processes.

Human language is unique among animal communication systems in its extensive capacity to convey infinite meaning through a finite set of linguistic units and rules1. The evolutionary origin of this capacity is not well understood, but it appears closely tied to the rich hierarchical structure of language, which enables words to alter meanings across long distances (i.e., over the span of many intervening words or sentences) and timescales. For example, in the sentence, "Mary, who went to my university, often said that she was an avid birder", the pronoun "she" references "Mary", which occurs nine words earlier. As the separation between words (within or between sentences) increases, the strength of these long-range dependencies decays following a power law2,3. The dependencies between words are thought to derive from syntactic hierarchies4,5, but the hierarchical organization of language encompasses more than word- or phrase-level syntax. Indeed, similar power-law relationships exist for the long-range dependencies between characters in texts6,7, and are thought to reflect the general hierarchical organization of natural language, where higher levels of abstraction (e.g., semantic meaning, syntax, and words) govern organization in lower-level components (e.g., parts of speech, words, and characters)2,3,6,7. Using mutual information (MI) to quantify the strength of the relationship between elements (e.g., words or characters) in a sequence (i.e., the predictability of one element revealed by knowing another element), the power-law decay characteristic of natural languages3,6,7,8 has also been observed in other hierarchically organized sequences, such as music3,9 and DNA codons3,10. Language is not, however, strictly hierarchical. The rules that govern the patterning of sounds in words (i.e., phonology) are explained by simpler Markovian processes11,12,13, where each sound is dependent on only the sounds that immediately precede it.
Rather than following a power law, sequences generated by Markovian processes are characterized by MI that decays exponentially, as the sequential distance between any pair of elements increases3,14. How Markovian and hierarchical processes combine to govern the sequential structure of speech over different timescales is not well understood. In contrast to the complexity of natural languages, nonhuman animal communication is thought to be dictated purely by Markovian dynamics confined to relatively short-distance relationships between vocal elements in a sequence1,15,16. Evidence from a variety of sources suggests, however, that other processes may be required to fully explain some nonhuman vocal communication systems17,18,19,20,21,22,23,24,25,26. For example, non-Markovian long-range relationships across several hundred vocal units (extending over 7.5–16.5 min) have been reported in humpback whale song24. Hierarchically organized dynamics, proposed as fundamental to sequential motor behaviors27, could provide an alternate (or additional) structure for nonhuman vocal communication signals. Evidence supporting this hypothesis remains scarce1,16. This study examines how Markovian and hierarchical processes combine to govern the sequential structure of birdsong and speech. Our results indicate that these two learned vocal communication signals are governed by similar underlying processes. To determine whether hierarchical, Markovian, or some combination of these two processes better explain sequential dependencies in vocal communication signals, we measured the sequential dependencies between vocal elements in birdsong and human speech. Birdsong (i.e., the learned vocalizations of Oscine birds) is an attractive system to investigate common characteristics of communication signals because birds are phylogenetically diverse and distant from humans, but their songs are spectrally and temporally complex like speech, with acoustic units (notes, motifs, phrases, and bouts) spanning multiple timescales28. A number of complex sequential relationships have been observed in the songs of different species17,18,19,20,21,22,23,29. Most theories of birdsong sequential organization assume purely short timescale dynamics16,30,31,32, however, and rely typically on far smaller corpora than those available for written language. Because nonhuman species with complex vocal repertoires often produce hundreds of different vocal elements that may occur with exceptional rarity21, fully capturing the long-timescale dynamics in these signals is data intensive. To compare sequential dynamics in the vocal communication signals of birds and humans, we used large-scale data sets of song from four oscine species whose songs exhibit complex sequential organization (European starlings, Bengalese finches33, Cassin's vireos21,34, and California thrashers22,35). We compared these with large-scale data sets of phonetically transcribed spontaneous speech from four languages (English36, German37, Italian38, and Japanese39). To overcome the sparsity in the availability of large-scale transcribed birdsong data sets, we used a combination of hand-labeled corpora from Bengalese finches, Cassin's vireos, and California thrasher, and algorithmically transcribed data sets from European starlings (see "Methods" section; Fig. 1). The full songbird data set comprises 86 birds totaling 668,617 song syllables recorded in over 195 h of total singing (Supplementary Table 1). 
The Bengalese finch data were collected from laboratory-reared individuals. The European starling song was collected from wild-caught individuals recorded in a laboratory setting. The Cassin's vireo and California thrasher song were collected in the wild21,34,35,40. The diversity of individual vocal elements (syllables; a unit of song surrounded by a pause in singing) for an example bird for each species are shown through UMAP41 projections in Fig. 1a–d, and sequential organization is shown in Fig. 1e–i. For the human speech data sets, we used the Buckeye data set of spontaneous phonetically transcribed American-English speech36, the GECO data set of phonetically transcribed spontaneous German speech37, the AsiCA corpus of ortho-phonetically transcribed spontaneous Italian (Calabrian) speech38, and the CJS corpus of phonetically transcribed spontaneous Japanese speech39 totaling 4,379,552 phones from 394 speakers over 150 h of speaking (Supplementary Table 2). Latent and graphical representations of songbird vocalizations. a–d show UMAP41 reduced spectrographic representations of syllables from the songs of single birds projected into two-dimensions. Each point in the scatterplot represents a single syllable, where color is the syllable category. Syllable categories for Bengalese finch (a), California thrasher (b), and Cassin's vireo (c) are hand-labeled. European starlings (d) are labeled using a hierarchical density-based clustering algorithm67. Each column in the figure corresponds to the same animal. Transitions between syllables (e–h) in the same 2D space as a–d, where color represents the temporal position of a transition in a song and stronger hues show transitions that occur at the same position; weaker hues indicate syllable transitions that occur in multiple positions. Transitions between syllable categories (i–l), where colored circles represents a state or category corresponding to the scatterplots in a–d, and lines represent state transitions with opacity increasing in proportion to transition probability. For clarity, low-probability transitions (≤5%) are not shown For each data set, we computed MI between pairs of syllables or phones, in birdsong or speech, respectively, as a function of the sequential distance between elements (Eq. 4). For example, in the sequence A → B → C → D, where letters denote syllable (or phone) categories, A and B have a sequential distance of 1, while A and D have a distance of 3. In general, MI should decay as sequential distance between elements increases and the strength of their dependency drops, because elements separated by large sequential distances are less dependent (on average) than those separated by small sequential distances. To understand the relationship between MI decay and sequential distance in the context of existing theories, we modeled the long-range information content of sequences generated from three different classes of models: a recursive hierarchical model3, Markov models of birdsong31,32, and a model combining hierarchical and Markovian processes by setting Markov-generated sequences as the end states of the hierarchical model (Fig. 2). We then compared three models on their fit with the MI decay: a three-parameter exponential decay model (Eq. 5), a three-parameter power-law decay model (Eq. 6), and a five-parameter model which linearly combined the exponential and power-law decay models (composite model; Eq. (7)). 
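For readers who want to see the analysis pipeline concretely, the sketch below (not the authors' code) estimates MI as a function of sequential distance for a symbol sequence and fits and compares the three decay models with AICc. The plug-in MI estimator, the toy Markov sequence, and all parameter choices are simplifying assumptions of mine; the paper's estimator (Eq. 4) and fitting procedure (see "Methods") differ in detail.

```python
# Schematic sketch (not the authors' code): plug-in MI vs. sequential distance,
# fit with exponential, power-law, and composite decay models, compared by AICc.
import numpy as np
from collections import Counter
from scipy.optimize import curve_fit

def mutual_information(seq, d):
    """Plug-in estimate (in nats) of MI between elements at distance d."""
    pairs = list(zip(seq[:-d], seq[d:]))
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * np.log(c * n / (px[x] * py[y])) for (x, y), c in pxy.items())

# Candidate decay models: exponential (3 params), power law (3), composite (5).
exp_decay = lambda d, a, b, c: a * np.exp(-d / b) + c
pow_decay = lambda d, a, b, c: a * np.power(d, -b) + c
composite = lambda d, a, b, e, f, c: a * np.exp(-d / b) + e * np.power(d, -f) + c

def aicc(model, params, dists, mi):
    rss = np.sum((mi - model(dists, *params)) ** 2)
    k, n = len(params) + 1, len(mi)          # +1 for the residual variance
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Toy first-order Markov sequence (repeat the previous symbol with prob. 0.7);
# replace with real syllable or phone labels to reproduce the analysis.
rng = np.random.default_rng(0)
seq = [0]
for _ in range(100_000):
    seq.append(seq[-1] if rng.random() < 0.7 else int(rng.integers(20)))

dists = np.arange(1, 101)
mi = np.array([mutual_information(seq, d) for d in dists])
for name, model, p0 in [("exponential", exp_decay, (1.0, 5.0, 0.0)),
                        ("power law", pow_decay, (1.0, 1.0, 0.0)),
                        ("composite", composite, (0.5, 3.0, 0.5, 1.0, 0.0))]:
    params, _ = curve_fit(model, dists, mi, p0=p0, maxfev=20000)
    print(name, "AICc =", round(aicc(model, params, dists, mi), 1))
```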
Comparisons of model fits were made using the Akaike information criterion (AICc) and the corresponding relative probabilities of each model42 (see "Methods" section) to determine the best-fit model while accounting for the different number of parameters in each model. Consistent with prior work2,3,8,14, the MI decay of sequences generated by the Markov models is best fit by an exponential decay, while the MI decay of the sequences generated from the hierarchical model is best fit by a power-law decay. For sequences generated by the combined hierarchical and Markovian dynamics, MI decay is best explained by the composite model, which linearly combines exponential and power-law decay (relative probability > 0.999). Because separate aspects of natural language can be explained by Markovian and non-Markovian dynamics, we hypothesized the MI decay observed in human language would be best explained by a pattern of MI decay similar to that observed in the composite model which combines both Markovian and hierarchical processes. Likewise, we hypothesized that Markovian dynamics alone would not provide a full explanation of the MI decay in birdsong.

MI decay of sequences generated by three classes of models. a MI decay of sequences generated by the hierarchically organized model proposed by Lin and Tegmark3 (red points) is best fit by a power-law decay (red line). b MI decay of sequences generated by Markov models of Bengalese finch song from Jin et al.31 and Katahira et al.32 (green points) are best fit by an exponential decay model (green lines). c MI decay of sequences generated by a composite model (blue points) that combines the hierarchical model (a) and the exponential model (b) is best fit by a composite model (blue line) with both power-law and exponential decays

In all four phonetically transcribed speech data sets, MI decay as a function of inter-phone distance is best fit by a composite model that combines a power-law and exponential decay (Fig. 3, relative probabilities > 0.999, Supplementary Table 3). To understand the relative contributions of the exponential and power-law components more precisely, we measured the curvature of the fit of the log-transformed MI decay (Fig. 3d). The minimum of the curvature corresponds to a downward elbow in the exponential component of the decay, and the maximum in the curvature corresponds to the point at which the contribution of the power law begins to outweigh that of the exponential. The minimum of the curvature for speech (~3–6 phones for each language or ~0.21–0.31 s) aligns roughly with median word length (3–4 phones) in each language data set (Fig. 3e), while the maximum curvature (~8–13 phones for each language) captures most (~89–99%) of the distribution of word lengths (in phones) in each data set. Thus, the exponential component contributes most strongly at short distances between phones, at the scale of words, while the power law primarily governs longer distances between phones, presumably reflecting relationships between words. The observed exponential decay at intra-word distances agrees with the longstanding consensus that phonological organization is governed by regular (or subregular) grammars with Markovian dynamics11. The emphasis of a power-law decay at inter-word distances, likewise, agrees with the prior observations of hierarchical long-range organization in language12,13.

Mutual information decay in human speech.
a MI decay in human speech for four languages (maroon: German, orange: Italian, blue-green: Japanese, green: English) as a function of the sequential distance between phones. MI decay in each language is best fit by a composite model (colored lines) with exponential and power-law decays, shown as a dashed and dotted gray lines, respectively. b The MI decay (as in a) with the exponential component of the fit model subtracted to show the power-law component of the decay. c The same as in b, but with the power-law component subtracted to show exponential component of the decay. d Curvature of the fitted composite decay model showing the distance (in phones) at which the dominant portion of the decay transitions from exponential to power law. The dashed line is drawn at the minimum curvature for each language (English: 3.37, German: 3.57, Italian: 3.72, Japanese: 5.74) e Histograms showing the distribution of word lengths in phones, fit with a smoothed Gaussian kernel (colored line). The dashed vertical line shows the median word length (German: 3, Italian: 4, Japanese: 3, English: 3) To more closely examine the language-relevant timescales over which Markovian and hierarchical processes operate in speech, we performed shuffling analyses that isolate the information carried within and between words and utterances in the phone data sets. We defined utterances in English and Japanese as periods of continuous speech broken by pauses in the speech stream (Supplementary Fig. 1; median utterance length in Japanese: 19 phones, English: 21 phones; the German and Italian data sets were not transcribed by utterance). To isolate within-sequence (word or utterance) information, we shuffled the order of sequences within a transcript, while preserving the natural order of phones within each sequence. Isolating within-word information in this way yields MI decay in all four languages that is best fit by an exponential model (Supplementary Fig. 2a–d). Isolating within-utterance information in the same way yields MI decay best fit by a composite model (Supplementary Fig. 2i, j), much like the unshuffled data (Fig. 3a). Thus, only Markovian dynamics appear to govern phone-to-phone dependencies within words. Using a similar strategy, we also isolated information between phones at longer timescales by shuffling the order of phones within each word or utterance, while preserving the order of words (or utterances). Removing within-word information in this way yields MI decay in English, Italian, and Japanese that is best fit by a composite model and MI decay in German that is best fit by a power-law model (Supplementary Fig. 2e–h). Removing within-utterance information yields MI decay that is best fit by a power-law model (English; Supplementary Fig. 2k) or a composite model (Japanese; Supplementary Fig. 2l). Thus, phone-to-phone dependencies within utterances can be governed by both Markov and/or hierarchical processes. The strength of any Markovian dynamics between phones in different words or utterances weakens as sequence size increase, from words to utterances, eventually disappearing altogether in two of the four languages examined here. The same processes that govern phone-to-phone dependencies also appear to shape dependencies between other levels of organization in speech. We analyzed MI decay in the different speech data sets between words, parts-of-speech, mora, and syllables (depending on transcription availability in each language, see Supplementary Table 2). 
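A minimal sketch (mine, not the authors' implementation) of the two shuffling controls described above, for a transcript represented as a list of words (or utterances), each a list of phones:

```python
# Sketch (mine) of the two shuffling controls: shuffle the order of units while
# preserving phone order inside each unit, or shuffle phones inside each unit
# while preserving the order of units.
import random

def shuffle_between_units(transcript, rng=random):
    """Shuffle word/utterance order, preserving phone order inside each unit."""
    units = [list(u) for u in transcript]
    rng.shuffle(units)
    return [p for u in units for p in u]

def shuffle_within_units(transcript, rng=random):
    """Shuffle phones inside each word/utterance, preserving the order of units."""
    units = []
    for u in transcript:
        u = list(u)
        rng.shuffle(u)
        units.append(u)
    return [p for u in units for p in u]

# Example: the first control isolates within-unit information; the second removes it.
transcript = [["dh", "ah"], ["k", "ae", "t"], ["s", "ae", "t"]]
print(shuffle_between_units(transcript))
print(shuffle_within_units(transcript))
```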
The MI decay between words was similar to that between phones when within-word order was shuffled. Likewise, the MI decay between parts-of-speech paralleled that between words, and the MI decay between mora and syllables (Supplementary Fig. 3) was similar to that between phones (Fig. 3a). This supports the notion that long-range relationships in language are interrelated at multiple levels of organization6. As with speech, we analyzed the MI decay of birdsong as a function of inter-element distance (using song syllables rather than phones) for the vocalizations of each of the four songbird species. In all four species, a composite model best fit the MI decay across syllable sequences. (Fig. 4, relative probabilities > 0.999; Supplementary Table 4). The relative contributions of the exponential and power-law components mirrored those observed for phones in speech. That is, the exponential component of the decay is stronger at short syllable-distances, while the power-law component of the decay dominates longer-distance syllable relationships. The transition from exponential to power-law decay (minimum curvature of the fit), was much more variable between songbird species than between languages (Bengalese finch: ~24 syllables or 2.64 s, European starlings ~26 syllables or 19.13 s, Cassin's vireo: ~21 syllables or 48.94 s, California thrasher: ~2 syllables or 0.64 s). Mutual information decay in birdsong. a MI decay in song from four songbird species (purple: Bengalese finch, teal: California thrasher, red: Cassin's vireo, blue: European starling) as a function of the sequential distance between syllables. MI decay in each species is best fit by a composite model (colored lines) with exponential and power-law decays, shown as a dashed and dotted gray lines, respectively. b The MI decay (as in a) with the exponential component of the fit model subtracted to show the power-law component of the decay. c The same as in b, but with the power-law component subtracted to show exponential component of the decay. d Curvature of the fitted composite decay model showing the distance (in syllables) at which the dominant portion of the decay transitions from exponential to power law. The dashed line is drawn at the minimum curvature for each species (Bengalese finch: ~24, California thrasher: ~2, Cassin's vireo: ~21, European starling: ~26) e Histograms showing the distribution of bout lengths in syllables, fit with a smoothed Gaussian kernel (colored line). The dashed line shows the median bout length (Bengalese finch: 68, California thrasher: 88, Cassin's vireo: 33, European starling: 42) To examine more closely the timescales over which Markovian and hierarchical processes operate in birdsong, we performed shuffling analyses (similar to those performed on speech data sets) that isolate the information carried within and between song bouts. We defined song bouts operationally by inter-syllable pauses based upon the species (see "Methods"). To isolate within-bout information, we shuffled the order of song bouts within a day, while preserving the natural order of syllables within each bout. This yields a syllable-to-syllable MI decay that is best fit by a composite model in each species (Supplementary Fig. 4a–d), similar to that observed in the unshuffled data (Fig. 4). Thus, both Markovian and hierarchical processes operate at within-bout timescales. 
To confirm this, we also isolated within-bout relationships by computing the MI decay only over syllables pairs that occur within the same bout (as opposed to pairs occurring over an entire day of singing). Similar to the bout shuffling analysis, MI decay in each species was best fit by the composite model (Supplementary Fig. 5). To isolate information between syllables at long timescales, we shuffled the order of syllables within bouts while preserving the order of bouts within a day. Removing within-bout information in this way yields MI decay that is best fit by an exponential decay alone (Supplementary Fig. 4e–h). This contrasts with the results of similar shuffles of phones within words or within utterances in human speech (Supplementary Fig. 2e–i), and suggests that the hierarchical dependencies in birdsong do not extend across song bouts. This may reflect important differences in how hierarchical processes shape the statistics of both communication signals. Alternatively, this may be an uninteresting artifact of the relatively small number of bouts produced by most birds each day (median bouts per day; finch: 117, starling: 13, thrasher: 1, vireo: 3; see the "Discussion" section). To understand how the syntactic organization of song might vary between individual songbirds, even those within the same species, we performed our MI analysis on the data from individuals (Supplementary Figs. 6 and 7). One important source of variability is the size of the data set for each individual. In general, the ability of the composite model to explain additional variance in the MI decay over the exponential model alone correlates positively with the total number of syllables in the data set (Supplementary Fig. 7a; Pearson's correlation between (log) data set size and ΔAICc: r = 0.57, p < 0.001, n = 66). That is, for smaller data sets it is relatively more difficult to detect the hierarchical relationships in syllable-to-syllable dependencies. In general, repeating the within-bout and bout-order shuffling analyses on individual songbirds yields results consistent with analyses on the full species data sets (Supplementary Fig. 7b–d). Even in larger data sets containing thousands of syllables, however, there are a number of individual songbirds for whom the composite decay model does not explain any additional variance beyond the exponential model alone (Supplementary Fig. 7). In a subset of the data where it was possible, we also analyzed MI decay between syllables within a single-day recording session, looking at the longest available recordings in our data set, which were produced by Cassin's vireos and California thrashers and contained over 1000 syllables in some cases (Supplementary Fig. 8). These single-recording sessions show some variability even within individuals, exhibiting decay, that in some cases, appears to be purely dictated by a power law, and in other cases decay is best-fit by the composite model. Collectively, our results reveal a common structure in both the short- and long-range sequential dependencies between vocal elements in birdsong and speech. For short timescale dependencies, information decay is predominantly exponential, indicating sequential structure that is governed largely by Markovian processes. Throughout vocal sequences, however, and especially for long timescale dependencies, a power law, indicative of non-Markovian hierarchical processes, governs information decay in both birdsong and speech. 
These results change our understanding of how speech and birdsong are related. For speech, our observations of non-Markovian processes are not unexpected. For birdsong, they explain a variety of complex sequential dynamics observed in prior studies, including long-range organization20, music-like structure19, renewal processes17,18, and multiple timescales of organization23,29. In addition, the dominance of Markovian dynamics at shorter timescales may explain why such models have seemed appealing in past descriptions of birdsong28,30 and language43 which have relied on relatively small data sets parsed into short bouts (or smaller segments) where the non-Markovian structure is hard to detect (Supplementary Fig. 7). Because the longer-range dependencies in birdsong and speech cannot be fully explained by Markov models, our observations rule out the notion that either birdsong or speech is fully defined by regular grammars28. Instead, we suggest that the organizing principles of birdsong23, speech1, and perhaps sequentially patterned behaviors in general27,44, are better explained by models that incorporate hierarchical organization. The composite structure of the sequential dependencies in these signals helps explain why Hidden Markov Models (HMMs) and Recurrent Neural Networks (RNNs) have been used successfully to model sequential dynamics in speech3,45,46,47,48,49,50 and (to a lesser extent) animal communication29,32,51,52,53,54,55,56,57. HMMs are a class of Markov model which can represent hidden states that underlie observed data, allowing more complex (but still Markovian) sequential dynamics to be captured. HMMs have historically played an important role in speech and language-modeling tasks such as speech synthesis58 and speech recognition50, but have recently been overtaken by RNNs46,47,48,49,59, which model long-range dependencies better than the Markovian assumptions underlying HMMs. A similar shift to incorporate RNNs, or other methods to model hierarchical dynamics, will aid our understanding of at least some nonhuman vocal communication signals. The structure of dependencies between vocal elements in birdsong and human speech are best described by both hierarchical and Markovian processes, but the relative contributions of these processes show some differences across languages and species. In speech, information between phones within words decays exponentially (Supplementary Fig. 2a–d), while the information within utterances follows a combination of exponential and power-law decay (Supplementary Fig. 2i, j). When this within-word and within-utterance structure is removed (Supplementary Fig. 2), a strong power law still governs dependencies between phones, indicating a hierarchical organization that extends over very long timescales. Like speech, information between syllables within bouts of birdsong are best described by a combination of power-law and exponential decay (Supplementary Figs 5, 7a, b). In contrast to speech, however, we did not observe a significant power-law decay beyond that in the bout-level structure (Supplementary Fig. 7c). The absence of a power law governing syllable dependencies between bouts must be confirmed in future work, as our failure to find it may reflect the fact that we had far fewer bouts per analysis window in the birdsong data sets than we had utterances in the speech data sets. If confirmed, however, it would indicate an upper bound for the hierarchical organization of birdsong. 
It may also suggest that a clearer delineation exists between the hierarchical and Markovian processes underlying speech than those underlying birdsong. In speech the exponential component of the decay is overtaken by the power-law decay at timescales < 1 s (0.48–0.72 s; Fig. 3a), whereas in birdsong the exponential component remains prominent for, in some cases, over 2 min (2.43–136.82 s; Fig. 4a). In addition to upward pressures that may push the reach of hierarchical processes to shape longer and longer dependencies in speech, there may also be downward pressures that limit the operational range of Markovian dynamics. In any case, words, utterances, and bouts are only a small subset of the many possible levels of transcription in both signals (e.g., note<syllable< motif<phrase<bout<song; phone<syllable<word<phrase<sentence). Understanding how the component processes that shape sequence statistics are blended and/or separated in different languages and species, and at different levels of organization is a topic for future work. It is also important to note that many individual songbirds produced songs that could be fully captured by Markov processes (Supplementary Fig. 7). In so far as both the Markovian and hierarchical dynamics capture the output of underlying biological production mechanisms, it is tempting to postulate that variation in signal dynamics across individuals and species may reflect the pliability of these underlying mechanisms, and their capacity to serve as a target (in some species) for selective pressure. The songbird species sampled here are only a tiny subset of the many songbirds and nonhuman animals that produce sequentially patterned communication signals, let alone other sequentially organized behaviors and biological processes. It will be important for future work to document variation in hierarchical organization in a phylogenetically controlled manner and in the context of ontogenic experience (i.e., learning). Our sampling of songbird species was based on available large-scale corpora of songbird vocalizations, and most likely does not capture the full diversity of long- and short-range organizational patterns across birdsong and nonhuman communication. The same may hold true for our incomplete sampling of languages. Our observations provide evidence that the sequential dynamics of human speech and birdsong are governed by both Markovian and hierarchical processes. Importantly, this result does not speak to the presence of any specific formal grammar underlying the structure of birdsong, especially as it relates to the various hierarchical grammars thought to support the phrasal syntax of language. It is possible that the mechanisms governing syntax are distinct from those governing other levels of hierarchical organization. One parsimonious conclusion is that the non-Markovian dynamics seen here are epiphenomena of a class of hierarchical processes used to construct complex signals or behaviors from smaller parts, as have been observed in other organisms including fruit flies60,61. These processes might reasonably be co-opted for speech and language production62. Regardless of variability in mechanisms, however, the power-law decay in information content between vocal elements is not unique to human language. It can and does occur in other temporally sequenced vocal communication signals including those that lack a well-defined (perhaps any) hierarchical syntactic organization through which meaning is conveyed. 
Methods

Birdsong data sets

We analyzed song recordings from four different species: European starling (Sturnus vulgaris), Bengalese finch (Lonchura striata domestica), Cassin's vireo (Vireo cassinii), and California thrasher (Toxostoma redivivum). As the four data sets were each hand-segmented or algorithmically segmented by different research groups, the segmentation methodology varies between species. The choice of the acoustic unit used in our analyses is somewhat arbitrary, and the term syllable is used synonymously across all four species in this text; however, the units that are referred to here as syllables for the California thrasher and Cassin's vireo are sometimes referred to as phrases in other work21,22,34,35. Information about the length and diversity of each syllable repertoire is provided in Extended Data Table 1. The Bengalese finch data set33,52 was recorded from sound-isolated individuals and was hand-labeled. The Cassin's vireo21,34,63 and the California thrasher35 data sets were acquired from the Bird-DB40 database of wild recordings, and were recorded from the Sierra Nevada and Santa Monica mountains, respectively. Both data sets are hand-labeled. The European starling song64 was collected from wild-caught male starlings (sexed by morphological characteristics) 1 year of age or older. Starling song was recorded at either 44.1 or 48 kHz over the course of several days to weeks, at various points throughout the year, in sound-isolated chambers. Some European starlings were administered testosterone before audio recordings to increase singing behavior. The methods for annotating the European starling data set are detailed in the "Corpus annotation for European starlings" section. Procedures and methods comply with all relevant ethical regulations for animal testing and research and were carried out in accordance with the guidelines of the Institutional Animal Care and Use Committee at the University of California, San Diego.

Speech corpora

Phone transcripts were taken from four different data sets: the Buckeye corpus of spontaneous conversational American-English speech36, the IMS GECO corpus of spontaneous German speech37, the AsiCA corpus of spontaneous Italian speech of the Calabrian dialect38 (south Italian), and the CSJ corpus of spontaneous Japanese speech39. The American-English speech corpus (Buckeye) consists of conversational speech taken from 40 speakers in Columbus, Ohio. Alongside the recordings, the corpus includes transcripts of the speech and time-aligned segmentation into words and phones. Phonetic alignment was performed in two steps: first using HMM automatic alignment, followed by hand adjustment and relabeling to be consistent with the trained human labeler. The Buckeye data set also transcribes pauses, which are used as the basis for boundaries in an utterance in our analyses. The German speech corpus (GECO) consists of 46 dialogs ~25 min in length each, in which previously unacquainted female subjects are recorded conversing with one another. The GECO corpus is automatically aligned at the phoneme and word level using forced alignment65 from manually generated orthographic transcriptions. A second algorithmic step is then used to segment the data set into syllables65. The Italian speech data (AsiCA) consist of directed, informative, and spontaneous recordings. Only the spontaneous subset of the data set was used for our analysis to remain consistent with the other data sets.
The spontaneous subset of the data set consists of 61 transcripts each lasting an average of 35 min. The AsiCA data set is transcribed using a hybrid orthographic/phonetic transcription method where certain phonetic features were noted with International Phonetic Alphabet labels. The CSJ consists of spontaneous speech from either monologues or conversations which are hand transcribed. We use the core subset of the corpus, both because it is the fully annotated subset of the data set, and because it is similar in size to the other data sets used. The core subset of the corpus contains over 500,000 words annotated for phonemes and several other speech features, and consists primarily of spontaneously spoken monologues. CSJ is also annotated at the level of mora, a syllable-like unit consisting of one or more phonemes and serving as the basis of the 5–7–5 structure of the Haiku66. In addition, CSJ is transcribed at the level of Inter-Pausal Units (IPUs) which are periods of continuous speech surrounded by an at-least 200-ms pause. We refer here to IPUs as utterances to remain consistent with the Buckeye data set. As each of the data sets was transcribed using a different methodology, this disparity between the transcription methods may account for some differences in the observed MI decay. The impact of using different transcription methods are at present unknown. The same disparity is true of the birdsong data sets. Corpus annotation for European starlings The European starling corpus was annotated using a novel unsupervised segmentation and annotation algorithm being maintained at GitHub.com/timsainb/AVGN. An outline of the algorithm is given here. Spectrograms of each song bout were created by taking the absolute value of the one-sided short-time Fourier transformation of the band-pass-filtered waveform. The resulting power was normalized from 0 to 1, log-scaled, and thresholded to remove low-amplitude background noise in each spectrogram. The threshold for each spectrogram was set dynamically. Beginning at a base-power threshold, all power in the spectrogram below that threshold was set to zero. We then estimated the periods of silence in the spectrogram as stretches of spectrogram where the sum of the power over all frequency channels at a given time point was equal to zero. If there were no stretches of silence for at least n seconds (described below), the power threshold was increased and the process was repeated until our criteria for minimum length silence was met or the maximum threshold was exceeded. Song bouts for which the maximum threshold was exceeded in our algorithm were excluded as too noisy. This method also filtered out putative bouts that were composed of nonvocal sounds. Thresholded spectrograms were convolved with a Mel-filter, with 32 equally spaced frequency bands between the high and low cutoffs of the Butterworth bandpass filter, then rescaled between 0 and 255. To segment song bouts into syllables, we computed the spectral envelope of each song spectrogram, as the sum power across the Mel-scaled frequency channels at every time-sample in the spectrogram. We defined syllables operationally as periods of continuous vocalization bracketed by silence. To find syllables, we first marked silences by minima in the spectral envelope and considered the signal between each silence as a putative syllable. We then compared the duration of the putative syllable with an upper bound on the expected syllable length for each species. 
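The envelope-based detection of putative syllables described above reduces to a few array operations. The sketch below (Python) is a simplified illustration assuming a Mel spectrogram that has already been thresholded so that background power is zero; the function name and interface are ours, not those of the AVGN package.

```python
import numpy as np

def putative_syllables(spec):
    """Return (start_frame, end_frame) pairs of continuous vocalization.

    spec: thresholded Mel spectrogram of shape (n_mel_bands, n_frames),
    with sub-threshold (background) power already set to zero.
    """
    envelope = spec.sum(axis=0)     # spectral envelope: power summed over Mel bands
    silent = envelope == 0          # frames with no remaining power
    # frame indices where the silent/voiced state flips
    changes = np.flatnonzero(np.diff(silent.astype(int)) != 0) + 1
    runs = np.split(np.arange(spec.shape[1]), changes)
    # keep only the voiced runs, i.e. putative syllables bracketed by silence
    return [(run[0], run[-1] + 1) for run in runs if run.size and not silent[run[0]]]
```

Putative syllables longer than the species-specific maximum are not accepted as final segments by this step alone; their handling by iterative re-thresholding is described next.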
If the putative syllable was longer than the expected syllable length, it was assumed to be a concatenation of two or more syllables which had not yet been segmented, and the threshold for silence was raised to find the boundary between those syllables. This processes repeated iteratively for each putative syllable until it was either segmented into multiple syllables or a maximum threshold was reached, at which point it was accepted as a long syllable. This dynamic segmentation algorithm is important for capturing certain introductory whistles in the European starling song, which can be several times longer than any other syllable in a bout. Several hyperparameters were used in the segmentation algorithm. The minimum and maximum expected lengths of a syllable in seconds (ebr_min, ebr_max) were set to 0.25/0.75 s. The minimum number of syllables (min_num_sylls) expected in a bout was set to 20. The maximum threshold for silence (max_thresh), relative to the maximum of the spectral envelope was set to 2%. To threshold out overly noisy song, a minimum length of silence threshold was expected in each bout (min_silence_for_spec), set at 0.5 s. The base spectrogram (log) threshold for power considered to be spectral background noise (spec_thresh) was set at 4.0. This threshold value was set dynamically, where the minimum spectral background noise (spec_thresh_min) was set to be 3.5. We reshaped the syllable spectrograms to create uniformly sized inputs for the dimensionality reduction algorithm. Syllable time-axes were resized using spline interpolation to match a sampling rate of 32 frames equaling the upper limit of the length of a syllable for each species (e.g., a starling's longest syllables are ~1 s, so all syllables are reshaped to a sampling rate of 32 samples/s). Syllables that were shorter than the set syllabic rate were zero-padded on either side to equal 32-time samples, and syllables that were longer than the upper bound were resized to 32-time samples to fit into the network. Multiple algorithms exist to transcribe birdsong corpora into discrete elements. Our method is unique in that it does not rely on supervised (experimenter) element labeling, or hand-engineered acoustic features specific to individual species beyond syllable length. The method consists of two steps: (1) project the complex features of each birdsong data set onto a two-dimensional space using the UMAP dimensionality reduction algorithm41 and (2) apply a clustering algorithm to determine element boundaries67. Necessary parameters (e.g. the minimum cluster size) were set based upon visual inspection of the distributions of categories in the two-dimensional latent space. We demonstrate the output of this method in Fig. 1 both on a European starling data set using our automated transcription, and on the Cassin's vireo, California thrasher, and Bengalese finch data sets. The dimensionality reduction procedure was used for the Cassin's vireo, Bengalese finch, and California thrasher data sets, but using hand segmentations rather than algorithmic segmentations of boundaries. The hand labels are also used rather than UMAP labels for these three species. Song bouts Data sets were either made available, segmented into bouts by the authors of each data set, as in the case of the Bengalese finches, or were segmented into bouts based upon inter-syllable gaps of >60 s in the case of Cassin's vireo and California thrashers, and 10 s in the case of European starlings. 
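The two-step labeling procedure described above (UMAP projection followed by density-based clustering) can be sketched as follows. The package names (umap-learn and hdbscan, the reference implementations of refs. 41 and 67) are real, but the parameter values shown are placeholders; as noted above, the actual minimum cluster size was set by visual inspection of the latent space.

```python
import umap      # umap-learn, ref. 41
import hdbscan   # ref. 67

def label_syllables(syllable_spectrograms, min_cluster_size=50):
    """Assign discrete labels to syllables from their flattened spectrograms.

    syllable_spectrograms: array of shape (n_syllables, n_features).
    Returns the 2D embedding and an integer label per syllable (-1 = unclustered).
    """
    # Step 1: project into a two-dimensional latent space
    embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(syllable_spectrograms)
    # Step 2: cluster the projection to obtain syllable categories
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(embedding)
    return embedding, labels
```

As noted above, bouts for the Cassin's vireo, California thrasher, and European starling data were then defined by inter-syllable gaps (>60 s or 10 s); the motivation for these gap thresholds follows.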
These thresholds were set based upon the distribution of inter-syllable gaps for each species (Supplementary Fig. 9). Mutual information estimation We calculated MI using distributions of pairs of syllables (or phones) separated by some distance within the vocal sequence. For example, in the sequence "a → b → c → d → e", where letters denote exemplars of specific syllable or phone categories, the distribution of pairs at a distance of "2" would be ((a, c), (b, d), (c, e)). We calculate MI between these pairs of elements as: $$\hat I(X,Y) = \hat S(X) + \hat S(Y) - \hat S(X,Y),$$ where X is the distribution of single elements (a, b, c) in the example, and Y is the distribution of single elements (c, d, e). \(\hat S(X)\) and \(\hat S(Y)\) are the marginal entropies of the distributions of X and Y, respectively, and \(\hat S(X,Y)\) is the entropy of the joint distribution of X and Y, ((a, c), (b, d), (c, e)). We employ the Grassberger68 method for entropy estimation used by Lin and Tegmark3, which accounts for under-sampling of the true entropy from finite samples: $$\hat S = {\mathrm{log}}_2(N) - \frac{1}{N}\mathop {\sum}\limits_{i = 1}^K {N_i} \psi \left( {N_i} \right),$$ where ψ is the digamma function, K is the number of categories (e.g. syllables or phones) and N is the total number of elements in each distribution. We account for the lower bound of MI by calculating the MI on the same data set, where the syllable sequence order is shuffled: $$\hat I_{{\rm{sh}}}(X,Y) = \hat S\left( {X_{{\rm{sh}}}} \right) + \hat S\left( {Y_{{\rm{sh}}}} \right) - \hat S\left( {X_{{\rm{sh}}},Y_{{\rm{sh}}}} \right),$$ where Xsh and Ysh refer to the same distributions as X and Y described above, taken from shuffled sequences. This shuffling consists of a permutation of each individual sequence being used in the analysis, which differs depending on the type of analysis (e.g. a bout of song in the analysis shown in Supplementary Fig. 5 versus an entire day of song in Fig. 4). Finally, we subtract the estimated lower bound of the MI from the original MI measure: $${\rm{MI}} = \hat I - \hat I_{{\rm{sh}}}$$ Mutual information decay fitting To determine the shape of the MI decay, we fit three decay models to the MI as a function of element distance: an exponential decay model, a power-law decay model, and a composite model of both, termed the composite decay: $${\mathrm{exponential}}\,{\mathrm{decay}} = a \ast e^{ - x \ast b} + c$$ $${\mathrm{power}} - {\mathrm{law}}\,{\mathrm{decay}} = a \ast x^b + c$$ $${\mathrm{composite}}\,{\mathrm{decay}} = a \ast e^{ - x \ast b} + c \ast x^d + f$$ where x represents the inter-element distance between units (e.g., phones or syllables). To fit the model on a logarithmic scale, we computed the residuals between the log of the MI and the log of the model's estimate of the MI. Because our distances were necessarily sampled linearly as integers, we scaled the residuals during fitting by the log of the distance between elements. This was done to emphasize fitting the decay in log-scale. The models were fit using the lmfit Python package69. We used the Akaike information criterion (AIC) to compare the relative quality of the exponential, composite, and power-law models. AIC takes into account goodness-of-fit and model simplicity, by penalizing larger numbers of parameters in each model (3 for the exponential and power-law models, 5 for the composite model). 
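For concreteness, the three decay models and a basic lmfit fit can be written as below. This is a simplified sketch: it fits the MI directly rather than log-scaling and weighting the residuals by log-distance as described above, and the initial parameter guesses are arbitrary rather than those used in the published analysis.

```python
import numpy as np
from lmfit import Model

def exponential_decay(x, a, b, c):
    return a * np.exp(-x * b) + c

def power_law_decay(x, a, b, c):
    return a * np.power(x, b) + c

def composite_decay(x, a, b, c, d, f):
    return a * np.exp(-x * b) + c * np.power(x, d) + f

def fit_decay_models(distances, mi):
    """Fit all three decay models and return the lmfit results keyed by model name."""
    fits = {}
    fits["exponential"] = Model(exponential_decay).fit(mi, x=distances, a=1.0, b=0.1, c=0.01)
    fits["power_law"] = Model(power_law_decay).fit(mi, x=distances, a=1.0, b=-0.5, c=0.01)
    fits["composite"] = Model(composite_decay).fit(
        mi, x=distances, a=1.0, b=0.1, c=1.0, d=-0.5, f=0.01)
    return fits
```

Each lmfit result carries an AIC value; the small-sample corrected criterion (AICc) used to compare the three fits is described next.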
All comparisons use the AICc42 estimator, which imposes an additional penalty (beyond the penalty imposed by AIC) to correct for higher-parameter models overfitting on smaller data sets. We choose the best-fit model for the MI decay of each bird's song and the human speech phone data sets using the difference in AICc between models42. In the text, we report the relative probability of a given model (in comparison to other models), which is computed directly from the AICc42 (see Supplementary Information). We report the results using log-transformed data in the main text (Extended Data Tables 3 and 4). To determine a reasonable range of element-to-element distances for all the birdsong and speech data sets, we analyzed the relative goodness-of-fit (AICc) and proportion of variance explained (r2) for each model on decays over distances ranging from 15 to 1000 phones/syllables apart. The composite model provides the best fit for distances up to at least 1000 phones in each language (Supplementary Fig. 10) and at least the first 100 syllables for all songbird species (Supplementary Fig. 11). To keep analyses consistent across languages and songbird species we report on analyses using distances up to 100 elements (syllables in birdsong and phones in speech). Figures 3 and 4 show a longer range of decay in each language and songbird species, plotted up to element distances where the coefficient of determination (r2) remained within 99.9% of its value when fit to 100-element distances. Curvature of decay fits We calculated the curvature for those signals best fit by a composite model in log space (log-distance and log-MI). $$\kappa = \frac{{|y\prime\prime |}}{{\left( {1 + y^{\prime 2}} \right)^{\frac{3}{2}}}}$$ where y is the log-scaled MI. We then found the local minima and the following local maxima of the curvature function, which corresponds to the "knee" of the exponential portion of the decay function, and the transition between a primary contribution on the exponential decay to a primary contribution of the power-law decay. Sequence analyses Our primary analysis was performed on sequences of syllables that were produced within the same day to allow for both within-bout and between-bout dynamics to be present. To do so, we considered all syllables produced within the same day as a single sequence and computed MI over pairs of syllables that crossed bouts, regardless of the delay in time between the pairs of syllables. In addition to the primary within-day analysis, we performed three controls to observe whether the observed MI decay was due purely to within-bout, or between-bout organization. The first control was to compute the MI between only syllables that occur within the same bout (as defined by a 10 s gap between syllables). Similar to the primary analysis (Fig. 4), the best-fit model for within-bout MI decay is the composite model (Supplementary Figs 7b and 5). To more directly dissociate within-bout and between-bout syllable dependencies in songbirds, we computed the MI decay after removing either within- or between-bout structure. To do this, we shuffled the ordering of bouts within a day while retaining the order of syllables within each bout (Supplementary Fig. 7c), or shuffled the order of syllables within each bout while retaining the ordering of bouts (Supplementary Fig. 7d). Analyses were performed on individual songbirds with at least 150 syllables in their data set (Supplementary Fig. 7), and on the full data set of all birds in a given species. 
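The shuffle-corrected MI estimator used throughout these analyses (the Grassberger-corrected estimator defined in the "Mutual information estimation" section above) can be sketched as follows. Entropies are computed in nats and converted to bits for unit consistency; the function names are ours.

```python
import numpy as np
from scipy.special import digamma

def grassberger_entropy_bits(labels):
    """Grassberger (ref. 68) entropy estimate of a sequence of category labels, in bits."""
    counts = np.unique(np.asarray(labels), return_counts=True)[1]
    n = counts.sum()
    s_nats = np.log(n) - np.sum(counts * digamma(counts)) / n
    return s_nats / np.log(2)

def shuffle_corrected_mi(seq, distance, rng=None):
    """MI between elements `distance` apart, minus the shuffled lower-bound estimate."""
    rng = rng or np.random.default_rng(0)

    def raw_mi(s):
        x, y = s[:-distance], s[distance:]
        joint = [f"{a}|{b}" for a, b in zip(x, y)]
        return (grassberger_entropy_bits(x) + grassberger_entropy_bits(y)
                - grassberger_entropy_bits(joint))

    seq = list(seq)
    return raw_mi(seq) - raw_mi(list(rng.permutation(seq)))
```

The same estimator is applied unchanged to the speech-corpus shuffling controls described next.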
We performed a similar shuffling analysis on the speech data sets (Supplementary Fig. 2). For speech, we shuffled the order of phones within words (while preserving word order) to remove within-word information, and shuffled word order (while preserving within-word phone ordering) to remove between-word information. We used a similar shuffling strategy at the utterance level to remove within- and between-utterance information. The speech data sets were not broken down into individuals due to limitations in data set size at the individual level, and because language is clearly shared between individuals in each speech data set. To address the possibility that repeating syllables might account for long-range order, we performed separate analyses on both the original syllable sequences (as produced by the bird) and compressed sequences in which all sequentially repeated syllables were counted as a single syllable. The original and compressed sequences show similar MI decay shapes (Supplementary Fig. 12). We also assessed how our results relate to the timescale of segmentation and discretization of syllables or phones by computing the decay in MI between discretized spectrograms of speech and birdsong at different temporal resolutions (Supplementary Fig. 13) for a subset of the data. Long-range relationships are present throughout both speech and birdsong regardless of segmentation, but the pattern of MI decay does not follow the hypothesized decay models as closely as that observed when the signals are discretized to phones or syllables, supporting the nonarbitrariness of these low-level production units. Computational models We compared the MI decay of sequences produced by three different artificial grammars: (1) Markov models used to describe the song of two Bengalese finches31,32, (2) the hierarchical model proposed by Lin and Tegmark3, and (3) a model composed of both the hierarchical model advocated by Lin and Tegmark and a Markov model. While these models do not capture the full array of possible sequential models and their signatures in MI decay, they capture well the predictions made based upon the discussed literature2,3,6,7,14 and provide an illustration of what would be expected given our competing hypotheses. With each model, we generate corpora of sequences, then compute the MI decay of the sequences using the same methods as with the birdsong and speech data. We also fit a power-law, exponential, and composite model to the MI decay, in the same manner (Fig. 2). A Markov model is a sequential model in which the probability of transitioning to a state (xn) is dependent solely on the previous state (xn−1). Sequences are generated from a Markov model by sampling an initial state, x0, from the set of possible states S. x0 is then followed by a new state from the probability distribution P(xn|xn−1). Markov models can thus be captured by a matrix M of conditional probabilities Mab = P(xn = a|xn−1 = b), where a ∈ S and b ∈ S. In the example (Fig. 2b) we produce a set of 65,536 (2^16) sequences from Markov models describing two Bengalese finches31,32. The hierarchical model from Lin and Tegmark3 samples sequences recursively in a similar manner to how the Markov model samples sequences sequentially. Specifically, a state x0 is drawn probabilistically from the set of possible states S as in the Markov model. 
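A minimal generator for the Markov component just described is sketched below. The transition matrix here is arbitrary and illustrative, not one of the fitted Bengalese finch models of refs. 31,32, and the uniform choice of initial state is a simplification.

```python
import numpy as np

def sample_markov_sequence(transition, length, rng=None):
    """Sample one sequence from a first-order Markov model.

    transition[i, j] = P(next state = j | current state = i); rows sum to 1.
    """
    rng = rng or np.random.default_rng(0)
    n_states = transition.shape[0]
    seq = [int(rng.integers(n_states))]   # uniform initial state (illustrative)
    for _ in range(length - 1):
        seq.append(int(rng.choice(n_states, p=transition[seq[-1]])))
    return seq

# example: a 3-state chain with a strong tendency to repeat the current state
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
example_sequence = sample_markov_sequence(T, length=20)
```

The hierarchical model, whose recursive replacement rule is described next, reuses the same conditional-probability machinery but replaces states recursively rather than appending them sequentially.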
The initial state x0 is then replaced (rather than followed by, as in the Markov model) by q new states (rather than a single state as in the Markov model), which are similarly sampled probabilistically as P(xi|x0), where xi is any of the new q states replacing x0. The hierarchical grammar can therefore similarly be captured by a conditional probability matrix Mab = P(xl+1 = a|xl = b). The difference between the two models is that the sampled states are replaced recursively in the hierarchical model, whereas in the Markov model they are appended sequentially to the initial state. In the example (Fig. 2a) we produce a set of 1000 sequences from a model parameterized with an alphabet of 5 states recursively subsampled 12 times, with 2 states replacing the initial state at each subsampling (generating sequences of length 4096). The final model combines both the Markov model and the hierarchical model by using Markov-generated sequences as the end states of the hierarchical model. Specifically, the combined model is generated in a three-step process: (1) A Markov model is used to generate sequences equal to the number of possible states of the hierarchical model (S). (2) The combined model is sampled in the exact same manner as the hierarchical model to produce sequences. (3) The end states of the hierarchical model are replaced with their corresponding Markov-generated states from (1). In the example (Fig. 2c) we produce sequences in the same manner as the hierarchical model. Each state of these sequences is then replaced with sequences between 2 and 5 states long generated by a Markov model with an alphabet of 25 states. Neither the hierarchical model nor the combined model is meant to exhaustively sample the potential ways in which hierarchical signals can be formed or combined with Markovian processes. Instead, both models are meant to illustrate the theory proposed by prior work and to act as a baseline for comparison for our analyses on real-world signals. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. The European starling song data set is available on Zenodo64. The availability of the California thrasher, Cassin's vireo, and Bengalese finch data sets are at the discretion of the corresponding laboratories and are currently publicly hosted at Bird-DB40 and FigShare33,63. The Buckeye (English)36, GECO (German)37, and AsiCA38 (Italian) speech corpora are currently available for research purposes through their respective authors. The CSJ39 (Japanese) corpus is currently available from the authors for a fee upon successful application. A reporting summary for this article is available as a Supplementary Information file. The software created for all of the analyses performed in this article are available at https://github.com/timsainb/ParallelsBirdsongLanguagePaper. The tools used for building the European starling corpus are available at https://github.com/timsainb/AVGN. Chomsky, N. Three models for the description of language. IRE Trans. Inf. Theory 2, 113–124 (1956). Li, W. Mutual information functions versus correlation functions. J. Stat. Phys. 60, 823–837 (1990). ADS MathSciNet Article Google Scholar Lin, H. W. & Tegmark, M. Critical behavior in physics and probabilistic formal languages. Entropy 19, 299 (2017). ADS Article Google Scholar Frank, S. L., Bod, R. & Christiansen, M. H. How hierarchical is language use? Proc. R. Soc. Lond. B: Biol. Sci. 279, 4522–4531 (2012). Chomsky, N. 
Syntactic Structures (Mouton, The Hague, 1957). Altmann, E. G., Cristadoro, G. & Degli Esposti, M. On the origin of long-range correlations in texts. Proc. Natl Acad. Sci. USA 109, 11582–11587 (2012). ADS CAS Article Google Scholar Ebeling, W. & Neiman, A. Long-range correlations between letters and sentences in texts. Phys. A Stat. Mech. Appl. 215, 233–241 (1995). Li, W. & Kaneko, K. Long-range correlation and partial 1/fα spectrum in a noncoding DNA sequence. EPL (Europhys. Lett.) 17, 655 (1992). Levitin, D. J., Chordia, P. & Menon, V. Musical rhythm spectra from Bach to Joplin obey a 1/f power law. Proc. Natl Acad. Sci. USA 109, 3716–3720 (2012). Peng, C.-K. et al. Long-range correlations in nucleotide sequences. Nature 356, 168 (1992). Kaplan, R. M. & Kay, M. Regular models of phonological rule systems. Comput. Linguist. 20, 331–378 (1994). Heinz, J. & Idsardi, W. Sentence and word complexity. Science 333, 295–297 (2011). ADS MathSciNet CAS Article Google Scholar Heinz, J. & Idsardi, W. What complexity differences reveal about domains in language. Top. Cogn. Sci. 5, 111–131 (2013). Li, W. Power spectra of regular languages and cellular automata. Complex Syst. 1, 107–130 (1987). Hauser, M. D., Chomsky, N. & Fitch, W. T. The faculty of language: what is it, who has it, and how did it evolve? Science 298, 1569–1579 (2002). Beckers, G. J., Bolhuis, J. J., Okanoya, K. & Berwick, R. C. Birdsong neurolinguistics: songbird context-free grammar claim is premature. Neuroreport 23, 139–145 (2012). Fujimoto, H., Hasegawa, T. & Watanabe, D. Neural coding of syntactic structure in learned vocalizations in the songbird. J. Neurosci. 31, 10023–10033 (2011). Kershenbaum, A. et al. Animal vocal sequences: not the Markov chains we thought they were. Proc. R. Soc. Lond. B Biol. Sci. 281, 20141370 (2014). Roeske, T. C., Kelty-Stephen, D. & Wallot, S. Multifractal analysis reveals music-like dynamic structure in songbird rhythms. Sci. Rep. 8, 4570 (2018). Markowitz, J. E., Ivie, E., Kligler, L. & Gardner, T. J. Long-range order in canary song. PLoS Comput. Biol. 9, e1003052 (2013). Hedley, R. W. Composition and sequential organization of song repertoires in Cassin's vireo (Vireo cassinii). J. Ornithol. 157, 13–22 (2016). Sasahara, K., Cody, M. L., Cohen, D. & Taylor, C. E. Structural design principles of complex bird songs: a network-based approach. PLoS One 7, e44436 (2012). Todt, D. & Hultsch, H. How songbirds deal with large amounts of serial information: retrieval rules suggest a hierarchical song memory. Biol. Cybern. 79, 487–500 (1998). Suzuki, R., Buck, J. R. & Tyack, P. L. Information entropy of humpback whale songs. J. Acoust. Soc. Am. 119, 1849–1866 (2006). Jiang, X. et al. Production of supra-regular spatial sequences by macaque monkeys. Curr. Biol. 28, 1851–1859 (2018). Bruno, J. H. & Tchernichovski, O. Regularities in zebra finch song beyond the repeated motif. Behav. Process. 163, 53–59 (2017). Lashley, K. S. The Problem of Serial Order in Behavior. In Cerebral mechanisms in behavior; the Hixon Symposium (Jeffress, L. A., ed.) 112–146 (Wiley, Oxford, England, 1951). https://psycnet.apa.org/record/1952-04498-003. Berwick, R. C., Okanoya, K., Beckers, G. J. & Bolhuis, J. J. Songs to syntax: the linguistics of birdsong. Trends Cogn. Sci. 15, 113–121 (2011). Cohen, Y. et al. Hidden neural states underlie canary song syntax. bioRxiv 561761 (2019). Gentner, T. Q. & Hulse, S. H. Perceptual mechanisms for individual vocal recognition in European starlings Sturnus vulgaris. Anim. Behav. 
56, 579–594 (1998). Jin, D. Z. & Kozhevnikov, A. A. A compact statistical model of the song syntax in Bengalese finch. PLoS Comput. Biol. 7, e1001108 (2011). Katahira, K., Suzuki, K., Okanoya, K. & Okada, M. Complex sequencing rules of birdsong can be explained by simple hidden Markov processes. PLoS One 6, e24516 (2011). Nicholson, D., Queen, J. E. & Sober, S. J. Bengalese finch song repository, https://figshare.com/articles/Bengalese_Finch_song_repository/4805749 (2017). Hedley, R. W. Complexity, predictability and time homogeneity of syntax in the songs of Cassin's vireo (Vireo cassinii). PLoS One 11, e0150822 (2016). Cody, M. L., Stabler, E., Sánchez Castellanos, H. M. & Taylor, C. E. Structure, syntax and "mall-world" organization in the complex songs of California thrashers (Toxostoma redivivum). Bioacoustics 25, 41–54 (2016). Pitt, M. A. et al. Buckeye Corpus of Conversational Speech. (Department of Psychology, Ohio State University, 2007). https://buckeyecorpus.osu.edu/php/faq.php. Schweitzer, A. & Lewandowski, N. Convergence of articulation rate in spontaneous speech. In Proc. 14th Annual Conference of the International Speech Communication Association, 525–529 (Interspeech, Lyon, 2013). Krefeld, T. & Lucke, S. ASICA-online: Profilo di un nuovo atlante sintattico della Calabria. Rivista di Studi Italiani. Vol. 1, 169–211 (Toronto, Canada, 2008). http://www.rivistadistudiitaliani.it/articolo.php?id=1391. Maekawa, K. Corpus of Spontaneous Japanese: its design and evaluation. In ISCA & IEEE Workshop on Spontaneous Speech Processing and Recognition (2003). Arriaga, J. G., Cody, M. L., Vallejo, E. E. & Taylor, C. E. Bird-DB: a database for annotated bird song sequences. Ecol. Inform. 27, 21–25 (2015). McInnes, L. & Healy, J. UMAP: uniform manifold approximation and projection for dimension reduction. Preprint at https://arxiv.org/abs/1802.03426 (2018). Burnham, K. P., Anderson, D. R. & Huyvaert, K. P. AIC model selection and multimodel inference in behavioral ecology: some background, observations, and comparisons. Behav. Ecol. Sociobiol. 65, 23–35 (2011). Jurafsky, D. & Martin, J.H. (eds) N-Grams in Speech and Language Processing (2nd Edition). 83–122 (Prentice-Hall, Inc., Boston, 2009). https://dl.acm.org/citation.cfm?id=1214993. Dawkins, R. Hierarchical Organisation: A Candidate Principle for Ethology in Growing points in ethology (Bateson, P.P.G. & Hinde, R.A., eds) 7–54 (Cambridge University Press, Oxford, England, 1976). https://psycnet.apa.org/record/1976-19904-012. Bourlard, H. A. & Morgan, N. Connectionist Speech Recognition: A Hybrid Approach, Vol. 247 (Springer Science & Business Media, Boston, 2012). https://www.springer.com/gp/book/9780792393962. Schmidhuber, J. Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015). Graves, A., Mohamed, A.-R. & Hinton, G. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 6645–6649 (2013). https://www.nature.com/articles/nature14539. Oord, A. v. d. et al. Wavenet: a generative model for raw audio. Preprint at https://arxiv.org/abs/1609.03499 (2016). Shen, J. et al. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, 4779–4783 (2018). Rabiner, L. R. A tutorial on hidden markov models and selected applications in speech recognition. Proc. IEEE 77, 257–286 (1989). Arneodo, E. M., Chen, S., Gilja, V. & Gentner, T. Q. 
A neural decoder for learned vocal behavior. bioRxiv 193987 (2017). Nicholson, D. Comparison of machine learning methods applied to birdsong element classification. In Proc. of the 15th Python in Science Conference, 57–61 (Austin, TX, 2016). Katahira, K., Suzuki, K., Kagawa, H. & Okanoya, K. A simple explanation for the evolution of complex song syntax in bengalese finches. Biol. Lett. 9, 20130842 (2013). Mellinger, D. K. & Clark, C. W. Recognizing transient low-frequency whale sounds by spectrogram correlation. J. Acoust. Soc. Am. 107, 3518–3529 (2000). Reby, D., André-Obrecht, R., Galinier, A., Farinas, J. & Cargnelutti, B. Cepstral coefficients and hidden markov models reveal idiosyncratic voice characteristics in red deer (Cervus elaphus) stags. J. Acoust. Soc. Am. 120, 4080–4089 (2006). Weninger, F. & Schuller, B. Audio recognition in the wild: Static and dynamic classification on a real-world database of animal vocalizations. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing, 337–340 (2011). Wiltschko, A. B. et al. Mapping sub-second structure in mouse behavior. Neuron 88, 1121–1135 (2015). Tokuda, K., Yoshimura, T., Masuko, T., Kobayashi, T. & Kitamura, T. Speech parameter generation algorithms for hmm-based speech synthesis. In 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. Vol. 3, 1315–1318 (2000). Sak, H., Senior, A. & Beaufays, F. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In 15th Annual Conference of the International Speech Communication Association, 338–342 (Red Hook, NY, 2014). Berman, G. J., Bialek, W. & Shaevitz, J. W. Predictability and hierarchy in Drosophila behavior. Proc. Natl Acad. Sci. USA 113, 11943–11948 (2016). Dawkins, M. & Dawkins, R. Hierachical organization and postural facilitation: rules for grooming in flies. Anim. Behav. 24, 739–755 (1976). MacDonald, M. C. How language production shapes language form and comprehension. Front. Psychol. 4, 226 (2013). Hedley, R. Data used in PLoS One article "Complexity, Predictability and Time Homogeneity of Syntax in the Songs of Cassin's Vireo (Vireo cassini)" by Hedley (2016) (2016), https://figshare.com/articles/Data_used_in_PLoS_One_article_Complexity_Predictability_and_Time_Homogeneity_of_Syntax_in_the_Songs_of_Cassin_s_Vireo_Vireo_cassini_by_Hedley_2016_/3081814. Arneodo, Z., Sainburg, T., Jeanne, J. & Gentner, T. An acoustically isolated European starling song library, https://doi.org/10.5281/zenodo.3237218 (2019). Rapp, S. Automatic phonemic transcription and linguistic annotation from known text with Hidden Markov models—an aligner for German. In Proc. of ELSNET Goes East and IMACS Workshop "Integration of Language and Speech in Academia and Industry" ) (Moscow, Russia, 1995). Otake, T., Hatano, G., Cutler, A. & Mehler, J. Mora or syllable? Speech segmentation in Japanese. J. Mem. Lang. 32, 258–278 (1993). McInnes, L., Healy, J. & Astels, S. hdbscan: Hierarchical density based clustering. J. Open Source Softw. 2, 10.21105%2Fjoss.00205 (2017). Grassberger, P. Entropy estimates from insufficient samplings. Preprint at https://arxiv.org/abs/physics/0307138 (2003). Newville, M. et al. Lmfit: non-linear least-square minimization and curve-fitting for Python. zenodo https://doi.org/10.5281/zenodo.11813 (2016). We thank David Nicholson, Richard Hedley, Martin Cody, Zeke Arneodo, and James Jeanne for making available their birdsong recordings to us. 
Work supported by NSF Graduate Research Fellowship 2017216247 to T.S., and NIH R56DC016408 to T.Q.G. Department of Psychology, University of California, UC San Diego, La Jolla, CA, 92093, USA Tim Sainburg & Timothy Q. Gentner Center for Academic Research & Training in Anthropogeny, UC San Diego, La Jolla, CA, 92093, USA Tim Sainburg Neurosciences Graduate Program, University of California, UC San Diego, La Jolla, CA, 92093, USA Brad Theilman, Marvin Thielk & Timothy Q. Gentner Neurobiology Section, Division of Biological Sciences, UC San Diego, La Jolla, CA, 92093, USA Timothy Q. Gentner Kavli Institute for Brain and Mind, La Jolla, CA, 92093, USA T.S. and T.Q.G. devised the project and the main conceptual ideas. T.S. carried out all experiments and data analyses. T.S. and T.Q.G. wrote the paper. T.S., B.T., M.T., and T.Q.G. were involved in planning the experiments, and contributed to the final version of the paper. Correspondence to Timothy Q. Gentner. Peer review information: Nature Communications thanks W. Tecumseh Fitch and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Sainburg, T., Theilman, B., Thielk, M. et al. Parallels in the sequential organization of birdsong and human speech. Nat Commun 10, 3636 (2019). https://doi.org/10.1038/s41467-019-11605-y
Energy-entropy prediction of octanol–water logP of SAMPL7 N-acyl sulfonamide bioisosters Fabio Falcioni1,2, Jas Kalayan1,2 & Richard H. Henchman ORCID: orcid.org/0000-0002-0461-66251,2 nAff3 Journal of Computer-Aided Molecular Design volume 35, pages 831–840 (2021)Cite this article Partition coefficients quantify a molecule's distribution between two immiscible liquid phases. While there are many methods to compute them, there is not yet a method based on the free energy of each system in terms of energy and entropy, where entropy depends on the probability distribution of all quantum states of the system. Here we test a method in this class called Energy Entropy Multiscale Cell Correlation (EE-MCC) for the calculation of octanol–water logP values for 22 N-acyl sulfonamides in the SAMPL7 Physical Properties Challenge (Statistical Assessment of the Modelling of Proteins and Ligands). EE-MCC logP values have a mean error of 1.8 logP units versus experiment and a standard error of the mean of 1.0 logP units for three separate calculations. These errors are primarily due to getting sufficiently converged energies to give accurate differences of large numbers, particularly for the large-molecule solvent octanol. However, this is also an issue for entropy, and approximations in the force field and MCC theory also contribute to the error. Unique to MCC is that it explains the entropy contributions over all the degrees of freedom of all molecules in the system. A gain in orientational entropy of water is the main favourable entropic contribution, supported by small gains in solute vibrational and orientational entropy but offset by unfavourable changes in the orientational entropy of octanol, the vibrational entropy of both solvents, and the positional and conformational entropy of the solute. The partition coefficient P is a widely-used quantity to understand the transport and distribution of chemicals in biological, industrial and environmental systems [1, 2]. It expresses the relative ability of a solute molecule to dissolve in two different solvents, which are immiscible and in contact at an interface. The base-10 quantity logP is directly related to the Gibbs free energy of transfer \(\Delta G^{\text {transfer}}_{\text{X(B,A)}}\) from solvent A to solvent B using $$\begin{aligned} -{\rm{logP}}\ln (10) k_{\rm{B}} T= & {} \Delta G^{{\text{transfer}}}_{\text{X(B,A)}} \nonumber \\= & {} \Delta G_{\rm{X(B)}}^{\text{solvation}} - \Delta G_{\rm{X(A)}}^{\text{solvation}} \end{aligned}$$ where ln(10) is a base conversion factor, \(k_{\rm{B}}\) Boltzmann's constant, and T temperature. Equation 1 makes clear that logP can also be thought of as a relative solvation free energy of solute X in solvent B, \(\Delta G_{\rm{X(B)}}^{\text{solvation}}\), minus that in solvent A, \(\Delta G_{\rm{X(A)}}^{\text{solvation}}\). Values of logP are relatively straightforward to measure by the "Shake-Flask" method, followed by slow-stirring and reverse phase High Performance Liquid Chromatography [3, 4], and recently, by more accurate methods such as potentiometric titration [5]. Nonetheless, they take time and material to measure, often give highly variable results [6] and provide little insight into values obtained. Thus, there is a valuable role to play for predictive methods of logP which can save time, lower costs, and facilitate the more rational development of new chemicals, especially for the pharmaceutical industry with its long and expensive development times. 
There are now a wide range of methods to predict logP, building off methods to calculate solvation free energy. Firstly, there are many knowledge-based [7, 8] and machine-learning methods [9, 10] which draw on the large amount of logP data available in literature. Many continuum solvent models have been developed in combination with electronic-structure methods to calculate solvation free energies, whose difference gives \(\Delta G^{{\text{transfer}}}_{\text{X(B,A)}}\). The most common are the Polarizable Continuum Model (PCM), the series of Solvation Models (SMx), Solvation Model based on Density (SMD), and Conductor-like Screening Model (COSMO) [11, 12]. The most accurate are the COSMO-RS and COSMO-SAC methods, which have the further advantage of being applicable to many types of molecules and solvents [12,13,14], such as the variant COSMOmic to micelles and lipid bilayers [15]. Molecular-mechanics methods, which are faster than electronic-structure methods but more approximate, are better suited to calculate logP in explicit solvent. They consider ensembles of configurations generated in molecular dynamics (MD) simulations, and require the use of a force-field, such as GAFF, GAFF-DC, OPLS-AA or CHARMM, which affects the value of logP [16, 17] but mostly have no other parameters. They are most commonly applied in the alchemical formulation, yielding \(\Delta G^{{\text{transfer}}}_{\text{X(B,A)}}\) from the solvation free energies for decoupling the solute from each solvent. Methods such as exponential averaging, Thermodynamic Integration (TI) and the Bennett Acceptance Ratio (BAR) can all yield accurate results [16,17,18,19,20], even with a coarse-grain force field [21]. Less commonly implemented are formulations that yield the free energy of each system directly, whose difference gives \(\Delta G^{{\text{transfer}}}_{\text{X(B,A)}}\). Two widely used methods in biomolecular studies are the Molecular Mechanics-Poisson Boltzmann Surface Area (MM-PBSA) and its Generalized-Born variant (MM-GBSA) [22, 23], but they have not been used to calculate logP and are not as accurate as electronic-structure methods to reproduce solvation free energies in a range of solvents. More successful approaches to calculate logP from free-energy directly have been the 3D-Reference Interaction Site method (3D-RISM) [24] or grid-based inhomogeneous solvation theory (GIST) [25]. These methods have the advantage of being general for any kind of solvent free energy but still only account for the solvation contribution. We have developed a general method to evaluate free energy directly from an MD simulation for all molecules in the system, both solvent and solute alike, and over a large range of length scales [26,27,28]. Called Energy-Entropy Multiscale Cell Correlation (EE-MCC), it takes the energy from the simulation energy and evaluates the entropy over a series of units at multiple length scales, either correlated if covalently bonded, or in a mean-field cell if otherwise. Entropy is combined with energy to give free energy. Notably, entropy is calculated from the probability distribution over all quantum states of the system relating to all degrees of freedom of all molecules. MCC has been progressively developed for liquids [26, 27, 29], solutions [30,31,32,33], chemical reactions [34], and proteins [28, 35, 36]. As well as being general, MCC has the advantage of providing a detailed breakdown of entropy over all degrees of freedom of the system. 
Here we test MCC to calculate logP and understand the values obtained. We test it on a series of 22 N-acylsulfonamide bioisosteric compounds, shown in Fig. 1, in the "Statistical Assessment of the Modelling of Proteins and Ligands" (SAMPL) Physical Properties Blind Challenge. Structures of the 22 N-acylsulfonamides bioisosters in the SAMPL7 Physical Properties Challenge [37] As a means to encourage, promote and compare different methods to predict quantities relevant to drug design, such as logP, SAMPL is a series of blind challenges [13, 38,39,40,41,42] whereby the experimental data is made publicly available at the end of the submission period. In SAMPL5 which had the first Physical Properties Blind Challenge [38], the cyclohexane/water distribution coefficient (logD) was challenging to compute for most participants, given that logD depends on logP, protonation state and associated counter-ions. The following SAMPL6 challenges therefore separated the prediction into pKa and logP, which combine to give logD. The top-performing classes of methods were quantum-mechanics, empirical and mixed approaches, while molecular-mechanics results were more variable, given the large differences in simulation protocols. SAMPL7 follows a similar protocol to SAMPL6, and here we will only seek to calculate logP values. LogP calculation The water-octanol partition coefficient logP of solute X is defined in Equation 1 in terms of the transfer Gibbs free energy \(\Delta G^{\text {transfer}}_{\text{X(oct,wat)}}\) of X from water to octanol. In the EE method, \(\Delta G^{\text {transfer}}_{\text{X(oct,wat)}}\) is evaluated as the difference of the Gibbs free energies of each system $$\begin{aligned} \Delta G^{\text {transfer}}_{\text{X(oct,wat)}} = (G_{\text{X(oct)}} + G_{\text{wat}}) - ( G_{\text{oct}} + G_{\text{X(aq)}} ) \end{aligned}$$ where X(oct) and X(aq) denote X in octanol or water, and wat and oct denote the respective pure liquid. The Gibbs free energy of each system is calculated using \(G = H - TS\) where H is the enthalpy, S the entropy and T temperature. Energy is calculated directly from the potential and kinetic energies in a molecular dynamics (MD) simulation, ignoring the small pressure-volume term at ambient pressures that in any case almost entirely cancels in the transfer process. Entropy is calculated using MCC [26, 28, 43], explained next. Multiscale cell correlation (MCC) Entropy is calculated from MD simulations in a multiscale fashion in terms of cells of correlated units. The total entropy is calculated as a sum of components \(S_{ab}^{cd}\) using $$\begin{aligned} S = \sum _a^\text {molecule} \sum _b^\text {level} \sum _c^\text {motion} \sum _d^\text {minima} S_{ab}^{cd} \end{aligned}$$ In this equation, S is calculated for each kind of molecule a, at different length scales b of each molecule, in terms of translational or rotational motion c over all units at that level, and in terms of vibration or topography d for each type of motion. Molecular entropy The relevant molecules for a logP calculation are the solutes and the solvents water and octanol. We only consider pure solvents here, neglecting the small dissolution of water in octanol that occurs in experiment. In the solutions only the molecules in the first solvation shell are considered because the entropies of the remaining solvent molecules change little upon solute transfer and because they are not well converged, being over so many molecules. 
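The free-energy bookkeeping of Eqs. 1–2 above, combined with G = H − TS, amounts to a few lines of arithmetic. In the sketch below (Python), enthalpies are assumed to be in kcal mol⁻¹ and entropies in J mol⁻¹ K⁻¹, and the molar gas constant stands in for k_B since all quantities are per mole; the function names are ours.

```python
import numpy as np

R_KCAL = 1.987204e-3   # molar gas constant, kcal mol^-1 K^-1 (k_B in molar units)

def gibbs(H_kcal, S_joule_per_mol_K, T=298.0):
    """G = H - T*S, with H in kcal/mol and S in J/(mol K)."""
    return H_kcal - T * S_joule_per_mol_K / 4184.0

def logp_from_free_energies(G_X_oct, G_wat, G_oct, G_X_aq, T=298.0):
    """Eqs. 1-2: octanol-water logP from the Gibbs free energies of the four systems."""
    dG_transfer = (G_X_oct + G_wat) - (G_oct + G_X_aq)
    return -dG_transfer / (np.log(10) * R_KCAL * T)
```

As a check of scale, ln(10)·k_B·T ≈ 1.36 kcal mol⁻¹ at 298 K, so a transfer free energy of −1.36 kcal mol⁻¹ corresponds to roughly one logP unit. The entropy S entering this bookkeeping is the MCC entropy of the solute and its first solvation shell, whose definition continues below.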
Solvation shells are defined using the Relative Angular Distance (RAD) algorithm [44, 45] based on the center-of-mass of each molecule. In each pure liquid, the same number of solvent molecules is considered as in the solute's first solvation shell to balance stoichiometry, but the averaging of data is done over all molecules in the pure liquid to give better statistics. Entropy for each level For the solutes and octanol, two levels of hierarchy are used: molecule (M) and united atom (UA), where a united atom is each non-hydrogen atom with all its bonded hydrogens as a single rigid body. Water molecules are treated only at the molecule level, which is equivalent to the united-atom level. Entropy for each type of motion The axes of a molecule are taken as its principal axes with the origin at the molecular center of mass. All molecules considered here, being non-linear, have three translational and three rotational degrees of freedom. The origin of a united atom is taken as the heavy atom and the axes are defined with respect to the covalent bonds to other heavy atoms [26]. A united atom has three translational degrees of freedom and three rotational degrees of freedom if it is non-linear (\(\ge 2\) hydrogens), 2 if it is linear (one hydrogen), and 0 if it is a point (no hydrogens). Entropy over minima The potential energy surface is discretised into energy wells, leading to two contributions: vibrational, related to the average size of energy wells for that unit, and topographical, linked to the probability of each energy well for that unit. Vibrational entropy of each kind of motion and unit is calculated in the harmonic approximation for a quantum harmonic oscillator $$\begin{aligned} S^{\rm{vib}}=k_{{\rm{B}}} \sum _{i=1}^{N_{\rm{vib}}}\left( \frac{h v_{i} / k_{{\rm{B}}} T}{\rm{e}^{h v_{i} / k_{{\rm{B}}} T}-1}-\ln \left( 1-\rm{e}^{-h v_{i} / k_{{\rm{B}}} T}\right) \right) \end{aligned}$$ where h is Planck's constant, \(N_\text {vib}\) is the number of vibrations, and \(v_i\) are the vibrational frequencies, which are derived using $$\begin{aligned} v_{i}=\frac{1}{2 \pi } \sqrt{\frac{\lambda _{i}}{k_{{\rm{B}}} T}} \end{aligned}$$ where \(\lambda _i\) are the eigenvalues of the \(N_{\text{transvib}} \times N_{\text{transvib}}\) mass-weighted force covariance matrix for translational vibration or \(N_{\text{rovib}} \times N_{\text{rovib}}\) moment-of-inertia-weighted torque covariance matrix for rotational vibration. Forces and torques are halved in the mean-field approximation except for the UA force covariance matrix [26, 27, 43, 46] because UA correlations are directly accounted for in the molecule reference frame. The six lowest-frequency vibrations for the UA force covariance matrix are removed to avoid double-counting entropy at the molecule level. Topographical entropy at the molecule level manifests as positional and orientational entropy for translation and rotation. At the united-atom level it is only conformational entropy for translation, because rotational topographical entropy of united atoms is assumed to be negligible due to rigidity, symmetry or strong correlation with the solvent. 
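A sketch of the vibrational term (Eqs. 4–5) is given below. It assumes the covariance-matrix eigenvalues are supplied in SI units so that the frequencies come out in Hz, and it returns a molar entropy; the halving of forces and torques in the mean-field approximation and the removal of the six lowest united-atom frequencies are handled upstream and are not shown here.

```python
import numpy as np

H_PLANCK = 6.62607015e-34    # J s
K_B = 1.380649e-23           # J K^-1
N_AVOGADRO = 6.02214076e23   # mol^-1

def vibrational_entropy(eigenvalues, T=298.0):
    """Quantum-harmonic-oscillator entropy (J mol^-1 K^-1) from covariance eigenvalues.

    eigenvalues: eigenvalues of the mass-weighted force (or moment-of-inertia-weighted
    torque) covariance matrix, in SI units.
    """
    lam = np.asarray(eigenvalues, dtype=float)
    nu = np.sqrt(lam / (K_B * T)) / (2.0 * np.pi)           # Eq. 5: frequencies in Hz
    x = H_PLANCK * nu / (K_B * T)
    s_per_molecule = K_B * np.sum(x / np.expm1(x) - np.log(-np.expm1(-x)))   # Eq. 4
    return s_per_molecule * N_AVOGADRO
```

The topographical terms (positional, orientational, and conformational) that complete the entropy are described next.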
Positional entropy for a dilute solute in a solvent is calculated by discretising the volume \(V^\circ \) available to the molecule at its concentration by the volume of a solvent molecule \(V_{\text{solvent}}\), giving [30, 31, 47] $${S^{\text{transtopo}}_{\text{M}} \equiv S^{\text{pos}}} = k_{\text{B}} \ln \frac{V^\circ }{V_{\text{solvent}}} $$ \(V_{\text{solvent}}\) is taken as the volume of a simulation box of pure solvent divided by the number of solvent molecules, and \(V^\circ \) is taken as the same in both solvents and so cancels for the partition coefficient. Orientational entropy is calculated by discretising the rotational volume of the molecule about its three rotational axes according to the number of molecules in the molecule's first solvation shell \(N_{\text{c}}\) [26, 27], weighted by the probability \(p(N_{\text{c}})\) of each \(N_{\text{c}}\) using $$\begin{aligned}&{S^{\text{rotopo}}_{\text{M}} \equiv S^{\text{ or } }}\nonumber \\&\quad =k_{{\rm{B}}} \sum _{N_{{\text{c}}}} p\left( N_{{\text{c}}}\right) \ln \left[ \max \left( 1,\left( N_{{\text{c}}}^{3} \pi \right) ^{1/2} / \sigma \right) \right] \end{aligned}$$ taking the maximum ensures that the number of orientations is at least 1, and \(\sigma \) is the symmetry number of the molecule, taken as 1 for octanol and the 22 solutes and 2 for water. First-shell molecules are defined using the RAD algorithm [44, 45] as used before when defining the solvent affected by the solute. For water, an additional factor of 1/4 is included inside the logarithm of Equation 7 to account for correlations arising from hydrogen-bond directionality [26]. Conformational entropy is calculated using $$\begin{aligned} {S^{\text{transtopo}}_{\text{UA}} \equiv } S^{\text{ conf }} = k_{{\rm{B}}} \sum _{i} \lambda _{i} \ln {\left( \frac{1}{\lambda _{i}}\right) } \end{aligned}$$ where \(\lambda _i\) are the eigenvalues of a \(N_{\text{conf}} \times N_{\text{conf}}\) correlation matrix of conformations [27]. \(N_{\text{conf}}\) is the number of conformations over all flexible dihedrals in the molecule involving united-atoms, whose number ranged from 3 to 6 for the solutes. Conformations for each flexible dihedral are defined from the maxima in their probability distribution. The correlation matrix accounts for correlations between different dihedrals within the same molecule. Assembling all these terms, Equation 3 written in full for total entropy of the water solutions up to the first solvation shell of solute X becomes $$\begin{aligned}&S_{\text{X(aq)}} = S_{\text{X,M}}^{\text{transvib}} +S_{\text{X,M}}^{\text{rovib}} + S_{\text{X}}^{\text{pos}} + S_{\text{X}}^{\text{or}} + S_{\text{X,UA}}^{\text{transvib}} +S_{\text{X,UA}}^{\text{rovib}} \nonumber \\&\quad + S_{\text{X}}^{\text{conf}} +N_{\text{c,X}} \left( S_{\text{wat,M}}^{\text{transvib}} +S_{\text{wat,M}}^{\text{rovib}} + S_{\text{wat}}^{\text{or}} \right) \end{aligned}$$ and for octanol solutions $$\begin{aligned}&S_{\text{X(oct)}} = S_{\text{X,M}}^{\text{transvib}} +S_{\text{X,M}}^{\text{rovib}} + S_{\text{X}}^{\text{pos}} + S_{\text{X}}^{\text{or}} + S_{\text{X,UA}}^{\text{transvib}} +S_{\text{X,UA}}^{\text{rovib}} \nonumber \\&\quad + S_{\text{X}}^{\text{conf}} + N_{\text{c,X}} \left( S_{\text{oct,M}}^{\text{transvib}} +S_{\text{oct,M}}^{\text{rovib}} + S_{\text{oct}}^{\text{or}} + S_{\text{oct,UA}}^{\text{transvib}} \right. \nonumber \\&\left. 
+ S_{\text{oct,UA}}^{\text{rovib}} + S_{\text{oct}}^{\text{conf}} \right) \end{aligned}$$ The corresponding equations for the pure liquids are the same but omit the solute terms. Simulation protocol The pdb files for the 22 solutes were constructed using Avogadro [48] from their SMILES string provided in the SAMPL7 GitHub repository [37]. They are labelled SM25 to SM46. Only the neutral tautomer (micro000) was considered for each solute. Four kinds of simulation box were prepared: pure water, pure octanol, one solute in water, and one solute in octanol. Cubic boxes with side \(\approx \)34 Å were created using Packmol [49] for both pure solvent and solutions, corresponding to 150 octanol molecules and 1300 water molecules, and 1 solute molecule per box in the case of the solutions. Simulations were setup using antechamber [50] and LEaP in AMBER Tools 18 [51] with the GAFF force field with AM1-BCC charges [52] for octanol and the solutes and TIP4P-Ew [53] for water. All simulations were equilibrated with 5000 steps of steepest-descent minimisation, 200 ps of NVT (constant number, volume, temperature) MD simulation at 298 K using a Langevin thermostat with a collision frequency 5.0 ps\(^{-1}\), followed by 25 ns of NPT simulation (constant number, pressure and temperature) at pressure of 1 bar using the Berendsen barostat [54] and relaxation time constant 2 ps. Data collection was run for 100 ns, saving data every 40 ps, giving 2500 frames for analysis. MD simulations were run using pmemd.cuda in AMBER 18 [55,56,57], a 10 Å cut-off for non-bonded interactions, a time step of 2 fs and the SHAKE algorithm for covalent bonds involving hydrogen. Simulations lasted 5–8 hours on 8 CPU cores or 1 GPU. The performance of the MD-based EE-MCC method to obtain logP values for the SAMPL7-logP data set is assessed by calculating the mean absolute error (MAE) and the root-mean-square error (RMSE) defined as $$\begin{aligned} \rm{MAE}= N^{-1} \sum _{j}\left| \Delta _{j}\right| \end{aligned}$$ $$\begin{aligned} \rm{RMSE}=\sqrt{N^{-1} \sum _{j} \Delta _{j}^{2}} \end{aligned}$$ where \({\Delta _j} = {\text{logP}}{_{{\text{EE - MCC}},{\text{j}}}} - {\text{logP}}{_{{\text{experiment,j}}}} \) for the j-th value and N is the total number of values analysed. Each simulation was done in triplicate to assess the statistical uncertainty of the model, yielding a Standard Error of the Mean (SEM) calculated as $$\begin{aligned} \text {SEM}=\frac{s}{\sqrt{n}} \end{aligned}$$ where s is the standard deviation and n the number of repetitions. The final energies and entropies are averaged over the values from all three simulations. The model uncertainty is 1.3 kcal \(\rm{mol}^{-1}\) based on the root-mean squared error of the energy due to GAFF as found in literature [58], which corresponds to an uncertainty in logP of 0.95. This can be used to assess the accuracy of the method prior to comparison with experimental measurements. LogP values versus experiment The octanol–water logP values computed by EE-MCC using Equations 1, 2 and 3 are presented in Fig. 2 versus experiment for all 22 SAMPL7 compounds, together with error metrics of MAE, RMSE and SEM given by Equations 11–13. EE-MCC octanol–water logP values versus experiment with SEM error bars for the 22 solutes The logP values are seen to come out in the right ball-park of a typical logP value but the correlation with experiment is weak and the range of predicted values from \(-2\) to 5 exceeds the experimental range of 0.5 to 3. 
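The error metrics of Eqs. 11–13 are straightforward to compute; a minimal sketch follows (function names ours).

```python
import numpy as np

def mae_rmse(predicted, experimental):
    """Eqs. 11-12: mean absolute error and root-mean-square error of logP predictions."""
    delta = np.asarray(predicted) - np.asarray(experimental)
    return np.mean(np.abs(delta)), np.sqrt(np.mean(delta ** 2))

def sem(repeat_values):
    """Eq. 13: standard error of the mean over n repeat simulations."""
    v = np.asarray(repeat_values, dtype=float)
    return v.std(ddof=1) / np.sqrt(v.size)
```

By these metrics the deviations from experiment are sizeable, and their sources are discussed next.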
Evidently, there are sizeable sources of error. To probe this further, Table 1 lists the predicted and experimental logP values, together with the corresponding \(\Delta H\), \(T\Delta S\), \(\Delta G\) values (see Tables S4 and S5 for the actual simulation values). Table 1 \(\Delta H\), \(T\Delta S\), \(\Delta G\) and computed and experimental octanol–water logP values for the 22 solutes (kcal mol\(^{-1}\)) Table 1 makes clear that the larger contribution to \(\Delta G^{\text{transfer}}_{\text{X(oct,wat)}}\) comes from the enthalpy rather than the entropy, although there are cases where entropy dominates such as SM27, SM29 or SM40. In general, \(\Delta H^{\text{transfer}}_{\text{X(oct,wat)}}\) is mostly negative and \(T\Delta S^{\text{transfer}}_{\text{X(oct,wat)}}\) is mostly positive, consistent with the favourable transfer of the solutes to octanol. The large size of the fluctuation in enthalpy is made clear in the average SEM for \(\Delta H^{\text{transfer}}_{\text{X(oct,wat)}}\) over different simulation repetitions which is seen to have a larger SEM of 1.47 kcal mol\(^{-1}\) than that of \(T\Delta S^{\text{transfer}}_{\text{X(oct,wat)}}\) at 0.31 kcal mol\(^{-1}\), demonstrating that the energy fluctuations are more responsible for deviations from experiment rather than the entropy calculated by MCC. Indeed, Table S1 lists the SEMs for the enthalpy and entropy changes for the individual solutes and shows that the SEM on the total enthalpy for a given solute is 0.4-2.7 kcal mol\(^{-1}\) for the different solutes. This is the same size as the \(\Delta H^{\text{transfer}}_{\text{X(oct,wat)}}\), even for simulations on the order of 100 ns for fairly small system sizes. Even though energies appear well converged in time (Figs. S1 and S2), this suggests that even longer and/or more simulations or saving output more often would be needed in order to drive down errors in energy, although lower errors could also be achieved by considering the energy only of the solvent molecules in the solute's solvation shell, a quantity that was not readily available using the standard energy output of AMBER. Alternatively, a recent method developed by Kofke and co-workers called mapped averaging [59,60,61] when adapted to liquids could substantially reduce the noise in these values. Entropy components Even though the logP values produced have substantial errors, largely because of statistical errors in the energy, the MCC components can be used to better understand how the entropy and associated molecular flexibility is being affected for all molecules, solute and solvent, in the transfer process. We first consider changes in the entropy components. Figure 3 illustrates the changes in each entropy component in the transfer of each solute from water to octanol. Changes in entropy components as given in Eqs. 9 and 10 for water (top), octanol (middle) and the solutes (bottom). The molecule-level changes are blue for water, red for octanol, and green for the solutes. The united-atom changes are coloured orange for octanol and pink for the solutes. Each of these components is subdivided further into transvibrational, rovibrational and topographical components at each level, indicated by shading from dark to light, respectively Data in each case is only for one of the three simulations. The most striking trend as each solute moves from water to octanol is the entropy gain of water and the entropy loss of octanol, with the latter in general being slightly smaller in magnitude. 
The change in water is well-known, particularly for hydrophobic molecules. The component analysis shows that the entropy gain of water is primarily orientational but offset partially by decreases in transvibrational and rovibrational entropy, consistent with earlier studies [30,31,32,33]. This is because water surrounded by water has more neighbours able to form hydrogen-bonds and the hydrogen bonds are stronger. The change for octanol is less well-known but not unexpected, given that the reduction in symmetry for molecules adjacent to solutes tends to constrain solvent molecules. A component analysis shows that essentially all terms are negative. Most of the decrease is orientational, indicating that octanol molecules have disrupted structure and fewer neighbours in the presence of the solute. There are smaller losses in united-atom topographical entropy, which is conformational, and in molecule vibration, with smaller reductions in united-atom rovibration but a tiny gain in united-atom transvibration. The changes for the solute entropy are smaller and variable in direction, indicating that the solvent is dominating the change in overall entropy. Most solutes have a smaller united-atom conformational entropy and a gain in molecular entropy, primarily orientational but also vibration. Changes for united-atom vibration are more variable. One term left out of this plot is the change in positional entropy. Only depending for dilute solutions on the molecular volumes of the solvents, this has a constant value of \(-18\) J K\(^{-1}\) mol\(^{-1}\), reflecting that there are fewer solute positions in octanol at a given concentration because of the larger volume of the octanol molecule. A greater understanding of the components comes from looking at the absolute entropies. Fig. 4 illustrates the entropy components for the 22 solutes in octanol and in water and Fig. 5 shows the corresponding entropy components for all solvent molecules in the first solvation shell of each solute for water or octanol as solvent. Data for each solute is shown for only one of the three simulations. The corresponding values of the entropy components are given in Figs. S6 and S7 and their SEMs are given in Tables S2 and S3. The most obvious difference between Figs. 4 and 5 is that the total entropy of the first-shell solvent is much larger than that of the solute, being \(\sim \)5 times larger for water and \(\sim \)14 times larger for octanol. This is one of the main reasons why the entropy of the solvent dominates the overall entropy change. The next clear trend is that the changes in entropy going from water to octanol, given explicitly in Fig. 3, are tiny compared to the total entropy values. As for energy in EE methods, changes are a small difference between large and comparable numbers. Nonetheless, the errors in the entropy components are much smaller than that in energy as noted earlier. The plots show that the vibrational entropy contributes the most to the total entropy for all compounds while topographical entropy contributes the least, consistent with earlier work [26,27,28,29,30,31,32,33, 35]. The molecule-level vibrational entropy is near-identical for all solutes but slightly varying for the surrounding solvent. The united-atom entropy terms for the solutes are larger and more variable for the solutes and for octanol. Total entropy and entropy components of each solute in octanol (left) and water (right). Components are coloured as for Fig. 
3 for the molecule and united-atom levels and transvibrational, rovibrational, and topographical components Total entropy and entropy components for all the solvent molecules in the solvation shell of each solute (right) and the equivalent contribution of bulk solvent without solute (left). Colouring is as for Fig. 3 for the molecule and united-atom levels and transvibrational, rovibrational, and topographical components Concerning the entropy of the different bioisosteric solutes in Fig. 1, there is a general dependence on the size of each solute, with SM39 having the largest entropy and SM44 the smallest. All but the first four solutes can be divided into six groups, each of which has three compounds which differ by a methyl, phenyl or dimethylamine functional group attached to the sulfonyl group. The groups are G1 = SM29-SM31, G2 = SM32-SM34, G3 = SM35-SM37, G4 = SM38-SM40, G5 = SM41-SM43 and G6 = SM44-SM46. A recurring trend within each group that is evident in Fig. 4 is that the entropy of the solute with methyl is smaller than the other two solutes because of methyl's smaller size. Another distinctive trend in the solute entropies in Fig. 4 is the lower entropies of the G5 and G6 groups of molecules. This occurs because these molecules are smaller and less flexible, primarily because they have a heteroaryl ring in place of the ethyl fragment that connects the common phenyl ring. However, these trends for the solutes do not carry over to the solvent entropy terms, the changes in entropy or to the overall logP values. The EE-MCC method to calculate the free energy of a system directly from MD simulation has been used to calculate the octanol–water logP values of 22 N-acyl sulfonamides bioisosters in the SAMPL7 Physical Properties Challenge. The mean error versus experiment was 1.8 logP units and the standard error of the mean was 1.0 logP units for three separate calculations. These errors are primarily due to getting sufficiently converged energies to give accurate differences of large numbers, particularly for solvent molecules of large size and flexibility such as octanol. However, this is also an issue for entropy. Other sources of error are approximations in the force field and MCC theory, the neglect of water in the octanol phase, and different tautomeric states of the solute. The main advantages of EE-MCC are its wide applicability to many systems and that it explains the entropy in terms of all the degrees of freedom and all molecules in the system in a consistent and intuitive framework, which is superior to standard structural methods that only assess molecular flexibility for a subset of all degrees of freedom. The enthalpy of transfer from water to octanol is mostly favourable, consistent with the hydrophobic nature of the solutes. To explain the predominant gain in entropy, most comes from a large increase in the orientational entropy of water and a small increase in solute vibrational and orientational entropy. This is offset by unfavourable changes in the orientational entropy of octanol, the vibrational entropy of both solvents, and the positional and conformational entropy of the solute. This study makes clear the feasibility of Energy-Entropy methods for logP calculations, what areas need improvement, and how they might be applied to other systems more generally. Patrick GL (2013) An introduction to medicinal chemistry. Oxford University Press, Oxford Leo A, Hansch C, Elkins D (1971) Partition coefficients and their uses. 
Chem Rev 71(6):525–616 Andrés A, Rosés M, Ràfols C, Bosch E, Espinosa S, Segarra V, Huerta JM (2015) Setup and validation of shake-flask procedures for the determination of partition coefficients (log d) from low drug amounts. Eur J Pharm Sci 76:181–191 PubMed Article CAS PubMed Central Google Scholar Hodges G, Eadsforth C, Bossuyt B, Bouvy A, Enrici MH, Geurts M, Kotthoff M, Michie E, Miller D, Müller J et al (2019) A comparison of log \(k_{\rm ow}\) (n-octanol-water partition coefficient) values for non-ionic, anionic, cationic and amphoteric surfactants determined using predictions and experimental methods. Environ Sci Eur 31(1):1 Işık M, Levorse D, Mobley DL, Rhodes T, Chodera JD (2019) Octanol-water partition coefficient measurements for the SAMPL6 blind prediction challenge. J Comput Aided Mol Des 34:1–16 Vraka C, Nics L, Wagner KH, Hacker M, Wadsak W, Mitterhauser M (2017) Logp, a yesterday's value? Nucl Med Biol 50:1–10 CAS PubMed Article PubMed Central Google Scholar Ghose AK, Crippen GM (1986) Atomic physicochemical parameters for 3-dimensional structure-directed quantitative structure-activity-relationships.1. partition-coefficients as a measure of hydrophobicity. J Comput Chem 7:565–577 Leo AJ (1993) Calculating log p(oct) from structures. Chem Rev 93:1281–1306 Liao Q, Yao JH, Yuan SG (2006) Svm approach for predicting logp. Mol Divers 10:301–309 Riniker S (2017) Molecular dynamics fingerprints (mdfp): machine learning from md data to predict free-energy differences. J Chem Inf Model 57:726–741 Nieto-Draghi C, Fayet G, Creton B, Rozanska X, Rotureau P, de Hemptinne JC, Ungerer P, Rousseau B, Adamo C (2015) A general guidebook for the theoretical prediction of physicochemical properties of chemicals for regulatory purposes. Chem Rev 115(24):13,093–13,164 Jones MR, Brooks BR (2020) Quantum chemical predictions of water-octanol partition coefficients applied to the SAMPL6 log p blind challenge. J Comput Aided Mol Des 34:484–493 Işık M, Bergazin TD, Fox T, Rizzi A, Chodera JD, Mobley DL (2020) Assessing the accuracy of octanol-water partition coefficient predictions in the SAMPL6 part II log p challenge. J Comput Aided Mol Des 34:335–370 Loschen C, Reinisch J, Klamt A (2020) COSMO-RS based predictions for the SAMPL6 logp challenge. J Comput Aided Mol Des 34:385–392 Bittermann K, Spycher S, Goss KU (2016) Comparison of different models predicting the phospholipid-membrane water partition coefficients of charged compounds. Chemosphere 144:382–391 Bannan CC, Calabró G, Kyu DY, Mobley DL (2016) Calculating partition coefficients of small molecules in octanol/water and cyclohexane/water. J Chem Theo Comput 12(8):4015–4024 Fan S, Iorga BI, Beckstein O (2020) Prediction of octanol-water partition coefficients for the SAMPL6-log p molecules using molecular dynamics simulations with opls-aa, amber and charmm force fields. J Comput Aided Mol Des 34:405–420 Genheden S, Essex JW (2016) All-atom/coarse-grained hybrid predictions of distribution coefficients in SAMPL5. J Comput Aid Mol Des 30:969–976 Ogata K, Hatakeyama M, Nakamura S (2018) Effect of atomic charges on octanol-water partition coefficient using alchemical free energy calculation. Molecules 23(2):425 PubMed Central Article CAS Google Scholar Liu K, Kokubo H (2019) Uncovering abnormal changes in logp after fluorination using molecular dynamics simulations. J Comput Aided Mol Des 33(3):345–356 Genheden S (2016) Predicting partition coefficients with a simple all-atom/coarse-grained hybrid model. 
J Chem Theory Comput 12:297–304 Kollman PA, Massova I, Reyes C, Kuhn B, Huo SH, Chong L, Lee M, Lee T, Duan Y, Wang W, Donini O, Cieplak P, Srinivasan J, Case DA, Cheatham TE (2000) Calculating structures and free energies of complex molecules: combining molecular mechanics and continuum models. Accounts Chem Res 33:889–897 Wang EC, Sun HY, Wang JM, Wang Z, Liu H, Zhang JZH, Hou TJ (2019) End-point binding free energy calculation with MM/PBSA and MM/GBSA: strategies and applications in drug design. Chem Rev 119:9478–9508 Huang WJ, Blinov N, Kovalenko A (2015) Octanol-water partition coefficient from 3D-RISM-KH molecular theory of solvation with partial molar volume correction. J Phys Chem B 119:5588–5597 Kraml J, Hofer F, Kamenik AS, Waibl F, Kahler U, Schauperl M, Liedl KR (2020) Solvation thermodynamics in different solvents: water-chloroform partition coefficients from grid inhomogeneous solvation theory. J Chem Inf Model 60:3843–3853 Higham J, Chou SY, Gräter F, Henchman RH (2018) Entropy of flexible liquids from hierarchical force-torque covariance and coordination. Mol Phys 116(15–16):1965–1976 Ali HS, Higham J, Henchman RH (2019) Entropy of simulated liquids using multiscale cell correlation. Entropy 21(8):750 CAS PubMed Central Article Google Scholar Chakravorty A, Higham J, Henchman RH (2020) Entropy of proteins using multiscale cell correlation. J Chem Inf Model 60:5540–5551 Henchman RH (2007) Free energy of liquid water from a computer simulation via cell theory. J Chem Phys 126(064):504 Irudayam SJ, Henchman RH (2010) Solvation theory to provide a molecular interpretation of the hydrophobic entropy loss of noble gas hydration. J Phys 22(284):108 Irudayam SJ, Plumb RD, Henchman RH (2010) Entropic trends in aqueous solutions of common functional groups. Faraday Discuss 145:467–485 Irudayam SJ, Henchman RH (2011) Prediction and interpretation of the hydration entropies of monovalent cations and anions. Mol Phys 109:37–48 Gerogiokas G, Calabro G, Henchman RH, Southey MWY, Law RJ, Michel J (2014) Prediction of small molecule hydration thermodynamics with grid cell theory. J Chem Theory Comput 10:35–48 Ali HS, Higham J, de Visser SP, Henchman RH (2020) Comparison of free-energy methods to calculate the barriers for the nucleophilic substitution of alkyl halides by hydroxide. J Phys Chem B 124:6835–6842 Hensen U, Grater F, Henchman RH (2014) Macromolecular entropy can be accurately computed from force. J Chem Theory Comput 10(11):4777–4781 Kalayan J, Curtis RA, Warwicker J, Henchman RH (2021) Thermodynamic origin of differential excipient-lysozyme interactions. https://doi.org/10.3389/fmolb.2021.689400 Mobley D. GitHub. https://github.com/samplchallenges/SAMPL7/tree/master/physical_property. Accessed Oct 5 2020 Bannan CC, Burley KH, Chiu M, Shirts MR, Gilson MK, Mobley DL (2016) Blind prediction of cyclohexane-water distribution coefficients from the SAMPL5 challenge. J Comput Aided Mol Des 30(11):927–944 Mobley DL, Liu S, Cerutti DS, Swope WC, Rice JE (2012) Alchemical prediction of hydration free energies for SAMPL. J Comput Aided Mol Des 26(5):551–562 Geballe MT, Skillman AG, Nicholls A, Guthrie JP, Taylor PJ (2010) The SAMPL2 blind prediction challenge: introduction and overview. J Comput Aided Mol Des 24(4):259–279 Mobley DL, Wymer KL, Lim NM, Guthrie JP (2014) Blind prediction of solvation free energies from the SAMPL4 challenge. 
J Comput Aided Mol Des 28(3):135–150 Yin J, Henriksen NM, Slochower DR, Shirts MR, Chiu MW, Mobley DL, Gilson MK (2017) Overview of the SAMPL5 host-guest challenge: Are we doing better? J Comput Aided Mol Des 31(1):1–19 Henchman RH (2007) Free energy of liquid water from a computer simulation via cell theory. J Chem Phys 126(6):064504 Higham J, Henchman RH (2016) Locally adaptive method to define coordination shell. J Phys Chem 145(8):084108 Higham J, Henchman RH (2018) Overcoming the limitations of cutoffs for defining atomic coordination in multicomponent systems. J Comput Chem 39(12):705–710 Henchman RH (2003) Partition function for a simple liquid using cell theory parametrized by computer simulation. J Chem Phys 119:400–406 Irudayam SJ, Henchman RH (2009) Entropic cost of protein-ligand binding and its dependence on the entropy in solution. J Phys Chem B 113:5871–5884 Hanwell MD, Curtis DE, Lonie DC, Vandermeersch T, Zurek E, Hutchison GR (2012) Avogadro: an advanced semantic chemical editor, visualization, and analysis platform. J Cheminf 4(1):17 Martínez L, Andrade R, Birgin EG, Martínez JM (2009) Packmol: a package for building initial configurations for molecular dynamics simulations. J Comput Chem 30(13):2157–2164 Wang J, Wang W, Kollman PA, Case DA (2006) Automatic atom type and bond type perception in molecular mechanical calculations. J Mol Graph Model 25(2):247–260 Case D, Ben-Shalom I, Brozell S, Cerutti D, Cheatham T III, Cruzeiro V, Darden T, Duke R, Ghoreishi D, Gilson M et al (2018) AMBER 2018. University of California, San Francisco Wang JM, Wolf RM, Caldwell JW, Kollman PA, Case DA (2004) Development and testing of a general amber force field. J Comput Chem 25:1157–1174 Horn HW, Swope WC, Pitera JW, Madura JD, Dick TJ, Hura GL, Head-Gordon T (2004) Development of an improved four-site water model for biomolecular simulations: TIP4P-Ew. J Chem Phys 120:9665–9678 Berendsen HJ, Jv Postma, van Gunsteren WF, DiNola A, Haak JR (1984) Molecular dynamics with coupling to an external bath. J Chem Phys 81(8):3684–3690 Salomon-Ferrer R, Gotz AW, Poole D, Le Grand S, Walker RC (2013) Routine microsecond molecular dynamics simulations with AMBER on GPUs. 2. Explicit solvent particle mesh Ewald. J Chem Theory Comput 9(9):3878–3888 Gotz AW, Williamson MJ, Xu D, Poole D, Le Grand S, Walker RC (2012) Routine microsecond molecular dynamics simulations with AMBER on GPUs. 1. Generalized born. J Chem Theory Comput 8(5):1542–1555 Le Grand S, Götz AW, Walker RC (2013) SPFP: Speed without compromise—a mixed precision model for GPU accelerated molecular dynamics simulations. Comput Phys Comm 184(2):374–380 Wang J, Wolf RM, Caldwell JW, Kollman PA, Case DA (2004) Development and testing of a general amber force field. J Comput Chem 25(9):1157–1174 Schultz AJ, Moustafa SG, Lin W, Weinstein SJ, Kofke DA (2016) Reformulation of ensemble averages via coordinate mapping. J Chem Theory Comput 12(4):1491–1498 Purohit A, Schultz AJ, Kofke DA (2019) Force-sampling methods for density distributions as instances of mapped averaging. Mol Phys 117(20):2822–2829 Moustafa SG, Schultz AJ, Kofke DA (2015) Very fast averaging of thermal properties of crystals by molecular simulation. 
Phys Rev E 92(4):043303 We appreciate IT Services at the University of Manchester for providing the Computational Shared Facility to run the simulations, EPSRC under grant codes EP/L015218/1 and EP/N025105/1 for a PhD studentship (JK), and the National Institutes of Health for their support of the SAMPL project (R01GM124270) of David L. Mobley (UC Irvine) and Jas Kalayan. Richard H. Henchman Present address: Sydney Medical School, The University of Sydney, Sydney, NSW, 2006, Australia Manchester Institute of Biotechnology, The University of Manchester, 131 Princess Street, Manchester, M1 7DN, UK Fabio Falcioni, Jas Kalayan & Richard H. Henchman School of Chemistry, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK Fabio Falcioni Jas Kalayan Correspondence to Fabio Falcioni or Richard H. Henchman. Below is the link to the electronic supplementary material. Electronic supplementary material 1 (PDF 4380 kb) Falcioni, F., Kalayan, J. & Henchman, R.H. Energy-entropy prediction of octanol–water logP of SAMPL7 N-acyl sulfonamide bioisosters. J Comput Aided Mol Des 35, 831–840 (2021). https://doi.org/10.1007/s10822-021-00401-w Issue Date: July 2021 DOI: https://doi.org/10.1007/s10822-021-00401-w SAMPL Free energy method Molecular dynamics simulation
Let y = y(x) be the solution of the differential equation $$\left( {(x + 2){e^{\left( {{{y + 1} \over {x + 2}}} \right)}} + (y + 1)} \right)dx = (x + 2)dy$$, y(1) = 1. If the domain of y = y(x) is an open interval ($$\alpha$$, $$\beta$$), then | $$\alpha$$ + $$\beta$$| is equal to ______________.
Let a curve y = y(x) be given by the solution of the differential equation $$\cos \left( {{1 \over 2}{{\cos }^{ - 1}}({e^{ - x}})} \right)dx = \sqrt {{e^{2x}} - 1} dy$$. If it intersects the y-axis at y = $$-$$1, and the intersection point of the curve with the x-axis is ($$\alpha$$, 0), then $${e^\alpha }$$ is equal to __________________.
Let y = y(x) be the solution of the differential equation xdy $$-$$ ydx = $$\sqrt {({x^2} - {y^2})} dx$$, x $$ \ge $$ 1, with y(1) = 0. If the area bounded by the lines x = 1, x = $${e^\pi }$$, y = 0 and the curve y = y(x) is $$\alpha {e^{2\pi }} + \beta $$, then the value of 10($$\alpha$$ + $$\beta$$) is equal to __________.
JEE Main 2021 (Online) 26th February Morning Shift
The difference between the degree and order of a differential equation that represents the family of curves given by $${y^2} = a\left( {x + {{\sqrt a } \over 2}} \right)$$, a > 0 is _________.
Effect of width, amplitude, and position of a core mantle boundary hot spot on core convection and dynamo action Wieland Dietrich1, Johannes Wicht2 & Kumiko Hori1,3 Within the fluid iron cores of terrestrial planets, convection and the resulting generation of global magnetic fields are controlled by the overlying rocky mantle. The thermal structure of the lower mantle determines how much heat is allowed to escape the core. Hot lower mantle features, such as the thermal footprint of a giant impact or hot mantle plumes, will locally reduce the heat flux through the core mantle boundary (CMB), thereby weakening core convection and affecting the magnetic field generation process. In this study, we numerically investigate how parametrised hot spots at the CMB with arbitrary sizes, amplitudes, and positions affect core convection and hence the dynamo. The effect of the heat flux anomaly is quantified by changes in global flow symmetry properties, such as the emergence of equatorial antisymmetric, axisymmetric (EAA) zonal flows. For purely hydrodynamic models, the EAA symmetry scales almost linearly with the CMB amplitude and size, whereas self-consistent dynamo simulations typically reveal either suppressed or drastically enhanced EAA symmetry depending mainly on the horizontal extent of the heat flux anomaly. Our results suggest that the length scale of the anomaly should be on the same order as the outer core radius to significantly affect flow and field symmetries. As an implication to Mars and in the range of our model, the study concludes that an ancient core field modified by a CMB heat flux anomaly is not able to heterogeneously magnetise the crust to the present-day level of north–south asymmetry on Mars. The resulting magnetic fields obtained using our model either are not asymmetric enough or, when they are asymmetric enough, show rapid polarity inversions, which are incompatible with thick unidirectional magnetisation. Within our solar system, the three terrestrial planets, Earth, Mercury, and Mars, harbour or once harboured a dynamo process in the liquid part of their iron-rich cores. Vigorous core convection shaped by rapid planetary rotation is responsible for the generation of global magnetic fields. Unlike the dynamo regions of giant planets or the convective zone of the sun, the amount of heat escaping the cores of terrestrial planets is determined by the convection of the overlying mantle. As this vigorous core convection assures efficient mixing and hence a virtually homogeneous temperature T core at the core side of the core mantle boundary (CMB), the temperature gradient at and thus the flux through the CMB are entirely controlled by the lower mantle temperature T lm. The heat flux through the CMB is then $$ q_{\text{cmb}} = k \frac{T_{\text{lm}}-T_{\text{core}}}{\delta_{\text{cmb}}} \, $$ ((1)) where δ cmb is the vertical thickness of the thermal boundary layer on the mantle side and k is the thermal conductivity. Hot mantle features, such as convective upwellings, thermal footprints of giant impacts, or chemical heterogeneities, locally reduce the heat flux through the CMB (e.g., Roberts and Zhong 2006, Roberts et al. 2009). Seismologic investigations of Earth have revealed strong anomalies in the lowermost mantle that can be interpreted in terms of temperature variations (Bloxham 2000). In recent years, several authors have investigated the consequences of laterally varying the CMB heat flux. 
A seismologically based pattern has been used in many studies on the characteristics of Earth (e.g., Olson et al. 2010; Takahashi et al. 2008); this pattern shows a strong effect on the core convection and secular variation of the magnetic field (e.g., Davies et al. 2008). For Mars, low-degree mantle convection or giant impacts may have significantly affected the core convection and the morphology of the induced magnetic field (Harder and Christensen 1996; Roberts and Zhong 2006; Roberts et al. 2009). For example, the strong southern hemispherical preference of the crustal magnetisation can be explained by an ancient dynamo that operated more efficiently in the southern hemisphere. However, this remains a matter of debate because post-dynamo processes may simply have reduced a once more homogeneous crustal magnetisation in the northern hemisphere (Nimmo et al. 2008; Marinova et al. 2008). The asymmetry of Mercury's magnetic field, which is significantly stronger in the north than in the south, could also be partly explained by a non-homogeneous CMB heat flux (e.g., Cao et al. 2014; Wicht and Heyner 2014). The magnetic field generation process inside Earth's outer core relies on thermal and compositional convection. Thermal convection is driven by secular cooling or the latent heat released upon inner core freezing. Compositional convection arises because the light elements mixed into the outer core alloy are not as easily contained in the inner core. A large fraction is thus released at the inner core front. For planets with no solid inner core, like ancient Mars or early Earth, only the thermal component of the buoyant force can power convection and hence the dynamo process (e.g., Breuer et al. 2010). The buoyancy sources are then not concentrated at the bottom but homogeneously distributed over the core shell. When modelling these processes, secular cooling can be modelled using a buoyancy source equivalent to internal heating, whereas a basal heating source can be used when thermal and/or compositional buoyancy fluxes arise from the inner core boundary. Kutzner and Christensen (2000) and Hori et al. (2012) investigated the dynamic consequences when a core model is driven by either internal or basal heating. In general, the effects of large-scale CMB heat flux anomalies on convection and magnetic field generation are stronger when the system is driven by internal (as in ancient Mars) rather than basal heating, which is more realistic for present-day Earth (Hori et al. 2014). For example, the equatorial symmetry of the flow is more easily broken in the former than the latter case (Wicht and Heyner 2014). The Mars Global Surveyor (MGS) mission revealed a remarkable equatorial asymmetry in the distribution of magnetised crust (Acuña et al. 1999; Langlais et al. 2004). Interestingly, the crustal topographic dichotomy is well aligned with the pattern of crustal magnetisation (Citron and Zhong 2012). This is the magnetic imprint of an ancient core dynamo driven by thermal convection in the core or tidal dissipation (Arkani-Hamed 2009), which ceased to exist and further magnetise the crust roughly 3.5 Gyrs ago (Lillis et al. 2008). Assuming the hemispherical crustal magnetisation is of internal origin, most numerical models attempting to design a Mars core dynamo model quantified the success of their efforts by comparing the resulting modelled magnetic fields to the actual crustal magnetisation pattern (Stanley et al. 2008; Amit et al. 2011; Dietrich and Wicht 2013; Monteux et al. 2015). 
Hereafter, the study by Dietrich and Wicht (2013) is abbreviated as DW13. The CMB heat flux in such models is typically modified by a large-scale sinusoidal perturbation increasing the CMB heat flux in the southern hemisphere and reducing it in the northern hemisphere. Because one hemisphere is more efficiently cooled than the other, a strong latitudinal temperature anomaly arises and in turn drives fierce zonal flows via a thermal wind. Such equatorially antisymmetric, axisymmetric (EAA) flows are reported to reach up to 85 % of the total kinetic energy (Amit et al. 2011, DW13) in self-consistent dynamo models. Although such flows are indicative of the induction of hemispherical fields, it remains unclear to what extent their prevalence is due to the (probably unrealistic) large, strong, and axisymmetric forcing patterns. As a consequence of this forcing, the induction process is more concentrated in the southern hemisphere, leading to a hemispherical magnetic field. Even though such hemispherical fields can match the degree of hemisphericity in the crustal magnetisation at the planetary surface, they also show a strong time variability (periodic oscillations). If the typical stable chron epoch is much smaller than the typical crustal buildup time, the system is not able to magnetise the crust to the required intensity (DW13), which requires the relatively homogeneous magnetisation of a layer with a thickness of at least 20 km (Langlais et al. 2004). In this study, we focus on not only somewhat more complex but also more realistic CMB heat flux variations. The heat flux is reduced in a more localised area of varying position and horizontal extent. Such anomalies may more realistically reflect the effect of mantle plumes or impacts, but may not yield the fundamental equatorial asymmetry in the temperature as efficiently as the simplistic Y 10 pattern. Further tilting the anomalies away from the axis of rotation may result in the superposition of EAA and equatorially symmetric, non-axisymmetric (ESN) temperature and flow patterns. We also investigate the influence of the shell geometry and the vigour of convection. More generally, we aim to estimate how large, strong, and aligned CMB heat flux anomalies must be to affect core convection and the magnetic field process. It is also of interest to quantify the interaction of flow and field under the control of a laterally variable CMB heat flux. In particular, we address the question of whether the conclusions of DW13 regarding the crustal magnetisation still hold when applied to Mars. Various comparable models have been investigated over the last decade. Those focusing on exploring parameter dependencies, e.g., Amit et al. (2011) and DW13, mainly investigated magnetic cases with a CMB heat flux described by a fundamental spherical harmonic. Those studies featured tilted cases and various amplitudes. The recent study by Monteux et al. (2015) presented simulations featuring anomalies of a smaller horizontal length scale. However, a comprehensive parametrisation of the anomaly width, amplitude, and position has not yet been reported. The rather dramatic results of DW13 may only hold when anomalies of a planetary scale are at work. We therefore also test the robustness of their results with respect to the most common model assumptions. We model the liquid outer core of a terrestrial planet as a spherical shell (with inner and outer radii of r icb and r cmb, respectively) containing a viscous, incompressible, and electrically conducting fluid. 
The core fluid is subject to rapid rotation, vigorous convection, and Lorentz forces due to the induced magnetic fields. The evolution of the fluid flow is thus given by the dimensionless Navier–Stokes equation: $$ \begin{aligned} {}\mathrm{E} \left(\!\frac{\partial \vec u}{\partial t} + \vec u \cdot \vec \nabla \vec u \!\right) \!&=\! -\vec \nabla \Pi+ E \nabla^{2} \vec u \!- \!2 \hat z\! \times \vec u + \frac{\mathrm{Ra E}}{\text{Pr}} \frac {\vec r}{r_{\text{cmb}}} T\\ &\quad+ \frac 1 {\text{Pm}} (\vec \nabla \times \vec B) \times \vec B \, \end{aligned} $$ where \(\vec u\) is the velocity field, Π is the non-hydrostatic pressure, \(\hat z\) is the direction of the rotation axis, T is the superadiabatic temperature fluctuation, and \(\vec B\) is the magnetic field. The evolution of the thermal energy is affected by temperature diffusion and advection by the flow, such that $$ \frac {\partial T}{\partial t} + \vec u \cdot \vec \nabla T= \frac 1 {\text{Pr}} \nabla^{2} T + \epsilon \, $$ where ε is a uniform heat source density. The generation of magnetic fields is controlled by the induction equation $$ \frac{\partial \overrightarrow{B}}{\partial t}=\overrightarrow{\nabla}\times \left(\overrightarrow{u}\times \overrightarrow{B}\right)+\frac{1}{\mathrm{Pm}}{\nabla}^2\overrightarrow{B}\kern1em . $$ We use the shell thickness D=r cmb−r icb as the length scale, the viscous diffusion time D 2/ν as the time scale, and (ρ μ λ Ω)1/2 as the magnetic scale. The mean superadiabatic CMB heat flux density q 0 serves to define the temperature scale q 0 D/c p ρ κ. Furthermore, ν is the viscous diffusivity, ρ is the constant background density, μ is the magnetic permeability, λ is the magnetic diffusivity, Ω is the rotation rate, κ is the thermal diffusivity, and c p is the heat capacity. Non-penetrative and no-slip velocity boundary conditions are used, and the magnetic fields are matched to the potential field outside the fluid region. For an internally heated Mars-like setup, we fix the heat flux at both boundaries and power the system exclusively by internal heat sources. Because the flux at the inner boundary is set to zero, this leads to a simple balance of heat between the source density ε and the mean CMB heat flux q 0. To model the secular core cooling, we fix the mean heat flux at the outer boundary (q 0=1) and balance the heat source such that $$ \epsilon= \frac{1-\beta}{1-\beta^{3}} \frac{q_{0}}{3 \text{Pr}} \, $$ where β=r icb/r cmb is the aspect ratio of the spherical shell. The non-dimensional control parameters are the Prandtl number Pr=ν/κ, which is the ratio between the viscous and thermal diffusivities, and the magnetic Prandtl number Pm=ν/λ, which is the ratio of the viscous and magnetic diffusivities. The Ekman number E=ν/Ω D 2 relates the viscous and rotational time scales, and the Rayleigh number Ra=α g q o D 4/ν κ 2 controls the vigour of convection. We fix Pr=1 and E=10−4 and use Pm=2 for the dynamo cases. The Rayleigh number is varied between Ra=2×107 and 1.6×108. Modelling the anomaly In recent studies focusing on the mantle control of Mars and Earth, it is common for the horizontal variation of the CMB heat flux to be described in terms of spherical harmonics. Especially for Mars, the majority of studies rely on spherical harmonics of degree l=1 and order m=0, i.e., a simple cosine variation (e.g., Stanley et al. 2008; Amit et al. 2011; Aurnou and Aubert 2011, DW13). 
Notable exceptions are the study by Sreenivasan and Jellinek (2012), in which a localised temperature anomaly was used, and that by Monteux et al. (2015), in which the anomaly pattern was derived from realistic impact models. The former study relies on fixed temperature boundary conditions, basal heating, and strong CMB temperature anomalies and is thus quite different from our systematic approach, which features fixed flux boundary conditions, internal heating, and anomaly amplitudes not exceeding the mean CMB heat flux. Here, we explore more locally confined variations of the CMB heat flux. The thermal CMB anomaly q ′ is characterised by four parameters: its amplitude q ∗, its opening angle Ψ, and the colatitude and longitude (τ,ϕ 0) of its midpoint. The anomaly has the form $$ q^{\prime} =\left\{ \begin{array}{ll} \frac{1}{2} \left(\cos\left(2\pi\frac{\alpha}{\Psi}\right) + 1 \right)~\text{for}~\alpha <\Psi \\ 0 ~\text{else}\, \end{array}\right. $$ where α(θ,ϕ) is the opening angle between the central vector \(\vec r(\theta,\phi)\) and the mid-point vector \(\vec r (\tau, \phi _{0})\). Furthermore, the anomaly is normalised, such that the net heat flux through the CMB is constant. The total CMB heat flux is then given by the mean heat flux q 0, the anomaly q ′, and the normalisation C(Ψ) as $$ q_{\text{cmb}} = q_{0} - q^{\ast} \left(\frac{q^{\prime}}{C(\Psi)} - 1 \right) \, $$ $$ C(\Psi) = \int_{S_{\Psi}} q^{\prime} \sin \theta d \theta d \phi \ = \frac{1-\cos \Psi}{4} \, $$ where S Ψ is the area of the anomaly up to its rim, which is given by Ψ. Note that the case of (Ψ = 180°, τ = 0) is identical to the spherical harmonic Y 10 mentioned above. Figure 1 shows the CMB heat flux profiles for τ = 0, q ∗= 1, and various widths Ψ. For such parameters, the heat flux anomaly is axisymmetric and reduces the heat flux at the northern pole to zero. Latitudinal profile of axisymmetric CMB heat flux. The lines show the total CMB heat flux modified by an anomaly with different opening angles Ψ (each value of Ψ is represented by a different colour). Note that all profiles are normalised such that there is no net contribution from the anomaly to the mean CMB heat flux (q 0=1, grey). Parameters: τ=0, q ∗=1 Numerical model and runs The magnetohydrodynamics (MHD) equations (Eqs. 2, 3, and 4) were numerically solved using the MagIC3 code in its shared memory version (Wicht 2002; Christensen and Wicht 2007). The numerical resolution is given by 49, 288, and 144 grid points in the radial direction, the azimuthal direction, and along the latitude, respectively. For the higher Rayleigh number cases, the numerical resolution was increased to 61, 320, and 160 points, respectively. We conducted a broad parameter study in which eight anomaly widths between Ψ= 180° (planetary scale) and Ψ= 10° (most concentrated hot spot) were used. Furthermore, we tested four different anomaly amplitudes ranging from q ∗=0.2 to q ∗=1. The peak position of the anomaly was tilted at six different angles between τ= 0° (polar anomaly) and τ= 90° (equatorial anomaly). Because the ancient Martian core was fully liquid at the time the magnetisation was acquired, the thick shell regime is investigated here. Particularly, an inner core with an Earth-like aspect ratio of β=0.35 is retained in most of the models to ensure consistency with the existing literature regarding Earth and Mars. 
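To make the anomaly parametrisation of Eqs. 6–8 concrete, a minimal Python sketch is given below; it is illustrative only and not the implementation used in MagIC. Two assumptions are made explicit: the raised-cosine profile is coded as ½(cos(πα/Ψ)+1), which decays to zero at the rim α = Ψ and reduces to the Y 10 pattern for Ψ = 180° as stated above (the factor 2π appearing in Eq. 6 as printed would instead return the profile to its maximum at the rim), and the normalisation is evaluated numerically as the surface mean of q ′ so that the anomaly contributes no net heat flux, rather than using the closed form quoted in Eq. 8.

```python
import numpy as np

def anomaly_profile(theta, phi, psi, tau, phi0=0.0):
    """Raised-cosine cap q'(theta, phi) centred on (tau, phi0), all angles in radians.
    Assumption: profile 0.5*(cos(pi*alpha/psi) + 1) inside the cap alpha < psi,
    decaying to zero at the rim and reducing to a Y10-like pattern for psi = pi."""
    cos_alpha = (np.cos(theta) * np.cos(tau)
                 + np.sin(theta) * np.sin(tau) * np.cos(phi - phi0))
    alpha = np.arccos(np.clip(cos_alpha, -1.0, 1.0))
    return np.where(alpha < psi, 0.5 * (np.cos(np.pi * alpha / psi) + 1.0), 0.0)

def cmb_heat_flux(theta, phi, psi, q_star, tau, phi0=0.0, q0=1.0, n=400):
    """Total CMB heat flux following Eq. 7.  The normalisation C is computed
    numerically as the area-weighted surface mean of q', which enforces the
    stated condition that the anomaly adds no net heat flux."""
    tg, pg = np.meshgrid(np.linspace(0.0, np.pi, n),
                         np.linspace(0.0, 2.0 * np.pi, 2 * n), indexing="ij")
    w = np.sin(tg)                                   # surface element weight
    c_psi = np.sum(anomaly_profile(tg, pg, psi, tau, phi0) * w) / np.sum(w)
    q_prime = anomaly_profile(theta, phi, psi, tau, phi0)
    return q0 - q_star * (q_prime / c_psi - 1.0)

# Example: latitudinal profile for a polar anomaly (tau = 0) of width 90 degrees,
# amplitude q* = 1, evaluated along the meridian phi = 0.
theta = np.linspace(0.0, np.pi, 181)
profile = cmb_heat_flux(theta, 0.0, np.radians(90.0), 1.0, 0.0)
```

For the planetary-scale polar case (Ψ = 180°, q ∗ = 1, τ = 0) this sketch gives a surface mean of 1/2 and a heat flux that vanishes at the northern pole and reaches twice the mean value at the southern pole, consistent with the profiles described for Fig. 1.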
However, the influence of the aspect ratio β was tested by varying the aspect ratio between β=0.35 and β=0.15, which corresponds to the smallest inner core size. The vigour of the convection was varied using four different Rayleigh numbers from Ra=2×107, which is only slightly supercritical, to Ra=1.6×108, which ensures rich turbulent dynamics. We repeated the numerical experiments for simulations including the magnetic field for various amplitudes q ∗ and tilt angles τ. Together with reference cases with no thermal boundary heterogeneity for each β and Ra, these parameters represent 129 hydrodynamic and 73 self-consistent dynamo simulations (202 simulations in total). Each case was time integrated until a statistically steady state was reached and then time-averaged over a significant fraction of the viscous (magnetic) diffusion time. The moderate Ekman number allowed each of the hydro cases to be modelled within a computational time of 1 or 2 days when parallelised over 12 cores, whereas dynamo simulations usually require several days. Previous work and output parameters Figure 2 illustrates the mean flow and field properties for an unperturbed reference dynamo case with homogeneous boundary heat flux (Fig. 2 a) and a commonly studied model with a heterogeneous CMB heat flux (Fig. 2 b). In the reference case, ESN convective columns (e.g., Busse 1970) account for 72 % of the total kinetic energy. Table 1 shows the kinetic energy symmetry contributions in the two cases. Axisymmetric flow contributions consist of the zonal flow and the meridional circulation. Both are predominantly equatorially symmetric but have low amplitudes. These equatorially symmetric, axisymmetric (ESA) kinetic energy contributions therefore amount to only 3 % of the total kinetic energy (Table 1). The zonal temperature \(\overline {T}\) is also equatorially symmetric, and its colatitudinal gradient is in very good agreement with the z-variation of the zonal flow (last two plots in the first row of Fig. 2 a), which proves that this z-variation is caused by thermal wind. The mean radial and azimuthal magnetic fields (first two plots in the second row of Fig. 2) show the typical equatorial antisymmetry of a dipole-dominated magnetic field. This field is produced by non-axisymmetric flows (intensity \(| u_{r}^{\prime }|\)) in an α 2-mechanism (Olson et al. 1999). The final two plots in the second row of Fig. 2 show the Hammer–Aitoff projections of the radial field at the CMB and the radial flow at mid-depth. Mean flow and magnetic properties. a Homogeneous reference case (q ∗=0, Ra=4×107). b Standard boundary forcing case (q ∗=1.0, τ=0, Ψ= 180°). The first row in each part shows, from left to right, the zonal flow (\(\overline {u_{\phi }}\)), the stream function of the meridional circulation (ζ), the zonal temperature (\(\overline {T}\)), and the two sides of the thermal wind equation (\(\partial \overline {u_{\phi }} / \partial z\) and RaE/(2r 0Pr) ∂ T/∂ θ). The second row in each part contains the radial field (\(\overline {B_{r}}\)), the mean azimuthal field (\(\overline {B_{\phi }}\)), the intensity of non-axisymmetric radial flows (\(|u_{r}^{\prime }|\)), and hammer projections of the radial field at the surface (top) and the radial flow at mid-depth (bottom). Parameters: Ra=4×107, q ∗=1.0, τ=0, β=0.35, Pm=2 Table 1 Flow symmetries We compare the homogeneous case with the most commonly studied CMB heat flux anomaly: a Y 10 anomaly with a strong amplitude (q ∗=1.0). 
Such an anomaly cools the southern hemisphere more efficiently than the northern hemisphere; hence, the temperature decreases from the hot north to the cool south, leading to a negative temperature gradient along the colatitude. Such strong temperature anomalies are known to modify the leading order vorticity balance between the pressure and the Coriolis force. The curl of the Navier–Stokes Eq. 2 gives to first order (neglecting viscous and inertia terms): $$ \nabla \times \widehat{z}\times \overrightarrow{u}=\frac{\mathrm{RaE}}{2 \Pr}\frac{1}{r_{\mathrm{cmb}}}\nabla \times \left(\overrightarrow{r}T\right)\kern1em . $$ In models with a homogeneous heat flux and relatively small Ekman number, the right-hand side of Eq. 9 is small, and the flow (at least the convective flow) tends to be vertically invariant. Because of the rigid walls applied here, the zonal flow is weak and ageostrophic in the reference case. However, for the boundary-forced system, the right-hand side becomes large. The heterogeneous CMB heat flux mainly cools the southern hemisphere and leaves the northern hemisphere hot. The large-scale temperature asymmetry that develops between the north and south is responsible for driving a significant axisymmetric thermal wind. For the mean azimuthal component of Eq. 9, it is thus found that $$ \frac{\partial \overline{u_{\phi }}}{\partial z}=\frac{\mathrm{RaE}}{2 \Pr}\frac{1}{r_{\mathrm{cmb}}}\frac{\partial \overline{T}}{\partial \theta}\kern1em . $$ Figure 2 compares the right- and left-hand sides of Eq. 10 and demonstrates that this thermal wind balance is indeed well fulfilled in both the homogeneous and Y 10 cases. The larger north–south gradient in the latter case drives a zonal wind system with retrograde and prograde jets in the northern and southern hemispheres, respectively. This EAA flow system dominates the kinetic energy once the amplitude of the boundary pattern is sufficiently large (see Table 1). We therefore quantify the influence of the boundary forcing by measuring the relative contribution C EAA of EAA flows by evaluating (in spectral space) the relative kinetic energy of spherical harmonic flow contributions of order m=0 (axisymmetric) and odd degree l=2n+1 (equatorially antisymmetric) $$ {C}_{\mathrm{EAA}}=\frac{\underset{r_{\mathrm{icb}}}{\overset{r_{\mathrm{cmb}}}{\int }}\sum_l{E}_{2l+1,0}^k{r}^2dr}{\underset{r_{\mathrm{icb}}}{\overset{r_{\mathrm{cmb}}}{\int }}\sum_{l,m}{E}_{l,m}^k{r}^2dr}\kern1em . $$ The radial flows required to induce radial fields are then concentrated in the southern polar region (Fig. 2, bottom row), where the cooling is more efficient. Hence, the induced radial field is also concentrated in the respective hemisphere. The dominant magnetic field production, however, remains the thermal wind shear, which produces a strong axisymmetric azimuthal field via the Ω-effect. The bottom row of Fig. 2 shows the hemisphere-constrained radial field and the amplified azimuthal toroidal field created by shearing motions around the equator. DW13 reported that the Ω-effect dominates once the Y 10 amplitude exceeds 60 % of the mean heat flux. The dynamo then switches from α 2- to α Ω-type, initiating periodic polarity reversals that are characteristic of this dynamo type (see also Dietrich et al. 2013). Note that EAA symmetric flows can emerge in dynamo models with a homogeneous heat flux (Landeau and Aubert 2011) when they are internally heated and satisfy a large convective supercriticality. 
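A minimal sketch of how the EAA measure C EAA can be evaluated from a spectral decomposition of the kinetic energy is given below; the array layout E_k[l, m, radial index] and the radial grid are assumptions made for illustration only and do not reflect the internal data structures of MagIC.

```python
import numpy as np

def eaa_fraction(E_k, r):
    """Relative EAA kinetic energy: modes with m = 0 and odd degree l,
    radially integrated with the r^2 weight, divided by the total.

    E_k : array of shape (l_max+1, m_max+1, n_r) with the spectrum E^k_{l,m}(r)
          (assumed layout, for illustration only)
    r   : radii of the n_r grid points between r_icb and r_cmb
    """
    dr = np.gradient(r)                              # local radial spacing
    weight = r**2 * dr
    total = np.sum(E_k.sum(axis=(0, 1)) * weight)
    l = np.arange(E_k.shape[0])
    eaa = np.sum(E_k[l % 2 == 1, 0, :].sum(axis=0) * weight)   # m = 0, odd l
    return eaa / total

# Example with random spectra (for illustration only); the radii correspond to
# the non-dimensional shell 0.54 < r < 1.54 for beta = 0.35 with 49 grid points.
rng = np.random.default_rng(0)
r = np.linspace(0.54, 1.54, 49)
E_k = rng.random((22, 22, r.size))
print(eaa_fraction(E_k, r))
```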
Conversely, the anomalies themselves can also drive sufficiently complex flows to induce magnetic fields in the absence of thermal and compositional buoyancy fluxes (Aurnou and Aubert 2011). The magnetic field is stronger in the convectively more active hemisphere and shows weaker magnetic flux in the less active hemisphere. Amit et al. (2011) gave an estimate of the mean crustal magnetisation per hemisphere on present-day Mars, which can be compared to the geometry of the model output fields. DW13 defined the magnetic field hemisphericity H sur at the planetary surface as: $$ H_{\text{sur}} = \left\vert \frac{B_{N} - B_{S}}{B_{N}+B_{S}} \right\vert \, $$ where B N and B S are the radial magnetic fluxes in the northern and southern hemispheres, respectively. Note that the crustal value is 0.55±0.1 (Amit et al. 2011, DW13). As radial fields are induced by convective flows, an equivalent convective hemisphericity Γ can be defined as $$ \varGamma =\left|\frac{\varGamma_N-{\varGamma}_S}{\varGamma_N+{\varGamma}_S}\right|\kern1em . $$ A formal derivation of Γ N and Γ S is given in the 'Results and discussions' section. We further quantify the mean flow amplitude with the hydrodynamic Reynolds number Re for the hydrodynamic simulations and the magnetic Reynolds number Rm for the dynamo simulations. In the latter case, the mean core magnetic field strength is given by the Elsasser number Λ: $$ \text{Re} = \frac{\text{UD}}{\nu} \, \qquad \text{Rm} = \frac{\text{UD}}{\lambda} \,\qquad \Lambda = \frac{B^{2}}{\mu_{0} \lambda \rho \Omega} $$ As suggested in the study by Hori et al. (2014) and DW13, heat flux anomalies applied along the equator (τ= 90°) yield special solutions. In this case, the equatorial symmetry of the CMB heat flux is re-established as in the case with homogeneous boundaries, but the azimuthal symmetry and axisymmetry are broken. It has been reported that anomalies of a planetary scale modelled by spherical harmonics of degree and order unity (Y 11) lead to flows dominated in spectral space by azimuthal order m=1 resembling the anomaly. In the same manner as Hori et al. (2014), we measure the dominance of m=1 by evaluating $$ {E}_{m1}=\frac{\underset{r_{\mathrm{icb}}}{\overset{r_{\mathrm{cmb}}}{\int }}\sum_l{E}_{l,1}^k{r}^2dr}{\underset{r_{\mathrm{icb}}}{\overset{r_{\mathrm{cmb}}}{\int }}\sum_{l,m}{E}_{l,m}^k{r}^2dr}\kern1em . $$ Of the 202 simulations performed in this study, selected numerical models are presented in Table 2 to provide an overview of and compare a number of hydrodynamic runs (centre column) and their equivalent dynamo runs (right column). All results in Table 2 were calculated with fixed parameters Pr=1, Pm=0/2 (hydrodynamic/dynamo runs), Ra=4×107, E=10−4, and β=0.35 and with variable forcing amplitude q ∗, anomaly width Ψ, and tilt angle τ. Table 2 Selected numerical models with fixed Rayleigh Ra=4×107, Ekman E=10−4, and Prandtl Pr=1 numbers and aspect ratio β=0.35 As noted above, the amplitude of the CMB heat flux variation is determined by the thermal lower mantle structure. For Earth, the amplitude can exceed the superadiabatic part of the homogeneous flux, indicating that values of q ∗>1 may be possible. Here, we restricted ourselves to q ∗≤1, thereby avoiding models with local core heating that may lead to stable stratification. To isolate the influence of the anomaly amplitude, the tilt angle was fixed at τ=0, and the amplitude q ∗ and the width Ψ were varied in small steps. 
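The hemisphericity diagnostic defined above can be estimated from a gridded snapshot of the radial field at the planetary surface, as in the short sketch below; treating B N and B S as the unsigned radial flux integrated over each hemisphere is an assumption consistent with the description of H sur, and the grid sizes merely mirror the angular resolution quoted earlier.

```python
import numpy as np

def hemisphericity(Br, theta):
    """Magnetic field hemisphericity H_sur from the radial field.

    Br    : radial field on a grid of shape (n_theta, n_phi)
    theta : colatitudes of the grid rows in radians, 0 at the north pole
    """
    w = np.sin(theta)[:, None]                   # surface element weight
    flux = np.abs(Br) * w                        # unsigned radial flux density
    north = flux[theta < np.pi / 2, :].sum()
    south = flux[theta >= np.pi / 2, :].sum()
    return abs(north - south) / (north + south)

# A purely dipolar field gives H_sur ~ 0, i.e. no hemispherical preference,
# whereas the Martian crustal estimate quoted above is 0.55 +/- 0.1.
theta = np.linspace(0.0, np.pi, 144)
phi = np.linspace(0.0, 2.0 * np.pi, 288, endpoint=False)
Br_dipole = np.cos(theta)[:, None] * np.ones_like(phi)[None, :]
print(hemisphericity(Br_dipole, theta))
```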
Figure 3 a shows the strength of the EAA contribution C EAA with respect to the total kinetic energy. For the largest and strongest anomaly with Ψ= 180° and q ∗=1, the highest value of C EAA is found (black line in Fig. 3 a). When the width Ψ of the anomaly was reduced and its amplitude was kept fixed, the EAA symmetry contribution reduced accordingly. Hence, the enormous EAA contributions found in DW13 are strongly related to the large scale of the anomaly chosen there. It is thus not (or not only) the breaking of the equatorial symmetry that leads to strong antisymmetric flow contributions. An anomaly with a weaker amplitude (q ∗<1) reduces the strength of the EAA contribution C EAA almost linearly. Interestingly, halving the anomaly width has almost the same effect as halving the anomaly amplitude. As an example, for Ψ= 180° and q ∗=0.5, the EAA contribution is 0.338, whereas Ψ= 90° and q ∗=1.0 yield an EAA contribution of 0.331. Influence of various model parameters on EAA contribution. Impact of a anomaly amplitude, b tilt angle, c shell geometry, and d vigour of convection on the EAA contribution for varying anomaly width Ψ. The coloured vertical lines denote the colatitude of the tangent cylinder. Reference parameters (unless specified otherwise): Ra=4×107, q ∗=1.0, τ=0, β=0.35 Figure 4 shows the zonally averaged temperatures and azimuthal flows for various combinations of anomaly amplitudes q ∗ and widths Ψ in four selected models with polar anomalies (τ=0). Note that the thermal wind balance (Eq. 10) is always satisfied. One could expect that narrower anomalies (smaller Ψ) with stronger horizontal heat flux gradients would have stronger and more concentrated thermal winds, but this does not seem to be the case. The large-scale temperature anomaly between the north and south develops independent of the anomaly width. The first three models in Fig. 4 have similar EAA contribution strengths C EAA (see also Table 1), supporting the quasi-linear increase of EAA symmetry with amplitude and width. Zonal temperature and flow. The zonally averaged temperature (top) and differential rotation (bottom) are shown for four example cases with variable anomaly width Ψ and amplitude q ∗. Note that the first three cases have similar EAA symmetric flows. The black arcs denote the width of the anomaly and the arc thickness scales with the amplitude. Parameters: Ra=4×107, τ=0 Latitudinal position Thus far, we have focused on polar anomalies, where the anomaly peak vector is aligned with the rotation axis, which is a special situation. Because those break only the equatorial symmetry and not the azimuthal symmetry, the total CMB heat flux remains colatitude-dependent but axisymmetric. Mantle plumes and giant impacts generally do not sit on or reach the pole. Therefore, we explored several tilt angles τ. The magnetic case with a planetary-scale heat flux anomaly was explored in DW13 across a variety of tilt angles, and it was demonstrated that all tilt angles τ< 80° lead to a fairly strong EAA contribution. Such behaviour was also observed in a study by Amit et al. (2011). Figure 3 b shows the EAA flow contribution C EAA for four tilt angles τ. The first case, with τ= 22°, was set up such that the anomaly peak vector was located at the colatitude of the tangent cylinder. 
As will be discussed later, the tangent cylinder and the shell geometry were expected to have a strong influence on the dynamics, but our results show that they have no particular influence on the location of the heat flux anomaly for the thick shells studied here. However, tilting the anomaly further from the axis of rotation did not significantly affect the strength of the EAA symmetry for moderate tilt angles (τ≤ 45°). In this case, non-axisymmetric flow contributions are changed little, and the EAA contribution remains surprisingly strong. For Ψ= 180°, we can decompose the heat flux anomaly into Y 10 and Y 11 contributions. The effective Y 10 contribution is simply q ∗ cos(τ). The EAA remained significantly stronger than even the effective Y 10 contribution would suggest. For example, for τ= 45°, we expected \(C_{\text {EAA}} = C_{\text {EAA}}(\tau =0)/ \sqrt {2} = 0.42\), but we obtained C EAA=0.54. The nearly equatorial case showed distinct behaviour. In general, it might be concluded that any anomaly smaller than Ψ= 60° might only weakly influence the core convection. Note that this behaviour is independent of the tilt angle or anomaly amplitude. Figure 5 illustrates the time-averaged temperature and zonal flow in four different meridional cuts. The sample model here is for q ∗=1, Ψ= 180°, and τ= 45°. The first plot is positioned at a longitude of ϕ=0 and includes the location of the centre of the anomaly (black line). At this value of ϕ, the CMB heat flux at θ= 45° is exactly zero. The three other plots are taken at intervals of ϕ= 90° eastwards. Remarkably, large-scale equatorial asymmetry in the temperature is visible in all cuts; hence, the EAA symmetry driven by the thermal wind is a meaningful measure for the tilted cases as well. Interestingly, not much is visible of the broken azimuthal symmetry. Meridional cuts of flow and temperature. The temperature (top) and differential rotation (bottom) are shown for four different meridional cuts at ϕ for an anomaly of width Ψ= 180°, amplitude q ∗=1, and tilt angle τ= 45°. The thick black line denotes the central anomaly peak vector. Parameters: Ra=4×107 For numerical reasons, we have kept in our model an inner core that is purely driven by internal heat sources. Hori et al. (2010) have shown that such an inner core has little impact on the solution for homogeneous outer boundary conditions. Figure 3 c proves that this remains true for the inhomogeneous heat flux explored here. The aspect ratio affects the critical Rayleigh number for the onset of convection. Therefore, the cases compared here have different supercriticality. However, this also seems to have little impact for the limited range of Rayleigh numbers we studied. Vigour of convection The Rayleigh number Ra can influence the EAA instability in at least two ways. It directly scales buoyancy forces and thus also scales the thermal wind strength (see Eq. 10). The EAA contribution should therefore grow linearly with Ra. However, Ra also controls the convective vigour and length scale. As Ra grows, the scale decreases while the vigour increases, and both of these changes lead to more effective mixing, which should counteract the impact of the inhomogeneous boundary condition. Figure 3 d illustrates that the EAA contribution decreases with growing Ra such that the latter effect seems to dominate. However, for large anomalies, the relative EAA flow symmetry seems to become saturated for all Ra. 
It has been reported that EAA flows enforced by boundary anomalies dramatically affect the morphology and time dependence of the magnetic field (DW13). A polar planetary-scale anomaly with amplitude q ∗=1 and width Ψ= 180° transforms a strong and stationary dipolar-dominated magnetic field into a weaker wave-like hemispherical dynamo that is dominated by an axisymmetric toroidal field (DW13). The altered magnetic field configuration also indicates the strength of the EAA symmetry. DW13 suggested that the magnetic field significantly enhances the EAA contribution. A flow that inhibits a strong EAA symmetry results from a strongly asymmetric temperature anomaly, which is maintained by the thermal wind and hence confines the convective motions into a single hemisphere. This weakens the global amount of convection, potentially inducing magnetic fields. Another possible effect is that the magnetic field may relax the strong rotational constraint on the flow, thus allowing the convection to develop in more three-dimensional rather than columnar structures. This would support the convection in the magnetically active hemisphere and increase the temperature asymmetry and hence the EAA symmetry. Finally, the strong azimuthal toroidal field that emerges in the equatorial region, created by fierce equatorial antisymmetric shear associated with the EAA symmetric differential rotation, can potentially suppress columnar convective flows within this equatorial region (DW13). Figure 6 compares simulations including the magnetic field generation process (solid lines) with the previously discussed hydrodynamic cases (dashed). Figure 6 a is restricted to polar anomalies (fixed τ=0) and takes several anomaly amplitudes q ∗, whereas Fig. 6 b fixes the amplitude q ∗ and varies the tilt angle τ; both parts of the figure plot the EAA contribution against the anomaly width Ψ. For smaller or vanishing anomalies, the magnetic field actually suppresses the effect of the heat flux inhomogeneity and reduces the EAA contribution to nearly zero. The magnetic field now further suppresses the equatorially asymmetric contributions that were already relatively weak in the non-magnetic case. Effect of magnetic fields on EAA convective mode. Relative EAA kinetic energy contribution for a polar anomalies (fixed τ=0) with various amplitudes q ∗ and for b fixed q ∗=1.0 with various tilt angles τ as function of the anomaly width Ψ. For comparison, the hydrodynamic reference cases are included as dashed curves. The EAA contributions C EAA for the homogeneously cooled reference cases are 0.01 (dynamo) and 0.06 (hydrodynamic). Oscillatory dynamos are indicated by empty circles. Parameters: Ra=4×107, Pm=2 (dynamo) or Pm=0 (hydrodynamic) If the anomalies reach a width of Ψ= 90°, the magnetic field drastically enhances the EAA symmetry relative to the hydrodynamic runs. For q ∗=0.75 and Ψ= 90°, the magnetic field changed the EAA contribution from 0.25 to 0.67, indicating a stronger convection in the magnetically more active (southern) hemisphere. All dynamos reach a strong magnetic field (Elsasser number Λ≥1), which is when the leading force balance in the momentum equation is between the Coriolis and Lorentz forces. Note that the azimuthal vorticity balance (thermal wind) is still exclusively between the Coriolis force and the buoyancy (see the last two plots in the upper row of the lower set in Fig. 2). Figure 6 b illustrates that the magnetic effect on the EAA contribution remains similar as the tilt angle τ varies. 
The magnetic field increases the EAA contribution for larger opening angles but decreases it for smaller values of Ψ. Open circles in Fig. 6 show the locations at which oscillatory reversing dynamos have been found. This behaviour is promoted by a strong Ω-effect and therefore requires a large thermal wind shear. Because the main thermal wind contribution is EAA, the respective measure of the EAA symmetry is a good proxy for the Ω-effect (Dietrich et al. 2013). Figure 6 demonstrates that oscillatory dynamos are only found for relatively large EAA contributions. The tilt angle also plays a role in this behaviour. For τ=0, only models with C EAA>0.7 are oscillatory, whereas for τ= 45°, a non-reversing dynamo still exists at C EAA=0.68. Even though a strong EAA symmetric flow leads to strong shear around the equatorial region, yielding a strong Ω-effect (Dietrich et al. 2013), it can co-exist with non-reversing fields. If, despite the strong shearing, a sufficient fraction of toroidal field is created by non-axisymmetric helical flows, the field remains stable.

For a more detailed investigation of how the magnetic field affects the flow, a measure of the heat transport efficiency was developed. We correlate the convective motion in terms of non-axisymmetric radial flows \(u_{r}^{\prime }\) with non-axisymmetric temperature perturbations T ′ over the azimuth and time. Radial integration of this measure gives the mean vertical convective heat transport Γ c as a function of the colatitude alone:

$$ \Gamma_{c} (\theta) = \int_{r_{\text{i}}}^{r_{\text{o}}} \overline{u_{r}^{\prime} T^{\prime}}(r,\theta) \, r^{2} \, \mathrm{d}r, $$

where the overbar denotes the correlation over the azimuthal angle ϕ and time. This measure is closely related to the definition of the Nusselt number proposed by Otero et al. (2002), where another integration along the colatitude is taken. Because of the large temperature variation along the colatitude, we keep Γ c as a function of the colatitude θ.

Figure 7 shows the colatitudinal profiles of Γ c for a few cases; in each panel, the hydrodynamic (black) and self-consistent dynamo (red) simulations are shown. Of all the simulations, we chose to investigate the reference case with homogeneous heat flux, the case with Ψ= 90° and q ∗=0.75, and the most commonly studied case (Y 10) given by Ψ= 180° and q ∗=1.0 (Fig. 7 a, b, and c, respectively). The study case in Fig. 7 b was selected because it shows an enormous difference between the hydrodynamic and dynamo simulations and the magnetic field is non-reversing. The case in Fig. 7 c features a reversing magnetic field and hence oscillates between weak (hydrodynamic) and strong (dynamo) EAA symmetry. The correlation needed to calculate Γ c in Fig. 7 c is taken over only three time steps, during which the magnetic field remains strong and does not change sign, whereas for the other cases, the correlations are taken over the full magnetic diffusion time with tens of snapshots.

Radially integrated convective heat transport. Colatitudinal profiles of the vertical convective heat transport Γ c for hydrodynamic (black) and magnetic (red) simulations. The horizontal dashed lines denote the hemispherical average of each Γ c . Blue lines denote the relative contribution λ ϕ of the mean azimuthal toroidal field. (a) Reference case with homogeneous boundary heat flux. (b) Ψ= 90° and q ∗=0.75. (c) Y 10 case with a large-scale anomaly: Ψ= 180° and q ∗=1.0.
Note that whereas (a) and (b) show long time averages, (c) is calculated from a few snapshots because of the reversing magnetic field. Parameters: Ra=4×10⁷, τ=0, Pm=2

DW13 suggested that the emerging strong axisymmetric toroidal field induced by the EAA shear suppresses the radial, non-axisymmetric convective flows. To test this hypothesis, we calculated the strength of the mean azimuthal magnetic field relative to the total magnetic field. Because some magnetic fields oscillate, we took the time-averaged root mean square azimuthal field rather than the time-averaged field and obtained:

$$ \lambda_{\phi}(\theta) = \frac{1}{\sqrt{B^{2}}} \int_{r_{\text{i}}}^{r_{\text{o}}} \sqrt{\overline{B_{\phi}}^{2}}(r,\theta) \, r^{2} \, \mathrm{d}r, $$

which is included in Fig. 7 in blue. As suggested in DW13, we found that λ ϕ is large when the reduction of convective flows is large (see Fig. 8). The vertical convective heat transport Γ c is clearly suppressed when the toroidal field indicated by λ ϕ is large. For the case in Fig. 7 c, the enhancement of Γ c in the magnetically more active hemisphere is visible as well.

Hemisphericity of vertical heat transport. Asymmetry of the vertical heat transport in the northern and southern hemispheres. Pure hydrodynamic models (dashed) and dynamo simulations (solid) are included. Parameters: Ra=4×10⁷, τ=0, Pm=2 (dynamo) or Pm=0 (hydrodynamic)

Furthermore, we defined the convective hemisphericity Γ, which is equivalent to the magnetic field hemisphericity H sur but is based on the vertical heat transport Γ c integrated over either the northern or southern hemisphere, such that

$$ \Gamma = \left|\frac{\Gamma_{N}-\Gamma_{S}}{\Gamma_{N}+\Gamma_{S}}\right|, \qquad \mathrm{where} \quad \Gamma_{c}^{N,S} = \int_{0,\,\pi/2}^{\pi/2,\,\pi} \Gamma_{c}(\theta)\, \sin\theta \,\mathrm{d}\theta. $$

These values are plotted in Fig. 7 as dashed vertical lines, and the convective hemisphericity Γ is plotted in Fig. 8 for all polar anomalies (τ=0). According to our results, it can be concluded that the magnetic field (mainly the axisymmetric part of B ϕ ) is responsible for the increased equatorial asymmetry of the convective cooling and thus the boost of EAA symmetric flows (see Fig. 6). For the tilted cases (Fig. 6 b), the magnetically driven EAA enhancement also appeared for all tilt angles. Even for τ= 63°, the EAA contribution reached nearly 0.6 for the largest anomaly width Ψ. However, for these tilted cases, the EAA mode increased linearly with Ψ, whereas for smaller tilt angles, saturation occurred.

Equatorial anomalies

For anomalies tilted towards the equator (τ= 90°), the flows show a strong azimuthal variation that follows the outer boundary heat flux. Hence, significant kinetic energy contributions are expected from flows with an equivalent spectral order of m=1 (see definition of E m1, Eq. 15). Figure 9 gives an overview of the spectral response E m1 for the hydrodynamic cases with an anomaly amplitude of q ∗=1. Large equatorial anomalies led to an increase from E m1=0.026 in the homogeneous reference case to E m1=0.25 for the largest anomaly width. If the anomaly is tilted away from the equator again, both the equatorial and azimuthal symmetry are broken. Hence, it would be expected that apart from the equatorial case with τ= 90°, smaller tilt angles will also show an enhancement of E m1. Figure 9 also shows various tilt angles between τ= 90° and 45°, where the E m1 amplitude decreases with decreasing tilt angle.
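As an aside before continuing, the heat-transport diagnostics defined above are straightforward to evaluate from gridded simulation output. The sketch below is only meant to make the definitions concrete: the arrays are synthetic stand-ins for snapshots of u_r and T on a (time, radius, colatitude, longitude) grid, and every name in it is illustrative rather than taken from the code actually used for the simulations.

import numpy as np

def trapezoid(y, x, axis=-1):
    """Trapezoidal rule along `axis` (kept local to avoid NumPy version differences)."""
    y = np.moveaxis(np.asarray(y), axis, -1)
    dx = np.diff(np.asarray(x))
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * dx, axis=-1)

# Synthetic stand-ins for snapshots on a (time, r, theta, phi) grid.
nt, nr, nth, nph = 8, 16, 32, 64
r = np.linspace(0.35, 1.0, nr)                    # inner to outer boundary radius
theta = np.linspace(1e-3, np.pi - 1e-3, nth)      # colatitude
rng = np.random.default_rng(1)
amp = 1.0 + 0.6 * np.cos(theta)                   # mimic stronger convection in the north
u_r = amp[None, None, :, None] * rng.standard_normal((nt, nr, nth, nph))
T = u_r + 0.3 * rng.standard_normal((nt, nr, nth, nph))

# Non-axisymmetric fluctuations: subtract the azimuthal (phi) mean.
u_p = u_r - u_r.mean(axis=-1, keepdims=True)
T_p = T - T.mean(axis=-1, keepdims=True)

# Overbar = correlation over phi and time; radial integration with r^2 dr then
# gives Gamma_c(theta), following the definition in the text.
corr = (u_p * T_p).mean(axis=(0, -1))             # shape (nr, nth)
gamma_c = trapezoid(corr * r[:, None] ** 2, r, axis=0)

# Hemispheric integrals with the sin(theta) weight, and the asymmetry measure
# Gamma = |Gamma_N - Gamma_S| / (Gamma_N + Gamma_S).  The same recipe applied to
# |B_r| on a spherical surface gives the magnetic hemisphericity H_sur.
north = theta <= np.pi / 2
g_n = trapezoid((gamma_c * np.sin(theta))[north], theta[north])
g_s = trapezoid((gamma_c * np.sin(theta))[~north], theta[~north])
gamma = abs(g_n - g_s) / (g_n + g_s)
print(f"Gamma_N = {g_n:.3f}, Gamma_S = {g_s:.3f}, hemisphericity Gamma = {gamma:.3f}")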
Interestingly, the cases with more concentrated anomalies, e.g., Ψ= 90°, yielded higher values of E m1 than the planetary-scale anomalies with Ψ= 180°. For larger Ψ reaching far enough across the equator, it seems that the equatorial asymmetry takes control, and EAA symmetric flows are established. It then seems reasonable for E m1 to decay only for τ≠ 90°. If the magnetic field is included (Fig. 9 b), the systematic behaviour of E m1 found in the hydrodynamic simulations remains largely the same. This suggests that the magnetic field is not important for E m1 as a measure of the dynamic response to breaking the azimuthal symmetry of the outer boundary heat flux.

m=1 dominance for equatorial anomalies. Relative kinetic energy contribution of the m=1 flows (E m1, solid lines) and the EAA contribution (dashed) for anomalies of various tilt angles (τ) as a function of anomaly width Ψ for (a) hydrodynamic and (b) self-consistent dynamo simulations. Parameters: Ra=4×10⁷, q ∗=1.0, Pm=2 (dynamo) or Pm=0 (hydrodynamic)

Application to Mars

For application to Mars, the time-averaged surface hemisphericity of the radial field is correlated with the EAA symmetry. The two are dynamically linked because a strong EAA kinetic energy contribution relies on strongly equatorially asymmetric convection, which in turn induces a hemispherical field. The magnetic field is extrapolated by a potential field towards the Martian surface with an outer core radius of r cmb=1680 km and a surface radius of r sur=3385 km. The magnetic field hemisphericity H sur at the surface gives a ratio that is a function of the radial field intensities B N and B S integrated over the northern and southern hemispheres, as defined in Eq. 12. Figure 10 shows the correlation between the EAA symmetry and the surface hemisphericity of the radial field. In these simulations, a weak EAA led to weak H sur, as expected. For large EAA symmetry enforced by boundary forcing, the magnetic fields tend to be more hemispherical. The lower bound on the Martian crustal value of H sur is 0.45 (Amit et al. 2011), which is only reached when the anomalies have a large horizontal extent (Ψ> 120°) and amplitude (q ∗≥0.75). Even though the EAA contribution can become dominant for smaller and weaker anomalies, it is far more challenging to induce a magnetic field of sufficient surface hemisphericity in this case. Furthermore, all magnetic fields with sufficiently high H sur values are oscillatory, which makes a thick unidirectional magnetisation unlikely (see discussion in DW13).

Magnetic field hemisphericity vs. EAA. Hemisphericity of the radial magnetic field at the Martian surface plotted against the EAA flow contribution. The different colours, symbols, and symbol sizes represent different tilt angles (τ), anomaly amplitudes (q ∗), and widths of the anomaly (Ψ), respectively, and the crosses indicate reversing dynamos. The specific values of C EAA and further characteristic quantities for the oscillatory dynamos can be found in Fig. 6 and Table 2. The small inset plot (top left) includes only points with H sur≥0.2 and C EAA≥0.5. The vertical grey dashed line indicates the lower limit of the Martian crustal value of H sur. Parameters: The Rayleigh and Ekman numbers are kept constant throughout all simulations in the plot (Ra=4×10⁷, E=10⁻⁴)

DW13 explored the parameter dependence of a polar Y 10 anomaly, changing the Rayleigh, magnetic Prandtl, and Ekman numbers within the numerically accessible limits (see their Fig. 11).
Not unexpectedly, decreasing the Ekman number led to smaller hemisphericities because the geostrophic geometry was more severely enforced. This can be counteracted by increasing the anomaly amplitude; however, increasing Ra or Pm also seemed to help, likely because inertia or Lorentz forces more significantly contribute to balancing the Coriolis force. At realistically small Ekman numbers of approximately 3×10⁻¹⁵ and appropriate Ra and Pm for Mars, this likely means that unrealistically large heat flux variations would be required to yield the observed hemisphericity. Inertial forces are thought to be small in planetary cores, whereas Lorentz forces should not significantly exceed the strength reached at the smaller Ekman number of 10⁻⁵ explored by DW13. The generally oscillatory nature of high hemisphericities remains a problem at all parameter combinations and geometries explored in DW13 and in the present study. Latitudinal temperature variations paired with their respective gradients in convective efficiency never fail to drive strong thermal winds. These in turn lead to a significant Ω-effect that seems to favour oscillatory dynamos (Dietrich et al. 2013). None of the variations in the general set-up explored in this paper have indicated that this fundamental mechanism is incorrect.

We constructed a suite of 202 numerical models of spherical shell convection and magnetic field generation in which the outer boundary heat flux was perturbed by an anomaly of variable width, amplitude, and position. The convection was driven exclusively by secular cooling, which is an appropriate model for terrestrial planets in the early stages of their evolution, when no inner core is present. The dynamic response of the flow was measured in terms of the expected spectral equivalent of the boundary forcing. For anomalies breaking the equatorial symmetry, the relative strength of EAA kinetic energy was used, and for anomalies breaking the azimuthal symmetry, the relative strength of flows with a spectral order of m= 1 was investigated (E m1).

For hydrodynamic models without a magnetic field, the strength of the EAA symmetry was found to increase almost linearly with the amplitude and width of the anomaly. These flows are driven by a large-scale equatorial asymmetry in the axisymmetric temperature field. Hence, a more localised CMB heat flux anomaly does not lead to a stronger or more confined thermal wind, even though the horizontal variation of the heat flux is locally much larger. The simulations also indicated that models perturbed by narrower anomalies or anomalies that are not aligned with the axis of rotation also yield the same fundamental temperature asymmetry. For example, if the anomaly peak is tilted by an angle τ≤ 45° from the axis of rotation, the EAA symmetry is quite similar to that in the case of the polar anomaly (τ=0). Furthermore, this suggests that the system is more sensitive to changes in the equatorial symmetry than in the azimuthal symmetry. For equatorial anomalies (τ= 90°), the spectral response (E m1) reached up to 25 % of the kinetic energy and was only mildly affected by the magnetic field. Interestingly, for tilt angles 45° ≤τ≤ 80°, the contribution of E m1 could be measured as well and was found to typically be the strongest for moderately sized anomalies 60° ≤Ψ≤ 90°. Larger anomalies broke the equatorial symmetry as well, thus increasing the EAA symmetry at the cost of the E m1 symmetry.
For numerical reasons, the models were run with an inner core present; as such, we further tested the influence of smaller aspect ratios, which proved to be negligible. A similar conclusion was reached by Hori et al. (2010) for a homogeneous outer boundary heat flux and may be extended with this study to boundary-forced models. As the primary purpose of our model is to comprehensively parametrise the various boundary anomalies, out of various other model parameters (Ra, E, Pm, Pr), we only tested the influence of increasing the convective vigour. Our model also indicated that stronger convective stirring does not suppress fundamental temperature asymmetries or the EAA mode. However, it was shown that the model is slightly more sensitive to boundary forcing when convective driving is weaker. DW13 provides a discussion of the possible dependence on the Ekman number (rotation rate) and the magnetic Prandtl number, showing the robustness of a similar model featuring Y 10-forcing. In the presence of dynamo action, the behaviour is far more non-linear. For narrow anomalies, the flow is equatorially symmetrised by the dipole field, whereas anomalies with widths Ψ≥ 90°, amplitudes q ∗≥0.5, and tilt angles τ≤ 45° strongly boost the EAA symmetry. It was shown that the strong azimuthal toroidal field around the equator suppresses the remaining columnar convection and further increases the asymmetry in the temperature and, as a consequence, the antisymmetry in the core flow. Hence, the magnetic field prevents narrow heat flux anomalies from affecting the core convection, where it drastically increases their respective effects when they reach a horizontal extent on the same order as the core radius. This effect is independent of the tilt angle, amplitude, and width of the anomaly when τ≤ 45°, q ∗≥0.5, and Ψ≥ 90°. For all models within these boundaries, this in turn also implies that CMB heat flux anomalies can be smaller, weaker, and non-polar while still yielding effects similar to those of the fundamental Y 10 anomaly. A similar observation was reported recently by Monteux et al. (2015). Hence, our results suggest that the core dynamos of ancient Mars or early Earth are sensitive to CMB heat flux anomalies only if they are strong in amplitude and large in horizontal extent. Regarding the hemispherical magnetisation of the Martian crust, it seems rather unlikely that the magnetising field is as hemispherical as the crustal pattern suggests; hence, the crustal magnetisation dichotomy seems only realistically explained by additional demagnetisation events of external origins in the northern hemisphere. The results of the numerical models clearly indicate that a sufficiently hemispherical field is possible only if the anomaly is of core scale and significantly affects the CMB heat flux. However, all of these geometrically corresponding hemispherical dynamos show rather frequent polarity reversals and hence would require a crustal rock magnetisation time on the order of the magnetic diffusion time (tens of thousands of years), which might be a rather unrealistic scenario for a thick magnetised layer of at least 20 km. Note that it is possible to create a magnetic field that shows a smaller degree of equatorial asymmetry and is stable in time. However, at more realistic model parameters, e.g., a smaller Ekman number, it seems likely that these models remain applicable only when a much stronger forcing is applied to sufficiently break the typical z-invariance of the flow (geostrophy). 
One common feature consistently found here and in DW13 is that a stronger thermal forcing naturally leads to oscillatory fields. Thus, the main results obtained in this study are Fundamental large-scale equatorial asymmetry in the temperature and hence EAA symmetric flows emerge independent of the width, position, and amplitude of a CMB heat flux anomaly. The magnetic field prevents narrow heat flux anomalies from affecting the core convection but drastically increases their respective effects when the anomalies reach horizontal extents on the same order as the core radius. At least for the parameters and geometries explored here and in DW13, it is not possible for a hemispherical dynamo to explain the observed dichotomy in Martian crustal magnetisation. Acuña, MH, Connerney JEP, Ness NF, Lin RP, Mitchell D, Carlson CW, et al.Global distribution of crustal magnetization discovered by the Mars Global Surveyor MAG/ER experiment. Science. 1999; 284:790. doi:10.1126/science.284.5415.790 Amit, H, Christensen UR, Langlais B. The influence of degree-1 mantle heterogeneity on the past dynamo of Mars. Phys Earth Planet In. 2011; 189:63–79. doi:10.1016/j.pepi.2011.07.008. Arkani-Hamed, J. Did tidal deformation power the core dynamo of Mars?Icarus. 2009; 201:31–43. doi:10.1016/j.icarus.2009.01.005. Aurnou, JM, Aubert J. End-member models of boundary-modulated convective dynamos. Phys Earth Planet In. 2011; 187:353–63. doi:10.1016/j.pepi.2011.05.011. Bloxham, J. Sensitivity of the geomagnetic axial dipole to thermal core-mantle interactions. Nature. 2000; 405:63–5. Breuer, D, Labrosse S, Spohn T. Thermal evolution and magnetic field generation in terrestrial planets and satellites. Space Sci Rev. 2010; 152:449–500. doi:10.1007/s11214-009-9587-5. Busse, FH. Thermal instabilities in rapidly rotating systems. J Fluid Mech. 1970; 44:441–60. doi:10.1017/S0022112070001921. Cao, H, Aurnou JM, Wicht J, Dietrich W, Soderlund KM, Russell CT. A dynamo explanation for Mercury's anomalous magnetic field. Geophys Res Lett. 2014; 41:4127–34. doi:10.1002/2014GL060196. Christensen, UR, Wicht J. Numerical dynamo simulations In: Olson, P, editor. Treatise on Geophysics. Amsterdam: Elsevier: 2007. p. 245–82. Citron, RI, Zhong S. Constraints on the formation of the Martian crustal dichotomy from remnant crustal magnetism. Phys Earth Planet In. 2012; 212:55–63. doi:10.1016/j.pepi.2012.09.008. Davies, CJ, Gubbins D, Willis AP, Jimack PK. Time-averaged paleomagnetic field and secular variation: predictions from dynamo solutions based on lower mantle seismic tomography. Phys Earth Planet In. 2008; 169:194–203. doi:10.1016/j.pepi.2008.07.021. Dietrich, W, Schmitt D, Wicht J. Hemispherical Parker waves driven by thermal shear in planetary dynamos. Europhys Lett. 2013; 104:49001. doi:10.1209/0295-5075/104/49001. Dietrich, W, Wicht J. A hemispherical dynamo model: implications for the Martian crustal magnetization. Phys Earth Planet In. 2013; 217:10–21. doi:10.1016/j.pepi.2013.01.001.1402.0337. Harder, H, Christensen UR. A one-plume model of Martian mantle convection. Nature. 1996; 380:507–9. doi:10.1038/380507a0. Hori, K, Wicht J, Christensen UR. The effect of thermal boundary conditions on dynamos driven by internal heating. Phys Earth Planet In. 2010; 182:85–97. doi:10.1016/j.pepi.2010.06.011. Hori, K, Wicht J, Christensen UR. The influence of thermo-compositional boundary conditions on convection and dynamos in a rotating spherical shell. Phys Earth Planet In. 2012; 196:32–48. doi:10.1016/j.pepi.2012.02.002. 
Hori, K, Wicht J, Dietrich W. Ancient dynamos of terrestrial planets more sensitive to core-mantle boundary heat flows. Planet Space Sci. 2014; 98:30–40. doi:10.1016/j.pss.2013.04.007. Kutzner, C, Christensen U. Effects of driving mechanisms in geodynamo models. Geophys Res Lett. 2000; 27:29–32. doi:10.1029/1999GL010937. Langlais, B, Purucker ME, Mandea M. Crustal magnetic field of Mars. J Geophys Res. 2004; 109:2008. doi:10.1029/2003JE002048. Landeau, M, Aubert J. Equatorially asymmetric convection inducing a hemispherical magnetic field in rotating spheres and implications for the past Martian dynamo. Phys Earth Planet In. 2011; 185:61–73. doi:10.1016/j.pepi.2011.01.004. Lillis, RJ, Frey HV, Manga M, Mitchell DL, Lin RP, Acuña MH, et al.An improved crustal magnetic field map of Mars from electron reflectometry: highland volcano magmatic history and the end of the Martian dynamo. Icarus. 2008; 194:575–96. doi:10.1016/j.icarus.2007.09.032. Marinova, MM, Aharonson O, Asphaug E. Mega-impact formation of the Mars hemispheric dichotomy. Nature. 2008; 453:1216–9. doi:10.1038/nature07070. Monteux, J, Amit H, Choblet G, Langlais B, Tobie G. Giant impacts, heterogeneous mantle heating and a past hemispheric dynamo on Mars. Phys Earth Planet In. 2015; 240:114–24. doi:10.1016/j.pepi.2014.12.005. Nimmo, F, Hart SD, Korycansky DG, Agnor CB. Implications of an impact origin for the Martian hemispheric dichotomy. Nature. 2008; 453:1220–1223. doi:10.1038/nature07025. Olson, P, Christensen U, Glatzmaier GA. Numerical modeling of the geodynamo: mechanisms of field generation and equilibration. J Geophys Res. 1999; 1041:10383–404. doi:10.1029/1999JB900013. Olson, PL, Coe RS, Driscoll PE, Glatzmaier GA, Roberts PH. Geodynamo reversal frequency and heterogeneous core-mantle boundary heat flow. Phys Earth Planet In. 2010; 180:66–79. doi:10.1016/j.pepi.2010.02.010. Otero, J, Wittenberg RW, Worthing RA, Doering CR. Bounds on Rayleigh Bénard convection with an imposed heat flux. J Fluid Mech. 2002; 473:191–9. doi:10.1017/S0022112002002410. Roberts, JH, Zhong S. Degree-1 convection in the Martian mantle and the origin of the hemispheric dichotomy. J Geophys Res. 2006; 111:6013. doi:10.1029/2005JE002668. Roberts, JH, Lillis RJ, Manga M. Giant impacts on early Mars and the cessation of the Martian dynamo. J Geophys Res. 2009; 114:4009. doi:http://dx.doi.org/10.1029/2008JE003287. Sreenivasan, B, Jellinek MA. Did the Tharsis plume terminate the Martian dynamo?Earth Planet Sci Lett. 2012; 349:209–17. doi:10.1016/j.epsl.2012.07.013. Stanley, S, Elkins-Tanton L, Zuber MT, Parmentier EM. Mars' paleomagnetic field as the result of a single-hemisphere dynamo. Science. 2008; 321:1822. doi:10.1126/science.1161119. Takahashi, F, Tsunakawa H, Matsushima M, Mochizuki N, Honkura Y. Effects of thermally heterogeneous structure in the lowermost mantle on the geomagnetic field strength. Earth Planet Sci Lett. 2008; 272:738–46. doi:10.1016/j.epsl.2008.06.017. Wicht, J. Inner-core conductivity in numerical dynamo simulations. Phys Earth Planet In. 2002; 132:281–302. Wicht, J, Heyner D. Mercury's Magnetic Field in the MESSENGER era In: Jin, S, editor. Boca Raton: Taylor & Francis: 2014. p. 223–62. This work was supported in part by the Science and Technology Facilities Council (STFC). Numerical simulations were undertaken on ARC1, part of the High Performance Computing facilities at the University of Leeds, UK. 
Department of Applied Mathematics, University of Leeds, Woodhouse Lane, Leeds, LS9 2JT, West Yorkshire, UK Wieland Dietrich & Kumiko Hori Max Planck Institute for Solar System Research, Justus-von-Liebig-Weg 3, Göttingen, 37077, Germany Johannes Wicht Solar-Terrestrial Environment Laboratory, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan Kumiko Hori Wieland Dietrich Correspondence to Wieland Dietrich. WD proposed the topic, designed the study, and carried out the numerical experiments. JW developed the numerical implementation and extended the code to include the variable anomaly. KH and JW collaborated with WD in the construction of the manuscript. All authors read and approved the final manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Dietrich, W., Wicht, J. & Hori, K. Effect of width, amplitude, and position of a core mantle boundary hot spot on core convection and dynamo action. Prog. in Earth and Planet. Sci. 2, 35 (2015). https://doi.org/10.1186/s40645-015-0065-2 Core convection Geodynamo Ancient Martian dynamo Inhomogeneous CMB heat flux Multidisciplinary Researches on Deep Interiors of the Earth and Planets
Why $P(a \leq X < b) = P(a < X \leq b) = P(a < X < b) = P(a \leq X \leq b)$?

Let $F_X$ denote the cumulative distribution function and $f_X$ the probability density function. If $X$ is a continuous random variable, then the following holds true: $$\begin{align*}P(a \leq X < b) = P(a < X \leq b) = P(a < X < b) = P(a \leq X \leq b) &= \int_{a}^{b} f_X(x)\,dx \\ &= F_X(b) - F_X(a)\end{align*}$$ How can I easily explain that the used inequality symbols $\{<, \leq, >, \geq \}$ don't matter? Don't they?

Comment (Henry): Not having these differences is close to being the definition of a continuous distribution function.

Answer (Kavi Rama Murthy): $P(X=a)=0$ for any $a$. Note that $[a,b]\setminus [a,b)=\{b\}$, so $P(a\leq X \leq b) - P(a\leq X <b)=P(X=b)=0$. A similar argument holds for the other cases.
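To spell out the limit argument behind $P(X=b)=0$, using only the continuity of $F_X$ (a standard step, made explicit here):
$$ P(X=b) \;=\; \lim_{\varepsilon\to 0^+} P(b-\varepsilon < X \leq b) \;=\; \lim_{\varepsilon\to 0^+}\bigl(F_X(b)-F_X(b-\varepsilon)\bigr) \;=\; 0, $$
because $F_X$ is continuous at $b$; equivalently, $P(X=b)=\int_{b}^{b} f_X(x)\,dx = 0$.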
Feedback stabilization for unbounded bilinear systems using bounded control Feedback stabilization for unbounded bilinear systems using bounded control El Ayadi,, Rachid;Ouzahra,, Mohamed 2019-12-18 00:00:00 Abstract In this paper, we deal with the distributed bilinear system |$ \frac{d z(t)}{d t}= A z(t) + v(t)Bz(t), $| where A is the infinitesimal generator of a semigroup of contractions on a real Hilbert space H. The linear operator B is supposed bounded with respect to the graph norm of A. Then we give sufficient conditions for weak and strong stabilizations. Illustrating examples are provided. 1. Introduction In this paper, we deal with the infinite dimensional bilinear system $$\begin{equation} \displaystyle\frac{d z(t)}{d t}= A z(t) + v(t)Bz(t),\quad\;z(0)=z_0\in H, \end{equation}$$ (1.1) where A is an unbounded operator of H with domain D(A) and generates a semigroup of contractions |$(S(t))_{t\geqslant 0}$| on a real Hilbert space H, whose norm and scalar products are denoted by ∥⋅∥ and ⟨⋅, ⋅⟩, respectively; the linear operator B, with domain D(A) ⊂ D(B), is A-bounded in the sense that there exists |$\alpha ,\beta>0$| such that |$\|Bz\|\leqslant \alpha \|Az\|+\beta \|z\|,\; \forall z\in D(A), $| or equivalently (see Desch & Schappacher 1984) $$\begin{equation} \|Bz\|\leqslant M\left(\|Az\|+\|z\|\right),\quad\; \forall z\in D(A)\; (\mbox{for some }\; M>0). \end{equation}$$ (1.2) The real valued function v(⋅) denotes the control and z(t) is the corresponding mild solution of (1.1). In the case where B ∈ L(H) (L(H) is the set of bounded linear operators on H), the problem of stabilization by non-linear feedback controls has been studied by many authors (see e.g. Ball & Slemrod, 1979; Berrahmoune, 1999; Bounit & Hammouri 1999; Ouzahra, 2008). In Ball & Slemrod, 1979, a result of weak stabilization was obtained under the following condition: $$\begin{equation} \langle BS(t)y,S(t)y \rangle=0,\quad\;\; \forall t\geqslant 0 \Longrightarrow y=0, \end{equation}$$ (1.3) by using the quadratic feedback $$\begin{equation} v(t) = -\langle z(t),Bz(t)\rangle\cdot \end{equation}$$ (1.4) On the other hand, if (1.3) is replaced by the following inequality: $$\begin{equation} \int_0^T|\langle BS(s)y, S(s)y \rangle|\, \mathrm{d}s \geqslant \boldsymbol{\mu} \|y\|^2,\quad\;\; \forall y\in H\;(\mbox{for some}\; \boldsymbol{\mu},T>0), \end{equation}$$ (1.5) then (see Berrahmoune, 1999; Ouzahra, 2008) the feedback (1.4) is a strongly stabilizing control and guarantees the following decay estimate: $$\begin{equation} \|z(t)\|=O\left(t^{\frac{-1}{2}}\right),\quad\; \mbox{as}\; t\to +\infty\cdot \end{equation}$$ (1.6) In the study by Bounit & Hammouri (1999), the authors considered the control $$\begin{equation} v(t)=-\frac{\langle Bz(t),z(t) \rangle}{1+|\langle Bz(t),z(t) \rangle|} \end{equation}$$ (1.7) and showed that if the resolvent of A is compact and B is bounded, self-adjoint and monotone, then the feedback (1.7) strongly stabilizes (1.1) provided that (1.3) holds. In the study by Ouzahra (2010), it has been shown that the control $$\begin{equation} v(t)=-\displaystyle\frac{\left \langle z(t),Bz(t)\right \rangle}{\|z(t)\|^2}\textbf{1}_{\{t\geqslant0; z(t)\ne0\}}, \end{equation}$$ (1.8) weakly stabilizes (1.1) provided that B is a compact operator such that (1.3) holds. On the other hand, it has been shown that if B is a bounded operator satisfying (1.5), then (see the studies by Ouzahra, 2010, 2011) the control (1.8) exponentially stabilizes (1.1). 
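Before turning to the unbounded setting, the feedback laws recalled above are easy to experiment with numerically in a finite-dimensional analogue. The sketch below is purely illustrative and is not one of the paper's examples: the matrices, dimension, and time horizon are arbitrary choices, with A negative semi-definite (so that it generates a contraction semigroup) and B symmetric positive semi-definite. It integrates dz/dt = Az + v(t)Bz once with the quadratic feedback (1.4) and once with a bounded feedback of the form (1.7), and reports the decay of the norm of z(t).

import numpy as np
from scipy.integrate import solve_ivp

# Finite-dimensional stand-ins for A (dissipative) and B (symmetric, B >= 0).
n = 5
rng = np.random.default_rng(0)
A = -np.diag(np.arange(n, dtype=float))      # <Az, z> <= 0; note A e_1 = 0
Q = rng.standard_normal((n, n))
B = Q @ Q.T / n                              # symmetric, generically positive definite

def v_quadratic(z):
    # Quadratic feedback (1.4): v = -<Bz, z>.
    return -z @ (B @ z)

def v_bounded(z):
    # Bounded feedback of the form (1.7): v = -<Bz, z> / (1 + |<Bz, z>|).
    q = z @ (B @ z)
    return -q / (1.0 + abs(q))

z0 = rng.standard_normal(n)
for name, feedback in [("quadratic (1.4)", v_quadratic), ("bounded (1.7)", v_bounded)]:
    rhs = lambda t, z, fb=feedback: A @ z + fb(z) * (B @ z)
    sol = solve_ivp(rhs, (0.0, 200.0), z0, rtol=1e-8, atol=1e-10)
    norms = np.linalg.norm(sol.y, axis=0)
    print(f"{name:>16}: ||z(0)|| = {norms[0]:.3f} -> ||z(200)|| = {norms[-1]:.3e}")

In both runs the norm of z(t) is non-increasing, consistent with the dissipativity arguments used throughout; the bounded law, by construction, never exceeds 1 in magnitude.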
The question of well-posedness of linear and bilinear systems with unbounded control operator has been treated in the studies by Weiss (1989, 1994), Idrissi (2003), Bounit & Idrissi (2005), Idrissi & Bounit (2008), Berrahmoune (2010) and El Ayadi et al. (2012). In the study by Berrahmoune (2010), the author considered the case where A is self-adjoint, and B is positive self-adjoint and bounded from the subspace |$V=D((I-A)^{\frac{1}{2}})$| of H to its dual space |$V^{^{\prime}}.$| Then he established the weak and strong stabilities of the closed-loop system $$\begin{equation} \displaystyle\frac{d z(t)}{d t}=Az(t)+f(\langle Bz(t),z(t)\rangle)Bz(t),\quad\;z(0)=z_0, \end{equation}$$ (1.9) for an appropriate function |$f : \mathbb{R}\longrightarrow \mathbb{R}$|⁠. Moreover, in the study by El Ayadi et al. (2012), it has been supposed that B is an unbounded linear operator from H to a Banach extension X of H with a continuous embedding H ↪ X. Then, it has been shown that (1.1) is strongly stabilizable, and a polynomial decay estimate of the stabilized state has been provided. It is noted that the above-mentioned results of El Ayadi et al. (2012) use spectral decomposition of the system at hand which reduces the class of considered systems to parabolic ones. Here, we deal with control operators which are relatively bounded. Such class of operators is very interesting either in theoretical or practical point of view. Indeed, various properties of semigroup under bounded perturbations are preserved when dealing with relatively bounded perturbations of the generator (see e.g. Pazy, 1978; Engel & Nagel, 2000). On the other hand, relatively bounded control operators arise as models for many dynamical processes (see e.g. Hislop & Sigal, 1996; Yarotsky, 2006). The article is organized as follows: the next section provides background material on non-linear semigroups and unbounded operators. In the third section we present an existence and uniqueness result. In the fourth section we present our main results and we study the weak and strong stabilizations. In the last section, we give illustrating examples. 2. Review on non-linear semigroups and unbounded operators In this section, we recall some existing results related to non-linear semigroups and unbounded operators. Let us begin with the following definitions and results concerning certain classes of unbounded operators (see Hille & Phillips, 1957). Definition 2.1 (Hille & Phillips, 1957, p. 391) Let A be the infinitesimal generator of a |$C_0$|-semigroup. A linear operator C is said to belong to the class F(A) if D(A) = D(C) and |$CR(\lambda _0,A)\in L(H)$| for some |$\lambda _0\in \boldsymbol{\rho} (A),$| where |$\boldsymbol{\rho}(A)$| is the resolvent set of A and |$R(\lambda _0,A)$| is its resolvent operator. Remark 2.1 (Hille & Phillips, 1957, p. 391) If C ∈ F(A), then |$CR(\lambda ,A)=CR(\lambda _0,A)+(\lambda _0-\lambda )CR(\lambda _0,A)R(\lambda ,A)$|⁠. Then, the operator |$CR(\lambda ,A)$| is bounded for all |$\lambda \in \boldsymbol{\rho} (A)$|⁠. If C is A-bounded (i.e. C verifies (1.2) for some M > 0), then for all y ∈ D(A) and |$\lambda \in \boldsymbol{\rho} (A) $| we have |$\|Cy\|\leqslant M\|(-A+\lambda I)y\|+M|\lambda | \|y\|+M\|y\|.$| Thus, for all x ∈ H, we have |$\|CR(\lambda ,A)x\|\leqslant M\|x\|+(M|\lambda |+M)\|R(\lambda ,A)x\| \leqslant (2M+|\lambda | M)\|x\|.$| Then the operator |$C|_{D(A)}, $| restriction of C to D(A), is an element of F(A). Definition 2.2 (Hille & Phillips, 1957, p. 
392) Let A be the infinitesimal generator of a |$C_0$|-semigroup. A linear operator C is said to belong to the class |$\widetilde{F}(A)$| if D(A) ⊂ D(C), |$CR(\lambda _0,A)\in L(H),$| for some |$\lambda _0\in \boldsymbol{\rho}(A),$| for all x ∈ H, we have x ∈ D(C) if and only if the limit |$\lim _{\lambda \to \infty }\lambda CR(\lambda ,A)x=y$| exists, in which case Cx = y. In the sequel, for linear operators |$\Lambda $| and C with domains |$D(\Lambda )$| and D(C) (respectively) such that |$D(C)\supset D(\Lambda )$|⁠, we set |$\|C\|_{\Lambda }=\sup \left \{\|Cx\| \;/\; x\in D(\Lambda ), \|x\|\leqslant 1\right \}.$| Theorem 2.1 (Hille & Phillips, 1957, p. 392) Let C ∈ F(A). Then C has a unique extension |$\widetilde{C}\in \widetilde{F}(A)$|⁠, called A-extension of C and is defined by |$\widetilde{C}x=\lim _{\lambda \rightarrow +\infty }\lambda CR(\lambda ,A)x,\;\forall x\in D(\widetilde{C}):=\{x\in H \; /\;\lim _{\lambda \rightarrow +\infty }\lambda CR(\lambda ,A)x\; \mbox{exists} \}$|⁠. If C is the restriction of a closed operator |$C_1$|⁠, then |$C\subset \widetilde{C} \subset C_1$|⁠. Remark 2.2 If C is bounded, then |$\widetilde{C}$| is bounded and |$\|\widetilde{C}\|=\|C\|_A$|⁠. If C is A-bounded then |$C|_{D(A)}$| has a unique extension |$\widetilde{C}\in \widetilde{F}(A)$|⁠. Proposition 2.1 (Hille & Phillips, 1957, p. 394) Let |$C\in \widetilde{F}(A)$| and suppose that |$CS(t_0)$| is bounded on D(A) for some |$t_0>0 $|⁠. Then for all |$t\geqslant t_0$|⁠, we have S(t)H ⊂ D(C) and CS(t) is bounded with |$\|CS(t)\|=\|CS(t)\|_A$|⁠. Moreover, the function t ↦ CS(t) is strongly continuous for |$t>t_0, $| and we have $$\begin{equation} \limsup_{t\to +\infty} t^{-1}\ln(\|CS(t)\|)\leqslant \omega_0, \end{equation}$$ (2.1) where |$\boldsymbol{\omega}_0$| is the growth bound of the semigroup S(t). Remark 2.3 If C is an A-bounded operator such that |$\|CS(t_0)\|$| is bounded on D(A) for some |$t_0>0$|⁠, then |$C|_{D(A)}$| has a unique extension |$\widetilde{C}$|⁠, which satisfies |$S(t)H\subset D(\widetilde{C})$| for all |$t\geqslant t_0$|⁠. This property will be useful to establish our weak stabilization result. Let us now recall the notion of non-linear semigroups (Pazy, 1978). Definition 2.3 (Pazy, 1978) Let H be a Hilbert space. A (generally non-linear) strongly continuous semigroup |$(T(t))_{t\geqslant 0}$| on H is a family of continuous maps T(t) : H→H satisfying T(0) = identity, |$T(t + s) = T(t) T(s)$|⁠, for all |$t, s\in \mathbb{R}^+,$| for every y ∈ H, T(t)y → y, as |$t\to 0^+.$| In this case, the mapping defined by |${\mathcal A}y = \lim _{h\to 0^+}\frac{T(h)y - y}{h}$| for all |$y\in D({\mathcal A}):=\{y\in H/\; \lim _{h\to 0^+}\frac{T(h)y - y}{h} $| exists in H} is called the infinitesimal generator of the semigroup T(t). If in addition |$\|T(t)y_1-T(t)y_2\|\leqslant \|y_1-y_2\|$|⁠, for every |$t\geqslant 0$| and |$y_1,y_2\in H$|⁠, then T(t) is said to be a contraction semigroup (or a semigroup of contractions) on H. In this case, |${\mathcal A}$| is dissipative, i.e. |$\langle{\mathcal A}y_1-{\mathcal A}y_2,y_1-y_2\rangle \leqslant 0,\;$| for all |$y_1,y_2\in D({\mathcal A}).$| For |$\phi \in H$|⁠, define the positive orbit through |$\phi $| by |$ O^+(\phi )=\cup _{t>0}T(t)\phi $|⁠. 
The |$\omega $|-limit set of |$\phi $| is the (possibly empty) set given by |$\omega (\phi )= \{\psi \in H;\; $| there exists a sequence |$ t_n\to +\infty ,\; $| such that |$ T(t_n)\phi \to \psi , \; $| as |$ n\to +\infty \}.$| The weak |$\omega $|-limit set of |$\phi $| is the (possibly empty) set given by |$\omega _w(\phi )= \{\psi \in H;\; $| there exists a sequence |$ t_n\to +\infty ,\; $| such that |$ T(t_n)\phi \rightharpoonup \psi , \; \mbox{as}\; n\to +\infty \}.$| The sets |$\omega (\phi )$| and |$\omega _w(\phi )$| are invariant under the action of any contraction semigroup |$(T(t))_{t\geqslant 0}$| (see Pazy, 1978). Moreover, given a semigroup of contraction |$S(t)=e^{{\mathcal A}t},$| we denote by |$E_{\mathcal A}$| the set of equilibrium states given by |$E_{\mathcal A}={\mathcal A}^{-1}(0)=\{y\in D({\mathcal A}),\;{\mathcal A}y=0\}.$| We have |$E_{\mathcal A}=\{y\in H;\; e^{t{\mathcal A}}y=y,\, \forall t\geqslant 0\}.$| Indeed, for y ∈ H such that |$e^{t{\mathcal A}}y=y, \; \forall t\geqslant 0$|⁠, we have |$\lim\nolimits _{t\to 0^+}\dfrac{e^{t{\mathcal A}}y-y}{t}=0$|⁠, then |$y\in D({\mathcal A})$| and |${\mathcal A}y=0$|⁠. Now, let |$y\in D({\mathcal A})$| be such that |${\mathcal A}y=0$|⁠. We know that |$\frac{d^+}{dt} e^{t{\mathcal A}}y={\mathcal A}e^{t{\mathcal A}}y, \, \forall t\geqslant 0$| (see the study by Komura, 1967). It follows that $$\begin{align*} \left\|y-e^{t{\mathcal A}}y\right\|^2& = \int_{0}^{t}\frac{\mathrm{d}^+}{\mathrm{d}s}\left\|y-e^{s{\mathcal A}}y\right\|^2 \mathrm{d}s\\ & = \int_{0}^{t}2\left\langle{\mathcal A}y-{\mathcal A}e^{s{\mathcal A}}y\;,\;y-e^{s{\mathcal A}}y \right\rangle \mathrm{d}s, \end{align*}$$ from which, we deduce that |$\|y-e^{t{\mathcal A}}y\|^2= 0,\; \forall t\geqslant 0$|⁠. Thus, |$e^{t{\mathcal A}}y=y, \; \forall t\geqslant 0$|⁠. The following result concerns the asymptotic behaviour of the system (1.1) in connection with the structure of the weak |$\omega $|-limit set. Theorem 2.2 (see Pazy, 1978) Let |${\mathcal A}$| be an infinitesimal generator of a non-linear semigroup of contractions |$(T(t))_{t\geqslant 0}$| on H and let |$\phi \in H$|⁠. The following conditions are necessary and sufficient for the existence of the weak limit of |$T(t)\phi $|⁠, as |$t\to +\infty $|⁠: |$E_{{\mathcal A}}={\mathcal A}^{-1}(0)\neq \emptyset $|⁠, |$\omega _{w}(\phi )\subset E_{{\mathcal A}}.$| The next result discusses the case of strong |$\omega $|-limit set. Theorem 2.3 (see Pazy, 1978) Let |${\mathcal A}$| be an infinitesimal generator of a non-linear semigroup of contractions |$(T(t))_{t\geqslant 0}$| on H and let |$\phi \in H.$| The following conditions are necessary and sufficient for the existence of the strong limit of |$T(t)\phi $|⁠, as |$t\to +\infty $|⁠: |$E_{{\mathcal A}} \neq \emptyset $| and |$\omega (\phi )\neq \emptyset $|⁠, |$\omega (\phi )\subset E_{{\mathcal A}}.$| 3. Considered systems and well-posedness In this section, we reconsider the system (1.1) with the same hypotheses on A and B. The purpose of this section is to study the feedback stabilization of the system (1.1) using the bounded control $$\begin{equation} v(t)=-\boldsymbol{\rho}\;\frac{\langle By(t),y(t)\rangle}{1+\langle By(t),y(t)\rangle}, \end{equation}$$ (3.1) where |$\boldsymbol{\rho}>0$| is the gain control and y is the solution of the corresponding closed-loop system, i.e. 
$$\begin{equation} \displaystyle\frac{d z(t)}{d t}={\mathcal A} z(t), \end{equation}$$ (3.2) where |${{\mathcal A}} y = A y+V(y)By,\, \forall y\in D({\mathcal A})=D(A)$| and |$V(y)=-\boldsymbol{\rho} \;\frac{\langle By,y\rangle }{1+\langle By,y\rangle }, \, \forall y\in D(A)$|⁠. Remark 3.1 For |$B=B^{\ast }\geqslant 0$| on D(A), we have the following Cauchy–Schwartz-like inequality: $$ |\langle By_1,y_2 \rangle|\leqslant \sqrt{\boldsymbol{\varphi}(y_1)}\sqrt{\boldsymbol{\varphi}(y_2)},\quad \, \forall (y_1,y_2)\in D(A)^2, $$ with |$\boldsymbol{\varphi}(y_1)=\langle By_1,y_1 \rangle $| and |$\boldsymbol{\varphi}(y_2)=\langle By_2,y_2 \rangle $|⁠. Indeed, since |$B=B^{\ast }\geqslant 0$| on D(A), we have $$\begin{equation} \boldsymbol{\mu}^2\varphi(y_2)+2\boldsymbol{\mu}<By_1,y_2>+\varphi(y_1)=\langle B(y_1+\boldsymbol{\mu} y_2),y_1+\boldsymbol{\mu} y_2\rangle\geqslant 0,\quad \, \forall \boldsymbol{\mu}\in\mathbb{R}, \end{equation}$$ (3.3) then |$|\langle By_1,y_2 \rangle |\leqslant \sqrt{\boldsymbol{\varphi} (y_1)}\sqrt{\boldsymbol{\varphi} (y_2)}.$| For all |$y_1\in D(A),$| we have |$\langle By_1,y_1\rangle =0 \Rightarrow By_1=0.$| Indeed, by simplifying with |$\mu>0 $| and |$\mu <0$| in (3.3), and letting |$\boldsymbol{\mu} \to 0^{\pm }$|⁠, we obtain |$\langle By_1, y_2\rangle = 0, \, \forall y_2\in D(A)$|⁠. Thus, we conclude by using the density of D(A) in H. In the sequel, we will analyse the well-posedness of the system (3.2). Theorem 3.1 Let A generate a semigroup S(t) of contractions on H, and let B : D(B)→H be a linear A-bounded operator such that (i) ⟨By, z⟩ = ⟨y, Bz⟩, ∀ y, z ∈ D(A), (ii) ⟨By, y⟩ ⩾ 0, ∀ y ∈ D(A). Then for any |$0<\boldsymbol{\rho} <\min (1,\frac{1}{M} )$|⁠, (where M is the constant given in (1.2)) and for all |$z_{0}\in H$|⁠, the system (3.2) admits a unique solution |$z\in{\mathcal C}([0,+\infty [;H)$|⁠. Furthermore, |${{\mathcal A}}$| generates a contraction semigroup |$e^{t{{\mathcal A}}}$| on H, and for all |$z_{0}\in H$| the solution of (3.2) is given by |$z(t) = e^{t {{\mathcal A}}}z_{0}\cdot $| Proof. Let us set |$\boldsymbol{\varphi} (y)= \langle By,y \rangle ,\, \forall y\in D(A)$| and let us consider the map $$ \boldsymbol{\phi} =g(\boldsymbol{\varphi}),\;\mbox{with}\;g(z)=\dfrac{1}{2}(z-\ln(1+z)).$$ Since B is self-adjoint, we have $$ \boldsymbol{\varphi}(ty_1+(1-t)y_2)=t^2\boldsymbol{\varphi}(y_1)+2t(1-t) \langle By_1,y_2 \rangle+(1-t)^2\boldsymbol{\varphi}(y_2), \forall t\in [0,1], \; \forall (y_1,y_2)\in D({A})^2.$$ Taking into account Remark 3.1, 1., we deduce that $$ \boldsymbol{\varphi}(ty_1+(1-t)y_2)=t^2\boldsymbol{\varphi}(y_1)+2t(1-t) \sqrt{\boldsymbol{\varphi}(y_1)} \sqrt{\boldsymbol{\varphi}(y_2)}+(1-t)^2\boldsymbol{\varphi}(y_2), \forall t\in [0,1], \; \forall (y_1,y_2)\in D({A})^2.$$ Hence, $$ \boldsymbol{\varphi}(ty_1+(1-t)y_2)\leqslant \big(t\sqrt{\boldsymbol{\varphi}(y_1)}+(1-t)\sqrt{\boldsymbol{\varphi}(y_2)}\big)^2.$$ This together with the convexity of |$t\mapsto t^2$| gives $$ \boldsymbol{\varphi}(ty_1+(1-t)y_2)\leqslant t\boldsymbol{\varphi}(y_1)+(1-t)\boldsymbol{\varphi}(y_2).$$ Thus, the map |$\boldsymbol{\varphi} $| (and so is |$\boldsymbol{\phi} $|⁠) is convex. It follows that the Gâteaux derivative |$\boldsymbol{\phi} ^{\prime} : D({A} )\longrightarrow H$| of |$\boldsymbol{\phi} $| is monotone. 
On the other hand, for all h ∈ H and y ∈ D(A) we have $$\begin{align*} \boldsymbol{\phi}^{\prime}(y)\cdot h &=\lim\limits_{t\to 0}\dfrac{\boldsymbol{\phi}(y+th)-\boldsymbol{\phi}(y)}{t}\\ & =\dfrac{1}{2}\;\dfrac{\langle By,y \rangle}{1+\langle By,y\rangle }\langle (B+B^{\ast})y,h\rangle.\end{align*}$$ Then using the fact that B is symetric, we obtain $$ \boldsymbol{\phi}^{\prime}(y)=\frac{\langle By,y\rangle}{1+\langle By,y\rangle}By.$$ On the other hand, we have the following expression regarding the Gâteaux derivative of |$\boldsymbol{\phi} $|⁠: $$ \langle \boldsymbol{\phi}^{\prime}(tx+(1-t)y),x-y\rangle= \dfrac{\big(t\langle Bx,y\rangle+(1-t)\langle By,y\rangle \big)\big(t\langle Bx,x-y\rangle+(1-t)\langle By,x-y\rangle \big)}{1+t^2\langle Bx,x\rangle+2t(1-t)\langle Bx,y\rangle+(1-t)^2\langle By,y\rangle}, \, \forall t\in[0,1],$$ from which, we deduce that |$\boldsymbol{\phi} ^{\prime}$| is hemicontinous, i.e. for any x, y ∈ D(A), the mapping |$ : t \mapsto \langle \boldsymbol{\phi} ^{\prime}(tx+(1-t)y),x-y) \rangle $| is continuous on [0, 1]. Now, it comes from (1.2) that for all y ∈ D(A), we have $$ \|V(y)By\|\leqslant \boldsymbol{\rho}\;M(\|Ay\|+\|y\|).$$ Since A generates a semigroup of contractions then − A is maximal monotone and |$\rho \phi ^{\prime}(\cdot )=-V(\cdot ) B(\cdot )$| is monotone hemicontinuous and |$\boldsymbol{\rho} M<1$|⁠, we have that (see the book by Brezis, 1973) the operator |$-{\mathcal A} : y\longrightarrow -Ay-V(y) By $| is maximal monotone, and hence |${\mathcal A}$| generates a semigroup of contractions |$e^{t{{\mathcal A}}}z_0$|⁠, and the function |$ z(t)=e^{t{{\mathcal A}}}z_0$| is a solution of (3.2) (see Brezis, 1973). Remark 3.2 For all |$z_{0}\in D(A )$|⁠, we have $$\begin{equation} \|z(t)\|\leqslant \|z_0\|,\quad\; \forall t\geqslant 0 \end{equation}$$ (3.4) and z(t) ∈ D(A) admits a right derivative at t (see the study by Komura, 1967) as well as $$\begin{equation} \displaystyle\frac{d^+z(t)}{d t}={{\mathcal A}} z(t)\cdot \end{equation} $$ (3.5) $$\begin{equation} \|{{\mathcal A}} z(t)\|\leqslant \|{{\mathcal A}} z_0\|\cdot \end{equation}$$ (3.6) Remark 3.3 Under the assumptions of Theorem 3.1, we have For all y ∈ D(A) $$\begin{align*} \|By\|\ & \leqslant M\|{\mathcal A}y-V(y) By\|+M \|y\|\\ &\leqslant M\|{\mathcal A}y\|+M\boldsymbol{\rho}\|By\|+M \|y\| \end{align*}$$ and hence $$\begin{equation} \|By\|\leqslant \frac{M}{(1-M \boldsymbol{\rho})}\|{{\mathcal A}}y\|+\frac{M}{(1-M \boldsymbol{\rho})}\|y\|, \end{equation}$$ (3.7) For all y ∈ D(A) $$\begin{align*} \|Ay\|\ & \leqslant \|{\mathcal A}y\|+\boldsymbol{\rho} \|By\|\\ &\leqslant \|{\mathcal A}y\|+M\boldsymbol{\rho}\|Ay\|+ M \boldsymbol{\rho}\|y\| \end{align*}$$ then $$\begin{equation} \|Ay\|\leqslant \frac{1}{1-\boldsymbol{\rho} M}\|{\mathcal A}y\|+ \frac{\boldsymbol{\rho} M}{1-\boldsymbol{\rho} M}\|y\|, \end{equation}$$ (3.8) The stabilizing control (3.1) is uniformly bounded with respect to initial state |$ |v(t)| \leqslant \boldsymbol{\rho} $| and if the system (3.2) is subject to the control constraint |$|v(t)| \leqslant \boldsymbol{\alpha} , \; \boldsymbol{\alpha}>0,$| then one may take |$\boldsymbol{\rho} \leqslant \alpha $|⁠. 4. Stabilization problem In this section, we give sufficient conditions to obtain weak and strong stabilizations of (1.1) using the control (3.1). 4.1 Weak stability In the sequel, we assume that B is A-bounded so |$B|_{D(A)}$| admits an A-extension denoted by |$\tilde{B}$|⁠. 
Let us now introduce the following sets: |${\mathcal M} = \{\boldsymbol{\varphi} \in D( A) \;/\; \langle Be^{tA}\boldsymbol{\varphi} ,e^{tA}\boldsymbol{\varphi} \rangle =0, \; \forall t\geqslant 0 \} $| and |$\widetilde{{\!\mathcal M}} = \{\boldsymbol{\varphi} \in H \;/\; \langle Be^{tA}\boldsymbol{\varphi} ,e^{tA}\boldsymbol{\varphi} \rangle =0, \; \forall t\geqslant 0 \} $| and let us consider the following hypotheses: |$(\boldsymbol{C}_1)$|⁠: For all sequence |$(x_n)\subset D(A),$| we have $$\left .\begin{array}{l} 1.\; x_n\rightharpoonup x\in H, \\ 2.\; \|Ax_n\|\; \mbox{is bounded,} \\ 3.\; \langle Bx_n,x_n\rangle \to 0. \end{array} \right \}\;\Longrightarrow\ x\in E_{{{\mathcal A}}}.$$ |$(\boldsymbol{C}_2)$|⁠: If there exists |$t_0>0$| such that |$\|BS(t_0)\| $| is bounded in D(A) and for all sequence |$(y_n)\subset D( A)$| and y ∈ H such that |$y_n\rightharpoonup y$| in H, there exists a subsequence |$(y_{\gamma (n)})$| such that for all |$t\geqslant t_0$|, we have |$BS(t)y_{\gamma (n)}\rightarrow \widetilde{B}S(t)y$| in H, as |$n\to +\infty $|⁠. Remark 4.1 As a class of operators that satisfy the assumption |$(\boldsymbol{C}_1)$|⁠, the set of operators B satisfying |$ \langle By,y\rangle \geqslant \mu \|y\|^s$|⁠, for all y ∈ D(B) (for some |$\mu ,s>0$|⁠). As an other example of operators that satisfies |$(\boldsymbol{C}_1)$|⁠, let |$A=\Delta $| with Newmann boundary conditions and |$By=-\Delta y+ Ny, \, \forall y\in D(A)$| with |$Ny=\sum _{j=1}^{+\infty }\frac{1}{j^2}\langle y,\boldsymbol{\varphi} _j\rangle \boldsymbol{\varphi} _j$| where |$(\boldsymbol{\varphi} _j)_{j\geqslant 0}$| are the orthonormal basis of |$L^2(\boldsymbol{\Omega})$| formed with the eigenfunctions of A. Here, B is A-bounded and for any sequence |$(x_n)\subset D(A)$| such that |$x_n\rightharpoonup x\in H$| and |$\langle Bx_n,x_n\rangle \to 0,$| we have |$ \langle -\Delta x_n,x_n\rangle +\langle Nx_n,x_n\rangle \to 0$|⁠. Then, it follows from the positivity of the operators |$-\Delta $| and N that|$ \nabla x_n \to 0 \;\mbox{and}\; \langle Nx_n,x_n\rangle \to 0.$| Now using that N is a compact operator, we derive that ⟨Nx, x⟩ = 0. Finally, it follows from the definition of the operator N that |$x=\lambda \textbf{1}\in \ker A\cap \ker B.$| As an example where the assumption |$(\boldsymbol{C}_2)$| holds, one can take |$B=A=\Delta $| (see the study by Haraux, 2001). We note that |$E_{\mathcal A}=\ker (A)\cap \ker (B)$|⁠. Indeed, we have |$y\in E_{\mathcal A} \Longleftrightarrow Ay+V(y)By=0,$| so the inclusion (⁠|$\supset $|⁠) is clear. Moreover, |$y\in E_{\mathcal A} \Rightarrow \langle Ay,y \rangle +V(y)\langle By,y\rangle =0$|⁠. Thus, since A and − B are dissipative, we have |$ y\in E_{\mathcal A} \Rightarrow \langle Ay,y\rangle = \langle By,y\rangle =0. $| Then, taking into account Remark 3.1, 2., it comes By = 0 and then Ay = 0. The following theorem concerns the weak stability of (3.2). Theorem 4.1 Suppose that the hypotheses of Theorem 3.1 are verified. 1. If |$(\boldsymbol{C}_1)$| holds then for all |$z_0 \in D(A)$| the solution of (3.2) is weakly convergent to |$z^{\ast }\in E_{\mathcal A}$|⁠, as |$t\to +\infty $|⁠. 2. If |$(\boldsymbol{C}_2)$| holds, then for all |$z_0 \in D(A)$| the solution of (3.2) satisfies (a) |$z(t)\rightharpoonup z^{\ast }\in E_{A}$|⁠, if |$\widetilde{{\mathcal M}} \subset E_A,$| (b) |$z(t)\rightharpoonup 0$|⁠, if |$\widetilde{{\mathcal M}} =\{0\}$|⁠. Proof. 1. Let |$z_0 \in D({{A}} )$|⁠. 
According to Remark 3.2, the function t → z(t) admits a right derivative at all time, then we have $$ \dfrac{d^+\|z(\tau)\|^2}{d\tau}=2\langle{{\mathcal A}} z(\tau),z(\tau)\rangle,\;\; \forall \tau \geqslant 0.$$ By integrating this equality between s and t, we obtain $$\begin{equation} \frac{1}{2}\left[\|z(t)\|^2-\|z(s)\|^2\right]=\int_s^t \langle{{\mathcal A}} z(\tau),z(\tau)\rangle\;\mathrm{d}\tau,\quad \; \forall t\geqslant s \geqslant 0. \end{equation}$$ (4.1) Since A is dissipative, we have $$ \int_s^t \langle{{\mathcal A}} z(\tau),z(\tau)\rangle\;\mathrm{d}\tau \leqslant -\boldsymbol{\rho} \int_s^t \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle}\, \mathrm{d}\tau,\quad \; \forall t\geqslant s \geqslant 0. $$ It comes from (4.1) that $$ \int_0^t \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle} \, \mathrm{d}\tau \leqslant \frac{1}{2\boldsymbol{\rho}}\|z_0\|^2. $$ Thus, $$\begin{equation} \int_{0}^{+\infty}\frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle}\;\mathrm{d}\tau <+\infty. \end{equation}$$ (4.2) Moreover, from Remark 3.2 and inequality (3.7) we get $$\begin{equation} \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\left[\frac{M}{(1-M \boldsymbol{\rho})}\|{{\mathcal A}}z_0\|+\frac{M}{(1-M \boldsymbol{\rho})}\|z_0\|\right]\|z_0\|} \leqslant \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle},\quad\;\forall \tau>0. \end{equation}$$ (4.3) Hence, $$\begin{equation} \int_{0}^{+\infty}\langle Bz(\tau),z(\tau)\rangle^2\;\mathrm{d}\tau <+\infty. \end{equation}$$ (4.4) From (3.4), it follows that |$\boldsymbol{\omega} _w(z_0)\neq \emptyset $|⁠. Let |$\boldsymbol{\varphi} _0 \in \boldsymbol{\omega} _w(z_0)$| and let |$t_n\to +\infty $| such that |$z(t_n)=e^{t_n{{\mathcal A}}}z_0\rightharpoonup \boldsymbol{\varphi} _0$|⁠, as |$n\to +\infty $|⁠. We shall prove that there exists a sequence |$(s_{n})$| such that |$s_n\to +\infty $|⁠, |$z(s_n)\rightharpoonup \boldsymbol{\varphi} _0$| and |$~{\lim _{n\to +\infty }\langle Bz(s_{n}),z(s_{n})\rangle =0.}$| For this end, let us consider |$\boldsymbol{\varepsilon}>0$| and |$H_{\boldsymbol{\varepsilon}} =\{t>0 \; /\; \langle Bz(t),z(t)\rangle \geqslant \boldsymbol{\varepsilon} \}$| and let us define the map G : t →< Bz(t), z(t) >. From (4.4), we have |$G \in L^2(0,+\infty )$|⁠, thus the Lebesgue measure of |$H_{\boldsymbol{\varepsilon}} $| is finite. Then for any A > 0, the set |$E=\{k\in \mathbb{N}\; /\; \; t_k>A\}$| is infinite, hence |$\bigcup _{k\in E}[t_k,t_k+\boldsymbol{\varepsilon} ]\not \subset H_{\boldsymbol{\varepsilon}} .$| It follows that for |$\boldsymbol{\varepsilon}>0$| and A > 0, there exist |$t_k>A$| and |$ s\in [t_k,t_k+\boldsymbol{\varepsilon} ]$| such that |$G(s)<\boldsymbol{\varepsilon} $|⁠. Let us now proceed to the construction of sequence |$(s_j)_{j\geqslant 1}$|⁠. For |$\boldsymbol{\varepsilon} =\dfrac{1}{1}$| and A = 1, then there exist |$t_{k_1}>1$| and |$s_1\in [t_{k_1},t_{k_1}+\dfrac{1}{1} ]$| such that |$G(s_1) < \dfrac{1}{1}$|⁠. Now, let |$j\in \mathbb{N}^{\ast }$| and suppose that the terms |$t_{k_j}$| and |$s_j$| are constructed. For |$\boldsymbol{\varepsilon} =\dfrac{1}{j+1}$| and |$A=t_{k_j}+j>0$|⁠, there exist |$t_{k_{j+1}}>A$| and |$s_{j+1}\in [t_{k_{j+1}},t_{k_{j+1}}+\frac{1}{j+1} ]$| such that |$G(s_{j+1}) < \dfrac{1}{j+1}$|⁠. Then from the construction of |$(s_j)_{j\geqslant 1}$| and |$(t_{k_j})_{j\geqslant 1}, $| we have |$G(s_j)\leqslant \dfrac{1}{j}$|⁠. 
On the other hand, we have |$t_{k_{j+1}}-t_{k_j}>j$|⁠, then |$t_{k_{q+1}}-t_{k_1}>\sum _{j=1}^{q}j=\frac{q(q+1)}{2}$| for all |$q\geqslant 1$|⁠, and so |$\lim _{j\to +\infty }t_{k_j}=+\infty $|⁠. Thus, |$\lim_{j\to +\infty }s_j=+\infty $| (recall that |$s_j\geqslant t_{k_j}$|⁠). We can therefore choose a subsequence |$(t_{k_j})$| of |$(t_{n})$| and a sequence |$(s_j)$| such that |$t_{k_j} \leqslant s_j \leqslant t_{k_j}+\frac{1}{j}$|⁠, |$G(s_j) < \frac{1}{j}$| and |$s_j\to +\infty $|⁠. From (3.5) for all |$t\geqslant 0$|⁠, we have |$\|{\mathcal A}e^{t{{\mathcal A}}}z_0\|\leqslant \|{{\mathcal A}}z_0\|$|⁠. Then $$\begin{equation} \left\|e^{s_j{{\mathcal A}}}z_0-e^{t_{k_j}{{\mathcal A}}}z_0\right\|\leqslant \|{{\mathcal A}}z_0\|\left|s_j-t_{k_j}\right|\leqslant \frac{\|{{\mathcal A}}z_0\|}{j}. \end{equation}$$ (4.5) We deduce that |$z(s_j)\rightharpoonup \boldsymbol{\varphi} _0$| and |$\langle Bz(s_j),z(s_j)\rangle \to 0$|⁠, as |$j\to +\infty .$| Moreover, according to the Remark 3.3, 2., |$(\|Az(s_j)\|)_{j\geqslant 1}$| is bounded. It follows from |$(\boldsymbol{C}_1)$| that |$\varphi _0\in E_{{\mathcal A}}$|⁠, which gives |$\boldsymbol{\omega} _w(z_0)\subset E_{{\mathcal A}}$|⁠. Finally, Theorem 2.2 implies that z(t) is weakly convergent as |$t\to +\infty $|⁠. 2. (a) Let |$\boldsymbol{\varphi} _0$| and |$(s_j)_{j\geqslant 1}$| be as in 1. and suppose that the condition |$(\boldsymbol{C}_2)$| holds. From (4.2) we have $$\begin{equation} \lim_{j\to+\infty}\int_{s_{j}}^{t+s_{j}} \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle}\;\mathrm{d}\tau=0,\quad\;\forall t>0. \end{equation}$$ (4.6) It follows from the invariance of |$\boldsymbol{\omega} _w(z_0)$| under the semigroup |$e^{t{\mathcal A}}$| that for all t > 0, we have $$\begin{equation} \langle Bz(t+s_j),z(t+s_j)\rangle\to 0,\quad\; \mbox{as}\; j\to+\infty. \end{equation}$$ (4.7) Moreover, from the variation of constants formula, we have $$ z(t+s_j)=S(t)z(s_j)+\int_{s_j}^{t+s_j} v(z(\tau))S(t+s_j-\tau)Bz(\tau)\, \mathrm{d}\tau,\quad\;\forall t>0.$$ It follows that $$ \|z(t+s_j)-S(t)z(s_j)\|\leqslant \int_{s_j}^{t+s_j} |v(z(\tau))|\;\|Bz(\tau)\|\, \mathrm{d}\tau,\quad\;\forall t>0\cdot$$ Using (3.4)–(3.7), we deduce that $$\begin{equation} \|Bz(\tau)\|\leqslant \frac{M}{(1-M \boldsymbol{\rho})}\|{{\mathcal A}}z_0\|+\frac{M}{(1-M \boldsymbol{\rho})}\|z_0\|=:M_{z_0,\boldsymbol{ \rho}}\cdot \end{equation}$$ (4.8) Then $$ \|z(t+s_j)-S(t)z(s_j)\|\leqslant \; M_{z_0, \boldsymbol{\rho}} \int_{s_j}^{t+s_j} |v(z(\tau))|d\tau,\quad\;\forall t>0\cdot$$ The Schwartz's inequality gives $$ \|z(t+s_j)-S(t)z(s_j)\|\leqslant \boldsymbol{\rho}\; \sqrt{t}\; M_{z_0, \boldsymbol{\rho}}\; \sqrt{\int_{s_j}^{t+s_j} \frac{<Bz(\tau),z(\tau)>^2}{(1+<Bz(\tau),z(\tau)>)^2}\, \mathrm{d}\tau},\quad\;\forall t>0,$$ and hence $$ \|z(t+s_j)-S(t)z(s_j)\|\leqslant \boldsymbol{\rho}\; \sqrt{t}\; M_{z_0, \rho}\; \sqrt{\int_{s_j}^{t+s_j} \frac{<Bz(\tau),z(\tau)>^2}{1+<Bz(\tau),z(\tau)>}\ \textrm{d}\tau},\quad\;\forall t>0.$$ Thus, using (4.2) we deduce that $$\begin{equation} \lim_{j\to +\infty}[z(t+s_j)-S(t)z(s_j)]=0,\quad\;\forall t>0. \end{equation}$$ (4.9) Using the fact that B is self-adjoint, we can write |$\langle BS(t)z(s_j), S(t)z(s_j)\rangle =\langle S(t)z(s_j)-z(t+s_j), BS(t)z(s_j)\rangle +\langle Bz(t+s_j), S(t)z(s_j)-z(t+s_j)\rangle + \langle Bz(t+s_j), z(t+s_j)\rangle$|⁠. 
Furthermore, from (1.2), (3.4), (3.8) and the fact that S(t) is a semigroup of contractions, we get $$\begin{align*} \|BS(t)z(s_j)\| & \leqslant M\left(\|AS(t)z(s_j)\|+\|S(t)z(s_j)\|\right)\\ &\leqslant \frac{M}{1-\boldsymbol{\rho} M}\left(\|{\mathcal A}z_0\|+ \|z_0\|\right)\cdot \end{align*}$$ This, together with (4.7), (4.8) and (4.9), gives $$\begin{equation} \langle BS(t)z(s_j),S(t)z(s_j)\rangle\to 0,\quad\; \mbox{as}\; j\to +\infty. \end{equation}$$ (4.10) Now, from the condition |$(\boldsymbol{C}_2)$|, there exists a subsequence of |$(s_j), $| still denoted by |$(s_j), $| such that for all |$t\geqslant t_0$| we have $$\begin{equation} \langle BS(t)z(s_j),S(t)z(s_j)\rangle\to \langle \widetilde{B}S(t){\boldsymbol{\varphi}_0},S(t){\boldsymbol{\varphi}_0}\rangle,\quad\; \mbox{as}\; j\to +\infty. \end{equation}$$ (4.11) We conclude that $$\begin{equation} \langle \widetilde{B}S(t){\boldsymbol{\varphi}_0},S(t){\boldsymbol{\varphi}_0}\rangle=0. \end{equation}$$ (4.12) In other words, |$\boldsymbol{\varphi} _0\in \widetilde{{\mathcal M}}$| and hence |$\boldsymbol{\omega} _w(z_0)\subset \widetilde{{\mathcal M}}$|. Since |$\widetilde{{\mathcal M}}\subset E_A,$| we have |$S(t)\boldsymbol{\varphi} _0=\boldsymbol{\varphi} _0$| and so |$\langle \widetilde{B}{\boldsymbol{\varphi} _0},{\boldsymbol{\varphi} _0}\rangle =0$|. Moreover, since |$\boldsymbol{\omega} _w(z_0)$| is invariant by |$e^{t{{\mathcal A}}}$|, we have |$\boldsymbol{\varphi} (t)=e^{t{{\mathcal A}}}\boldsymbol{\varphi} _0\in \boldsymbol{\omega} _w(z_0)$|, thus |$\langle \widetilde{B}\boldsymbol{\varphi} (t),\boldsymbol{\varphi} (t)\rangle =0.$| Consequently, |$\boldsymbol{\varphi} (t)=e^{t{{\mathcal A}}}\boldsymbol{\varphi} _0=S(t)\boldsymbol{\varphi} _0=\boldsymbol{\varphi} _0$|, and hence |$\boldsymbol{\varphi} _0\in E_{{\mathcal A}}$|. We deduce that |$ \boldsymbol{\omega} _w(z_0)\subset E_{{\mathcal A}}$|. Finally, from Theorem 2.2, there exists |$z^{\ast }\in E_A$| such that |$z(t)\rightharpoonup z^{\ast },$| as |$t\to +\infty$|. (b) In the case |$\widetilde{{\mathcal M}}=\{0\}$|, it follows from |$\boldsymbol{\omega} _w(z_0)\subset \widetilde{{\mathcal M}}$| that |$z(t)\rightharpoonup 0,$| as |$t\to +\infty .$|
4.2 Strong stability
In this part, we study the strong stability of the closed-loop system (3.2). Let us start with the following lemma.
Lemma 4.1 Suppose that the assumptions of Theorem 3.1 hold and that the embedding D(A) ↪ H is compact. Then the operator |$(I-{{\mathcal A}})^{-1}$| is compact from H to itself.
Proof. Since |${\mathcal A}$| generates a non-linear semigroup of contractions, the operator |$(I-{{\mathcal A}})^{-1}$| is defined on H (see Theorem 2 in the study by Komura, 1967). In order to prove the compactness of the operator |$(I-{{\mathcal A}})^{-1}$|, let us consider a bounded sequence |$(v_n)$| in H. We shall prove that the sequence |$u_n=(I-{{\mathcal A}})^{-1}v_n$|, which lies in D(A), has a converging subsequence (see Definition 2.5, p. 13 in the study by Toledano et al., 1997). We have $$ \langle v_n,u_n\rangle=\|u_n\|^2-\langle{{\mathcal A}}u_n,u_n\rangle.$$ Then, using the fact that the operator |${{\mathcal A}}$| is dissipative, we get |$\|u_n\|^2 \leqslant \langle v_n,u_n\rangle \leqslant \|v_n\| \|u_n\|$|. It follows that the sequence |$(u_n)$| is bounded. Since |${\mathcal A}u_n=u_n-v_n,$| the sequence |$({\mathcal A}u_n)$| is bounded. From (3.7) we can see that the sequence |$(Bu_n)$| is bounded.
It follows from the equality $$ Au_n={\mathcal A}u_n+\boldsymbol{\rho} \frac{\langle B{u_n},u_n\rangle}{1+\langle B{u_n},u_n\rangle}B{u_n}$$ that |$(Au_n)$| is bounded. Furthermore, |$(u_n)\subset D(A)$|, so we can conclude by the fact that D(A) is compactly embedded in H.
Remark 4.2 Under the assumptions of Lemma 4.1, we have |$\boldsymbol{\omega} (z_0)\neq \emptyset $|, for all |$z_0\in D(\mathcal A)$| (see the study by Pazy, 1978). Now, we are ready to establish the strong stabilization result. This is the subject of the two next theorems.
Theorem 4.2 Let the assumptions of Theorem 3.1 be satisfied, let D(A) be compactly embedded in H and let us set |$U(A)=\{y\in H\;/\; \|e^{tA}y\|=\|e^{tA^{\ast }}y\|=\|y\|,\;\forall t\geqslant 0\}$|. Then the following hold: 1. For all |$z_0 \in D(A)$|, we have |$\boldsymbol{\omega} (z_0)\subset{\mathcal M} \cap U(A)$|. 2. If |$U(A)\cap{\mathcal M}\subset E_A$|, then for all |$z_0 \in D(A)$| we have |$ z(t)\to z^{\ast } \in E_A$|, as |$t\to +\infty $|.
Proof. 1. Let |$z_0\in D(A)$|, |$\boldsymbol{\varphi} _0\in \boldsymbol{\omega} (z_0)$| and let |$(t_n)_{n\geqslant 0 }$| be such that |$t_n\to +\infty $| and |$e^{t_n{\mathcal A}}z_0\to \boldsymbol{\varphi} _0$| as |$n\to +\infty .$| For t > 0, there exists |$N\in \mathbb{N}$| such that for all |$n\geqslant N$|, we have |$t_n\geqslant t.$| Using the fact that |$e^{t{\mathcal A}}$| is a semigroup of contractions, we obtain |$ \|e^{t_n{\mathcal A}}z_0\|\leqslant \|e^{t{\mathcal A}}z_0\|$|; and letting |$n\to +\infty ,$| we get $$\begin{equation} \|\boldsymbol{\varphi}_0\|\leqslant \left\|e^{t{\mathcal A}}z_0\right\|=\|z(t)\|. \end{equation}$$ (4.13) It follows from the inequality (4.13) that |$\|\boldsymbol{\varphi} _0\|\leqslant \|e^{(t+t_n){\mathcal A}}z_0\|$|. Then, letting |$n\to +\infty $| we obtain |$\|\boldsymbol{\varphi}_0\|\leqslant \|e^{t{\mathcal A}}\boldsymbol{\varphi} _0\|$|. This together with the fact that |$(e^{t{\mathcal A}})_{t\geqslant 0}$| is a contraction semigroup leads to |$\|e^{t{\mathcal A}}\boldsymbol{\varphi} _0\|=\|\boldsymbol{\varphi} _0\|$| for all |$t\geqslant 0$|. Furthermore, we have |$\boldsymbol{\omega} (z_0)\subset D({\mathcal A})$| (see Theorem 5 in the study by Dafermos & Slemrod, 1973) and so |$\boldsymbol{\varphi} _0 \in D(A)$|. From (4.5) we have |$e^{s_j{\mathcal A}}z_0\to \boldsymbol{\varphi} _0$|, as |$j\to +\infty $|. On the other hand, $$\begin{align*} \langle BS(t)z(s_j),S(t)z(s_j)\rangle & =\langle BS(t)z(s_j),S(t)(z(s_j)-\boldsymbol{\varphi}_0)\rangle+\langle BS(t)\boldsymbol{\varphi}_0,S(t)z(s_j)\rangle \\ &=\langle BS(t)z(s_j),S(t)(z(s_j)-\boldsymbol{\varphi}_0)\rangle+\langle S(t)^{\ast}BS(t)\boldsymbol{\varphi}_0,z(s_j)\rangle.\end{align*}$$ Since |$(\|BS(t)z(s_j)\|)_{j\geqslant 1}$| is bounded, the semigroup |$(S(t))_{t\geqslant 0}$| is of contractions and |$z(s_j)\to \boldsymbol{\varphi} _0$|, as |$j\to +\infty $|, we get |$\langle BS(t)z(s_j),S(t)z(s_j)\rangle \to \langle BS(t)\boldsymbol{\varphi} _0,S(t)\boldsymbol{\varphi} _0\rangle $|, as |$j\to +\infty $|; this, combined with (4.10), gives |$\boldsymbol{\varphi} _0\in{\mathcal M}$| and so |$e^{t{\mathcal A}}\boldsymbol{\varphi} _0=e^{tA}\boldsymbol{\varphi} _0$|. We conclude that |$\boldsymbol{\varphi} _0\in U(A)\cap{\mathcal M}.$| 2. Let |$\boldsymbol{\varphi} _0\in \boldsymbol{\omega} (z_0)$|. From 1. we have |$\boldsymbol{\varphi} _0\in U(A)\cap{\mathcal M} \subset E_A$|.
Since |$\boldsymbol{\varphi} (t)=e^{t{\mathcal A}}\boldsymbol{\varphi} _0\in \boldsymbol{\omega} (z_0)$|, we have |$\boldsymbol{\varphi} (t)\in U(A)\cap{\mathcal M} \subset E_A.$| It follows that |${\mathcal A}\boldsymbol{\varphi} (t)=A\boldsymbol{\varphi} (t)=0$|. In particular, |${\mathcal A}\boldsymbol{\varphi}_0=0$|. In other words, |$\boldsymbol{\varphi} _0\in E_{\mathcal A}$|. From Theorem 2.3, we deduce the existence of the strong limit of z(t), as |$t\to +\infty $|. Thus, |$z(t)\to \boldsymbol{\varphi} _0\in E_A$|, as |$t\to +\infty $|.
Theorem 4.3 Suppose that the hypotheses of Theorem 3.1 are satisfied, that D(A) is compactly embedded in H and that |${\mathcal M}\cap U(A) =\{0\}$|. Then for all |$z_0 \in H$|, we have z(t) → 0, as |$t\to +\infty $|.
Proof. Let |$z_0\in D(A)$|. We have |${\mathcal M}\cap U(A)=\{0\}\subset E_A$|, so Theorem 4.2 applies. Moreover, we have |$\boldsymbol{\omega} (z_0)\subset{\mathcal M}\cap U(A)$|. Then |$\boldsymbol{\omega} (z_0)=\{0\}$| and hence z(t) → 0, as |$t\to +\infty $|. Now let us consider |$z_0\in \overline{D({{\mathcal A}} )}$| and let |$\boldsymbol{\varepsilon}>0$|. Then there exists |$z_{\boldsymbol{\varepsilon}} \in D({{\mathcal A}} )$| such that |$\|z_0-z_{\boldsymbol{\varepsilon}} \|\leqslant \frac{\boldsymbol{\varepsilon} }{2}$|. It follows from the fact that |$e^{t{\mathcal A}}$| is a contraction semigroup that $$\begin{equation} \left\|e^{t{\mathcal A}}z_0-e^{t{\mathcal A}}z_{\boldsymbol{\varepsilon}}\right\|\leqslant \frac{\boldsymbol{\varepsilon}}{2}, \;\;\;\forall t\geqslant 0. \end{equation}$$ (4.14) Since |$z_{\boldsymbol{\varepsilon}} \in D({{\mathcal A}} )$| and |${\mathcal M}\cap U(A)=\{0\}$|, we deduce from Theorem 4.2, 1. that |$\boldsymbol{\omega} (z_{\boldsymbol{\varepsilon}} )=\{0\} $|. Thus, there exists |$T_{\boldsymbol{\varepsilon}}>0$| such that $$\begin{equation} \left\|e^{t{\mathcal A}}z_{\boldsymbol{\varepsilon}}\right\|\leqslant \frac{\boldsymbol{\varepsilon}}{2},\;\;\;\forall t\geqslant T_{\boldsymbol{\varepsilon}} . \end{equation}$$ (4.15) It follows from (4.14) and (4.15) that for all |$t\geqslant T_{\boldsymbol{\varepsilon}} ,$| we have |$\|e^{t{\mathcal A}}z_0\|\leqslant \boldsymbol{\varepsilon} $|. We conclude that for all |$z_0\in \overline{D({\mathcal A})}=H$|, we have z(t) → 0, as |$t\to +\infty .$|
5. Applications
5.1 Heat equation
Let |$\boldsymbol{\Omega} =]0,1[$| and let us consider the bilinear system given by the following heat equation: $$\begin{equation} \left\{ \begin{array}{lll} \frac{\partial z}{\partial t}(t,x)=\Delta z (t,x)+v(t)a(x)z(t,x), \quad \mbox{on} \; ]0,+\infty[\times\boldsymbol{\Omega}&& \\ z^{\prime}(t,0)=z^{\prime}(t,1)=0,\quad \forall \;t>0 && \\ z(0,x)=z_0(x),\quad \mbox{on} \;\boldsymbol{\Omega} && \end{array} \right. \end{equation}$$ (5.1) where |$a\in L^{\gamma }(\boldsymbol{\Omega} )\;(\gamma> 2)$| and |$\;a \notin L^{\gamma +2}(\boldsymbol{\Omega} )$| (as an example, one can take |$a(x)=x^{-\frac{1}{\gamma +1}}, \; \forall x\in ]0,1[$|). Let |$ H= L^2(\boldsymbol{\Omega} )$| and |$Az=\Delta z$|, for all |$z\in D(A) =\{z\in L^2(\boldsymbol{\Omega} )\;/\;\Delta z \in L^2(\boldsymbol{\Omega} ),\;z^{\prime}(0)=z^{\prime}(1)=0\}$|. Observing that |$Ba^{\frac{\gamma }{2}}=a^{\frac{\gamma +2}{2}}\not \in L^2(\boldsymbol{\Omega} )$|, we deduce that the operator defined by Bz = az, for all |$z\in D(B):=H^1(\boldsymbol{\Omega} )$|, is not bounded from |$L^2(\boldsymbol{\Omega} ) $| to |$L^2(\boldsymbol{\Omega} )$|.
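For the sample weight above, the claimed integrability properties can be checked by a quick computation. The following snippet is a hypothetical illustration and is not part of the paper: it takes gamma = 3 and approximates the integrals of a^gamma and a^(gamma+2) for a(x) = x^(-1/(gamma+1)), showing that the first integral is finite (its exact value is gamma + 1) while the second blows up as the lower integration limit tends to 0, i.e. a lies in L^gamma(Omega) but not in L^(gamma+2)(Omega).

```python
# Hypothetical check (not from the paper) of the sample weight a(x) = x^(-1/(gamma+1)):
# a lies in L^gamma(0,1) but not in L^(gamma+2)(0,1).
import numpy as np
from scipy.integrate import quad

gamma = 3.0                                        # any gamma > 2 works
a = lambda x: x ** (-1.0 / (gamma + 1.0))

# exponent -gamma/(gamma+1) > -1, so the integral converges (exact value gamma+1)
val_g, _ = quad(lambda x: a(x) ** gamma, 0.0, 1.0)
print("integral of a^gamma over (0,1):", val_g, "(exact:", gamma + 1.0, ")")

# exponent -(gamma+2)/(gamma+1) < -1, so the integral diverges at 0;
# truncating at eps and letting eps -> 0 exhibits the blow-up
for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    val, _ = quad(lambda x: a(x) ** (gamma + 2.0), eps, 1.0)
    print(f"integral of a^(gamma+2) over ({eps:.0e},1): {val:.1f}")
```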
Now, we have |$\|az\|_{L^{2}(\boldsymbol{\Omega} )}\leqslant \|a\|_{L^{\gamma }(\boldsymbol{\Omega} )}\|z\|_{L^{\gamma ^{\ast }}(\boldsymbol{\Omega} )},\; \mbox{for all}\; z\in H^1(\boldsymbol{\Omega} )$|, with |$\gamma ^{\ast }=\frac{2\gamma }{\gamma -2}$|. Using the Sobolev embedding |$H^1(\boldsymbol{\Omega} )\hookrightarrow L^q(\boldsymbol{\Omega} ),\; 1 \leqslant q < +\infty $| (see Corollary 9.14 in the book by Brezis, 2010), and the fact that for all z ∈ D(A) we have |$\|\nabla z\|^2=|\langle \Delta z,z \rangle | \leqslant \frac{1}{2}(\|\Delta z\|^2+\|z\|^2),$| we conclude that B is A-bounded. We can state the following stabilization result:
Proposition 5.1 Suppose that |$a(x)\geqslant 0,\;\mbox{a.e. on}\; \boldsymbol{\Omega} $| and |$\int _{\Omega }a(x)\, \mathrm{d}x \neq 0$|. Then there exists |$\boldsymbol{\rho} _0>0$| such that, for any |$0<\boldsymbol{\rho} < \boldsymbol{\rho} _0$| and for all |$z_0\in L^2(0,1)$|, the control |$v(t)=- \boldsymbol{\rho} \; \frac{\int _{\boldsymbol{\Omega}} a(x)\;|z(x,t)|^{2}\;\mathrm{d}x}{1+\int _{\boldsymbol{\Omega}} a(x)\;|z(x,t)|^{2}\;\mathrm{d}x} $| strongly stabilizes (5.1).
Proof. Here, A and B satisfy the assumptions of Theorem 3.1 and we have |$U(A)=E_A\!=\!\{c 1_{\boldsymbol{\Omega}} \; / \; c\!\in\! \mathbb{R}\}$|. Since |$\int _{\Omega }a(x)\, \mathrm{d}x \neq 0,$| we have |${\mathcal M}\cap U(A)=\{0\}$|, and we conclude by Theorem 4.3.
5.2 Transport equation
Let |$\boldsymbol{\Omega} =]0,+\infty [$| and let us consider the following bilinear transport equation: $$\begin{equation} \left\{ \begin{array}{lll} \frac{\partial z}{\partial t}(t,x)=-\frac{\partial z}{\partial x} (t,x)+v(t)a(x)z(t,x),\; \mbox{on} \; ]0,+\infty[\times\boldsymbol{\Omega}&& \\ z(t,0)=0,\; \mbox{on} \; ]0,+\infty[&& \\ z(0,x)=z_0(x)\; \mbox{on} \;\boldsymbol{\Omega}&& \end{array} \right. \end{equation}$$ (5.2) where a is such that a(x) > 0, a.e. |$x\in \boldsymbol{\Omega} $|, and |$\int _0^{+\infty }x\;a^2(x)\;\mathrm{d}x<+\infty $| (for example, |$a(x)=\frac{1}{\sqrt{x(x^2+1)}}$|). Let |$ H= L^2(\boldsymbol{\Omega} )$| and |$Az=-\frac{\partial z}{\partial x}$|, for all |$z\in D(A) =\{y\in H^1(\boldsymbol{\Omega} )\;/\; y(0)=0 \; \}$|, and Bz = az, for all z ∈ D(B) := D(A); for the example above, a is unbounded near the origin, so B is not bounded from |$L^2(\boldsymbol{\Omega} ) $| to |$L^2(\boldsymbol{\Omega} )$|. Moreover, by Morrey's inequality (see Corollary 9.14 in the book by Brezis, 2010), there exists C > 0 such that for all z ∈ D(A) we have |$|z(x)|\leqslant C\sqrt{x}\;\|\nabla z\|_{L^2(\boldsymbol{\Omega} )}$|, a.e. |$x\in \boldsymbol{\Omega} $|, so that |$\int _{0}^{+\infty }|a(x)z(x)|^2\, \mathrm{d}x\leqslant C^2\; \|\nabla z\|_{L^2(\boldsymbol{\Omega} )}^2\; \int _0^{+\infty }x\,a^2(x)\;\mathrm{d}x$|; hence B is A-bounded, i.e. (1.2) holds with |$M=C\sqrt{\int _0^{+\infty }x\;a^2(x)\;\mathrm{d}x}$|. Furthermore, for every sequence |$(y_n)_{n\in \mathbb{N}} \subset H_0^1(\boldsymbol{\Omega} )$| such that |$y_n\rightharpoonup y$| in H and |$\langle By_n,y_n\rangle \rightarrow 0$|, as |$n\rightarrow +\infty $|, there exists a subsequence of |$(y_n)_{n\in \mathbb{N}}$|, still denoted by |$(y_n)_{n\in \mathbb{N}}$|, such that |$a(x)y_n^2(x)\rightarrow 0$|, a.e. |$x\in \boldsymbol{\Omega} $|, as |$n\to +\infty $|. If in addition |$(Ay_n)_{n\in \mathbb{N}}$| is bounded in H, then for every test function |$\boldsymbol{\varphi} $| we have |$|y_n(x)\;\boldsymbol{\varphi} (x)|\leqslant C\|\nabla y_n\|_{L^2(\boldsymbol{\Omega} )}\sqrt{x}\;|\boldsymbol{\varphi} (x)|$|.
Since a > 0 a.e. on |$\boldsymbol{\Omega} $|, it follows that |$y_n(x)\rightarrow 0$|, a.e. |$x\in \boldsymbol{\Omega} $|, as |$n\to +\infty $|. This, combined with the dominated convergence theorem, gives |$\langle y_n,\boldsymbol{\varphi} \rangle \rightarrow 0$| as |$n\rightarrow +\infty $|; we conclude that |$y_n\rightharpoonup 0$| in H, as |$n\to +\infty $|. Then y = 0, and thus the condition |$(\boldsymbol{C}_1)$| is verified. As a consequence of Theorem 4.1, we have the following result:
Proposition 5.2 Suppose that a(x) > 0, a.e. |$x\in \boldsymbol{\Omega} $|, and |$\int _0^{+\infty }x\;a^2(x)\;\mathrm{d}x<+\infty $|. Then there exists |$\boldsymbol{\rho} _0>0$| such that for any |$0<\boldsymbol{\rho} < \boldsymbol{\rho} _0$| and for all |$z_0\in L^2(\boldsymbol{\Omega} )$|, the system (5.2) controlled by the feedback |$v(t)=- \boldsymbol{\rho} \; \frac{\int _{\boldsymbol{\Omega}} a(x)\;|z(x,t)|^{2}\;\mathrm{d}x}{1+\int _{\boldsymbol{\Omega}} a(x)\;|z(x,t)|^{2}\, \mathrm{d}x}$| admits a unique solution |$z\in{\mathcal C}([0,+\infty [;H)$|, and we have |$z(t)\rightharpoonup 0$| in |$L^2(\boldsymbol{\Omega} )$|, as |$t\to +\infty $|.
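To give a feel for Proposition 5.2, one may discretize (5.2) and monitor the feedback numerically. The sketch below is a hypothetical toy experiment, not part of the paper: the domain is truncated to ]0, L[ (so part of the decay of the discrete norm is due to the state leaving through the artificial outflow boundary), the equation is advanced by a first-order upwind/explicit Euler scheme, the weight is a(x) = 1/sqrt(x(x^2+1)) and the control is the bounded feedback of Proposition 5.2 computed from the discrete state. The printed output tracks the norm of z(t), the quantity ⟨Bz(t), z(t)⟩ and the control value v(t).

```python
# Hypothetical toy discretization (not from the paper) of the controlled transport
# equation (5.2) on a truncated domain ]0, L[ with a(x) = 1/sqrt(x(x^2+1)).
import numpy as np

L, N = 50.0, 2000
dx = L / N
x = dx * (np.arange(N) + 1.0)          # interior grid nodes
a = 1.0 / np.sqrt(x * (x**2 + 1.0))    # control weight

rho = 0.5                              # feedback gain, |v(t)| <= rho
dt = 0.5 * dx                          # CFL-stable step for unit transport speed
z = np.exp(-0.5 * (x - 10.0) ** 2)     # initial state z_0

def feedback(z):
    q = np.sum(a * z**2) * dx          # <Bz, z> = integral of a(x) z(x)^2 dx
    return -rho * q / (1.0 + q), q     # control of Proposition 5.2

for n in range(int(200.0 / dt) + 1):
    v, q = feedback(z)
    if n % 2000 == 0:
        nrm = np.sqrt(np.sum(z**2) * dx)
        print(f"t = {n*dt:7.1f}   ||z|| = {nrm:.4e}   <Bz,z> = {q:.4e}   v = {v:+.4f}")
    zx = np.empty_like(z)
    zx[0] = z[0] / dx                  # upwind difference using the boundary value z(t,0) = 0
    zx[1:] = (z[1:] - z[:-1]) / dx
    z = z + dt * (-zx + v * a * z)     # explicit Euler step of (5.2)
```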
6. Conclusion
In this paper, a set of stabilization results for unbounded bilinear systems has been established: sufficient conditions for weak and strong stabilization have been given. The stabilizing control is uniformly bounded with respect to time and initial states. The established results can be applied to different types of bilinear systems, including parabolic and hyperbolic cases. Though the established results enable us to discuss various types of stabilization problems as illustrated above, there are other interesting situations which are not covered by the present study. This is the case when the control is exercised through the boundary or at a point for systems governed by partial differential equations.
Acknowledgements
The authors would like to thank the anonymous referees for their valuable comments.
References
Ball, J. & Slemrod, M. (1979) Feedback stabilization of distributed semilinear control systems. Appl. Math. Optim., 5, 169–179.
Berrahmoune, L. (1999) Stabilization and decay estimate for distributed bilinear systems. Systems Control Lett., 36, 167–171.
Berrahmoune, L. (2010) Stabilization of unbounded bilinear control systems in Hilbert space. J. Math. Anal. Appl., 372, 645–655.
Bounit, H. & Hammouri, H. (1999) Feedback stabilization for a class of distributed semilinear control systems. Nonlinear Anal., 37, 953–969.
Bounit, H. & Idrissi, A. (2005) Regular bilinear systems. IMA J. Math. Control Inform., 22, 26–57.
Brezis, H. (1973) Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert. Mathematics Studies, vol. 5. North-Holland.
Brezis, H. (2010) Functional Analysis, Sobolev Spaces and Partial Differential Equations. New York: Springer.
Dafermos, C. M. & Slemrod, M. (1973) Asymptotic behavior of nonlinear contraction semigroups. J. Funct. Anal., 13, 97–106.
Desch, W. & Schappacher, W. (1984) On relatively bounded perturbations of linear |$C_0$|-semigroups. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 11, 327–341.
El Ayadi, R., Ouzahra, M. & Boutoulout, A. (2012) Strong stabilisation and decay estimate for unbounded bilinear systems. Int. J. Control, 85, 1497–1505.
Engel, K.-J. & Nagel, R. (2000) One-Parameter Semigroups for Linear Evolution Equations. Graduate Texts in Mathematics, vol. 194. New York: Springer.
Haraux, A. (2001) Some sharp estimates for parabolic equations. J. Funct. Anal., 187, 110–128.
Hille, E. & Phillips, R. S. (1957) Functional Analysis and Semi-groups. American Mathematical Society Colloquium Publications, vol. 31, revised ed. Providence, RI.
Hislop, P. D. & Sigal, I. M. (1996) Perturbation theory: relatively bounded perturbations. Introduction to Spectral Theory. New York, NY: Springer, pp. 149–159.
Idrissi, A. (2003) On the unboundedness of control operators for bilinear systems. Quaestiones Math., 26, 105–123.
Idrissi, A. & Bounit, H. (2008) Time-varying bilinear systems. SIAM J. Control Optim., 47, 1097–1126.
Komura, Y. (1967) Nonlinear semigroups in Hilbert space. J. Math. Soc. Japan, 19, 493–507.
Ouzahra, M. (2008) Strong stabilization with decay estimate of semilinear systems. Systems Control Lett., 57, 813–815.
Ouzahra, M. (2010) Exponential and weak stabilization of constrained bilinear systems. SIAM J. Control Optim., 48, 3962–3974.
Ouzahra, M. (2011) Exponential stabilization of distributed semilinear systems by optimal control. J. Math. Anal. Appl., 380, 117–123.
Pazy, A. (1978) On the asymptotic behavior of semigroups of nonlinear contractions in Hilbert space. J. Funct. Anal., 27, 292–307.
Toledano, J. M. A., Benavides, T. D. & Acedo, G. L. (1997) Measures of Noncompactness in Metric Fixed Point Theory, vol. 99. Basel: Birkhäuser.
Weiss, G. (1989) Admissibility of unbounded control operators. SIAM J. Control Optim., 27, 527–545.
Weiss, G. (1994) Regular linear systems with feedback. Math. Control Signals Systems, 7, 23–57.
Yarotsky, D. A. (2006) Ground states in relatively bounded quantum perturbations of classical lattice systems. Comm. Math. Phys., 261, 799–819.
© The Author(s) 2019. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).
Proof. 1. Let |$z_0 \in D({{A}} )$|⁠. According to Remark 3.2, the function t → z(t) admits a right derivative at all time, then we have $$ \dfrac{d^+\|z(\tau)\|^2}{d\tau}=2\langle{{\mathcal A}} z(\tau),z(\tau)\rangle,\;\; \forall \tau \geqslant 0.$$ By integrating this equality between s and t, we obtain $$\begin{equation} \frac{1}{2}\left[\|z(t)\|^2-\|z(s)\|^2\right]=\int_s^t \langle{{\mathcal A}} z(\tau),z(\tau)\rangle\;\mathrm{d}\tau,\quad \; \forall t\geqslant s \geqslant 0. \end{equation}$$ (4.1) Since A is dissipative, we have $$ \int_s^t \langle{{\mathcal A}} z(\tau),z(\tau)\rangle\;\mathrm{d}\tau \leqslant -\boldsymbol{\rho} \int_s^t \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle}\, \mathrm{d}\tau,\quad \; \forall t\geqslant s \geqslant 0. $$ It comes from (4.1) that $$ \int_0^t \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle} \, \mathrm{d}\tau \leqslant \frac{1}{2\boldsymbol{\rho}}\|z_0\|^2. $$ Thus, $$\begin{equation} \int_{0}^{+\infty}\frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle}\;\mathrm{d}\tau <+\infty. \end{equation}$$ (4.2) Moreover, from Remark 3.2 and inequality (3.7) we get $$\begin{equation} \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\left[\frac{M}{(1-M \boldsymbol{\rho})}\|{{\mathcal A}}z_0\|+\frac{M}{(1-M \boldsymbol{\rho})}\|z_0\|\right]\|z_0\|} \leqslant \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle},\quad\;\forall \tau>0. \end{equation}$$ (4.3) Hence, $$\begin{equation} \int_{0}^{+\infty}\langle Bz(\tau),z(\tau)\rangle^2\;\mathrm{d}\tau <+\infty. \end{equation}$$ (4.4) From (3.4), it follows that |$\boldsymbol{\omega} _w(z_0)\neq \emptyset $|⁠. Let |$\boldsymbol{\varphi} _0 \in \boldsymbol{\omega} _w(z_0)$| and let |$t_n\to +\infty $| such that |$z(t_n)=e^{t_n{{\mathcal A}}}z_0\rightharpoonup \boldsymbol{\varphi} _0$|⁠, as |$n\to +\infty $|⁠. We shall prove that there exists a sequence |$(s_{n})$| such that |$s_n\to +\infty $|⁠, |$z(s_n)\rightharpoonup \boldsymbol{\varphi} _0$| and |$~{\lim _{n\to +\infty }\langle Bz(s_{n}),z(s_{n})\rangle =0.}$| For this end, let us consider |$\boldsymbol{\varepsilon}>0$| and |$H_{\boldsymbol{\varepsilon}} =\{t>0 \; /\; \langle Bz(t),z(t)\rangle \geqslant \boldsymbol{\varepsilon} \}$| and let us define the map G : t →< Bz(t), z(t) >. From (4.4), we have |$G \in L^2(0,+\infty )$|⁠, thus the Lebesgue measure of |$H_{\boldsymbol{\varepsilon}} $| is finite. Then for any A > 0, the set |$E=\{k\in \mathbb{N}\; /\; \; t_k>A\}$| is infinite, hence |$\bigcup _{k\in E}[t_k,t_k+\boldsymbol{\varepsilon} ]\not \subset H_{\boldsymbol{\varepsilon}} .$| It follows that for |$\boldsymbol{\varepsilon}>0$| and A > 0, there exist |$t_k>A$| and |$ s\in [t_k,t_k+\boldsymbol{\varepsilon} ]$| such that |$G(s)<\boldsymbol{\varepsilon} $|⁠. Let us now proceed to the construction of sequence |$(s_j)_{j\geqslant 1}$|⁠. For |$\boldsymbol{\varepsilon} =\dfrac{1}{1}$| and A = 1, then there exist |$t_{k_1}>1$| and |$s_1\in [t_{k_1},t_{k_1}+\dfrac{1}{1} ]$| such that |$G(s_1) < \dfrac{1}{1}$|⁠. Now, let |$j\in \mathbb{N}^{\ast }$| and suppose that the terms |$t_{k_j}$| and |$s_j$| are constructed. For |$\boldsymbol{\varepsilon} =\dfrac{1}{j+1}$| and |$A=t_{k_j}+j>0$|⁠, there exist |$t_{k_{j+1}}>A$| and |$s_{j+1}\in [t_{k_{j+1}},t_{k_{j+1}}+\frac{1}{j+1} ]$| such that |$G(s_{j+1}) < \dfrac{1}{j+1}$|⁠. Then from the construction of |$(s_j)_{j\geqslant 1}$| and |$(t_{k_j})_{j\geqslant 1}, $| we have |$G(s_j)\leqslant \dfrac{1}{j}$|⁠. 
On the other hand, we have |$t_{k_{j+1}}-t_{k_j}>j$|⁠, then |$t_{k_{q+1}}-t_{k_1}>\sum _{j=1}^{q}j=\frac{q(q+1)}{2}$| for all |$q\geqslant 1$|⁠, and so |$\lim _{j\to +\infty }t_{k_j}=+\infty $|⁠. Thus, |$\lim_{j\to +\infty }s_j=+\infty $| (recall that |$s_j\geqslant t_{k_j}$|⁠). We can therefore choose a subsequence |$(t_{k_j})$| of |$(t_{n})$| and a sequence |$(s_j)$| such that |$t_{k_j} \leqslant s_j \leqslant t_{k_j}+\frac{1}{j}$|⁠, |$G(s_j) < \frac{1}{j}$| and |$s_j\to +\infty $|⁠. From (3.5) for all |$t\geqslant 0$|⁠, we have |$\|{\mathcal A}e^{t{{\mathcal A}}}z_0\|\leqslant \|{{\mathcal A}}z_0\|$|⁠. Then $$\begin{equation} \left\|e^{s_j{{\mathcal A}}}z_0-e^{t_{k_j}{{\mathcal A}}}z_0\right\|\leqslant \|{{\mathcal A}}z_0\|\left|s_j-t_{k_j}\right|\leqslant \frac{\|{{\mathcal A}}z_0\|}{j}. \end{equation}$$ (4.5) We deduce that |$z(s_j)\rightharpoonup \boldsymbol{\varphi} _0$| and |$\langle Bz(s_j),z(s_j)\rangle \to 0$|⁠, as |$j\to +\infty .$| Moreover, according to the Remark 3.3, 2., |$(\|Az(s_j)\|)_{j\geqslant 1}$| is bounded. It follows from |$(\boldsymbol{C}_1)$| that |$\varphi _0\in E_{{\mathcal A}}$|⁠, which gives |$\boldsymbol{\omega} _w(z_0)\subset E_{{\mathcal A}}$|⁠. Finally, Theorem 2.2 implies that z(t) is weakly convergent as |$t\to +\infty $|⁠. 2. (a) Let |$\boldsymbol{\varphi} _0$| and |$(s_j)_{j\geqslant 1}$| be as in 1. and suppose that the condition |$(\boldsymbol{C}_2)$| holds. From (4.2) we have $$\begin{equation} \lim_{j\to+\infty}\int_{s_{j}}^{t+s_{j}} \frac{\langle Bz(\tau),z(\tau)\rangle^2}{1+\langle Bz(\tau),z(\tau)\rangle}\;\mathrm{d}\tau=0,\quad\;\forall t>0. \end{equation}$$ (4.6) It follows from the invariance of |$\boldsymbol{\omega} _w(z_0)$| under the semigroup |$e^{t{\mathcal A}}$| that for all t > 0, we have $$\begin{equation} \langle Bz(t+s_j),z(t+s_j)\rangle\to 0,\quad\; \mbox{as}\; j\to+\infty. \end{equation}$$ (4.7) Moreover, from the variation of constants formula, we have $$ z(t+s_j)=S(t)z(s_j)+\int_{s_j}^{t+s_j} v(z(\tau))S(t+s_j-\tau)Bz(\tau)\, \mathrm{d}\tau,\quad\;\forall t>0.$$ It follows that $$ \|z(t+s_j)-S(t)z(s_j)\|\leqslant \int_{s_j}^{t+s_j} |v(z(\tau))|\;\|Bz(\tau)\|\, \mathrm{d}\tau,\quad\;\forall t>0\cdot$$ Using (3.4)–(3.7), we deduce that $$\begin{equation} \|Bz(\tau)\|\leqslant \frac{M}{(1-M \boldsymbol{\rho})}\|{{\mathcal A}}z_0\|+\frac{M}{(1-M \boldsymbol{\rho})}\|z_0\|=:M_{z_0,\boldsymbol{ \rho}}\cdot \end{equation}$$ (4.8) Then $$ \|z(t+s_j)-S(t)z(s_j)\|\leqslant \; M_{z_0, \boldsymbol{\rho}} \int_{s_j}^{t+s_j} |v(z(\tau))|d\tau,\quad\;\forall t>0\cdot$$ The Schwartz's inequality gives $$ \|z(t+s_j)-S(t)z(s_j)\|\leqslant \boldsymbol{\rho}\; \sqrt{t}\; M_{z_0, \boldsymbol{\rho}}\; \sqrt{\int_{s_j}^{t+s_j} \frac{<Bz(\tau),z(\tau)>^2}{(1+<Bz(\tau),z(\tau)>)^2}\, \mathrm{d}\tau},\quad\;\forall t>0,$$ and hence $$ \|z(t+s_j)-S(t)z(s_j)\|\leqslant \boldsymbol{\rho}\; \sqrt{t}\; M_{z_0, \rho}\; \sqrt{\int_{s_j}^{t+s_j} \frac{<Bz(\tau),z(\tau)>^2}{1+<Bz(\tau),z(\tau)>}\ \textrm{d}\tau},\quad\;\forall t>0.$$ Thus, using (4.2) we deduce that $$\begin{equation} \lim_{j\to +\infty}[z(t+s_j)-S(t)z(s_j)]=0,\quad\;\forall t>0. \end{equation}$$ (4.9) Using the fact that B is self-adjoint, we can write |$\langle BS(t)z(s_j), S(t)z(s_j)\rangle =\langle S(t)z(s_j)-z(t+s_j), BS(t)z(s_j)\rangle +\langle Bz(t+s_j), S(t)z(s_j)-z(t+s_j)\rangle + \langle Bz(t+s_j), z(t+s_j)\rangle$|⁠. 
Furthermore, from (1.2), (3.4), (3.8) and the fact that S(t) is of contractions, we get $$\begin{align*} \|BS(t)z(s_j)\| & \leqslant M\left(\|AS(t)z(s_j)\|+\|S(t)z(s_j)\|\right)\\ &\leqslant \frac{M}{1-\boldsymbol{\rho} M}\| ({\mathcal A}z_0\|+ \|z_0\|)\cdot \end{align*}$$ This, together with (4.7), (4.8) and (4.9) gives $$\begin{equation} \langle BS(t)z(s_j),S(t)z(s_j)\rangle\to 0,\quad\; \mbox{as}\; j\to +\infty. \end{equation}$$ (4.10) Now, from the condition |$(\boldsymbol{C}_2)$|⁠, there exists a subsequence of |$(s_j), $| still denoted by |$(s_j), $| such that for all |$t\geqslant t_0$| we have $$\begin{equation} \langle BS(t)z(s_j),S(t)z(s_j)\rangle\to \langle \widetilde{B}S(t){\boldsymbol{\varphi}_0},S(t){\boldsymbol{\varphi}_0}\rangle,\quad\; \mbox{as}\; j\to +\infty. \end{equation}$$ (4.11) We conclude that $$\begin{equation} \langle \widetilde{B}S(t){\boldsymbol{\varphi}_0},S(t){\boldsymbol{\varphi}_0}\rangle=0. \end{equation}$$ (4.12) In other words, |$\boldsymbol{\varphi} _0\!\in\! \widetilde{{\mathcal M}}$| and hence |$\boldsymbol{\omega} _w(z_0)\!\subset \!\widetilde{{\mathcal M}}$|⁠. But |$\widetilde{{\mathcal M}}\subset E_A,$| then |$S(t)\boldsymbol{\varphi} _0=\boldsymbol{\varphi} _0$| and so |$\langle \widetilde{B}{\boldsymbol{\varphi} _0},{\boldsymbol{\varphi} _0}\rangle =0$|⁠. Moreover, since |$\boldsymbol{\omega} _w(z_0)$| is invariant by |$e^{t{{\mathcal A}}}$|⁠, we have |$\boldsymbol{\varphi} (t)=e^{t{{\mathcal A}}}\boldsymbol{\varphi} _0\in \boldsymbol{\omega} _w(z_0)$|⁠, thus |$\langle \widetilde{B}\boldsymbol{\varphi} (t),\varphi (t)\boldsymbol{\varphi} =0.$| Consequently, |$\boldsymbol{\varphi} (t)=e^{t{{\mathcal A}}}\boldsymbol{\varphi} _0=S(t)\boldsymbol{\varphi} _0=\boldsymbol{\varphi} _0$|⁠, and hence |$\boldsymbol{\varphi} _0\in E_{{\mathcal A}}$|⁠. We deduce that |$ \boldsymbol{\omega} _w(z_0)\subset E_{{\mathcal A}}$|⁠. Finally, from Theorem 2.2, there exists |$z^{\ast }\in E_A$| such that |$z(t)\rightharpoonup z^{\ast },$| as |$t\to +\infty$|⁠. (b) In the case |$\widetilde{{\mathcal M}}=\{0\}$|⁠, it follows from |$\boldsymbol{\omega} _w(z_0)\subset \widetilde{{\mathcal M}}$| that |$z(t)\rightharpoonup 0,$| as |$t\to +\infty .$| 4.2 Strong stability In this part, we study the strong stability of the closed-loop system (3.2). Let us start with the two following lemmas. Lemma 4.1 Suppose that the assumptions of Theorem 3.1 hold and that the embedding D(A) ↪ H is compact. Then the operator |$(I-{{\mathcal A}})^{-1}$| is compact from H to itself. Proof. Since |${\mathcal A}$| generates a non-linear semigroup of contractions, the operator |$(I-{{\mathcal A}})^{-1}$| is defined on H (see Theorem 2, in the study by Komura, 1967). In order to prove the compactness of the operator |$(I-{{\mathcal A}})^{-1}$|⁠, let us consider a bounded sequence |$(v_n)$| in H. We shall prove that the sequence |$u_n=(I-{{\mathcal A}})^{-1}v_n$|⁠, defined in D(A), has a converging subsequence (see Definition 2.5, p. 13 in the study by Toledano et al., 1997). We have $$ \langle v_n,u_n\rangle=\|u_n\|^2-\langle{{\mathcal A}}u_n,u_n\rangle.$$ Then, using the fact that the operator |${{\mathcal A}}$| is dissipative, we get |$\|u_n\|^2 \leqslant \langle v_n,u_n\rangle \leqslant \|v_n\| \|u_n\|$|⁠. It follows that the sequence |$(u_n)$| is bounded. Since |${\mathcal A}u_n=u_n-v_n,$| then the sequence |$({\mathcal A}u_n)$| is bounded. From (3.7) we can see that the sequence |$(Bu_n)$| is bounded. 
It follows from the equality $$ Au_n={\mathcal A}u_n+\boldsymbol{\rho} \frac{\langle B{u_n},u_n\rangle}{1+\langle B{u_n},u_n\rangle}B{u_n}$$ that |$(Au_n)$| is bounded. Furthermore, |$(u_n)\subset D(A)$|⁠, then we can conclude by the fact that D(A) is compactly embedded in H. Remark 4.2 Under the assumptions of Lemma 4.1, we have |$\boldsymbol{\omega} (z_0)\neq \emptyset $|⁠, for all |$z_0\in D(\mathcal A)$| (see the study by Pazy, 1978). Now, we are ready to establish the strong stabilization result. This is the subject of the two next theorems. Theorem 4.2 Let assumptions of Theorem 3.1 be verified, let D(A) be compactly embedded in H and let us set |$U(A)=\{y\in H\;/\; \|e^{tA}y\|=\|e^{tA^{\ast }}y\|=\|y\|\}$|⁠. Then for all |$z_0 \in D(A)$|⁠, we have |$\boldsymbol{\omega} (z_0)\subset{\mathcal M} \cap U(A)$|⁠. If |$U(A)\cap{\mathcal M}\subset E_A$|⁠, then for all |$z_0 \in D(A)$| we have |$ z(t)\to z^{\ast } \in E_A$|⁠, as |$t\to +\infty $|⁠. Proof. Let |$z_0\in D(A)$|⁠, |$\boldsymbol{\varphi} _0\in \boldsymbol{\omega} (z_0)$| and let |$(t_n)_{n\geqslant 0 }$| be such that |$t_n\to +\infty $| and |$e^{t_n{\mathcal A}}z_0\to \boldsymbol{\varphi} _0$| as |$n\to +\infty .$| For t > 0, there exists |$N\in \mathbb{N}$| such that for all |$n\geqslant N$|⁠, we have |$t_n\geqslant t.$| Using the fact that |$e^{t{\mathcal A}}$| is a semigroup of contractions, we obtain |$ \|e^{t_n{\mathcal A}}z_0\|\leqslant \|e^{t{\mathcal A}}z_0\|$|⁠; and letting |$n\to +\infty ,$| we get $$\begin{equation} \|\boldsymbol{\varphi}_0\|\leqslant \left\|e^{t{\mathcal A}}z_0\right\|=\|z(t)\|. \end{equation}$$ (4.13) It follows from the inequality (4.13) that |$\|\boldsymbol{\varphi} _0\|\leqslant \|e^{(t+t_n){\mathcal A}}z_0\|$|⁠. Then, letting |$n\to +\infty $| we obtain |$\|\boldsymbol{\varphi}_0\|\leqslant \|e^{t{\mathcal A}}\boldsymbol{\varphi} _0\|$|⁠. This together with the fact that |$(e^{t{\mathcal A}})_{t\geqslant 0}$| is a contraction semigroup leads to |$\|e^{t{\mathcal A}}\boldsymbol{\varphi} _0\|=\|\boldsymbol{\varphi} _0\|$| for all |$t\geqslant 0$|⁠. Furthermore, we have |$\boldsymbol{\omega} (z_0)\subset D({\mathcal A})$| (see Theorem 5, in the study by Dafermos & Slemrod, 1973) and so |$\boldsymbol{\varphi} _0 \in D(A)$|⁠. From (4.5) we have |$e^{s_j{\mathcal A}}z_0\to \boldsymbol{\varphi} _0$|⁠, as |$j\to +\infty $|⁠. On the other hand, $$\begin{align*} \langle BS(t)z(s_j),S(t)z(s_j)\rangle & =\langle BS(t)z(s_j),S(t)(z(s_j)-\boldsymbol{\varphi}_0)\rangle+\langle BS(t)\boldsymbol{\varphi}_0,S(t)z(s_j)\rangle \\ &=\langle BS(t)z(s_j),S(t)(z(s_j)-\boldsymbol{\varphi}_0)\rangle+\langle S(t)^{\ast}BS(t)\boldsymbol{\varphi}_0,z(s_j)\rangle.\end{align*}$$ Since |$(\|BS(t)z(s_j)\|)_{j\geqslant 1}$| is bounded, the semigroup |$(S(t))_{t\geqslant 0}$| is of contractions and |$z(s_j)\to \varphi _0$|⁠, as |$j\to +\infty $|⁠, then |$\langle BS(t)z(s_j),S(t)z(s_j)\rangle \to \langle BS(t)\boldsymbol{\varphi} _0,S(t)\boldsymbol{\varphi} _0\rangle $|⁠, as |$j\to +\infty $|⁠, this combined with (4.10), gives |$\boldsymbol{\varphi} _0\in{\mathcal M}$| and so |$e^{t{\mathcal A}}\boldsymbol{\varphi} _0=e^{tA}\boldsymbol{\varphi} _0$|⁠. We conclude that |$\boldsymbol{\varphi} _0\in U(A)\cap{\mathcal M}.$| Let |$\boldsymbol{\varphi} _0\in \boldsymbol{\omega} (z_0)$|⁠. From 1. we have |$\boldsymbol{\varphi} _0\in U(A)\cap{\mathcal M} \subset E_A$|⁠. 
Since $\varphi(t)=e^{t{\mathcal A}}\varphi_0\in \omega(z_0)$, we have $\varphi(t)\in U(A)\cap{\mathcal M} \subset E_A.$ It follows that ${\mathcal A}\varphi(t)=A\varphi(t)=0$. In particular, ${\mathcal A}\varphi_0=0$. In other words, $\varphi_0\in E_{\mathcal A}$. From Theorem 2.3, we deduce the existence of the strong limit of z(t), as $t\to +\infty$. Thus, $z(t)\to \varphi_0\in E_A$, as $t\to +\infty$.

Theorem 4.3 Suppose that the hypotheses of Theorem 3.1 are verified, that D(A) is compactly embedded in H and that ${\mathcal M}\cap U(A) =\{0\}$. Then for all $z_0 \in H$, we have z(t) → 0, as $t\to +\infty$.

Proof. Let $z_0\in D(A)$. It follows from Theorem 4.2 that ${\mathcal M}\cap U(A)=\{0\}\subset E_A$. Moreover, we have $\omega(z_0)\subset{\mathcal M}\cap U(A)$. Then $\omega(z_0)=\{0\}$ and hence z(t) → 0, as $t\to +\infty$. Now let us consider $z_0\in \overline{D({{\mathcal A}})}$ and let $\varepsilon>0$. Then there exists $z_{\varepsilon} \in D({{\mathcal A}})$ such that $\|z_0-z_\varepsilon\|\leqslant \frac{\varepsilon}{2}$. It follows from the fact that $e^{t{\mathcal A}}$ is a contraction semigroup that
$$\begin{equation} \left\|e^{t{\mathcal A}}z_0-e^{t{\mathcal A}}z_\varepsilon\right\|\leqslant \frac{\varepsilon}{2}, \quad\forall t\geqslant 0. \end{equation}$$ (4.14)
Since $z_\varepsilon \in D({{\mathcal A}})$ and ${\mathcal M}\cap U(A)=\{0\}$, we deduce from Theorem 4.2, 1. that $\omega(z_{\varepsilon})=\{0\}$. Thus, there exists $T_{\varepsilon}>0$ such that
$$\begin{equation} \left\|e^{t{\mathcal A}}z_{\varepsilon}\right\|\leqslant \frac{\varepsilon}{2},\quad\forall t\geqslant T_{\varepsilon}. \end{equation}$$ (4.15)
It follows from (4.14) and (4.15) that for all $t\geqslant T_{\varepsilon}$, we have $\|e^{t{\mathcal A}}z_0\|\leqslant \varepsilon$. We conclude that for all $z_0\in \overline{D({\mathcal A})}=H$, we have z(t) → 0, as $t\to +\infty.$

5. Applications

5.1 Heat equation

Let $\Omega =]0,1[$ and let us consider the bilinear system given by the following heat equation:
$$\begin{equation} \left\{ \begin{array}{ll} \frac{\partial z}{\partial t}(t,x)=\Delta z (t,x)+v(t)a(x)z(t,x), & \mbox{on} \; ]0,+\infty[\times\Omega \\ z^{\prime}(t,0)=z^{\prime}(t,1)=0, & \forall \;t>0 \\ z(0,x)=z_0(x), & \mbox{on} \;\Omega \end{array} \right. \end{equation}$$ (5.1)
where $a\in L^{\gamma }(\Omega )\;(\gamma> 2)$ and $a \notin L^{\gamma +2}(\Omega )$ (as an example, one can take $a(x)=x^{-\frac{1}{\gamma +1}}, \; \forall x\in ]0,1[$). Let $H= L^2(\Omega )$ and $Az=\Delta z$, for all $z\in D(A) =\{z\in L^2(\Omega )\;/\;\Delta z \in L^2(\Omega ),\;z^{\prime}(0)=z^{\prime}(1)=0\}$. Observing that $Ba^{\frac{\gamma }{2}}=a^{\frac{\gamma +2}{2}}\not \in L^2(\Omega )$, we deduce that the operator defined by Bz = az, for all $z\in D(B):=H^1(\Omega )$, is not bounded from $L^2(\Omega )$ to $L^2(\Omega )$.
Now, we have $\|az\|_{L^{2}(\Omega )}\leqslant \|a\|_{L^{\gamma }(\Omega )}\|z\|_{L^{\gamma ^{\ast }}(\Omega )}$ for all $z\in H^1(\Omega )$, with $\gamma ^{\ast }=\frac{2\gamma }{\gamma -2}$. By the Sobolev embedding $H^1(\Omega )\hookrightarrow L^q(\Omega ),\; 1 \leqslant q < +\infty$ (see Corollary 9.14 in the book by Brezis, 2010), and the fact that for all z ∈ D(A) we have $\|\nabla z\|^2=|\langle \Delta z,z \rangle | \leqslant \frac{1}{2}(\|\Delta z\|^2+\|z\|^2)$, we conclude that B is A-bounded. We can state the following stabilization result:

Proposition 5.1 Suppose that $a(x)\geqslant 0$ a.e. on $\Omega$ and $\int _{\Omega }a(x)\, \mathrm{d}x \neq 0$. Then there exists $\rho _0>0$ such that, for any $0<\rho < \rho _0$ and for all $z_0\in L^2(0,1)$, the control $v(t)=- \rho \; \frac{\int _{\Omega} a(x)\;|z(x,t)|^{2}\;\mathrm{d}x}{1+\int _{\Omega} a(x)\;|z(x,t)|^{2}\;\mathrm{d}x}$ strongly stabilizes (5.1).

Proof. Here, A and B satisfy the assumptions of Theorem 3.1 and we have $U(A)=E_A=\{c 1_{\Omega} \; / \; c\in \mathbb{R}\}$. Since $\int _{\Omega }a(x)\, \mathrm{d}x \neq 0$, we have ${\mathcal M}\cap U(A)=\{0\}$, and we conclude by Theorem 4.2.

5.2 Transport equation

Let $\Omega =]0,+\infty [$ and let us consider the following bilinear transport equation:
$$\begin{equation} \left\{ \begin{array}{ll} \frac{\partial z}{\partial t}(t,x)=-\frac{\partial z}{\partial x} (t,x)+v(t)a(x)z(t,x), & \mbox{on} \; ]0,+\infty[\times\Omega \\ z(t,0)=0, & \mbox{on} \; ]0,+\infty[ \\ z(0,x)=z_0(x), & \mbox{on} \;\Omega \end{array} \right. \end{equation}$$ (5.2)
where a is such that a(x) > 0 a.e. $x\in \Omega$ and $\int _0^{+\infty }x\;a^2(x)\;\mathrm{d}x<+\infty$ (for example, $a(x)=\frac{1}{\sqrt{x(x^2+1)}}$). Let $H= L^2(\Omega )$ and $Az=-\frac{\partial z}{\partial x}$, for all $z\in D(A) =\{y\in H^1(\Omega )\;/\; y(0)=0 \}$, and Bz = az, for all z ∈ D(B) := D(A). Thus, B is unbounded from $L^2(\Omega )$ to $L^2(\Omega )$. Moreover, by Morrey's inequality (see Corollary 9.14 in the book by Brezis, 2010), there exists C > 0 such that for all z ∈ D(A) we have $|z(x)|\leqslant C\sqrt{x}\;\|\nabla z\|_{L^2(\Omega )}$ a.e. $x\in \Omega$. It follows that $\int _{0}^{+\infty }|a(x)z(x)|^2\, \mathrm{d}x\leqslant C^2\; \|\nabla z\|_{L^2(\Omega )}^2\; \int _0^{+\infty }x\,a^2(x)\;\mathrm{d}x$, so B is A-bounded, i.e. (1.2) holds with $M=C\sqrt{\int _0^{+\infty }x\;a^2(x)\;\mathrm{d}x}$. Furthermore, for every sequence $(y_n)_{n\in \mathbb{N}} \subset H_0^1(\Omega )$ such that $y_n\rightharpoonup y$ in H and $\langle By_n,y_n\rangle \rightarrow 0$, as $n\rightarrow +\infty$, there exists a subsequence of $(y_n)_{n\in \mathbb{N}}$, still denoted by $(y_n)_{n\in \mathbb{N}}$, such that $a(x)y_n^2(x)\rightarrow 0$ a.e. $x\in \Omega$ as $n\to +\infty$. If in addition $(Ay_n)_{n\in \mathbb{N}}$ is bounded in H, then for every test function $\varphi$ we have $|y_n(x)\;\varphi (x)|\leqslant C\|\nabla y_n\|_{L^2(\Omega )}\sqrt{x}\;|\varphi (x)|$.
This, combined with the dominated convergence theorem, gives $\langle y_n,\varphi \rangle \rightarrow 0$ as $n\rightarrow +\infty$, so $y_n\rightharpoonup 0$ in H as $n\to +\infty$. Then y = 0, and thus the condition $(C_1)$ is verified. As a consequence of Theorem 4.1, we have the following result:

Proposition 5.2 Suppose that a(x) > 0 a.e. $x\in \Omega$ and $\int _0^{+\infty }x\;a^2(x)\;\mathrm{d}x<+\infty$. Then there exists $\rho _0>0$ such that for any $0<\rho < \rho _0$ and for all $z_0\in L^2(\Omega )$, the system (5.2) controlled by the feedback $v(t)=- \rho \; \frac{\int _{\Omega} a(x)\;|z(x,t)|^{2}\;\mathrm{d}x}{1+\int _{\Omega} a(x)\;|z(x,t)|^{2}\, \mathrm{d}x}$ admits a unique solution $z\in{\mathcal C}(0,+\infty ;H)$, and we have $z(t)\rightharpoonup 0$ in $L^2(\Omega )$, as $t\to +\infty$.

6. Conclusion

In this paper, a set of stabilization results for unbounded bilinear systems has been established. Sufficient conditions for weak and strong stabilization have been given, and the stabilizing control is uniformly bounded with respect to time and initial states. The established results can be applied to different types of bilinear systems, including the parabolic and hyperbolic cases. Though the established results enable us to discuss various types of stabilization problems as illustrated above, there are other interesting situations which are not covered by the present study. This is the case when the control is exercised through the boundary or a point for systems governed by partial differential equations.

Acknowledgements

The authors would like to thank the anonymous referees for their valuable comments.

References

Ball, J. & Slemrod, M. (1979) Feedback stabilization of distributed semilinear control systems. Appl. Math. Optim., 5, 169–179.
Berrahmoune, L. (1999) Stabilization and decay estimate for distributed bilinear systems. Systems Control Lett., 36, 167–171.
Berrahmoune, L. (2010) Stabilization of unbounded bilinear control systems in Hilbert space. J. Math. Anal. Appl., 372, 645–655.
Bounit, H. & Hammouri, H. (1999) Feedback stabilization for a class of distributed semilinear control systems. Nonlinear Anal., 37, 953–969.
Bounit, H. & Idrissi, A. (2005) Regular bilinear systems. IMA J. Math. Control Inform., 22, 26–57.
Brezis, H. (1973) Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert. Mathematics Studies, vol. 5. North-Holland.
Brezis, H. (2010) Functional Analysis, Sobolev Spaces and Partial Differential Equations. New York: Springer.
Dafermos, C. M. & Slemrod, M. (1973) Asymptotic behavior of nonlinear contraction semigroups. J. Funct. Anal., 13, 97–106.
Desch, W. & Schappacher, W. (1984) On relatively bounded perturbations of linear $c_0$-semigroups. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 11, 327–341.
El Ayadi, R., Ouzahra, M. & Boutoulout, A.
(2012) Strong stabilisation and decay estimate for unbounded bilinear systems. Int. J. Control, 85, 1497–1505.
Engel, K.-J. & Nagel, R. (2000) One-Parameter Semigroups for Linear Evolution Equations. Graduate Texts in Mathematics, vol. 194. New York: Springer.
Haraux, A. (2001) Some sharp estimates for parabolic equations. J. Funct. Anal., 187, 110–128.
Hille, E. & Phillips, R. S. (1957) Functional Analysis and Semi-groups. American Mathematical Society Colloquium Publications, vol. 31. Providence, RI: American Mathematical Society. Revised ed.
Hislop, P. D. & Sigal, I. M. (1996) Perturbation theory: relatively bounded perturbations. Introduction to Spectral Theory. New York, NY: Springer, pp. 149–159.
Idrissi, A. (2003) On the unboundedness of control operators for bilinear systems. Quaestiones Math., 26, 105–123.
Idrissi, A. & Bounit, H. (2008) Time-varying bilinear systems. SIAM J. Control Optim., 47, 1097–1126.
Komura, Y. (1967) Nonlinear semigroups in Hilbert space. J. Math. Soc. Japan, 19, 493–507.
Ouzahra, M. (2008) Strong stabilization with decay estimate of semilinear systems. Systems Control Lett., 57, 813–815.
Ouzahra, M. (2010) Exponential and weak stabilization of constrained bilinear systems. SIAM J. Control Optim., 48, 3962–3974.
Ouzahra, M. (2011) Exponential stabilization of distributed semilinear systems by optimal control. J. Math. Anal. Appl., 380, 117–123.
Pazy, A. (1978) On the asymptotic behavior of semigroups of nonlinear contractions in Hilbert space. J. Funct. Anal., 27, 292–307.
Toledano, J. M. A., Benavides, T. D. & Acedo, G. L. (1997) Measures of Noncompactness in Metric Fixed Point Theory, vol. 99. Basel: Birkhäuser.
Weiss, G. (1989) Admissibility of unbounded control operators. SIAM J. Control Optim., 27, 527–545.
Weiss, G. (1994) Regular linear systems with feedback. Math. Control Signals Systems, 7, 23–57.
Yarotsky, D. A. (2006) Ground states in relatively bounded quantum perturbations of classical lattice systems. Comm. Math. Phys., 261, 799–819.

© The Author(s) 2019. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).
Relevance Title Sorted by Date Dysphagia presentation and management following coronavirus disease 2019: an acute care tertiary centre experience C Dawson, R Capewell, S Ellis, S Matthews, S Adamson, M Wood, L Fitch, K Reid, M Shaw, J Wheeler, P Pracy, P Nankivell, N Sharma Journal: The Journal of Laryngology & Otology / Volume 134 / Issue 11 / November 2020 Print publication: November 2020 As the pathophysiology of Covid-19 emerges, this paper describes dysphagia as a sequela of the disease, including its diagnosis and management, hypothesised causes, symptomatology in relation to viral progression, and concurrent variables such as intubation, tracheostomy and delirium, at a tertiary UK hospital. During the first wave of the Covid-19 pandemic, 208 out of 736 patients (28.9 per cent) admitted to our institution with SARS-CoV-2 were referred for swallow assessment. Of the 208 patients, 102 were admitted to the intensive treatment unit for mechanical ventilation support, of which 82 were tracheostomised. The majority of patients regained near normal swallow function prior to discharge, regardless of intubation duration or tracheostomy status. Dysphagia is prevalent in patients admitted either to the intensive treatment unit or the ward with Covid-19 related respiratory issues. This paper describes the crucial role of intensive swallow rehabilitation to manage dysphagia associated with this disease, including therapeutic respiratory weaning for those with a tracheostomy. Incipient Alcohol Use in Childhood: Early Alcohol Sipping and its Relations with Psychopathology and Personality – Corrigendum Ashley L. Watts, Phillip K. Wood, Kristina M. Jackson, Krista M. Lisdahl, Mary M. Heitzeg, Raul Gonzalez, Susan F. Tapert, Deanna M. Barch, Kenneth J. Sher Journal: Development and Psychopathology , First View Published online by Cambridge University Press: 12 October 2020, p. 1 Heritable anisotropy associated with cognitive impairments among patients with schizophrenia and their non-psychotic relatives in multiplex families K. M. Prasad, J. Gertler, S. Tollefson, J. A. Wood, D. Roalf, R. C. Gur, R. E. Gur, L. Almasy, M. F. Pogue-Geile, V. L. Nimgaonkar Journal: Psychological Medicine , First View Published online by Cambridge University Press: 03 September 2020, pp. 1-12 To test the functional implications of impaired white matter (WM) connectivity among patients with schizophrenia and their relatives, we examined the heritability of fractional anisotropy (FA) measured on diffusion tensor imaging data acquired in Pittsburgh and Philadelphia, and its association with cognitive performance in a unique sample of 175 multigenerational non-psychotic relatives of 23 multiplex schizophrenia families and 240 unrelated controls (total = 438). We examined polygenic inheritance (h2r) of FA in 24 WM tracts bilaterally, and also pleiotropy to test whether heritability of FA in multiple WM tracts is secondary to genetic correlation among tracts using the Sequential Oligogenic Linkage Analysis Routines. Partial correlation tests examined the correlation of FA with performance on eight cognitive domains on the Penn Computerized Neurocognitive Battery, controlling for age, sex, site and mother's education, followed by multiple comparison corrections. Significant total additive genetic heritability of FA was observed in all three-categories of WM tracts (association, commissural and projection fibers), in total 33/48 tracts. There were significant genetic correlations in 40% of tracts. 
Diagnostic group main effects were observed only in tracts with significantly heritable FA. Correlation of FA with neurocognitive impairments was observed mainly in heritable tracts. Our data show significant heritability of all three-types of tracts among relatives of schizophrenia. Significant heritability of FA of multiple tracts was not entirely due to genetic correlations among the tracts. Diagnostic group main effect and correlation with neurocognitive performance were mainly restricted to tracts with heritable FA suggesting shared genetic effects on these traits. Incipient alcohol use in childhood: Early alcohol sipping and its relations with psychopathology and personality Published online by Cambridge University Press: 11 June 2020, pp. 1-13 Prior research has shown that sipping of alcohol begins to emerge during childhood and is potentially etiologically significant for later substance use problems. Using a large, community sample of 9- and 10-year-olds (N = 11,872; 53% female), we examined individual differences in precocious alcohol use in the form of alcohol sipping. We focused explicitly on features that are robust and well-demonstrated correlates of, and antecedents to, alcohol excess and related problems later in the lifespan, including youth- and parent-reported externalizing traits (i.e., impulsivity, behavioral inhibition and activation) and psychopathology. Seventeen percent of the sample reported sipping alcohol outside of a religiously sanctioned activity by age 9 or 10. Several aspects of psychopathology and personality emerged as small but reliable correlates of sipping. Nonreligious sipping was related to youth-reported impulsigenic traits, aspects of behavioral activation, prodromal psychotic-like symptoms, and mood disorder diagnoses, as well as parent-reported externalizing disorder diagnoses. Religious sipping was unexpectedly associated with certain aspects of impulsivity. Together, our findings point to the potential importance of impulsivity and other transdiagnostic indicators of psychopathology (e.g., emotion dysregulation, novelty seeking) in the earliest forms of drinking behavior. Does the unified protocol really change neuroticism? Results from a randomized trial Shannon Sauer-Zavala, Jay C. Fournier, Stephanie Jarvi Steele, Brittany K. Woods, Mengxing Wang, Todd J. Farchione, David H. Barlow Published online by Cambridge University Press: 21 April 2020, pp. 1-10 Neuroticism is associated with the onset and maintenance of a number of mental health conditions, as well as a number of deleterious outcomes (e.g. physical health problems, higher divorce rates, lost productivity, and increased treatment seeking); thus, the consideration of whether this trait can be addressed in treatment is warranted. To date, outcome research has yielded mixed results regarding neuroticism's responsiveness to treatment, perhaps due to the fact that study interventions are typically designed to target disorder symptoms rather than neuroticism itself. The purpose of the current study was to explore whether a course of treatment with the unified protocol (UP), a transdiagnostic intervention that was explicitly developed to target neuroticism, results in greater reductions in neuroticism compared to gold-standard, symptom focused cognitive behavioral therapy (CBT) protocols and a waitlist (WL) control condition. Patients with principal anxiety disorders (N = 223) were included in this study. 
They completed a validated self-report measure of neuroticism, as well as clinician-rated measures of psychological symptoms. At week 16, participants in the UP condition exhibited significantly lower levels of neuroticism than participants in the symptom-focused CBT (t(218) = −2.17, p = 0.03, d = −0.32) and WL conditions(t(207) = −2.33, p = 0.02, d = −0.43), and these group differences remained after controlling for simultaneous fluctuations in depression and anxiety symptoms. Treatment effects on neuroticism may be most robust when this trait is explicitly targeted. How We Understand Hallucinations (HUSH) K. Caldwell, R. Upthegrove, J. Ives, M. Broome, S. Wood, F. Oyebode Journal: European Psychiatry / Volume 30 / Issue S1 / March 2015 Published online by Cambridge University Press: 15 April 2020, p. 1 Evidence suggests that the subjective experience of AVHs cannot be explained by any of the existing cognitive models,[1] highlighting the obvious need to properly investigate the actual, lived experience of AVHs, and derive models/theories that fit the complexity of this. Via phenomenological interviews and ethnographic diary methods, we aim to gain a deeper insight into the experience of AVHs. To explore the phenomenological quality of AVHs, as they happen/reveal themselves to consciousness, [2] [3] without relying on existing suppositions. Participants with First Episode Psychosis were recruited from the Birmingham Early Intervention Service (EIS), BSMHFT. In-depth 'walking interviews' were carried out with each participant, together with standardised assessment measures of voices. Prior to interviews, participants were asked to complete a dairy and take photographs, further capturing aspects of their AVH experiences. 20 participants have completed interviews to date. Emerging themes cover the form and quality of voices (i.e. as being separate to self, imposing, compelling etc.), and participants' understanding and management of these experiences. Authentic descriptions gleaned from participants have the potential to increase our understanding of the relationship between the phenomenology and neurobiology of AVHs and, in turn, the experience as a whole. Lessons from a Balint group scheme led by psychiatry trainees for year 3 bristol medical students on their medicine/surgery placements K. Wood, A. Kothari, J. Malone Published online by Cambridge University Press: 23 March 2020, p. S169 The UK General Medical Council highlights the centrality of effective communication, reflective practice and the doctor-patient relationship in medical practice. A decline in empathy has been documented as occurring within clinical and early postgraduate years, potentially affecting diagnostic processes and patient engagement. Access to Balint groups can enhance awareness of the patient beyond the medical model, but remains limited at many UK medical schools. This scheme offered Balint groups to Bristol medical students in their first clinical year, demonstrating that this method is relevant beyond psychiatry. Initial focus groups with medical students indicated that many felt unable to discuss distressing aspects of clinical encounters. During 2013-2014, a Balint scheme run by psychiatry trainees was started for 150 students in their psychiatry placements. During 2014-15, the scheme was introduced to all third-year medical students on their medicine/surgery placement. Balint leaders have group supervision with a psychoanalytic psychotherapist. 
Evaluation of the scheme was based on pre-and post-group questionnaires and leaders' process notes. Sixteen groups led by 12 trainees were run twice over the year to serve 246 medical students. Two example cases are discussed here. Students appreciated the chance to discuss complex encounters with patients in a supportive peer environment, and work through a range of emotionally challenging issues. Novel aspects of this work include the implementation of Balint groups within medicine and surgery placements; the enrolment of psychiatry trainees as leaders with group supervision and leadership training workshops from the UK Balint Society; and the scale of the scheme. Disclosure of interest The authors have not supplied their declaration of competing interest. Outcomes in the age of competency-based medical education: Recommendations for emergency medicine training in Canada from the 2019 symposium of academic emergency physicians Teresa M. Chan, Quinten S. Paterson, Andrew K. Hall, Fareen Zaver, Robert A. Woods, Stanley J. Hamstra, Alexandra Stefan, Daniel K. Ting, Brent Thoma Journal: Canadian Journal of Emergency Medicine / Volume 22 / Issue 2 / March 2020 Published online by Cambridge University Press: 25 March 2020, pp. 204-214 Print publication: March 2020 The national implementation of competency-based medical education (CBME) has prompted an increased interest in identifying and tracking clinical and educational outcomes for emergency medicine training programs. For the 2019 Canadian Association of Emergency Physicians (CAEP) Academic Symposium, we developed recommendations for measuring outcomes in emergency medicine training in the context of CBME to assist educational leaders and systems designers in program evaluation. We conducted a three-phase study to generate educational and clinical outcomes for emergency medicine (EM) education in Canada. First, we elicited expert and community perspectives on the best educational and clinical outcomes through a structured consultation process using a targeted online survey. We then qualitatively analyzed these responses to generate a list of suggested outcomes. Last, we presented these outcomes to a diverse assembly of educators, trainees, and clinicians at the CAEP Academic Symposium for feedback and endorsement through a voting process. Academic Symposium attendees endorsed the measurement and linkage of CBME educational and clinical outcomes. Twenty-five outcomes (15 educational, 10 clinical) were derived from the qualitative analysis of the survey results and the most important short- and long-term outcomes (both educational and clinical) were identified. These outcomes can be used to help measure the impact of CBME on the practice of Emergency Medicine in Canada to ensure that it meets both trainee and patient needs. Childhood trauma and cognitive functioning in individuals at clinical high risk (CHR) for psychosis T. Velikonja, E. Velthorst, J. Zinberg, T. D. Cannon, B. A. Cornblatt, D. O. Perkins, K. S. Cadenhead, M. T. Tsuang, J. Addington, S. W. Woods, T. McGlashan, D. H. Mathalon, W. Stone, M. Keshavan, L. Seidman, C. E. Bearden Published online by Cambridge University Press: 21 January 2020, pp. 1-12 Evidence suggests that early trauma may have a negative effect on cognitive functioning in individuals with psychosis, yet the relationship between childhood trauma and cognition among those at clinical high risk (CHR) for psychosis remains unexplored. 
Our sample consisted of 626 CHR children and 279 healthy controls who were recruited as part of the North American Prodrome Longitudinal Study 2. Childhood trauma up to the age of 16 (psychological, physical, and sexual abuse, emotional neglect, and bullying) was assessed by using the Childhood Trauma and Abuse Scale. Multiple domains of cognition were measured at baseline and at the time of psychosis conversion, using standardized assessments. In the CHR group, there was a trend for better performance in individuals who reported a history of multiple types of childhood trauma compared with those with no/one type of trauma (Cohen d = 0.16). A history of multiple trauma types was not associated with greater cognitive change in CHR converters over time. Our findings tentatively suggest there may be different mechanisms that lead to CHR states. Individuals who are at clinical high risk who have experienced multiple types of childhood trauma may have more typically developing premorbid cognitive functioning than those who reported minimal trauma do. Further research is needed to unravel the complexity of factors underlying the development of at-risk states. Comparative efficacy of teat sealants given prepartum for prevention of intramammary infections and clinical mastitis: a systematic review and network meta-analysis Antimicrobial stewardship in livestock C. B. Winder, J. M. Sargeant, D. Hu, C. Wang, D. F. Kelton, S. J. Leblanc, T. F. Duffield, J. Glanville, H. Wood, K. J. Churchill, J. Dunn, M. D. Bergevin, K. Dawkins, S. Meadows, B. Deb, M. Reist, C. Moody, A. M. O'Connor Journal: Animal Health Research Reviews / Volume 20 / Issue 2 / December 2019 Published online by Cambridge University Press: 21 February 2020, pp. 182-198 Print publication: December 2019 A systematic review and network meta-analysis were conducted to assess the relative efficacy of internal or external teat sealants given at dry-off in dairy cattle. Controlled trials were eligible if they assessed the use of internal or external teat sealants, with or without concurrent antimicrobial therapy, compared to no treatment or an alternative treatment, and measured one or more of the following outcomes: incidence of intramammary infection (IMI) at calving, IMI during the first 30 days in milk (DIM), or clinical mastitis during the first 30 DIM. Risk of bias was based on the Cochrane Risk of Bias 2.0 tool with modified signaling questions. From 2280 initially identified records, 32 trials had data extracted for one or more outcomes. Network meta-analysis was conducted for IMI at calving. Use of an internal teat sealant (bismuth subnitrate) significantly reduced the risk of new IMI at calving compared to non-treated controls (RR = 0.36, 95% CI 0.25–0.72). For comparisons between antimicrobial and teat sealant groups, concerns regarding precision were seen. Synthesis of the primary research identified important challenges related to the comparability of outcomes, replication and connection of interventions, and quality of reporting of study conduct. Comparative efficacy of blanket versus selective dry-cow therapy: a systematic review and pairwise meta-analysis C. B. Winder, J. M. Sargeant, D. F. Kelton, S. J. Leblanc, T. F. Duffield, J. Glanville, H. Wood, K. J. Churchill, J. Dunn, M. d. Bergevin, K. Dawkins, S. Meadows, A. M. O'Connor A systematic review and meta-analysis were conducted to determine the efficacy of selective dry-cow antimicrobial therapy compared to blanket therapy (all quarters/all cows). 
Controlled trials were eligible if any of the following were assessed: incidence of clinical mastitis during the first 30 DIM, frequency of intramammary infection (IMI) at calving, or frequency of IMI during the first 30 DIM. From 3480 identified records, nine trials were data extracted for IMI at calving. There was an insufficient number of trials to conduct meta-analysis for the other outcomes. Risk of IMI at calving in selectively treated cows was higher than blanket therapy (RR = 1.34, 95% CI = 1.13, 1.16), but substantial heterogeneity was present (I2 = 58%). Subgroup analysis showed that, for trials using internal teat sealants, there was no difference in IMI risk at calving between groups, and no heterogeneity was present. For trials not using internal teat sealants, there was an increased risk in cows assigned to a selective dry-cow therapy protocol, compared to blanket treatment, with substantial heterogeneity in this subgroup. However, the small number of trials and heterogeneity in the subgroup without internal teat sealants suggests that the relative risk between treatments may differ from the determined point estimates based on other unmeasured factors. Comparative efficacy of antimicrobial treatments in dairy cows at dry-off to prevent new intramammary infections during the dry period or clinical mastitis during early lactation: a systematic review and network meta-analysis A systematic review and network meta-analysis were conducted to assess the relative efficacy of antimicrobial therapy given to dairy cows at dry-off. Eligible studies were controlled trials assessing the use of antimicrobials compared to no treatment or an alternative treatment, and assessed one or more of the following outcomes: incidence of intramammary infection (IMI) at calving, incidence of IMI during the first 30 days in milk (DIM), or incidence of clinical mastitis during the first 30 DIM. Databases and conference proceedings were searched for relevant articles. The potential for bias was assessed using the Cochrane Risk of Bias 2.0 algorithm. From 3480 initially identified records, 45 trials had data extracted for one or more outcomes. Network meta-analysis was conducted for IMI at calving. The use of cephalosporins, cloxacillin, or penicillin with aminoglycoside significantly reduced the risk of new IMI at calving compared to non-treated controls (cephalosporins, RR = 0.37, 95% CI 0.23–0.65; cloxacillin, RR = 0.55, 95% CI 0.38–0.79; penicillin with aminoglycoside, RR = 0.42, 95% CI 0.26–0.72). Synthesis revealed challenges with a comparability of outcomes, replication of interventions, definitions of outcomes, and quality of reporting. The use of reporting guidelines, replication among interventions, and standardization of outcome definitions would increase the utility of primary research in this area. A Sensitivity Analysis of the Application of Integrated Species Distribution Models to Mobile Species: A Case Study with the Endangered Baird's Tapir Cody J Schank, Michael V Cove, Marcella J Kelly, Clayton K Nielsen, Georgina O'Farrill, Ninon Meyer, Christopher A Jordan, Jose F González-Maya, Diego J Lizcano, Ricardo Moreno, Michael Dobbins, Victor Montalvo, Juan Carlos Cruz Díaz, Gilberto Pozo Montuy, J Antonio de la Torre, Esteban Brenes-Mora, Margot A Wood, Jessica Gilbert, Walter Jetz, Jennifer A Miller Journal: Environmental Conservation / Volume 46 / Issue 3 / September 2019 Published online by Cambridge University Press: 12 June 2019, pp. 
184-192 Print publication: September 2019 Species distribution models (SDMs) are statistical tools used to develop continuous predictions of species occurrence. 'Integrated SDMs' (ISDMs) are an elaboration of this approach with potential advantages that allow for the dual use of opportunistically collected presence-only data and site-occupancy data from planned surveys. These models also account for survey bias and imperfect detection through the use of a hierarchical modelling framework that separately estimates the species–environment response and detection process. This is particularly helpful for conservation applications and predictions for rare species, where data are often limited and prediction errors may have significant management consequences. Despite this potential importance, ISDMs remain largely untested under a variety of scenarios. We performed an exploration of key modelling decisions and assumptions on an ISDM using the endangered Baird's tapir (Tapirus bairdii) as a test species. We found that site area had the strongest effect on the magnitude of population estimates and underlying intensity surface and was driven by estimates of model intercepts. Selecting a site area that accounted for the individual movements of the species within an average home range led to population estimates that coincided with expert estimates. ISDMs that do not account for the individual movements of species will likely lead to less accurate estimates of species intensity (number of individuals per unit area) and thus overall population estimates. This bias could be severe and highly detrimental to conservation actions if uninformed ISDMs are used to estimate global populations of threatened and data-deficient species, particularly those that lack natural history and movement information. However, the ISDM was consistently the most accurate model compared to other approaches, which demonstrates the importance of this new modelling framework and the ability to combine opportunistic data with systematic survey data. Thus, we recommend researchers use ISDMs with conservative movement information when estimating population sizes of rare and data-deficient species. ISDMs could be improved by using a similar parameterization to spatial capture–recapture models that explicitly incorporate animal movement as a model parameter, which would further remove the need for spatial subsampling prior to implementation. Maternal neglect and the serotonin system are associated with daytime sleep in infant rhesus monkeys Alexander Baxter, Elizabeth K. Wood, Christina S. Barr, Daniel B. Kay, Stephen J. Suomi, J. Dee Higley Journal: Development and Psychopathology / Volume 32 / Issue 1 / February 2020 Published online by Cambridge University Press: 04 February 2019, pp. 1-10 Print publication: February 2020 Environmental and biological factors contribute to sleep development during infancy. Parenting plays a particularly important role in modulating infant sleep, potentially via the serotonin system, which is itself involved in regulating infant sleep. We hypothesized that maternal neglect and serotonin system dysregulation would be associated with daytime sleep in infant rhesus monkeys. Subjects were nursery-reared infant rhesus macaques (n = 287). During the first month of life, daytime sleep-wake states were rated bihourly (0800–2100). Infants were considered neglected (n = 16) if before nursery-rearing, their mother repeatedly failed to retrieve them. 
Serotonin transporter genotype and concentrations of cerebrospinal fluid 5-hydroxyindoleacetic acid (5-HIAA) were used as markers of central serotonin system functioning. t tests showed that neglected infants were observed sleeping less frequently, weighed less, and had higher 5-HIAA than non-neglected nursery-reared infants. Regression revealed that serotonin transporter genotype moderated the relationship between 5-HIAA and daytime sleep: in subjects possessing the Ls genotype, there was a positive correlation between 5-HIAA and daytime sleep, whereas in subjects possessing the LL genotype there was no association. These results highlight the pivotal roles that parents and the serotonin system play in sleep development. Daytime sleep alterations observed in neglected infants may partially derive from serotonin system dysregulation. Point-prevalence study of antimicrobial use in public hospitals in southern Sri Lanka identifies opportunities for improving prescribing practices Tianchen Sheng, Gaya B. Wijayaratne, Thushani M. Dabrera, Richard J. Drew, Ajith Nagahawatte, Champica K. Bodinayake, Ruvini Kurukulasooriya, Truls Østbye, Kristin J. Nagaro, Cherin De Silva, Hasini Ranawakaarachchi, A. T. Sudarshana, Deverick J. Anderson, Christopher W. Woods, L. Gayani Tillekeratne Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 2 / February 2019 Published online by Cambridge University Press: 07 December 2018, pp. 224-227 A point-prevalence study of antimicrobial use among inpatients at 5 public hospitals in Sri Lanka revealed that 54.6% were receiving antimicrobials: 43.1% in medical wards, 68.0% in surgical wards, and 97.6% in intensive care wards. Amoxicillin-clavulanate was most commonly used for major indications. Among patients receiving antimicrobials, 31.0% received potentially inappropriate therapy. Meta-analysis across Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) consortium provides evidence for an association of serum vitamin D with pulmonary function Jiayi Xu, Traci M. Bartz, Geetha Chittoor, Gudny Eiriksdottir, Ani W. Manichaikul, Fangui Sun, Natalie Terzikhan, Xia Zhou, Sarah L. Booth, Guy G. Brusselle, Ian H. de Boer, Myriam Fornage, Alexis C. Frazier-Wood, Mariaelisa Graff, Vilmundur Gudnason, Tamara B. Harris, Albert Hofman, Ruixue Hou, Denise K. Houston, David R. Jacobs, Stephen B. Kritchevsky, Jeanne Latourelle, Rozenn N. Lemaitre, Pamela L. Lutsey, George O'Connor, Elizabeth C. Oelsner, James S. Pankow, Bruce M. Psaty, Rebecca R. Rohde, Stephen S. Rich, Jerome I. Rotter, Lewis J. Smith, Bruno H. Stricker, V. Saroja Voruganti, Thomas J. Wang, M. Carola Zillikens, R. Graham Barr, Josée Dupuis, Sina A. Gharib, Lies Lahousse, Stephanie J. London, Kari E. North, Albert V. Smith, Lyn M. Steffen, Dana B. Hancock, Patricia A. Cassano Journal: British Journal of Nutrition / Volume 120 / Issue 10 / 28 November 2018 Published online by Cambridge University Press: 12 September 2018, pp. 1159-1170 Print publication: 28 November 2018 The role that vitamin D plays in pulmonary function remains uncertain. Epidemiological studies reported mixed findings for serum 25-hydroxyvitamin D (25(OH)D)–pulmonary function association. We conducted the largest cross-sectional meta-analysis of the 25(OH)D–pulmonary function association to date, based on nine European ancestry (EA) cohorts (n 22 838) and five African ancestry (AA) cohorts (n 4290) in the Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium. 
Data were analysed using linear models by cohort and ancestry. Effect modification by smoking status (current/former/never) was tested. Results were combined using fixed-effects meta-analysis. Mean serum 25(OH)D was 68 (sd 29) nmol/l for EA and 49 (sd 21) nmol/l for AA. For each 1 nmol/l higher 25(OH)D, forced expiratory volume in the 1st second (FEV1) was higher by 1·1 ml in EA (95 % CI 0·9, 1·3; P<0·0001) and 1·8 ml (95 % CI 1·1, 2·5; P<0·0001) in AA (Prace difference=0·06), and forced vital capacity (FVC) was higher by 1·3 ml in EA (95 % CI 1·0, 1·6; P<0·0001) and 1·5 ml (95 % CI 0·8, 2·3; P=0·0001) in AA (Prace difference=0·56). Among EA, the 25(OH)D–FVC association was stronger in smokers: per 1 nmol/l higher 25(OH)D, FVC was higher by 1·7 ml (95 % CI 1·1, 2·3) for current smokers and 1·7 ml (95 % CI 1·2, 2·1) for former smokers, compared with 0·8 ml (95 % CI 0·4, 1·2) for never smokers. In summary, the 25(OH)D associations with FEV1 and FVC were positive in both ancestries. In EA, a stronger association was observed for smokers compared with never smokers, which supports the importance of vitamin D in vulnerable populations. LifeLab Southampton: a programme to engage adolescents with DOHaD concepts as a tool for increasing health literacy in teenagers –a pilot cluster-randomized control trial DOHaD on International Women's Day 2020 K. Woods-Townsend, H. Leat, J. Bay, L. Bagust, H. Davey, D. Lovelock, A. Christodoulou, J. Griffiths, M. Grace, K. Godfrey, M. Hanson, H. Inskip Journal: Journal of Developmental Origins of Health and Disease / Volume 9 / Issue 5 / October 2018 Print publication: October 2018 Adolescence is a critical time point in the lifecourse. LifeLab is an educational intervention engaging adolescents in understanding Developmental Origins of Health and Disease (DOHaD) concepts and the impact of the early life environment on future health, benefitting both their long-term health and that of the next generation. We aimed to assess whether engaging adolescents with DOHaD concepts improves scientific literacy and whether engagement alone improves health behaviours. Six schools were randomized, three to intervention and three to control. Outcome measures were changed in knowledge, and intended and actual behaviour in relation to diet and lifestyle. A total of 333 students completed baseline and follow-up questionnaires. At 12 months, intervention students showed greater understanding of DOHaD concepts. No sustained changes in behaviours were identified. Adolescents' engagement with DOHaD concepts can be improved and maintained over 12 months. Such engagement does not itself translate into behaviour change. The intervention has consequently been revised to include additional components beyond engagement alone. Ice accretion and release in fuel systems: Large-scale rig investigations J. K.-W. Lam, R. D. Woods Journal: The Aeronautical Journal / Volume 122 / Issue 1253 / July 2018 Published online by Cambridge University Press: 25 May 2018, pp. 1051-1082 Print publication: July 2018 Following the B777 accident at Heathrow in 2008, the certification authorities required Boeing, Airbus, and Rolls-Royce to conduct icing analysis and tests of their Rolls-Royce Trent engined aircraft fuel systems. The experience and the test data gained from these activities were distilled and released by Airbus to the EASA ICAR project for research and analysis. This paper provided an overview of the Airbus ice accretion and release tests. 
Brief narratives on the test rigs, the test procedure and methodology were given and key findings from the ice release investigations were presented. The accreted ice thickness was non-uniform; however, it is found typically c. $\mathrm{2\;\mathrm{m}\mathrm{m}}$ thick. Analysis of the accreted ice collected from the rig tests showed the ice was very porous. The porosity is very much dependant on how the water was introduced and mixed in the icing test rigs. The standard Airbus method produced accreted ice of higher porosity compared to that produced by the injection method. The porosity of the accreted ice from Airbus icing investigations was found to be c. 0.90. The relationship of permeability with porosity was inferred from published data and models for freshly fallen snow in the atmosphere. Derived permeability $\mathrm{7.0\times 10^{-9}\;\mathrm{\mathrm{\mathrm{m}}^{\mathrm{2}}}}$ was then applied in the CFD analysis of pipe flow with a porous wall lining to determine the shear stress on the accreted ice. It showed that 25%, 50% and 75% of the accreted ice has interface shear strength of less than $\mathrm{15.3\;\mathrm{Pa}}$ , $\mathrm{20.7\;\mathrm{Pa}}$ and $\mathrm{26.1\;\mathrm{Pa}}$ , respectively. Pedagogical Value of Polling-Place Observation by Students Christopher B. Mann, Gayle A. Alberda, Nathaniel A. Birkhead, Yu Ouyang, Chloe Singer, Charles Stewart, Michael C. Herron, Emily Beaulieu, Frederick Boehmke, Joshua Boston, Francisco Cantu, Rachael Cobb, David Darmofal, Thomas C. Ellington, Charles J. Finocchiaro, Michael Gilbert, Victor Haynes, Brian Janssen, David Kimball, Charles Kromkowski, Elena Llaudet, Matthew R. Miles, David Miller, Lindsay Nielson, Costas Panagopoulos, Andrew Reeves, Min Hee Seo, Haley Simmons, Corwin Smidt, Robert Stein, Rachel VanSickle-Ward, Abby K. Wood, Julie Wronski Journal: PS: Political Science & Politics / Volume 51 / Issue 4 / October 2018 Good education requires student experiences that deliver lessons about practice as well as theory and that encourage students to work for the public good—especially in the operation of democratic institutions (Dewey 1923; Dewy 1938). We report on an evaluation of the pedagogical value of a research project involving 23 colleges and universities across the country. Faculty trained and supervised students who observed polling places in the 2016 General Election. Our findings indicate that this was a valuable learning experience in both the short and long terms. Students found their experiences to be valuable and reported learning generally and specifically related to course material. Postelection, they also felt more knowledgeable about election science topics, voting behavior, and research methods. Students reported interest in participating in similar research in the future, would recommend other students to do so, and expressed interest in more learning and research about the topics central to their experience. Our results suggest that participants appreciated the importance of elections and their study. Collectively, the participating students are engaged and efficacious—essential qualities of citizens in a democracy. Magnesium II Index measurements from SORCE SOLSTICE and GOES-16 EUVS Martin Snow, Janet Machol, Francis G. Eparvier, Andrew R. Jones, William E. McClintock, Thomas N. Woods Journal: Proceedings of the International Astronomical Union / Volume 13 / Issue S340 / February 2018 The solar magnesium II core-to-wing ratio has been a well-studied proxy for chromospheric activity since 1978. 
Daily measurements at high spectral (0.1 nm) resolution began with the launch of the Solar Radiation and Climate Experiment (SORCE) in 2003. The next generation of measurements from the Extreme Ultraviolet Sensor (EUVS) on the Geostationary Operational Environmental Satellite 16 (GOES-16) will add high time cadence (every 30 seconds) to the observational Mg II irradiance record. We present a comparison of the two measurements during the period of overlap.
Tensegrity Explained

There is a new internet trend called "tensegrity" – an amalgamation of the words tension and integrity. It is basically a trend of videos showing how objects appear to float above a structure while experiencing tensions that appear to pull parts of the floating object downwards.

In the diagram below, the red vectors show the tensions acting on the "floating" object while the green vector shows the weight of the object. The main force that makes this possible is the upward tension (shown below) exerted by the string from which the lowest point of the object is suspended. The other tensions are downward and serve to balance the moment created by the weight of the object. The centre of gravity of the "floating" structure lies just in front of the supporting string. The two smaller downward vectors at the back due to the strings balance the moment due to the weight, and give the structure stability sideways. This is a fun demonstration to teach the principle of moments and concepts of equilibrium (a rough worked example of the moment balance, with assumed numbers, is given at the end of this post).

The next image labels the forces acting on the upper structure. Notice that the centre of gravity lies somewhere in empty space due to its shape. Only the forces acting on the upper half of the structure are drawn in this image to illustrate why it is able to remain in equilibrium.

These tensegrity structures are very easy to build if you understand the physics behind them. Some tips on building such structures:

- Make the two strings exerting the downward tensions easy to adjust by using technic pins to stick them into bricks with holes. You can simply pull to release more string in order to achieve the right balance.
- The two strings should be sufficiently far apart to prevent the floating structure from tilting too easily to the side.
- The centre of gravity of the floating structure must be in front of the string exerting the upward tension.
- The base must be wide enough to provide some stability so that the whole structure does not topple.

Here's another tensegrity structure that I built: this time, with a Lego construction theme.

Apart from using Lego, I have also 3D-printed a tensegrity structure that only requires rubber bands to hold up. In this case, the centre of gravity of the upper structure is somewhere more central with respect to the base structure. Hence, 3 rubber bands of almost equal tension will be used to provide the balance. The STL file for the 3D model can be downloaded from Thingiverse.com.

The main challenge in assembling a tensegrity structure is the adjustment of the tensions such that the upper structure is balanced. One way to simplify that, for beginners, is to use one that is supported by rubber bands, as the rubber bands can adjust their lengths according to the tensions required.

3D-printed tensegrity table balanced by rubber bands

Another tip is to use some blu-tack instead of tying the knots dead, such as in the photo below. This is a 3D-printed structure, also from Thingiverse.

3D-printed tensegrity table held up and balanced by nylon string
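As promised above, here is a rough worked example of the moment balance for the floating section. All numbers are assumed purely for illustration and are not measurements of any of the builds shown here. Suppose the upper structure weighs $W = 0.6\ \text{N}$, its centre of gravity lies $d_W = 2\ \text{cm}$ in front of the point where the supporting string is attached, and the two rear strings act $d_R = 6\ \text{cm}$ behind that point. Taking moments about the attachment point of the supporting string, and then applying vertical equilibrium:

$$W\,d_W=(T_1+T_2)\,d_R \quad\Rightarrow\quad T_1+T_2=\frac{0.6\ \text{N}\times 2\ \text{cm}}{6\ \text{cm}}=0.2\ \text{N}$$
$$T_{\text{up}}=W+T_1+T_2=0.6+0.2=0.8\ \text{N}$$

Notice that the single supporting string carries more than the weight of the floating section, which is why the rear strings should not be over-tightened.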
The ability to resolve vectors allows students to apply the principle of moments to understand how the vertical components of each force vary. This is meant for the JC1 topic of Forces. To embed into SLS, you can use the following code: <iframe scrolling="no" title="Equilibrium of a Wall Shelf" src="https://www.geogebra.org/material/iframe/id/xdbr7qr5/width/700/height/500/border/888888/sfsb/true/smb/false/stb/false/stbh/false/ai/false/asb/false/sri/false/rc/false/ld/false/sdz/false/ctl/false" width="700px" height="500px" style="border:0px;"> </iframe>

Forces in Equilibrium

While preparing for a bridging class for those JAE JC1s who did not do pure physics in O-levels, I prepared an app on using a vector triangle to "solve problems for a static point mass under the action of 3 forces for 2-dimensional cases". For A-level students, they can be encouraged to use either the sine rule or the cosine rule to solve for magnitudes of forces instead of scale drawing, which is often unreliable. For students who are not familiar with these rules, here is a simple summary:

Sine Rule: If you are trying to find the length of a side while knowing only two angles and one side, use the sine rule: $$\dfrac{A}{\sin{a}}=\dfrac{B}{\sin{b}}$$

Cosine Rule: If you are trying to find the length of a side while knowing only one angle and two sides, use the cosine rule: $$A^2 = B^2 + C^2 - 2BC\cos{a}$$

Forces on a ladder on a wall

A ladder rests on rough ground and leans against a rough wall. Its weight W acts through the centre of gravity G. Forces also act on the ladder at P and Q. These forces are P and Q respectively. Which vector triangle represents the forces on the ladder?

Using Loom and GeoGebra to explain a tutorial question

It's Day 1 of the full home-based learning month in Singapore! As teachers all over Singapore scramble to understand the use of the myriad EdTech tools, I have finally come to settle on a few:
- Google Meet to do video conferencing
- Google Classroom for assignments that require marking
- Student Learning Space for students' self-directed learning, collaborative discussion and formative assessment
- Loom for lecture recording
- GeoGebra for visualisation

The following is a video that was created using Loom to explain a question on why the tension in a rope on which a weight is balanced increases when the rope straightens.

Lesson Plan for Online Lecture on Forces

I am using this post as a way to document my brief plans for tomorrow's Google Meet lecture with the LOA students as well as to park the links to the resources and tools that I intend to use for easy retrieval.

Instruction Objectives:
- apply the principle of moments to new situations or to solve related problems.
- show an understanding that, when there is no resultant force and no resultant torque, a system is in equilibrium.
- use a vector triangle to represent forces in equilibrium.
- *derive, from the definitions of pressure and density, the equation $p=\rho g h$.
- *solve problems using the equation $p=\rho g h$.
- *show an understanding of the origin of the force of upthrust acting on a body in a fluid.

Activity 1: Find CG of ruler demonstration

Having shown them the demonstration last week, I will explain the reason why one can find the CG this way: As I move the fingers inward, there is friction between the ruler and my finger. This friction depends on the normal contact force as $f=\mu N$.
Drawing the free-body diagram of the ruler, there are two normal contact forces acting on the ruler by my fingers. The sum of these two upward forces must be equal to the weight of the ruler. These forces vary depending on their distance from the CG. Taking moments about the centre of gravity, $$N_1\times d_1=N_2 \times d_2$$ The finger that is nearer to the CG will always have a larger normal contact force and hence, more friction. Hence, the ruler will tend to stop sliding along that finger and allow the other finger to slide nearer. When that other finger becomes closer to the CG, the ruler also stops sliding along it and tends to then slide along the first finger. This keeps repeating until both fingers reach somewhere near the CG.

Activity 2: Moments of a Force at an Angle to the line between Pivot and Point of Action
- Recollection of the slides on moment of a force and torque of a couple.
- Give them an MCQ question to apply their learning using Nearpod's Quiz function https://np1.nearpod.com/presentation.php?id=47032717
- Ask students to sketch on Nearpod's "Draw It" slides the "perpendicular distance between axis of rotation and line of action of force" and "perpendicular distance between the lines of action of the couple" for Examples 5 and 6 of the lecture notes respectively.
- Mention that the axis of rotation is commonly known as where the pivot is, and that the perpendicular distance is also the "shortest distance".

Activity 3: Conditions for Equilibrium
- State the conditions for translational and rotational equilibrium.
- Show how translational equilibrium is due to the resultant force being zero, using vector addition.
- Show how rotational equilibrium is due to the resultant moment about any axis being zero, by equating the sum of clockwise moments to the sum of anticlockwise moments.
- Go through example 7 (2 methods: resolution of vectors and closed vector triangle). Useful tip: 3 non-parallel coplanar forces acting on a rigid body that is in equilibrium must act through the same point. Use 2006P1Q6 as example.
- Go through example 8. For 8(b), there are two methods: using the concept that the 3 forces pass through the same point, or a closed triangle.

For next lecture (pressure and upthrust):

Activity 4: Hydrostatic Pressure
- Derive from the definitions of pressure and density that $p = h\rho g$. Note that this is an O-level concept.

Activity 5: Something to sink about
- Get students to explain how the ketchup packet sinks and floats. Students are likely to come up with answers related to relative density. Ask them to draw a free body diagram of the ketchup packet. However, we will use the concept of the forces acting on the ketchup packet such as weight and upthrust to explain later.

Activity 6: Origin of Upthrust
I designed this GeoGebra app to demonstrate that forces due to pressure at different depths are different. For an infinitesimal (extremely small) object, the forces are equal in magnitude even though they are of different directions, which is why we say pressure acts equally in all directions at a point. However, when the volume of the object increases, you can clearly see the difference in magnitudes above and below the object. This gives rise to a net force that acts upwards – known as upthrust.
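To make the link between hydrostatic pressure and upthrust explicit, here is a short worked derivation I am adding for a simple assumed geometry: a block of horizontal cross-sectional area $A$ and height $h$, with its top face at depth $d$ below the surface. Using $p = h\rho g$ for the pressure at each depth, the net upward force from the fluid is

$$F_{\text{net}} = p_{\text{bottom}}A - p_{\text{top}}A = (d+h)\rho g A - d\rho g A = \rho g (Ah) = \rho g V,$$

which is exactly the weight of fluid displaced by the block, i.e. Archimedes' principle. The horizontal forces on the side faces cancel in pairs, so only this vertical imbalance remains.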
Michael Straka
Class Groups for Cryptographic Accumulators

Late last year Benedikt Bünz and Ben Fisch, both PhD students at Stanford University, released a paper along with Dan Boneh titled "Batching Techniques for Accumulators with Applications to IOPs and Stateless Blockchains". In it they use some basic group theory to build a dynamic accumulator, which allows for storing and deleting elements in addition to the use of both membership and non-membership proofs. It can be used to create a vector commitment data structure analogous to Merkle trees, with the main difference being that it allows for constant-sized inclusion proofs, whereas a Merkle tree has $O(\log n)$-sized inclusion proofs, where $n$ is the number of elements being stored. If the stored set is big enough, this can be a pretty big deal.

One of the main security assumptions of their construction is that it relies on a group of unknown order. In particular the strong RSA assumption must hold, meaning it is hard to compute a chosen root of a random group element. There are two good candidates for such a group. The simpler of the two is known as an RSA group, the multiplicative group of integers modulo $N$ where $N = pq$ for some unknown factorization. This however requires a trusted setup, as whoever generates the modulus $N$ must be trusted to discard $p$ and $q$. The alternative is known as a class group of imaginary quadratic order, which eliminates the need for a trusted setup and which we will be exploring in this post.

Class groups come from a long line of mathematical research, and were originally discovered by Gauss in his Disquisitiones Arithmeticae in 1801. The math that's been developed on top of his work in the past two centuries involves some decently complex algebra. I'll explain most of the concepts used but will expect that you know what groups, rings, fields, homomorphisms, and isomorphisms are in an algebraic context. Feel free to look them up if not. This post is meant to be an introduction to what a class group is and to summarize the most important results to consider when implementing them as a group of unknown order; detailed proofs can be found in the list of further readings below.

What is a class group?

There are two equivalent ways to understand the class group which are isomorphic to one another. One, coming from a subfield of mathematics known as algebraic number theory, is known as the ideal class group, which is the quotient of fractional ideals by fractional principal ideals $J_K / P_K$ of a ring of integers $O_K$ of a quadratic extension $K = \mathbb{Q}(\sqrt{d})$ of the rational numbers. Later we'll walk through what this means step-by-step. The other way to look at the class group is known as the form class group and comes from the study of binary quadratic forms, or equations of the form $$ ax^2 + bxy + cy^2 = n $$ by working with an equivalence relation over forms which all have the same discriminant $b^2 - 4ac$. These two views are actually pretty different; it's not at all obvious that they represent the same object! Either is parameterized by its integer discriminant $\Delta$, as there is a one-to-one correspondence between class groups and valid discriminants.
When using the group for cryptographic purposes as a group of unknown order there are three main choices to be made:
- What discriminant should be chosen
- How to represent the group in a numerical setting
- What algorithms should be used when performing the group operation

The ideal class group makes it easier to understand how and why a particular form of discriminant should be chosen. As we will see, we want the discriminant to be the negation of a prime congruent to 3 mod 4, or $-p$ where $p \equiv 3$ (mod 4). On the other hand, the form class group is easier to represent and perform operations on numerically. With this in mind, we will start with the ideal class group and move on to the form class group from there.

Ideal Class Group

In basic field theory there is the concept of a field extension, or a field containing another field as a subfield. The degree of an extension is the size of the larger field's basis over the smaller base field. An example of this is the complex numbers $\mathbb{C}$, which extend the real numbers $\mathbb{R}$ by adding in $\sqrt{-1} = i$, so that $\mathbb{C} = \mathbb{R}(i)$. This is known as a degree 2 or quadratic extension, because the basis for $\mathbb{C}$ as a vector space over $\mathbb{R}$ is of size 2, and is given by $(1, i)$. Similarly, we can construct generalizations of the rational numbers by adding in $\sqrt{d}$ to $\mathbb{Q}$, for some square-free number $d$. In this case a basis for $\mathbb{Q}(\sqrt{d})$ over $\mathbb{Q}$ is also size 2 and is given by $(1, \sqrt{d})$, making $\mathbb{Q}(\sqrt{d})$ a quadratic extension of $\mathbb{Q}$. More formally, $\mathbb{Q}(\sqrt{d})$ is the smallest field containing both $\mathbb{Q}$ and $\sqrt{d}$. Once we have our quadratic extension $K = \mathbb{Q}(\sqrt{d})$, we can also get a corresponding generalization of the integers known as the ring of integers of $K$, denoted $O_K$. To obtain $O_K$, we look at $K$ and take all of its elements which are the roots of some monic polynomial with integer coefficients, also known as an integral polynomial, i.e. the set of all $\alpha \in K$ such that there exists some polynomial $$ p(x) = x^n + b_{n-1}x^{n-1} + \dots + b_1x + b_0 $$ where $b_{n-1}, …, b_0 \in \mathbb{Z}$ and $p(\alpha) = 0$. It turns out that $O_\mathbb{Q} = \mathbb{Z}$, so that the ring of integers of the rational numbers is simply the integers. In fact, for any finite-degree extension $K$ of $\mathbb{Q}$, $O_K$ contains $\mathbb{Z}$. As its name suggests, $O_K$ forms a ring under addition and multiplication. Something that would help us to understand $O_K$ would be to find some analogue of a basis over a vector space for it. We are helped here by the concept of a module, which is a generalization of vector spaces for rings. As an example, a $\mathbb{Z}$-module $M$ consists of the integers $\mathbb{Z}$, which form a ring, acting on some abelian group $(M,+)$. In other words, the ring of integers acts like a set of scalars and the abelian group like a set of vectors. Modules do not have as many guarantees as vector spaces; for example, a module may not have a basis. Because every ring is an abelian group under addition, every ring is a $\mathbb{Z}$-module. This implies we can act on any ring of integers $O_K$ by the integers $\mathbb{Z}$ in a way analogous to acting on vectors by scalars. While not every module has a basis as a $\mathbb{Z}$-module (a "$\mathbb{Z}$-basis"), it is true that every ring of integers $O_K$ has a $\mathbb{Z}$-basis.
More specifically, if $K$ is a degree $n$ extension of $\mathbb{Q}$, then there is some $b_1, …, b_n \in O_K$ such that any $x \in O_K$ can be uniquely written as $$ x = \sum_{i=1}^n a_i b_i $$ where $a_1, …, a_n \in \mathbb{Z}$. What are the possible $\mathbb{Z}$-bases for a ring of integers of a quadratic extension? This turns out to be pretty simple. We know $\sqrt{d} \in O_K$ since $\sqrt{d} \in K = \mathbb{Q}(\sqrt{d})$ and it is the root of the integral polynomial $x^2 - d$. It also happens to be the case that if $d \equiv 1$ (mod 4) then for any $x + y \sqrt{d} \in O_K$ where $x,y \in \mathbb{Z}$ we can write $$ x + y \sqrt{d} = a + b \frac{1 + \sqrt{d}}{2} $$ for some $a,b \in \mathbb{Z}$. It can be shown that no other elements are in $O_K$, giving us two possible $\mathbb{Z}$-bases depending on $d$. If $d \equiv 2,3$ (mod 4) then we have a $\mathbb{Z}$-basis of $(1, \sqrt{d})$, and if $d \equiv 1$ (mod 4) we have a $\mathbb{Z}$-basis of $(1, \frac{1+\sqrt{d}}{2})$. Another way of phrasing this is to say that if $d \equiv 1$ (mod 4), then $O_K = \mathbb{Z}[\frac{1+\sqrt{d}}{2}]$ and $O_K = \mathbb{Z}[\sqrt{d}]$ otherwise.

To define the discriminant of a finite extension $K$ of $\mathbb{Q}$, we first need the concept of an embedding of $K$ into some field $\mathbb{F}$, which is simply an injective ring homomorphism from $K$ into $\mathbb{F}$. If we have $n$ embeddings $\sigma_1, …, \sigma_n$ from $K$ into the complex numbers $\mathbb{C}$ and a basis $b_1, …, b_n$ of $O_K$ as a $\mathbb{Z}$-module, then the discriminant of $K$ is given by $$ \Delta_K = \begin{vmatrix} \sigma_1(b_1) & \sigma_1(b_2) & \dots & \sigma_1(b_n) \\ \sigma_2(b_1) & \ddots & & \sigma_2(b_n) \\ \vdots & & \ddots & \vdots \\ \sigma_n(b_1) & \dots & \dots & \sigma_n(b_n) \end{vmatrix}^2 $$ where the notation above means we take the square of the determinant of the given matrix. For our case where $K = \mathbb{Q}(\sqrt{d})$, we have two embeddings of $K$ into $\mathbb{C}$. Letting $a + b\sqrt{d} \in \mathbb{Q}(\sqrt{d})$, one is given by $\sigma_1(a + b\sqrt{d}) = a + b\sqrt{d}$ and the other by $\sigma_2(a + b\sqrt{d}) = a - b\sqrt{d}$. If $d > 0$ then we can consider these to be embeddings of $K$ into $\mathbb{R}$. In practice we want $d < 0$. As we'll see later this gives us a nicer structure by letting us take a unique "reduced" form for any element when using the form class group as a numerical representation. It also makes it less likely that the class group will be trivial and contain only one element. Given these embeddings and the $\mathbb{Z}$-bases above it is easy to check that if $d \equiv 1$ (mod 4) then $\Delta_K = d$ and if $d \equiv 2,3$ (mod 4) then $\Delta_K = 4d$. When choosing a discriminant, this makes it more convenient to pick some square-free value $\Delta \equiv 1$ (mod 4), as otherwise we need to ensure that $\Delta / 4$ is square-free.

We're almost ready to construct the class group itself! To do so we'll need the concept of an ideal, which is an additive subgroup of a ring with an additional multiplicative "absorption" property. Specifically, if we have an ideal $I \subset O_K$ of a ring of integers then for any $r \in O_K$ and $x \in I$, $rx \in I$. For any $n \in \mathbb{Z}$, $n\mathbb{Z}$ is an ideal. We can generalize the concept of an ideal of a ring to get the idea of a fractional ideal. Intuitively a fractional ideal $J$ of a ring $O_K$ is a subset of the ring's enclosing field $K$ such that we can "clear the denominators" of $J$.
More formally $J$ is a nonzero subset of $K$ such that for some $r \not= 0 \in O_K$, $rJ \subset O_K$ is an ideal of $O_K$. As an example $\frac{5}{4} \mathbb{Z}$ is a fractional ideal of $\mathbb{Z}$, since it is a subset of the smallest field $\mathbb{Q}$ containing $\mathbb{Z}$ and has the property that $4(\frac{5}{4}\mathbb{Z}) = 5\mathbb{Z}$ is an ideal of $\mathbb{Z}$. For any two ideals or fractional ideals $I,J$ of $O_K$ we can define a form of multiplication on them: $$ IJ = \{\sum a_i b_i | a_i \in I, b_i \in J\} $$ Since ideals are closed under addition, this gives the smallest ideal containing every element of both $I$ and $J$. A fractional ideal $I$ of $O_K$ is called principal if there is some $r \in K$ such that $rO_K = I$. In other words, $I$ is generated by a single element of $K$. Finally we need one more concept from group theory known as a quotient group. While I won't define it formally here, a basic example is given by $\mathbb{Z} / 3 \mathbb{Z}$ which has the structure of arithmetic mod 3. It can be formed by taking every element $n \in \mathbb{Z}$, computing $n + 3\mathbb{Z}$, then forming a group operation having the same structure as $(\mathbb{Z},+)$ over the 3 resulting distinct sets of integers.

Let $J_K$ denote the set of fractional ideals of $O_K$ and $P_K$ the set of principal fractional ideals of $O_K$. Both form abelian groups under the ideal multiplication defined above. This means it makes sense to take the quotient group $J_K / P_K$. This quotient group is the ideal class group. There are two extreme cases we can consider here. If $J_K = P_K$, then $J_K / P_K$ is the trivial group with one element. On the other hand, if no fractional ideals are principal then $J_K / P_K = J_K$. The order of $J_K / P_K$ can be interpreted as a measurement of the extent to which $O_K$ fails to be a principal ideal domain, or to have all of its ideals be principal. We can even take this a bit further, since for any ring of integers $O_K$ being a principal ideal domain is equivalent to every element of $O_K$ having a unique factorization. In other words, the order of a class group of $K$ is a measurement of how much its ring of integers $O_K$ fails to give each element a unique factorization! Given that $O_K$ is effectively a generalization of the integers, it's pretty neat that we can define this rigorously.

The order or number of elements of $J_K / P_K$ is known as its class number, and is known to be hard to compute for large discriminants. As with all cryptographic assumptions, this has not been proven but is assumed to be true because no one has broken it efficiently. At the moment it is generally accepted that, within the context of solving discrete logarithms, a discriminant of 687 bits provides as much security as a 1024-bit RSA key, and a discriminant of 1208 bits about as much security as a 2048-bit RSA key.

Form Class Group

Next we will discuss the form class group, and see how it is related to the ideal class group. The form class group was originally discovered in the study of binary quadratic forms, or functions of the form $$ f(x,y) = ax^2 + bxy + cy^2 $$ where $a,b,c \in \mathbb{R}$. For convenience we will write $f = (a,b,c)$ and call $f$ a form. In practice we want to restrict ourselves to the case where $a,b,c \in \mathbb{Z}$, as our end goal is to use forms to represent the class group in a computer, and being able to store forms as integer triples rather than triples of floating point values simplifies things.
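As a concrete illustration of this representation, here is a minimal Python sketch (my own, with names chosen for the example, not code from any particular library) of a form stored as an integer triple, together with its discriminant and evaluation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Form:
    """A binary quadratic form f(x, y) = a*x^2 + b*x*y + c*y^2 with integer coefficients."""
    a: int
    b: int
    c: int

    def discriminant(self) -> int:
        return self.b * self.b - 4 * self.a * self.c

    def __call__(self, x: int, y: int) -> int:
        return self.a * x * x + self.b * x * y + self.c * y * y

# Example: f = (1, 1, 6) has discriminant 1 - 24 = -23, the negation of a prime
# congruent to 3 mod 4, and it represents the integer f(1, 1) = 8.
f = Form(1, 1, 6)
assert f.discriminant() == -23
assert f(1, 1) == 8
```

Everything that follows (normalization, reduction, composition) can be phrased as operations on such triples.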
All binary quadratic forms in a given form class group have the same discriminant, given by $\Delta_f = b^2 - 4ac$. This is identical to the discriminant of the corresponding ideal class group. A form represents an integer $n$ if $f(x,y) = n$ for some $x,y \in \mathbb{Z}$. The form class group is made up of equivalence classes of forms, or sets of forms considered to be equivalent. This is similar to how, when doing arithmetic mod 3, the symbol 1 represents the set of all integers congruent to 1 (mod 3). We say that two forms $f_1, f_2$ are equivalent if they represent the same set of integers. In order to be a valid group, we need a group operation $*$ that will give us some representative $f_3$ of an appropriate equivalence class given forms $f_1, f_2$ so that $f_1 * f_2 = f_3$. We also need an identity form $g$ so that $f * g = f$ for all forms $f$, and for any form $f$ we need an inverse $f^{-1}$ so that $f * f^{-1} = g$.

We mentioned above that the ideal class group with discriminant $\Delta \in \mathbb{Z}$ is isomorphic to the form class group with the same discriminant $\Delta$. In fact, this is only true when $\Delta < 0$, and all forms $ f = (a,b,c)$ in the group are positive definite, meaning $f(x,y) > 0$ for all possible $x,y \in \mathbb{Z}$. This is equivalent to having both $\Delta_f < 0$ and $a > 0$. From now on, we will assume that any form is positive definite. As the form class group is composed of equivalence classes of forms, it would also be good if we could reduce any form of an equivalence class down to one unique element. Since we also want to represent elements $f = (a,b,c)$ in terms of bits, it would be good if this reduced element were reasonably small. It turns out that we can define a reduction operation that gives us both of these desirable properties. We can break down the reduction operation into two pieces; a "normalization" operation and a "reduction" operation, with the requirement that a form must be normalized before it is reduced. A form $f = (a,b,c)$ is normal if $-a < b \leq a$. We can define a normalization operation by $$ \eta(a,b,c) := (a, b+2ra, ar^2 + br + c) $$ where $r = {\lfloor \frac{a-b}{2a} \rfloor}$. When we normalize a form $f$, we want the resulting normalized form $f'$ to be equivalent to $f$. How do we know this is actually the case? It turns out we can understand equivalence of forms using actions by a certain class of matrices known as $SL_2$, or the "special linear group" of invertible matrices with determinant equal to 1. Two forms $f_1, f_2$ are in fact equivalent if there exists $\alpha, \beta, \gamma, \delta \in \mathbb{Z}$ such that $$ f_1(\alpha x + \beta y, \gamma x + \delta y) = f_2(x,y) $$ $$ \alpha \delta - \beta \gamma = 1 $$ This is actually only partially true, as we can relax the second requirement to be $\alpha \delta - \beta \gamma = \pm 1$. However, this won't actually give us a valid equivalence relation, i.e. if we let $\equiv$ denote equivalence then $f_1 \equiv f_2$ and $f_2 \equiv f_3$ wouldn't necessarily imply $f_1 \equiv f_3$. The "correct" form of equivalence mentioned above gives us an action of $SL_2$ on forms, so that if $f_1,f_2$ are equivalent then there is some invertible $$ A= \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix} $$ with det$(A) = 1$ such that $f_1 = Af_2$.
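To make this action concrete, here is a small Python sketch (my own illustration; the function name and the example form are chosen purely for demonstration) that applies an integer matrix substitution to a form and checks that the discriminant is unchanged:

```python
def apply_matrix(f, M):
    """Act on the form f = (a, b, c) by the 2x2 integer matrix M = ((alpha, beta), (gamma, delta)),
    i.e. substitute x -> alpha*x + beta*y and y -> gamma*x + delta*y."""
    a, b, c = f
    (alpha, beta), (gamma, delta) = M
    a2 = a * alpha**2 + b * alpha * gamma + c * gamma**2
    b2 = 2 * a * alpha * beta + b * (alpha * delta + beta * gamma) + 2 * c * gamma * delta
    c2 = a * beta**2 + b * beta * delta + c * delta**2
    return (a2, b2, c2)

disc = lambda a, b, c: b * b - 4 * a * c

f = (6, 5, 2)                           # discriminant 25 - 48 = -23
g = apply_matrix(f, ((1, -1), (0, 1)))  # determinant 1, so g is equivalent to f
assert disc(*f) == disc(*g) == -23
```

Any matrix of determinant $\pm 1$ preserves the discriminant, which is why all forms in one equivalence class share the same $\Delta_f$.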
With this in mind, we can show that $f$ and its normalized form $f' = \eta(f)$ are equivalent by the matrix $$ A= \begin{bmatrix} 1 & r \\ 0 & 1 \end{bmatrix} $$ Once we've normalized a form, we can then reduce it to obtain its unique reduced equivalent form. A form $f = (a,b,c)$ is reduced if it is normal and $a < c$, or $a = c$ and $b \geq 0$. We can define a reduction operation as $$ \rho(a,b,c) := (c, -b + 2sc, cs^2-bs + a) $$ where $s = {\lfloor \frac{c+b}{2c} \rfloor}$. Unlike the normalization operation $\eta$, it may need multiple iterations before our form is reduced. To reduce a form $f$, we can compute $f \leftarrow \eta(f)$ then repeatedly compute $f \leftarrow \rho(f)$ until $f$ is reduced. Similar to normalized forms, we can see that a reduced form $f'$ of $f$ is equivalent to $f$ using the matrix $$ A= \begin{bmatrix} 0 & -1 \\ 1 & s \end{bmatrix} $$ How do we know reduced elements are relatively small? If $\Delta_f < 0$ then for a reduced form $f = (a,b,c)$ we have that $$ a \leq \sqrt{\frac{|\Delta_f|}{3}} $$ Because in a reduced form $|b| \leq a$ and $c = \frac{b^2 - \Delta_f}{4a}$, the above upper bound on the size of $a$ implies that reduced elements as a whole tend to be small, having a bit representation at least as small as that of $\Delta_f$. This makes the group operation on reduced elements relatively efficient. There is also a reasonable upper bound on the number of reduction steps required, given by $\log_2 (\frac{a}{\sqrt{|\Delta|}}) + 2$.

The group operation itself, known as "form composition", is a bit complicated. The basic idea is that, given two forms $f_1, f_2$ with the same discriminant $\Delta$, we can multiply them together and use a change of variables to obtain a third form $f_3$ such that $f_1f_2 = f_3$. More exactly, the game is to find integers $p,q,r,s,p',q',r',s',\alpha,\beta,\gamma$ so that \begin{eqnarray*} X &=& px_1x_2 + qx_1y_2 + ry_1x_2 + sy_1y_2\\ Y &=& p'x_1x_2 + q'x_1y_2 + r'y_1x_2 + s'y_1y_2\\ f_3(x,y) &=& \alpha x^2 + \beta xy + \gamma y^2\\ f_3(X,Y) &=& f_1(x_1,y_1)f_2(x_2,y_2)\\ \end{eqnarray*} Let LinCong$(a,b,m)$ be an algorithm which solves a linear congruence of the form $ax \equiv b$ (mod $m$) by finding some $x = \mu + \nu n$ where $n \in \mathbb{Z}$ and outputs $(\mu, \nu)$. Given two forms $f_1 = (a,b,c)$ and $f_2 = (\alpha, \beta, \gamma)$, we can define a group operation on them as follows:
1. Set $g \leftarrow \frac{1}{2}(b + \beta)$, $h \leftarrow -\frac{1}{2}(b - \beta)$, $w \leftarrow $gcd$(a, \alpha, g)$
2. Set $j \leftarrow w$, $s \leftarrow \frac{a}{w}$, $t \leftarrow \frac{\alpha}{w}$, $u \leftarrow \frac{g}{w}$
3. Set $(\mu, \nu) \leftarrow $LinCong$(tu, hu + sc, st)$
4. Set $(\lambda, \rho) \leftarrow $LinCong$(t\nu, h - t\mu, s)$
5. Set $k \leftarrow \mu + \nu \lambda$, $l \leftarrow \frac{kt - h}{s}$, $m \leftarrow \frac{tuk - hu - cs}{st}$.
6. Return $f_3 = (st, ju - (kt + ls), kl - jm)$.

In practice it's best to always reduce the result $f_3$ after performing composition. This way we are guaranteed that the multiplication of two forms takes $O(\log^2 |\Delta|)$ bit operations where $\Delta$ is the discriminant of the group being used. This is not guaranteed if the two input forms are not reduced. In order to be a group operation, forms under composition must have an identity element. If $\Delta < 0$ this turns out to be $f = (1, k, \frac{k^2-\Delta}{4})$ where $k = \Delta$ (mod 2). For any form $f = (a,b,c)$, its inverse under form composition is given by $f^{-1} = (a, -b, c)$. We're now done constructing the form class group!
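As a quick recap of the normalization and reduction steps just described, here is a rough Python transcription of the formulas above (my own sketch, intended to be illustrative rather than a vetted or constant-time implementation):

```python
def normalize(a, b, c):
    """Apply eta once: bring b into the range -a < b <= a without changing the class."""
    r = (a - b) // (2 * a)
    return (a, b + 2 * r * a, a * r * r + b * r + c)

def is_reduced(a, b, c):
    """A positive definite form is reduced if it is normal and a < c, or a == c and b >= 0."""
    return -a < b <= a and (a < c or (a == c and b >= 0))

def reduce_form(a, b, c):
    """Normalize, then apply rho repeatedly until the form is reduced."""
    a, b, c = normalize(a, b, c)
    while not is_reduced(a, b, c):
        s = (c + b) // (2 * c)
        a, b, c = (c, -b + 2 * s * c, c * s * s - b * s + a)
    return (a, b, c)

# The form (6, 5, 2) has discriminant -23; one application of rho gives the
# reduced representative (2, -1, 3) of its equivalence class.
assert reduce_form(6, 5, 2) == (2, -1, 3)
```

Composition and the squaring shortcut described below can be layered on top of these helpers in the same style.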
We have a group operation, a way to get a unique representative element from each equivalence class of forms, an identity, and inverses. There is one very important optimization we can do. Above we mentioned that we want to choose the negation of a prime $p$ as our discriminant. It turns out that this lets us simplify our composition algorithm when $f_1 = f_2$, since using $\Delta = -p$ implies that gcd$(a,b) = 1$ for any form $f = (a,b,c)$. Unlike much of the math discussed in this post, this is pretty easy to see. If gcd$(a,b) = n \not= 1$ then for some $a', b' \in \mathbb{Z}$ we have that $a = na'$ and $b = nb'$, implying $\Delta = b^2 - 4ac = n(b'b - 4a'c)$, which is impossible because $-p$ can't be divisible by $n$. This, in addition to the fact that $a = \alpha$, $b = \beta$, $c = \gamma$, gives us the simplified squaring algorithm below:
1. Set $(\mu, \nu) \leftarrow$ LinCong$(b,c,a)$
2. Return $f^2 = (a^2, b-2a \mu, \mu^2 - \frac{b \mu - c}{a})$.

This is a big deal for efficiency, since we can compute exponentiation using repeated squaring.

Isomorphism

Going back to the ideal class group, there is another construction equivalent to the quotient group $J_K / P_K$. Similar to equivalence of forms, we can define an equivalence of ideals in $J_K$ by saying that two fractional ideals $I,J$ in $J_K$ are equivalent if there is some $\alpha \not= 0 \in K$ such that $\alpha I = J$. The equivalence classes formed by this relation are exactly the elements of $J_K / P_K$. We can similarly represent fractional ideals by their at most 2 generating elements, so that if $I$ is generated by $\alpha, \beta$ we can represent it by $(\alpha, \beta)$. We can also get a unique "reduced" ideal from each equivalence class. Ideal and form class groups are isomorphic when the discriminant $\Delta \in \mathbb{Z}$ being used is negative. In some sense this means multiplication of fractional ideals in the ideal setting and form composition in the form class group are really the same operation, since we can move back and forth between corresponding equivalence classes of fractional ideals and of forms.

We need just a few more tools before defining the isomorphism itself. If $K$ is a finite field extension of $\mathbb{F}$, so that $K$ is a finite-dimensional vector space over $\mathbb{F}$, then for any $\alpha \in K$ the map $m_\alpha (x) = \alpha x$ is an $\mathbb{F}$-linear transformation from $K$ into itself. The field norm $N(\alpha)$ is the determinant of the matrix of this linear transformation. The trace $Tr$ of $\alpha$ is the trace of this matrix. In our case where $K = \mathbb{Q}(\sqrt{d})$, for any $x = a + b\sqrt{d} \in K$ we have $N(x) = a^2 - db^2$, implying $N(x)$ is positive when $d < 0$. Next, if $I$ is a non-zero fractional ideal of $O_K$, the absolute norm of $I$ is given by the mapping $N(I) = |O_K / I|$, or the order of the quotient of $O_K$ by its ideal $I$. As their names and notation suggest, the field and absolute norms are related. If an ideal $I$ of a ring of integers $O_K$ is principal, so that there is some $\alpha \in O_K$ such that $\alpha O_K = I$, then $N(I) = |N(\alpha)|$. The isomorphism between ideal and form class groups is as follows.
If $f = (a,b,c)$ where $a,b,c \in \mathbb{Z}$ and $\Delta_f < 0$, then we can map $f$ to a fractional ideal of the ideal class group with the same discriminant by $$ \Phi(a,b,c) := (a \mathbb{Z} + \frac{-b + \sqrt{\Delta_f}}{2}\mathbb{Z}) $$ with inverse $$ \Phi^{-1}(I) := \frac{N(\alpha x - \beta y)}{N(I)} $$ where $(\alpha, \beta)$ is some $\mathbb{Z}$-basis of the fractional ideal $I$. We can see that the inverse maps a fractional ideal to a binary quadratic form using the following identity: $$ N(\alpha x + \beta y) = N(\alpha)x^2 + Tr(\alpha \beta')xy + N(\beta)y^2 $$ where for some element of a ring of integers $x = a + b\sqrt{d}$ we denote its conjugate by $x' = a - b\sqrt{d}$. If $\Delta < 0$ then the coefficient of $x^2$ given by $N(\alpha)/N(I)$ will be positive, meaning the resulting form $f = (a,b,c)$ will be positive definite as $\Delta_f < 0$ and $a > 0$. It can be shown that when $\Delta < 0$, $\Phi$ is a bijection which preserves the group structure, mapping binary quadratic forms under form composition to fractional ideals of a ring of integers under ideal multiplication. In other words, $\Phi$ is an isomorphism.

Solved and Open Conjectures

We'll conclude with a handful of conjectures made in some form by Gauss in 1801 which show further why we want to use a negative discriminant:
- The class number of the ideal class group of $\mathbb{Q}(\sqrt{d})$ tends to infinity as $d \rightarrow -\infty$.
- There are exactly 13 negative discriminants having a class number of 1, in particular -3, -4, -7, -8, -11, -12, -16, -19, -27, -28, -43, -67, -163.
- There are infinitely many positive discriminants associated with class groups having a class number of 1.

The first was proven in 1934 by Hans Heilbronn, the second in 1967 by Heegner, Stark and Baker. The third remains open.

Further reading:
- The Chia VDF Competition exposition of class groups is an excellent overview of how to implement the group numerically. The form class group algorithms above on normalization, reduction, and composition in particular were taken from here.
- A Survey on IQ Cryptography by Johannes Buchmann and Safuat Hamdy is a great survey from 2001 of cryptography using class groups of imaginary quadratic order.
- A Course in Computational Algebraic Number Theory by Henri Cohen is a well-written and comprehensive textbook on algorithms fundamental to algebraic number theory.
- The Structure of the Class Group of Imaginary Quadratic Fields by Nicole Miller and The Correspondence Between Binary Quadratic Forms and Quadratic Fields by Corentin Perret-Gentil are both good proof-focused introductions to the ideal and form class groups and their correspondence.

Hi! I'm Michael Straka. I'm currently finishing my Master's degree in Computer Science at Stanford University with a focus in cryptography and experience working with various fintech, blockchain, and private cryptography lab companies. I've had experience implementing cutting-edge cryptographic accumulators and user-authentication services in industry, in addition to having done circuit construction for general-purpose zero-knowledge proofs. I can be reached by email at [email protected]. My Twitter, Linkedin, and Github accounts are all linked in the menu bar above.
On Characterization of Optimal Control Model of Whooping Cough
A. S. Ismail, Y. O. Aderinto
Whooping cough is a vaccine-avoidable public health problem which is caused by the bacterium Bordetella pertussis and is a highly contagious disease of the respiratory system. In this paper, an SIR epidemiological model of whooping cough with an optimal control strategy was formulated to control the transmission. The model was characterized to obtain the disease-free and the endemic equilibrium points. Finally, the simulation was carried out using the forward-backward sweep method, incorporating the Runge-Kutta method, to check the validity, and the result obtained was an improvement over existing results.
https://doi.org/10.34198/ejms.8122.175188 2021, Earthline Journal of Mathematical Sciences, p. 175-188

Bayesian Estimation of Weighted Inverse Maxwell Distribution under Different Loss Functions
Aijaz Ahmad, Rajnee Tripathi
In this study, the shape parameter of the weighted Inverse Maxwell distribution is estimated by employing Bayesian techniques. To produce posterior distributions, the extended Jeffreys prior and the Erlang prior are utilised. The estimators are derived from the squared error loss function, the entropy loss function, the precautionary loss function, and the Linex loss function. Furthermore, an actual data set is studied to assess the effectiveness of various estimators under distinct loss functions.

Classifying Features Persuading the Use of Long Lasting Insecticide Treated Nets (LLINs) and Its Implications in Exterminating the Incidence of Malaria-Death in Ghana
Anthony Joe Turkson
This is a cross-sectional quantitative study purporting to identify features that persuade the usage of LLINs in exterminating incidences of malaria death in Ghana. The population consisted of mothers and caregivers of children under five in Asamankese, a district in the Eastern Region of Ghana. Questionnaires were developed based on the profile and the set of study objectives; they sought information on socio-economic variables, knowledge level on LLINs, and the influence of climatic and environmental factors on LLIN usage. Data were coded and keyed into SPSS version 20. Frequencies, percentages, means, standard deviations, graphs and tables were used to explore the data. A chi-square test was used for further investigation. It was revealed that LLIN usage was influenced by a group of features including background characteristics of the household, socio-economic variables, environmental variables and knowledge of the importance of LLINs. There was an association between LLIN usage and the monthly income of caregivers (p<0.05). Furthermore, there was a significant relationship (p<0.05) between environmental features and LLIN usage. There was a relationship (p<0.05) between one's knowledge and use of LLINs. In addition, there was a relationship between usage and the number of times per month visits were made to the hospitals for health care. Environmental factors permitted the use of LLINs: eighty-six percent (86%) of the respondents who used LLINs did so because the weather aided them. It is recommended that behavior change education be intensified in the region so that more people can accept and adopt a lifestyle that will protect them from the deadly malaria disease.
Efforts must be made by the major players in the health sector to make the net readily available in the communities at low prices to enable the ordinary Ghanaian to purchase it.

Absolute Value Variational Inclusions
Muhammad Aslam Noor, Khalida Inayat Noor
In this paper, we consider a new system of absolute value variational inclusions. Some interesting and extensively studied problems, such as absolute value equations, differences of monotone operators, absolute value complementarity problems and hemivariational inequalities, arise as special cases. It is shown that variational inclusions are equivalent to fixed point problems. This alternative formulation is used to study the existence of a solution of the system of absolute value inclusions. New iterative methods are suggested and investigated using resolvent equations, dynamical systems and nonexpansive mapping techniques. Convergence analysis of these methods is investigated under monotonicity. Some special cases are discussed as applications of the main results.

An Enhanced Clustering Method for Extending Sensing Lifetime of Wireless Sensor Network
Yakubu Abdul-Wahab Nawusu, Alhassan Abdul-Barik, Salifu Abdul-Mumin
Extending the lifetime of a wireless sensor network is vital in ensuring continuous monitoring functions in a target environment. Many techniques have appeared that seek to achieve such prolonged sensing gains. Clustering and improved selection of cluster heads play essential roles in the performance of sensor network functions. A cluster head in a hierarchical arrangement is responsible for transmitting aggregated data from member nodes to a base station for further user-specific data processing and analysis. Minimising the quick dissipation of cluster head energy requires a careful choice of network factors when selecting a cluster head to prolong the lifetime of a wireless sensor network. In this work, we propose a multi-criteria cluster head selection technique to extend the sensing lifetime of a heterogeneous wireless sensor network. The proposed protocol incorporates residual energy, distance, and node density in selecting a cluster head. Each factor is assigned a weight using the Rank Order Centroid based on its relative importance. Several simulation tests using MATLAB 7.5.0 (R2007b) reveal improved network lifetime and other network performance indicators, including stability and throughput, compared with popular protocols such as LEACH and SEP. The proposed scheme will be beneficial in applications requiring reliable and stable data sensing and transmission functions.
https://doi.org/10.34198/ejms.8122.6782 2021, Earthline Journal of Mathematical Sciences, p. 67-82

On Generalized p-Mersenne Numbers
Yüksel Soykan
In this paper, we introduce the generalized p-Mersenne sequence and deal with, in detail, two special cases, namely, the p-Mersenne and p-Mersenne-Lucas sequences. We present Binet's formulas, generating functions, Simson formulas, and the summation formulas for these sequences. Moreover, we give some identities and matrices related to these sequences.
https://doi.org/10.34198/ejms.8122.83120 2021, Earthline Journal of Mathematical Sciences, p. 83-120

Solution of Linear Fuzzy Fractional Differential Equations Using Fuzzy Natural Transform
Hameeda Oda Al-Humedi, Shaimaa Abdul-Hussein Kadhim
The purpose of this paper is to apply the fuzzy natural transform (FNT) for solving linear fuzzy fractional ordinary differential equations (FFODEs) involving fuzzy Caputo's H-difference with Mittag-Leffler laws.
This is followed by proposing new results on the property of the FNT for fuzzy Caputo's H-difference. An algorithm was then applied to find the solutions of linear FFODEs as fuzzy real functions. More specifically, we first obtained four forms of solutions when the FFODE is of order $\alpha\in(0,1]$, then eight systems of solutions when the FFODE is of order $\alpha\in(1,2]$, and finally, all of these solutions are plotted using MATLAB. In fact, the proposed approach is effective and practical for solving a wide range of fractional models.

Levels, Trends and Determinants of Infant Mortality in Nigeria: An Analysis using the Logistic Regression Model
Donalben Onome Eke, Friday Ewere
This paper presents a statistical analysis of the levels, trends and determinants of infant mortality in Nigeria using the logistic regression model. Infant mortality data for each of the five years preceding the 2003, 2008, 2013 and 2018 Nigeria Demographic Health Survey (NDHS) were retrieved and used for the analysis. Findings from the study revealed that the decline in infant mortality rates has stagnated in the five-year period prior to the 2018 survey, with an Annual Rate of Reduction (ARR) of 0% relative to an initial ARR of 5.7% between 2003 and 2008. The ARR of 2.039% over the 15-year period spanning 2003 to 2018 suggests that the rate of infant mortality reduction is slow. This study also showed that maternal characteristics such as age and educational levels, as well as cultural practices like the use of clean water and toilet facilities, were statistically significant determinants of infant mortality in Nigeria, with P-values < 0.05 across each of the survey years.

On Some Subclasses of $m$-fold Symmetric Bi-univalent Functions associated with the Sakaguchi Type Functions
Ismaila O. Ibrahim, Timilehin G. Shaba, Amol B. Patil
In the present investigation, we introduce the subclasses $\Lambda_{\Sigma_m}^{\rightthreetimes}(\sigma,\phi,\upsilon)$ and $\Lambda_{\Sigma_m}^{\rightthreetimes}(\sigma,\gamma,\upsilon)$ of the $m$-fold symmetric bi-univalent function class $\Sigma_m$, which are associated with the Sakaguchi type of functions and defined in the open unit disk. Further, we obtain estimates on the initial coefficients $b_{m+1}$ and $b_{2m+1}$ for the functions of these subclasses and find out connections with some of the familiar classes.
https://doi.org/10.34198/ejms.8122.115 2021, Earthline Journal of Mathematical Sciences, p. 1-15

New Families of Bi-Univalent Functions Governed by Gegenbauer Polynomials
Abbas Kareem Wanas
The aim of this article is to initiate an exploration of the properties of bi-univalent functions related to Gegenbauer polynomials. To do so, we introduce new families $\mathbb{T}_\Sigma (\gamma, \phi, \mu, \eta, \theta, \gimel, t, \delta)$ and $\mathbb{S}_\Sigma (\sigma, \eta, \theta, \gimel, t, \delta )$ of holomorphic and bi-univalent functions. We derive estimates on the initial coefficients and solve the Fekete-Szegő problem for functions in these families.
June 2019, 6(1): 1-37. doi: 10.3934/jcd.2019001

Towards a geometric variational discretization of compressible fluids: The rotating shallow water equations
Werner Bauer and François Gay-Balmaz
Imperial College London, Department of Mathematics, 180 Queen's Gate, London SW7 2AZ, United Kingdom and École Normale Supérieure, Laboratoire de Météorologie Dynamique, 24 Rue Lhomond, 75005 Paris, France
CNRS/École Normale Supérieure, Laboratoire de Météorologie Dynamique, 24 Rue Lhomond, 75005 Paris, France
Published December 2018

This paper presents a geometric variational discretization of compressible fluid dynamics. The numerical scheme is obtained by discretizing, in a structure preserving way, the Lie group formulation of fluid dynamics on diffeomorphism groups and the associated variational principles. Our framework applies to irregular mesh discretizations in 2D and 3D. It systematically extends work previously made for incompressible fluids to the compressible case. We consider in detail the numerical scheme on 2D irregular simplicial meshes and evaluate the scheme numerically for the rotating shallow water equations. In particular, we investigate whether the scheme conserves stationary solutions, represents well the nonlinear dynamics, and approximates well the frequency relations of the continuous equations, while preserving conservation laws such as mass and total energy.

Keywords: Geometric discretization, structure-preserving schemes, fluid dynamics, Euler-Poincaré formulation, rotating shallow water equations.
Mathematics Subject Classification: Primary: 65P10, 76M60, 37N10; Secondary: 37K05, 37K65.
Citation: Werner Bauer, François Gay-Balmaz. Towards a geometric variational discretization of compressible fluids: The rotating shallow water equations. Journal of Computational Dynamics, 2019, 6 (1) : 1-37. doi: 10.3934/jcd.2019001
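For orientation, the rotating shallow water system that the scheme is evaluated on can be written in a standard textbook form on an $f$-plane (added here for context; the paper's own variables and notation may differ):

$$ \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + f\,\mathbf{u}^{\perp} = -g\,\nabla(h + B), \qquad \partial_t h + \operatorname{div}(h\,\mathbf{u}) = 0, $$

where $\mathbf{u}$ is the horizontal velocity, $h$ the fluid depth, $B$ the bottom topography, $f$ the Coriolis parameter and $g$ the gravitational acceleration.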
Figure 1. Notation and indexing conventions for the 2D simplicial mesh. Figure 2. Regular mesh with equilateral triangles and irregular mesh with central refinement region, both with $ 2 \cdot 32^2 $ triangular grid cells. Figure 3. Left: contour lines of the bottom topography $ B(x, y) $ on the computational domain. Right: maximum errors in surface elevation at rest relative to $ H_0 = 750 $m for regular (upper right) and irregular (lower right) meshes. Figure 4.
Frequency spectra of the disturbed lake at rest after 10 days for parameters $ f = 5.31 $ $ \rm days^{-1} $, $ H_0 = 750 $m (left) or $ f = 6.903 $ $ \rm days^{-1} $, $ H_0 = 1267.5 $m (right) determined on an irregular mesh with $ 2 \cdot 64^2 $ cells. The frequency spectra determined on regular meshes looks very similar (not shown). Figure 5. Isolated vortex test case: fluid depth $ D(x, y) $ at initial time $ t = 0 $ (left) and at $ t = 100 $days on a regular (center) and an irregular (right) mesh with $ 2 \cdot 64^2 $ triangular cells. Contours between $ 0.698 {\rm km} $ and $ 0.752 {\rm km} $ with interval of $ 0.003 {\rm km} $. Figure 6. Isolated vortex test case: relative potential vorticity $ q_{\rm rel}(x, y) $ at initial time $ t = 0 $ (left) and at $ t = 100 $days on a regular (center) and an irregular (right) mesh with $ 2 \cdot 64^2 $ triangular cells. Contours between $ -1.5 {\rm days^{-1}km^{-1}} $ and $ 12.5 {\rm days^{-1}km^{-1}} $ with interval of $ 1 {\rm days^{-1}km^{-1}} $. Figure 8. Isolated vortex test case: $ L_2 $ and $ L_\infty $ error values of numerical solutions for $ D $ and $ q_{\rm rel} $ after 1 day as a function of grid resolution for a fluid in semi-geostrophic (left), quasi-geostrophic (middle), and incompressible (right) regime for regular and irregular meshes. Figure 7. Isolated vortex test case: relative errors of total energy $ E(t) $ on meshes with $ 2\cdot 64^2 $ cells (upper row) and with $ 2\cdot 32^2 $ cells (lower row) for a fluid in semi-geostrophic ($ 1^{st}, 2^{nd} $ column), in quasi-geostrophic ($ 3^{rd}, 4^{th} $ column), and in incompressible ($ 5^{th}, 6^{th} $ column) regime for regular ($ 1^{st}, 3^{rd}, 5^{th} $ column) and irregular ($ 2^{nd}, 4^{th}, 6^{th} $ column) meshes. Figure 9. Initial fluid depth $ D $ and relative potential vorticity $ q_{\rm rel} $ in geostrophic balance. Contours for $ D $ between $ 9.93 {\rm km} $ and $ 10 {\rm km} $ with interval of $ 0.005 {\rm km} $ and for $ q_{\rm rel} $ between $ -0.45 {\rm days^{-1}km^{-1}} $ and $ 1.7 {\rm days^{-1}km^{-1}} $ with interval of $ 0.1 {\rm days^{-1}km^{-1}} $. Figure 10. Snapshots of relative potential vorticity $ q_{\rm rel} $ for $ H_0 = 10 $km on regular (upper row) and irregular (lower row) meshes with $ 2 \cdot 256^2 $ cells. Contours between $ -0.45 {\rm days^{-1}km^{-1}} $ and $ 1.7 {\rm days^{-1}km^{-1}} $ with interval of $ 0.1 {\rm days^{-1}km^{-1}} $. Figure 12. Vortex interaction test case: relative errors of $ E(t) $ (upper row), of $ PV(t) $ (middle row), and $ PE(t) $ (lower row) for a fluid in semi-geostrophic ($ 1^{st}, 2^{nd} $ column), in quasi-geostrophic ($ 3^{rd}, 4^{th} $ column), and in incompressible ($ 5^{th}, 6^{th} $ column) regime for regular ($ 1^{st}, 3^{rd}, 5^{th} $ column) and irregular ($ 2^{nd}, 4^{th}, 6^{th} $ column) meshes with $ 2 \cdot 256^2 $ cells. Figure 11. Comparison of $ q_{\rm rel} $ and $ D $ for fluids in semi-geostrophic (left), quasi-geostrophic (middle), and incompressible regimes (right) for a regular mesh with $ 2 \cdot 256^2 $ cells. Contours for $ D $ between $ -0.12 {\rm km} +H_0 $ and $ 0.02 {\rm km} + H_0 $ with interval of $ 0.01 {\rm km} $. 
Contours for $ q_{\rm rel} $; left: between $ -13 {\rm days^{-1}km^{-1}} $ and $ 50 {\rm days^{-1}km^{-1}} $ with interval of $ 3 {\rm days^{-1}km^{-1}} $; middle: between $ -7 {\rm days^{-1}km^{-1}} $ and $ 25 {\rm days^{-1}km^{-1}} $ with interval of $ 2 {\rm days^{-1}km^{-1}} $; right: between $ -0.45 {\rm days^{-1}km^{-1}} $ and $ 1.7 {\rm days^{-1}km^{-1}} $ with interval of $ 0.1 {\rm days^{-1}km^{-1}} $.
Figure 13. Initial fields of fluid depth (left) and relative potential vorticity (right) in geostrophic balance for the shear flow test case on a regular mesh with $ N = 2 \cdot 256^2 $ cells. Contours for $ q_{\rm rel} $ between $ -11 {\rm days^{-1}km^{-1}} $ and $ 11 {\rm days^{-1}km^{-1}} $ with interval of $ 1 {\rm days^{-1}km^{-1}} $, and for $ D $ between $ -0.06 {\rm km} +H_0 $ and $ 0.04 {\rm km} + H_0 $ with interval of $ 0.002 {\rm km} $.
Figure 14. Shear flow test case: snapshots of relative potential vorticity $ q_{\rm rel} $ on regular (upper row) and irregular (lower row) mesh with $ 2\cdot 256^2 $ cells. Contours between $ -11 {\rm days^{-1}km^{-1}} $ and $ 11 {\rm days^{-1}km^{-1}} $ with interval of $ 2 {\rm days^{-1}km^{-1}} $.
Figure 15. Shear flow test case: snapshots of $ D $ on regular (upper row) and irregular (lower row) mesh with $ 2\cdot 256^2 $ cells. Contours between $ -0.06 {\rm km} +H_0 $ and $ 0.04 {\rm km} + H_0 $ with interval of $ 0.004 {\rm km} $.
Figure 16. Shear flow test case: relative errors of total energy $ E(t) $ (upper row), of mass-weighted potential vorticity $ PV(t) $ (middle row), and potential enstrophy $ PE(t) $ (lower row) for a fluid in quasi-geostrophic regime for regular (left) and irregular (right) meshes with $ 2 \cdot 256^2 $ cells.
Table 1. Continuous and discrete objects (continuous object | discrete object)
Continuous diffeomorphism: $ \operatorname{Diff}(M)\ni\varphi $ | Discrete diffeomorphisms: $ \mathsf{D}( \mathbb{M} )\ni q $
Lie algebra: $ \mathfrak{X} (M)\ni\mathbf{u} $ | Discrete Lie algebra: $ \mathfrak{d} ( \mathbb{M} ) \ni A $
Group action on functions: $ f \mapsto f \circ \varphi $ | Group action on discrete functions: $ F\mapsto q^{-1} F $
Lie algebra action on functions: $ f\mapsto \mathbf{d} f \cdot \mathbf{u} $ | Lie algebra action on discrete functions: $ F\mapsto -A F $
Group action on densities: $ \rho \mapsto ( \rho \circ \varphi)J \varphi $ | Group action on discrete densities: $ D\mapsto \Omega^{-1} q^\mathsf{T}\Omega D $
Lie algebra action on densities: $ \rho \mapsto \operatorname{div}(\rho\mathbf{u} ) $ | Lie algebra action on discrete densities: $ D \mapsto \Omega^{-1} A^\mathsf{T}\Omega D $
Hamilton's principle: $ \delta \int_0^T L_{ \rho _0}( \varphi , \dot{\varphi}) dt = 0 $, for arbitrary variations $ \delta \varphi $ | Lagrange-d'Alembert principle: $ \delta \int_0^T L_{ D _0}( q , \dot q ) dt = 0 $, $ \dot q q ^{-1} \in \mathcal{S} \cap \mathcal{R} $, for variations $ \delta q q ^{-1} \in \mathcal{S} \cap \mathcal{R} $
Eulerian velocity and density: $ \mathbf{u} = \dot{ \varphi } \circ \varphi^{-1} $, $ \rho = (\rho _0 \circ \varphi ^{-1} ) J \varphi^{-1} $ | Eulerian discrete velocity and discrete density: $ A = \dot{ q} q^{-1} $, $ D = \Omega^{-1} q^{-\mathsf{T}}\Omega D_0 $
Euler-Poincaré principle: $ \delta \int_0^T \ell( \mathbf{u} , \rho ) dt = 0 $, $ \delta \mathbf{u} = \partial _t \boldsymbol{\zeta} + [\boldsymbol{\zeta}, \mathbf{u} ] $, $ \delta \rho = - \operatorname{div}( \rho \boldsymbol{\zeta} ) $ | Euler-Poincaré-d'Alembert principle: $ \delta \int_0^T \ell( A , D ) dt = 0 $, $ \delta A = \partial_t B+[B, A] $, $ \delta D = - \Omega ^{-1} B^\mathsf{T} \Omega D $, $ A, B \in \mathcal{S} \cap \mathcal{R} $
Compressible Euler equations | Discrete compressible Euler equations
Form Ⅰ: $ \partial _t ( \rho ( \mathbf{u} ^\flat + \mathbf{R} ^\flat )) + \mathbf{i} _{\rho \mathbf{u} } \omega + \operatorname{div}( \rho\mathbf{u} ) ( \mathbf{u} ^\flat + \mathbf{R} ^\flat ) = - \rho \mathbf{d} \big( \frac{1}{2} | \mathbf{u} | ^2 + \frac{\partial \varepsilon }{\partial \rho } \big) $ | Form Ⅰ: on the 2D simplicial grid, Equation (43)
Form Ⅱ: $ \rho\partial _t \mathbf{u} ^\flat + \mathbf{i} _{\rho \mathbf{u} } \omega = - \rho \mathbf{d} \big( \frac{1}{2} | \mathbf{u} | ^2 + \frac{\partial \varepsilon }{\partial \rho } \big) $ | Form Ⅱ:
Does Schnorr's 2021 factoring method show that the RSA cryptosystem is not secure?

Claus Peter Schnorr recently posted a 12-page factoring method by SVP algorithms. Is it correct? It says that the algorithm factors integers $N \approx 2^{400}$ and $N \approx 2^{800}$ by $4.2 \cdot 10^{9}$ and $8.4 \cdot 10^{10}$ arithmetic operations. Does this lead to the conclusion that the RSA cryptosystem is no longer secure in some sense?

Moderator note: a new paper was published 2021-07-08, replacing an earlier, withdrawn paper.

rsa factoring fgrieu♦ Blanco

news.ycombinator.com/item?id=26321962 – Blanco
Some (including e-print admin) claim personal confirmation from Schnorr that he indeed posted the e-print: twitter.com/FredericJacobs/status/1367115794363088897 and twitter.com/Leptan/status/1367103240228261894 – bas
Testing Schnorr's factoring Claim in Sage: github.com/lducas/SchnorrGate – Fabiano Taioli
This related question might be of interest: quantumcomputing.stackexchange.com/q/16421/2293
On the recent PKC 2021 (Public Key Cryptography) conference, Léo Ducas gave an invited talk about "Lattices & Factoring" that might be helpful for some background. – j.p.

If the claim was true, then there would be an extremely simple way to prove it: $10^{10}$ arithmetic operations is nothing. There are tons of 800-bit factoring challenges available online. The author could just solve them and include the factorization in the submission; the lack of such a straightforward validation should be taken as empirical evidence that the claim is, as of today, unsubstantiated at best.
Leo Ducas, one of the top experts in lattice-based cryptography (and especially in its cryptanalysis), has implemented the March 3 version of the paper. The preliminary experimental evaluations seem to indicate that the method cannot outperform the state of the art (quoted from here): "This suggest that the approach may be sensible, but that not all short vectors give rise to factoring relations, and that obtaining a sufficient success rate requires much larger lattice dimension than claimed in [Sch21]. Personnal study (unfortunately, never written down cleanly) of this approach suggested me that this approach requires solving SVP in dimensions beyond reasonable, leading to a factorization algorithm much slower than the state of the art."
My impression is that this is the consensus among experts having spent some time on it as well. The corresponding Twitter thread is here. Furthermore, this Twitter thread points to what seems to be a fatal mistake in Schnorr's paper.
(Warning: personal view) I'm also basing my conclusion on the fact that several top experts on SVP and CVP algorithms have looked at the paper and concluded that it is incorrect (I cannot provide names, since it was in the context of anonymous reviews). Of course, the latter should not be treated as clear evidence, since I'm not providing proof of this statement - please treat it simply as it is, a claim I'm making. (This statement refers to the version of the paper which was initially uploaded, and whose eprint extracted abstract contained a claim that RSA was "destroyed", together with the sample running times given in OP's question.
Schnorr himself still claimed, after being asked by mail about the paper, that the latest version breaks RSA - and so does its abstract; with respect to this claim, and given the lack of a solved RSA challenge, I stand by my statement that it should be regarded as essentially unsubstantiated). Among the potential issues (again to be treated with care, as pointers to help people willing to look further into where the paper might fail):
The proof of (5.8) only shows the existence of many smooth triples, but says nothing about the probability of successfully finding factoring relations.
The paper relies on the Schnorr-Hörner pruning strategy, which is known to be flawed.
No justification is given for the cost indicated for (3.2).
Geoffroy Couteau

Paradoxically, complicated mathematical proofs are harder to verify (or falsify) than algorithmic claims which would solve existing challenges; therefore one can believe in a false proof longer than in a faulty algorithm. Checking a solution to one of the challenges doesn't involve much understanding (of the algorithm or underlying theory) at all. Either Schnorr can provide the factors or he can't. – Peter - Reinstate Monica
We know prof. Schnorr is a talented mathematician. But, do we know whether he has learned to program?
If nobody can take his paper and implement it then it is no threat to RSA. Maybe it's a small number of arithmetic operations on n! bit integers.
Did those anonymous reviewers point out where in the paper it didn't work? Did it generate CVP instances that weren't actually solvable by current algorithms? Did the CVP solutions not correspond to valid fac-relations (except for neglectable probability)? Were the various fac-relations generated not independent (and thus $n$ of them weren't enough to actually generate a factorization)? – poncho
@bas Given his research area and the fact that he taught both mathematics and computer science, he almost certainly has the programming skills needed to implement his algorithm. Even if programming isn't his forte, if his result destroys RSA as he claims, he should be able to demonstrate it in code. Even an inefficient implementation of the algorithm, if correct, would settle the question. – John Coleman

Big issue at the bottom of page 2, where the determinant is quoted as $N^{n+1}\frac{n(n+1)}2 \ln N$ when it should be $N^{n+1} n! \ln N$. If this formula is used directly in the numerical estimates, then they are likely to be highly inaccurate.
UPDATE (5th March): I've taken a bit more time to work through the paper in detail. There's no asymptotic analysis of the complexity of the overall method, and the striking claim seems to be based on a faulty numerical calculation that I cannot account for even with the corrected determinant expression. The calculation in question is at the bottom of page 3. In the sentence beginning "By theorem 6.2..." a bound is given for a vector $|\mathbf b_1|^2$ of the form $\gamma_k(\alpha\gamma_k^2)^{\frac{h-1}2}\mathrm {det}(\mathcal L(\mathbf R_{95,f}))^{\frac 2{96}}$; this formula is correct. However, by the parameters of this section, $k=24$ and so the Hermite constant $\gamma_{24}=4$, $\alpha=4/3$, $h=4$, $N\approx 2^{800}$, $n=95$, and so our determinant is roughly $2^{800\cdot 96}\,95!\,(800\ln 2)$ (a rough numerical sanity check of these orders of magnitude is sketched below).
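(Aside, added for illustration only and not part of the original answer: the orders of magnitude above are easy to sanity-check by working in $\log_{10}$. The short Python sketch below simply re-evaluates the corrected determinant $N^{n+1}\, n!\, \ln N$ and the quoted bound on $|\mathbf b_1|^2$ for the stated parameters; the variable names mirror the notation of this answer.)

```python
import math

# Parameters as stated above: N ~ 2^800, n = 95, k = 24 (so gamma_24 = 4),
# alpha = 4/3, h = 4.  Everything is done in log10 to avoid huge integers.
n, log2_N = 95, 800
gamma_k, alpha, h = 4.0, 4.0 / 3.0, 4

# log10 of the corrected determinant  N^(n+1) * n! * ln(N)
log10_det = ((n + 1) * log2_N * math.log10(2)        # log10(N^(n+1))
             + math.lgamma(n + 1) / math.log(10)     # log10(n!)
             + math.log10(log2_N * math.log(2)))     # log10(ln N)

# Quoted bound: |b_1|^2 <= gamma_k * (alpha*gamma_k^2)^((h-1)/2) * det^(2/(n+1))
log10_b1_sq = (math.log10(gamma_k)
               + (h - 1) / 2 * math.log10(alpha * gamma_k ** 2)
               + 2 / (n + 1) * log10_det)

print(round(log10_det))    # ~ 23270, i.e. the determinant is around 10^23270
print(round(log10_b1_sq))  # ~ 487, i.e. |b_1| is around 10^243, as estimated
```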
The quoted estimate for this bound is 0.8408696, whereas I estimate it to be greater than $10^{486}$, giving a bound for $\mathbf b_1$ of around $10^{243}$ and hence a bound for $|u-vN|$ of a similar order of magnitude. The ensuing input for the Dickman $\rho$ function is then around 90 and $\rho(90)\approx 10^{-202}$, and so this process would have to be repeated about $10^{202}$ times to produce enough fac-relations. The calculation for the $N\approx 2^{400}$ example is less complete, but I suspect a similar error. The lattice theorems do seem sound and the above does not rule out a part of the parameter space that does perform better.
UPDATE (16 March): The new version now has a different scaling to the lattice ($N'=N^{\frac1{n+1}}$ in place of $N$). This drastically reduces the determinant. However, the statement and proof of Lemma 3.1 are no longer clear. It is claimed that bounds on $|u-vN'|$ are produced, which in turn makes it likely that numbers of size around $|u-vN'|$ are more likely to be smooth (note $N'$ is roughly an 8-bit number in the example). However, to form fac-relations $|u-vN|$ is what needs to be smooth, and this is of very different size ($N$ is an 800-bit number in our example). The factorial has been incorporated into the numerical example at the bottom of page 3, but the two figures quoted for $\mathrm{det}(R_{n,f})^{2/96}$ are $6.99\times 10^{10}$ and $2.13\times 10^{-3}$, whereas I calculate $95!^{\frac2{96}}N'^2(\log N)^{\frac2{96}}\approx 1.44\times 10^{8}$ and a corresponding bound on $|\mathbf b_1|^2$ of $5.67\times 10^{10}$. If this could be converted into a bound on $|u-vN|$ of a commensurable size then the Dickman calculation is less daunting; however, the old version of Lemma 3.1 is then quoted, which bounds $|u-vN|$ rather than $|u-vN'|$. In conclusion, shorter lattice vectors are being produced, but they don't seem to hold the same meaning as in the previous construction.
Daniel S

The error in upper bounding the determinant of the matrix $\mathbf{R}_{n,f}$ by $$N^{n+1} \frac{n(n+1)}{2} \ln N$$ instead of by $$N^{n+1} n! \ln N$$ seems to lead to an error in upper bounding the initial short vector by $$\Vert \mathbf{b}_1 \Vert< \exp(2 \ln n/2)$$ instead of by $$\Vert \mathbf{b}_1 \Vert< \exp(\ln n)$$ which is an exponential relative error. Later on it is stated on page 5 that "For short vectors $\mathbf{b}=\sum_{i=1}^n u_i \mathbf{b}_i \in \mathcal{L} \setminus \mathbf{0}$ the stages $u = (u_t, ..., u_n)$ have large success rate." Unfortunately the actual initial vector may not be that short due to the indicated error in the bound.
kodlu

This remark is about the eprint version 20210303:182120 and makes use of the notation defined therein. Existing answers here have already pointed to the matrix $R_{n,f}$ defined at the bottom of page 2. It is given as $$R_{n,f}=\begin{pmatrix} N f(1) & 0 & \cdots & 0 & 0 \\ 0 & \ddots & \ddots & \vdots & \vdots \\ \vdots & \ddots & \ddots & 0 & 0 \\ 0 & \cdots & 0 & N f(n) & 0 \\ N\ln p_1 & \cdots & \cdots & N\ln p_n & N\ln N \end{pmatrix} = [b_1,\ldots,b_{n+1}]$$ I would like to add the following observations: $N$ is a common factor in that matrix; the shortest vector coefficients $(e_1,\ldots,e_{n+1})$ with respect to that basis $R_{n,f}$ will be the same when that factor is omitted. The above suggests that there is a typo in the specification of $R_{n,f}$ or an incomplete simplification. Consider the $(n+1)$-th component of a lattice vector.
This will be small only if $\prod_{i=1}^n p_i^{e_i} \approx N^{-e_{n+1}}$, thus $u/v\approx N^{-e_{n+1}}$. On the other hand, for $|u-Nv|$ to be small enough to have a reasonable chance of being smooth, we need $u/v\approx N$. This suggests the restriction $e_{n+1}=-1$. I do not find that restriction mentioned anywhere in the paper. Anyway, the only solutions worth considering would be those with $e_{n+1}\in\{\pm 1\}$, which remarkably exclude $e_{n+1}=0$, and if $e_{n+1}=+1$, then all other $e_i$ would need to have their signs flipped. Note that previous versions of the paper used a close-vector problem approach with slightly different lattice bases. The previous item suggests that there is an incompletely described attempt to turn a close-vector problem into a short-vector problem.
ccorn
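(A purely illustrative footnote, not taken from the paper or the answers above: the block structure of $R_{n,f}$ quoted in the last answer is easy to write down programmatically. The diagonal weights $f(1),\dots,f(n)$ are not specified in this excerpt, so the sketch below substitutes $\ln p_i$ just to obtain concrete numbers; it only serves to illustrate the observation that the common factor $N$ can be divided out without changing the coefficients of a shortest vector.)

```python
import numpy as np

def schnorr_like_basis(N, primes, f=None):
    """Columns are the basis vectors b_1, ..., b_{n+1} of the quoted matrix."""
    n = len(primes)
    if f is None:
        # The weights f(i) are NOT given in the excerpt; ln(p_i) is a stand-in.
        f = [np.log(p) for p in primes]
    B = np.zeros((n + 1, n + 1))
    for i in range(n):
        B[i, i] = N * f[i]                # N f(i) on the diagonal
        B[n, i] = N * np.log(primes[i])   # last row: N ln(p_i)
    B[n, n] = N * np.log(N)               # bottom-right entry: N ln(N)
    return B

N, primes = 1009, [2, 3, 5, 7, 11]
B = schnorr_like_basis(N, primes)
# Every entry carries the factor N, so B = N * B_prime.  Rescaling a lattice
# basis scales all lattice vectors by the same amount, hence the coefficient
# vector (e_1, ..., e_{n+1}) of a shortest vector is the same for B and B_prime.
B_prime = B / N
```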
Dual functionality of the amyloid protein TasA in Bacillus physiology and fitness on the phylloplane
Jesús Cámara-Almirón, Yurena Navarro, Luis Díaz-Martínez, María Concepción Magno-Pérez-Bryan, Carlos Molina-Santiago, John R. Pearson, Antonio de Vicente, Alejandro Pérez-García & Diego Romero
Bacterial adhesion, Cellular microbiology

Bacteria can form biofilms that consist of multicellular communities embedded in an extracellular matrix (ECM). In Bacillus subtilis, the main protein component of the ECM is the functional amyloid TasA. Here, we study further the roles played by TasA in B. subtilis physiology and biofilm formation on plant leaves and in vitro. We show that ΔtasA cells exhibit a range of cytological symptoms indicative of excessive cellular stress leading to increased cell death. TasA associates to the detergent-resistant fraction of the cell membrane, and the distribution of the flotillin-like protein FloT is altered in ΔtasA cells. We propose that, in addition to a structural function during ECM assembly and interactions with plants, TasA contributes to the stabilization of membrane dynamics as cells enter stationary phase.

In response to a wide range of environmental factors1,2, some bacterial species establish complex communities called biofilms3. To do so, planktonic cells initiate a transition into a sedentary lifestyle and trigger a cell differentiation program that leads to: (1) a division of labor, in which different subpopulations of cells are dedicated to covering different processes needed to maintain the viability of the community4,5, and (2) the secretion of a battery of molecules that assemble the extracellular matrix (ECM)3,6. Studies of Bacillus subtilis biofilms have contributed to our understanding of the intricate developmental program that underlies biofilm formation7,8,9,10 that ends with the secretion of ECM components. It is known that the genetic pathways involved in biofilm formation are active during the interaction of several microbial species with plants11,12. In B. subtilis, the lipopeptide surfactin acts as a self-trigger of biofilm formation on the melon phylloplane, which is connected with the suppressive activity of this bacterial species against phytopathogenic fungi13. Currently, the B. subtilis ECM is known to consist mainly of exopolysaccharide (EPS) and the TasA and BslA proteins7. The EPS acts as the adhesive element of the biofilm cells at the cell-to-surface interface, which is important for biofilm attachment14, and BslA is a hydrophobin that forms a thin external hydrophobic layer and is the main factor that confers hydrophobic properties to biofilms15. Both structural factors contribute to maintain the defense function performed by the ECM11,15. TasA is a functional amyloid protein that forms fibers resistant to adverse physicochemical conditions that confer biofilms with structural stability16,17. Additional proteins are needed for the polymerization of these fibers: TapA appears to favor the transition of TasA into the fiber state, and the signal peptidase SipW processes both proteins into their mature forms18,19.
The ability of amyloids to transition from monomers into fibers represents a structural, biochemical, and functional versatility that microbes exploit in different contexts and for different purposes20. Like in eukaryotic tissues, the bacterial ECM is a dynamic structure that supports cellular adhesion, regulates the flux of signals to ensure cell differentiation21,22, provides stability and serves as an interface with the external environment, working as a formidable physicochemical barrier against external assaults23,24,25. In eukaryotic cells, the ECM plays an important role in signaling26,27 and has been described as a reservoir for the localization and concentration of growth factors, which in turn form gradients that are critical for the establishment of developmental patterning during morphogenesis28,29,30. Interestingly, in senescent cells, partial loss of the ECM can influence cell fate, e.g., by activating the apoptotic program31,32. In both eukaryotes and prokaryotes, senescence involves global changes in cellular physiology, and in some microbes, this process begins with the entry of the cells into stationary phase33,34,35. This process triggers a response typified by molecular mechanisms evolved to overcome environmental adversities and to ensure survival, including the activation of general stress response genes36,37, a shift to anaerobic respiration38, enhanced DNA repair39, and induction of pathways for the metabolism of alternative nutrient sources or sub-products of primary metabolism40. Based on previous works13, we hypothesize that the ECM makes a major contribution to the ecology of B. subtilis in the poorly explored phyllosphere. Our study of the ecology of B. subtilis NCIB3610-derived strains carrying single mutations in different ECM components in the phyllosphere highlights the role of TasA in bacteria-plant interactions. Moreover, we demonstrate a complementary role for TasA in the stabilization of the bacteria's physiology. In ΔtasA cells, gene expression changes and dynamic cytological alterations eventually lead to a premature increase in cell death within the colony. Complementary evidences prove that these alterations are independent of the structural role of TasA in ECM assembly. All these results indicate that these two complementary roles of TasA, both as part of the ECM and in contributing to the regulation of cell membrane dynamics, are important to preserve cell viability within the colony and for the ecological fitness of B. subtilis in the phylloplane. TasA contributes to the fitness of Bacillus on the phylloplane Surfactin, a member of a subfamily of lipopeptides produced by B. subtilis and related species, contributes to multicellularity in B. subtilis biofilms41. We previously reported how a mutant strain defective for lipopeptide production showed impaired biofilm assembly on the phylloplane13. These observations led us to evaluate the specific contributions made by the ECM structural components TasA and the EPS to B. subtilis fitness on melon leaves. Although not directly linked to the surfactin-activated regulatory pathway, we also studied the gene encoding the hydrophobin protein BslA (another important ECM component). A tasA mutant strain (ΔtasA) is defective in the initial cell attachment to plant surfaces (4 h and 2 days post-inoculation) (Fig. 1A). As expected, based on their structural functions, all of the matrix mutants showed reduced adhesion and survival (Supplementary Figs. 
1A and 1B); however, the population of ΔtasA cells continuously and steadily decreased over time compared to the populations of eps or bslA mutant cells (Fig. 1B and Supplementary Fig. 1B). Examination of plants inoculated with the wild-type strain (WT) or with the ΔtasA strain via scanning electron microscopy (SEM) revealed variability in the colonization patterns of the strains. WT cells assembled in ordered and compact colonies, with the cells embedded in a network of extracellular material (Fig. 1C, top). In contrast, the ΔtasA cells were prone to irregular distribution as large masses of cells on the leaves, which also showed collapsed surfaces or lack of surface integrity, suggesting alterations in cellular structures (Fig. 1C, center). Finally, eps and bslA mutant cells formed flat colonies (Supplementary Fig. 2A) with the same colonization defects observed in the tasA mutant cells (Supplementary Fig. 1C). Fig. 1: TasA is essential for the fitness of Bacillus on the melon phylloplane. a Adhesion of the WT, ΔtasA and JC81 (TasA Lys68Ala, Asp69Ala) strains to melon leaves at 4 h (hpi) and 2 days post-inoculation (dpi). Statistically significant differences between WT and ΔtasA were found at 2 dpi. At 4 hpi, N = 3 for all the strains. At 2 dpi N = 6 for the WT strain, N = 6 for the ΔtasA strain and N = 3 for the JC81 strain. N refers to the number independent experiments. In each experiment, 10 leaves were analyzed. Average values are shown. Error bars represent the SEM. Statistical significance was assessed via two-tailed independent t-tests at each time-point (*p value = 0.0262). b The persistence of the ΔtasA cells at 21 days was significantly reduced compared with that of the WT cells. The persistence of JC81 cells on melon leaves was reduced compared to that of the WT cells. The first point is taken at 4 hpi. Average values of five biological replicates are shown with error bars representing the SEM. Statistical significance was assessed by two-tailed independent t-test at each time-point (*p value = 0.0329). c Representative scanning electron microscopy micrographs of inoculated plants taken 20 days post-inoculation show the WT cells (top) distributed in small groups covered by extracellular material and the ΔtasA cells (bottom) in randomly distributed plasters of cells with no visible extracellular matrix. JC81 (TasA Lys68Ala, Asp69Ala) strain shows an intermediate colonization pattern between those of the WT and ΔtasA null mutant strains. Scale bars = 25 µm (left panels) and 5 µm (right panels). Experiments have been repeated at least three times with similar results. d The WT and ΔtasA strains showed comparable biocontrol activity against the fungal phytopathogen Podosphaera xanthii. However, JC81 (TasA Lys68Ala, Asp69Ala) failed to control the disease. Biocontrol activity was measured after 15 days post-inoculation of the pathogen. For the WT strain, N = 22. For the ΔtasA strain and JC81 strain, N = 12. For the Δpps strain, N = 9. N refers to number of plants analyzed over three independent assays. Three leaves per plant were infected and inoculated. Average values are shown with error bars indicating the SEM. The Δpps is a mutant strain in fengycin production and it is used as a negative control. Statistical significance was assessed by two-tailed independent Mann–Whitney tests between each strain and the Δpps mutant (****p < 0.0001). 
e MALDI-TOF/TOF MS analysis revealed higher fengycin levels on melon leaves treated with ΔtasA (right) cells compared with that on leaves treated with WT cells (left) after 20 days post-inoculation. Source data are provided as a Source Data file. Based on the reduced fitness exhibited by the single ECM component mutant strains and their deficiencies in biofilm formation, we hypothesized that these strains may also be defective in their antagonistic interactions with Podosphaera xanthi (an important fungal biotrophic phytopathogen of crops42) on plant leaves. Strains with mutations in eps and bslA partially ameliorated the disease symptoms, although their phenotypes were not significantly different from those of the WT strain (Supplementary Fig. 1D). However, contrary to our expectations, the ΔtasA strain retained similar antagonistic activity to that of the WT strain (Fig. 1D). The simplest explanation for this finding is that the antifungal activity exhibited by the ΔtasA cells is due to higher production of antifungal compounds. In situ mass spectrometry analysis revealed a consistently higher relative amount of the antifungal compound plipastatin (also known as fengycin, the primary antifungal compound produced by B. subtilis) on leaves treated with ΔtasA cells compared to those treated with WT cells (Fig. 1E). These observations argue in favor of the relevance of the ECM and specifically TasA in the colonization, survival, and antagonistic activity of B. subtilis on the phylloplane. Loss of TasA causes a global change in bacterial cell physiology The increased fengycin production and the previously reported deregulation of the expression pattern of the tapA operon in a ΔtasA mutant strain23 led us to explore whether loss of tasA disrupts the genetic circuitry of the cells. We sequenced and analyzed the whole transcriptomes of ΔtasA and WT cells grown in vitro on MSgg agar plates, a chemically defined medium specifically formulated to support biofilm. We observed that deletion of tasA resulted in pleiotropic effects on the overall gene expression profile of this mutant (Fig. 2A and Supplementary Fig. 3A), with 601, 688, and 333 induced genes and 755, 1077, and 499 repressed genes at 24, 48, and 72 h, respectively (Supplementary Fig. 3A). A closer look at the data allowed us to cluster the expression of different genes into groups with similar expression profiles over the time course (Fig. 2A). In general, four different expression profiles were found in which genes show positive (profiles 1 and 2) or negative (profiles 3 and 4) variations from 24 h to 48 h, and genes with expression levels that remain either stable (profiles 1 and 3) or altered (profiles 2 and 4) from 48 h to 72 h (Fig. 2A). Profiles 1 and 2 included genes related to the SOS response (profile 1), transcription and replication (profile 1 and 2), purine biosynthetic process (profile 2), and toxin-antitoxin systems (profile 2). Profiles 3 and 4 included genes related to sporulation (profiles 3 and 4), cellular metabolism in general (profile 3) and lipids (profile 3), carbohydrates (profile 3 and 4), monosaccharides (profile 3), polysaccharides (profile 4), or peptidoglycans (profile 3) in particular. 
These gene expression profiles reflect a general picture that suggests: (i) the existence of cellular stress and DNA damage, in which the cells needs to fully activate different sets of genes to cope with and compensate for the damage and maintain viability, (ii) a decrease in the overall cellular energy metabolism, and (iii) strong repression of the sporulation pathway. To study the observed alterations in gene expression in ΔtasA cells, the differentially expressed genes at all the time points were classified into their different regulons. Indeed, the sigK, sigG, gerR, and gerE regulators (Supplementary Data 2 and 3), which control the expression of many of the genes related to sporulation, were repressed in the ΔtasA cells from 48 h (Fig. 2B and Supplementary Fig. 3B), consistent with the delayed sporulation defect previously reported in ECM mutants10,23 (Supplementary Fig. 4). In contrast, the expression levels of biofilm-related genes, including the epsA-O, and tapA operons, were higher in the ΔtasA cells at all times compared to their expression levels in WT cells (Fig. 2B and Supplementary Fig. 3B) (Supplementary Data 1–3). We found repression of sinR at 24 h (Supplementary Data 1), induction of the slrR transcriptional regulator at all times (Supplementary Data 1–3), and repression of the transition state genes transcriptional regulator abrB at 24 h and 48 h (Supplementary Data 1 and Supplementary Data 2), which could explain the induction of the ECM-related genes7. Fig. 2: The tasA mutant displays major gene expression changes. a Heatmap and gene profiles (1–4) of genes with similar expression patterns. Genes in the same profile are categorized (right) according to their gene ontology (GO) terms. Each gene profile represents statistically significant genes. Statistical significance was assessed by a χ2 test comparing the number of clustered genes with the expected theoretical number of genes if gene distribution were random. In the heatmap, induced genes are colored in dark gray and repressed genes in light gray. b Differentially expressed genes at 72 h clustered into different regulons. The bigger circles indicate de main regulator of that regulon, which is surrounded by arrows pointing to smaller circles that are the differentially expressed genes. The thickness of the arrows indicates expression levels. The color in the arrows indicates induction (red) or repression (green). The analysis of the transcriptional changes in the tasA mutant cells highlighted the broad metabolic rearrangements that take place in ΔtasA colonies from 24 h to 72 h, including the expression alteration of genes implicated in energy metabolism, secondary metabolism and general stress, among other categories (Supplementary Data 1–3, Fig. 2B and Supplementary Fig. 3B). First, the alsS and alsD genes, which encode acetolactate synthase and acetolactate decarboxylase, respectively, were clearly induced at all times (Supplementary Data 1–3). This pathway feeds pyruvate into acetoin synthesis, a small four-carbon molecule that is produced in B. subtilis during fermentative and overflow metabolism43. Additionally, we found induction of several regulators and genes that are involved in anaerobic respiration and fermentative metabolism. The two-component regulatory system resD and resE, which senses oxygen limitation, and their target genes44, were induced in ΔtasA cells at 24 h and 48 h (Supplementary Data 1 and 2 and Supplementary Fig. 3B). 
Consistently, induction of the transcriptional regulator fnr and the anaerobic respiration related genes narGHIJK, which encode the nitrate reductase complex, as well as all of the proteins required for nitrate respiration were induced at 24 h and 48 h (Supplementary Data 1 and 2 and Supplementary Fig. 3B). Second, we observed induction at all times of the genes involved in fengycin biosynthesis (Supplementary Data 1–3), consistent with the overproduction of this antifungal lipopeptide in planta (Fig. 1E), genes involved in the biosynthesis of surfactin, subtilosin, bacilysin, and bacillaene (all secondary metabolites with antimicrobial activities45,46,47,48) (Fig. 2B, Supplementary Fig. 3B and Supplementary Data 1–3), as well as the operon encoding the iron-chelating protein bacillibactin (dhbACEBF) (Supplementary Data 1–3). The induction of all of these genes is possibly due to the repression of transcriptional repressors of transition state genes that occurs at 24 h and 48 h, e.g., abrB (which controls activation of the genes involved in the synthesis of fengycin, bacilysin, subtilosin, and bacillaene) and abh (which contributes to the transcriptional control of the genes involved in surfactin production) (Supplementary Data 1 and 2). The transcriptional changes of other regulators, such as resD (for subtilosin) or comA (for surfactin), both upregulated at 24 h and 48 h (Supplementary Data 1 and 2), also contribute to the induction of the genes that participate in the synthesis of all of these secondary metabolites and might explain their overall activation at 72 h (Fig. 2B). Finally, the gene encoding the regulator AscR was induced at 48 h and 72 h. AscR controls transcription of the snaA (snaAtcyJKLMNcmoOcmoJIrbfKsndAytnM) and yxe (yxeKsnaByxeMyxeNyxeOsndByxeQ) operons which are induced at all times (Supplementary Data 1–3). The products of these operons are members of alternative metabolic pathways that process modified versions of the amino acid cysteine. More specifically, the products of the snaA operon degrade alkylated forms of cysteine that are produced during normal metabolic reactions due to aging of the molecular machinery40. The yxe operon is implicated in the detoxification of S-(2-succino)cysteine, a toxic form of cysteine that is produced via spontaneous reactions between fumarate and cellular thiol groups in the presence of excess nutrients, which subsequently leads to increased bacterial stress49,50. Additional signs of excess cellular stress in the ΔtasA cells were: (i) the strong overexpression of the sigma factor SigB (σB) at 24 h and 72 h (Fig. 2B, Supplementary Fig. 3B and Supplementary Data 1 and 3), which controls the transcription of genes related to the general stress response36, and (ii) the repression at 24 h of lexA (Supplementary Data 1), a transcriptional repressor of the SOS response regulon, as well as the induction of other genes that confer resistance to different types of stress, i.e., ahpC and ahpF (induced at all times, Supplementary Data 1–3) against peroxide stress or liaH and liaI (induced at 48 h, Supplementary Data 2), which confer resistance to cell wall antibiotics. Indeed, ~41% of the SigB-regulated genes are induced at 24 h (Supplementary Fig. 3B and Supplementary Data 1), and these genes are involved in multiple and different functions, including protease and chaperone activity, DNA repair or resistance against oxidative stress. 
At 72 h, ~10% of the genes of the SigB regulon were still upregulated, suggesting the existence of cellular stress during colony development (Fig. 2B and Supplementary Data 3). Furthermore, the activation of the SOS response points toward the existence of DNA damage in ΔtasA cells, another sign of stress, with induction of uvrA (at 24 and 72 h, Supplementary Data 1 and 3) and uvrB (at 24 h, Supplementary Data 1), both of which are involved in DNA repair. The presence of DNA damage in ΔtasA cells is further indicated by the induction of almost all of the genes belonging to the lysogenic bacteriophage PBSX at 72 h, a feature that has been reported to occur in response to mutations as well as to DNA or peptidoglycan damage51,52 (Fig. 2B and Supplementary Data 3). In general, the transcriptional changes observed in the ΔtasA cells illustrate an intrinsic major physiological change that progresses over time and suggest the accumulation of excessive cellular stress. These changes result in the early entry of the cells into stationary phase, indicated by the state of the ΔtasA colony at 72 h compared to the WT (Fig. 2B) and supported by increased expression levels of genes related to: (i) biofilm formation (ii) synthesis of secondary metabolites (siderophores, antimicrobials, etc.); (iii) anaerobic respiration, fermentative metabolic pathways, and overflow metabolism; (iv) paralogous metabolism and assimilation of modified or toxic metabolic intermediates; (v) general stress and DNA damage; and (vi) induction of the lysogenic bacteriophage PBSX. ΔtasA cells exhibit impaired respiration and metabolic activity Our transcriptomic analysis suggested that ΔtasA cells exhibit a shift from aerobic respiration to fermentation and anaerobic respiration as well as activation of secondary metabolism, physiological features typical of stationary phase cells38,53. Based on the higher abundance of fengycin on leaves treated with ΔtasA cells and its key role in the antagonistic interaction between B. subtilis and fungal pathogens, we further investigated the kinetics of fengycin production in vitro. Flow cytometry analysis of cells expressing YFP under the control of the fengycin operon promoter demonstrated the induction of fengycin production in a subpopulation of cells (26.5%) at 48 h in the WT strain. However, more than half of the ΔtasA population (67.3%) actively expressed YFP from the fengycin operon promoter at this time point (Fig. 3A, top). At later stages of growth (72 h), the promoter was still active in the ΔtasA cells, and the population of positive cells was consistently higher than that in the WT strain (Fig. 3A, bottom). Mass spectrometry analysis of cell-free supernatants from WT or ΔtasA MOLP (a medium optimized for lipopeptide production) liquid cultures demonstrated that this expression level was sufficient for the tasA mutant cells to produce nearly an order of magnitude more fengycin (Fig. 3B, bottom spectrum) consistent with our findings in plants (Fig. 1E). Additionally, relatively higher levels of fengycin were detected in cells or agar fractions of ΔtasA colonies compared to WT colonies grown on solid MSgg, the medium used in all of our experimental settings (Supplementary Fig. 5, top and bottom spectra respectively). Similar results were obtained for the lipopeptide surfactin in these fractions (Fig. 3B, top spectrum, and Supplementary Fig. 5), consistent with our RNA-seq analysis (Supplementary Data 1–3). 
In agreement to these observations, in vitro experiments showed that the cell-free supernatants from ΔtasA cells exhibited antifungal activity against P. xanthii conidia equivalent to that of WT cells, even in highly diluted spent medium (Fig. 3C). These results confirm the robust antimicrobial potency of ΔtasA cells and imply that primary metabolic intermediates are diverted to different pathways to support the higher secondary metabolite production in the ΔtasA mutant cells. Fig. 3: ΔtasA cells produce larger amounts of fengycin. a Flow cytometry results of cells encoding the promoter of the fengycin production operon fused to YFP show that a higher percentage of ΔtasA cells (blue) expressed YFP compared with the percentage of YFP-expressing WT cells (red) at 48 h (top) and 72 h (bottom). A non-fluorescent negative control corresponding to the unlabeled WT strain at 72 h is shown (gray) in both experiments. b MALDI-TOF/TOF MS analysis of solid medium or spent MOLP medium after 72 h of growth showed higher fengycin levels in ΔtasA cultures (right) compared to that in WT cultures (left). c Serial dilutions of spent medium after 72 h of incubation deposited over infected leaf disks showed that the liquid medium from ΔtasA cultures retained as much antifungal activity as the medium from WT cultures. N = 5 independent experiments. In each experiment, three leaf disks were examined. Average values are shown. Error bars represent the SEM. Source data are provided as a Source Data file. Consistent with these findings, we observed two complementary results that indicate less efficient metabolic activity in ΔtasA cells compared to that in WT cells: first, the induction at 24 h and 48 h of the genes responsible for the synthesis of the anaerobic respiration machinery (Supplementary Data 1 and 2, and Supplementary Fig. 3B) mentioned above, and second, the differential expression at 72 h of the nasD and nasF genes (parts of the anaerobic respiration machinery) and the differential expression of genes at all times encoding several terminal oxidases found in the electron transport chain (Supplementary Data 1–3). The analysis of the respiration rates of these strains using the tetrazolium-derived dye 5-cyano-2,3-ditolyl tetrazolium chloride (CTC) and flow cytometry revealed a higher proportion of ΔtasA cells with lower respiration rates at 24 h and 72 h compared to the WT proportions (69.10% vs. 43.07% at 24 h and 74.56 vs. 65.11% at 72 h, respectively) (Supplementary Table 1 and Fig. 4A). Second, the expression levels of the alsSD genes, which are responsible for the synthesis of acetoin (a metabolite produced by fermentative pathways) were higher in the ΔtasA strain than in the WT strain at all times (Supplementary Data 1–3). Indeed, all of the factors required for acetoin synthesis from pyruvate were overexpressed at 72 h, whereas some key factors involved in the divergent or gluconeogenetic pathways were repressed (Supplementary Data 1–3 and Supplementary Fig. 6). Expression of alsS and alsD is induced by acetate, low pH and anaerobiosis43,54,55. Acetoin, in contrast to acetate, is a neutral metabolite produced to manage the intracellular pH and to ameliorate over-acidification caused by the accumulation of toxic concentrations of acetate or lactate, and its production is favored during bacterial growth under aerobic conditions56. Reduced respiration rates typically result in the accumulation of higher cellular proton concentrations, which leads to cytoplasmic acidification. 
These observations led us to postulate that the activation of the alsSD genes and the lower respiration rates observed in ΔtasA colonies might also reflect acidification of the intracellular environment, a potential cause of stationary phase-related stress. Measurements of the intracellular pH levels using the fluorescent probe 5-(6)carboxyfluorescein diacetate succinimidyl ester confirmed a significant decrease in the intracellular pH of nearly one unit (−0.92 ± 0.33,) in ΔtasA cells at 72 h (Fig. 4B) compared to that in WT cells. Fig. 4: Respiration rates and cell viability are compromised in ΔtasA cells. a Flow cytometry density plots of cells double stained with the HPF (Y axis) and CTC (X axis) dyes show that ΔtasA cells were metabolically less active (lower proportion of cells reducing CTC) and were under oxidative stress as early as 24 h (higher proportion of HPF-stained cells). b Measurements of intracellular pH show significant cytoplasmic acidification in the ΔtasA cells at 72 h. Average values of four biological replicates are shown. Error bars indicate the SEM. Statistical significance was assessed by one-way ANOVA with Tukey multiple comparison test (*p < 0.05). c The population dynamics in ΔtasA (dashed line) and WT colonies (solid line) grown on MSgg agar at 30 °C showed a difference of nearly one order of magnitude in the ΔtasA colony from 48 h. Average values of three biological replicates are shown. Error bars represent the SEM. Statistical significance was assessed by two-sided independent t-tests at each time point (**p value = 0.0084 *p value = 0.0410). d Left. Quantification of the proportion of dead cells treated with the BacLight LIVE/DEAD viability stain in WT and ΔtasA colonies at different time-points reveled a significantly higher population of dead cells at 48 h and 72 h in ΔtasA colonies compared to that found in the WT colonies. N = 5 colonies of the corresponding strains examined over three independent experiments. Average values are shown. Error bars represent the SEM. For each experiment and sample, at least three fields-of-view were measured. Statistical significance was assessed via two-tailed independent t-tests at each time-point (****p < 0.0001). Right. Representative confocal microscopy images of fields corresponding to LIVE/DEAD-stained WT or ΔtasA samples at 72 h. Scale bars = 10 µm. Source data are provided as a Source Data file. Loss of TasA increases membrane fluidity and cell death The reduction in metabolic activity of ΔtasA cells, along with their acidification of the intracellular environment, might be expected to result in reduced bacterial viability. Measurements of the dynamics of viable bacterial cell density, expressed as CFU counts, showed that after 48 h, ΔtasA colonies possessed nearly an order of magnitude fewer CFUs than did WT colonies (Fig. 4C). These results suggest the hypothesis that ΔtasA colonies might exhibit higher rates of cell death than WT colonies. To test this possibility, we analyzed the live and dead sub-populations using the BacLight LIVE/DEAD viability stain and confocal microscopy (Fig. 4D right). The proportion of dead cells in ΔtasA colonies ranged from between 16.80% (16.80 ± 1.17) and 20.06% (20.06 ± 0.79) compared to 4.45% (4.45 ± 0.67) and 3.24% (3.24 ± 0.51) found in WT colonies at 48 h and 72 h, respectively (Fig. 4D, left). The significantly higher rate of cell death in ΔtasA compared to WT is consistent with the drastically lower bacterial counts found in the ΔtasA mutant colonies after 48 h. 
To rule out the influence of media composition on the observed phenotype, we performed the same experiments on solid LB medium, on which B. subtilis can still form a biofilm, as reflected by the wrinkly phenotype of the colonies (Supplementary Fig. 7A). We found that ΔtasA colonies exhibited a significantly higher proportion of dead cells at 48 h (17.86 ± 0.92) than did WT colonies (3.88 ± 0.33) (Supplementary Fig. 7B). Interestingly, the higher rate of cell death exhibited by the tasA mutant was not reproducible when both strains were grown in liquid MSgg with shaking, conditions that promote planktonic growth. WT and ΔtasA cultures showed similar growth rates under these conditions (Supplementary Fig. 7C), and the proportion of cell death was measured in exponential (ΔtasA 0.32 ± 0.03 vs WT 0.78 ± 0.30) or stationary phase cultures (ΔtasA 2.31 ± 0.44 vs WT 0.56 ± 0.08) (Supplementary Fig. 7D), indicating that the lower viability of ΔtasA cells is observable when biofilms form on solid media. The impaired respiration rates and the acidification of the cellular environment found in the ΔtasA cells are causes of cellular stress that can lead to ROS generation57,58, a well-known trigger of stress-induced cell death59. To determine if ΔtasA cells possess abnormal ROS levels, we monitored ROS generation using hydroxyphenyl fluorescein (HPF), a fluorescent indicator of the presence of highly reactive hydroxyl radicals. Flow cytometry analysis revealed a larger proportion of HPF-positive cells (which have increased ROS levels) in the ΔtasA strain at 24 h compared to the WT proportion (42.38% vs. 28.61%, respectively) (Fig. 4A and Supplementary Table 1). To test whether this higher ROS production has negative effects on cellular components and functions, we first performed TUNEL assays to fluorescently stain bacterial cells containing DNA strand breaks, a known hallmark of the cell death induced by cellular damage and a frequent outcome of ROS production. At 24 h and 48 h, we found a significantly higher number of fluorescently stained ΔtasA cells compared with the number of fluorescently stained WT cells (Fig. 5A, left and 5B, left). These results indicated that DNA damage appears to occur not only earlier, but also with a higher frequency, in ΔtasA cells than in WT cells. A sizeable number of stained cells was also found at 72 h in the ΔtasA colonies, the same time-point at which the TUNEL signal started to increase in the WT colonies (Fig. 5A left). The TUNEL signal in the ΔtasA cells at this time-point was not significantly different from that of the WT cells (Fig. 5B left), probably due to the increased cell death in the ΔtasA cells. Fig. 5: The ΔtasA cells exhibit higher levels of DNA damage. a CLSM analysis of TUNEL assays revealed significant DNA damage in the ΔtasA cells (bottom panels) compared to that in the WT cells (top panels). Cells were counterstained with DAPI DNA stain (top images). Scale bars = 5 µm. Experiments have been repeated at least three times with similar results. b Quantification of the TUNEL signals in WT and ΔtasA colonies. The results showed significant differences in the DNA damage levels between ΔtasA and WT cells after 24 and 48 h of growth. For the WT strain, N = 5 at 24 h, N = 8 at 48 h and N = 7 at 72 h. For the ΔtasA strain N = 5 at all times. N refers to the number of colonies examined over three independent experiments. For each experiment and sample, at least three fields-of-view were measured. Error bars indicate the SEM. 
Statistical significance was assessed via two-tailed independent t-tests at each time-point (**p value = 0.061 ***p value = 0.002). Source data are provided as a Source Data file. Next, we examined the cellular membrane potential, another phenotype related to cell death, using the fluorescent indicator tetramethylrhodamine, methyl ester (TMRM). Consistent with all previous analysis, the alterations in the membrane potential of the ΔtasA cells were significantly different at all time points compared with the corresponding values for the WT cells (Fig. 6A left panel and 6B left). As a control, ΔtasA cells after 72 h of growth and treated with carbonyl cyanide m-chlorophenyl hydrazine (CCCP), a chemical ionophore that uncouples the proton gradient and can depolarize the membrane, showed a strong decrease in the fluorescence signal (Supplementary Figs. 8A, B). These results indicate that after 48 h (the same time point at which the cell death rate increases and the cell population plateaus in ΔtasA colonies) ΔtasA cells also exhibit increased membrane hyperpolarization compared with that in the WT cells, a feature that has been linked to mitochondrial-triggered cell death in eukaryotic cells60,61,62. Fig. 6: ΔtasA cell membrane exhibit cytological anomalies. a Left panel. A TMRM assay of WT and ΔtasA cells, located at the top or bottom respectively in each set, showed a decrease in membrane potential in the WT cells, whereas the ΔtasA cells exhibited hyperpolarization at 48 and 72 h. Center panel. Assessment of the lipid peroxidation levels using BODIPY 581/591 C11 reagent in WT and ΔtasA cells after treatment with 5 mM CuHpx and analysis by CLSM. The ratio images represent the ratio between the two states of the lipid peroxidation sensor: reduced channel(590–613 nm emission)/oxidized channel(509–561 nm emission). The ratio images were pseudo-colored depending on the reduced/oxidized ratio values. A calibration bar (from 0 to 50) is located at the bottom of the panel. Confocal microscopy images show that CuHpx treatment was ineffective in the WT strain at 72 h, whereas the mutant strain showed symptoms of lipid peroxidation. Right panel. Laurdan GP analyzed via fluorescence microscopy. The images were taken at two different emission wavelengths (gel phase, 432–482 nm and liquid phase, 509–547 nm) that correspond to the two possible states of the Laurdan reagent depending on the lipid environment. The Laurdan GP images represent the Laurdan GP value of each pixel (see Methods section). The Laurdan GP images were pseudo-colored depending on the laurdan GP values. A calibration bar (from 0 to 1) is located at the bottom of the set. The Laurdan GP images show an increase in membrane fluidity (lower Laurdan GP values) in the tasA mutant cells at 48 and 72 h. All scale bars are equal to 5 µm. b Left. Quantification of the TMRM signal. N = 3. Center. Quantification of lipid peroxidation. N = 3. For the WT at 48 h, N = 4. Right. Quantification of laurdan GP values. For the WT strain, N = 6 at 48 h and N = 4 at 72 h. For the ΔtasA strain, N = 5 at all times. N refers to the number of colonies examined over three independent experiments. Average values are shown. Error bars represent the SEM. For each experiment and sample, at least three fields-of-view were measured. Statistical significance in the TMRM experiments was assessed via two-tailed independent t-tests at each time-point (****p < 0.0001). 
Statistical significance in the lipid peroxidation experiments was assessed via two-tailed independent t-tests at each time-point (****p < 0.0001, *p value = 0.0115). Statistical significance in the Laurdan GP experiments was assessed via two-tailed Mann–Whitney tests at each time-point (**p value = 0.0087 at 48 h and **p value = 0.0095 at 72 h). Source data are provided as a Source Data file.

The differences in ROS production, DNA damage level and membrane hyperpolarization between the WT and ΔtasA cell populations are consistent with increased cellular stress being the cause of the higher cell death rate observed in ΔtasA colonies after 24 h. To test the idea that loss of tasA results in increased cellular stress that leads to abnormal cellular physiology and increased cell death, we investigated the level of membrane lipid peroxidation, a chemical modification derived from oxidative stress that subsequently affects cell viability by inducing toxicity and apoptosis in eukaryotic cells63,64. Staining with BODIPY 581/591 C11, a fluorescent compound that is sensitive to the lipid oxidation state and localizes to the cell membrane, showed no significant differences in the levels of lipid peroxidation at any time point (Supplementary Fig. 16). However, treatment with cumene hydroperoxide (CuHpx), a known inducer of lipid peroxide formation65, resulted in different responses in the two strains. WT cells showed high reduced/oxidized ratios at 48 and 72 h and, thus, a low level of lipid peroxidation (Fig. 6A, center panel and Fig. 6B, center). In contrast, the comparatively lower reduced/oxidized ratios in ΔtasA cells at 48 and 72 h indicated increased lipid peroxidation. These results demonstrate that the ΔtasA strain is less tolerant to oxidative stress than is the WT strain, and, therefore, is more susceptible to ROS-induced damage. This finding, along with the increased ROS production in ΔtasA cells, led us to study the integrity and functionality of the plasma membrane. First, no clear differences in the integrity, shape, or thickness of the cell membrane or cell wall were observed via transmission electron microscopy (TEM) of negatively stained thin sections of embedded ΔtasA or WT cells at 24 h and 72 h under our experimental conditions (Supplementary Fig. 9). Next, we examined membrane fluidity, an important functional feature of biological membranes that affects their permeability and binding of membrane-associated proteins, by measuring the Laurdan generalized polarization (Laurdan GP)63,66. Our results show that the Laurdan GP values were significantly lower at 48 h and 72 h in ΔtasA cells compared with the values in WT cells (0.65 ± 0.03 or 0.82 ± 0.03 respectively, at 48 h, and 0.87 ± 0.006 or 0.73 ± 0.007 respectively, at 72 h) (Fig. 6A, right panel and Fig. 6B, right). These results indicate an increase in membrane fluidity, comparable to that resulting from treatment of cells with benzyl alcohol, a known membrane fluidifier (Supplementary Fig. 10A, top and center panels, B). Membrane fluidity has been associated with higher ion, small molecule, and proton permeability67,68, which might contribute to the higher concentration of fengycin found in the cell-free supernatants of ΔtasA cultures (Fig. 3B). These effects could also explain why ΔtasA cells are impaired in energy homeostasis, as well as the subsequent effects on the intracellular pH and membrane potential that eventually contribute to cell death.
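For reference, the Laurdan GP values discussed above follow the conventional generalized polarization definition; written as a sketch using the two emission windows named in the Fig. 6 legend (the per-pixel implementation is described in the Methods section), the calculation is of the form:

$$\mathrm{Laurdan\ GP} = \frac{I_{\mathrm{gel}} - I_{\mathrm{fluid}}}{I_{\mathrm{gel}} + I_{\mathrm{fluid}}}$$

where I_gel and I_fluid are the fluorescence intensities collected in the gel-phase (432–482 nm) and liquid-phase (509–547 nm) emission channels, respectively; lower GP values correspond to a more fluid membrane.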
TasA associates with detergent-resistant fractions of the membrane

The negative effects on membrane potential and fluidity observed in the ΔtasA cells suggest alterations in membrane dynamics, which in bacterial cells are directly related to functional membrane microdomains (FMMs); FMMs are specialized membrane domains that also regulate multiple important cellular functions69,70,71,72. The bacterial flotillins FloT and FloA are localized in FMMs and are directly involved in the regulation of membrane fluidity69. This line of evidence led us to propose a connection between the membrane fluidity and permeability of ΔtasA cells and changes in the FMMs. We initially studied the membrane distribution of FloT as a marker for FMMs in WT and ΔtasA cells using a FloT-YFP translational fusion construct and confocal microscopy (Fig. 7A). The WT strain showed the typical FloT distribution pattern, in which the protein is located within the bacterial membrane in the form of discrete foci73 (Fig. 7A, top). However, in the ΔtasA cells, the fluorescent signal was visible only in a subset of the population, and the normal distribution pattern was completely lost (Fig. 7A, bottom). In agreement with these findings, quantification of the fluorescent signal in WT and ΔtasA samples showed significant decreases in the signal in the ΔtasA mutant cells at 48 and 72 h (Fig. 7B). Consistently, our RNA-seq data showed fluctuations in the floT expression levels at all times (Supplementary Data 1–3).

Fig. 7: TasA is located in the DRM fraction of the cell membrane. a Representative confocal microscopy images showing WT or ΔtasA cells expressing the floT-yfp construct at 72 h. WT images show the typical punctate pattern associated with FloT. This pattern is lost in ΔtasA cells. b Quantification of fluorescence signal in WT (N = 5 at 48 h and N = 4 at 72 h) and ΔtasA (N = 3 at 48 h and N = 4 at 72 h) samples. N indicates the number of colonies examined over three independent experiments. Average values are shown. Error bars represent the SEM. Statistical significance was assessed via two-tailed independent t-tests at each time-point (***p value = 0.0002, ****p < 0.0001). For each experiment and sample, at least three fields-of-view were measured. c Western blot of different membrane fractions exposed to anti-TasA or anti-YFP antibodies. Both antibodies have been used with the same set of samples in two independent immunoblots. Experiments have been repeated at least three times with similar results. Immunoblot images have been cropped (top image) or cropped and spliced (bottom image) for illustrative purposes. Black lines over the blot images delineate boundaries of immunoblot splicing. The three slices shown are derived from a single blot. Raw images of the blots presented in this figure can be found in the source data. Source data are provided as a Source Data file.

The alterations in floT expression and the loss of the normal FloT distribution pattern in the cell membrane that occurs in the ΔtasA mutant cells led us to consider the presence of TasA in FMMs. Membranes from both prokaryotes and eukaryotes can be separated into detergent-resistant (DRM) and detergent-sensitive (DSM) fractions based on their solubility in detergent solutions73.
Although it is important to point out that the DRM and FMMs (or lipid rafts in eukaryotes) are not equivalent, the DRM fraction has a differential lipid composition and is enriched with proteins, rendering it more resistant to detergents; furthermore, many of the proteins present in FMMs are also present in the DRM74. Immunodetection assays of the DRM, DSM, and cytosolic fractions of each strain using an anti-TasA antibody showed the presence of anti-TasA reactive bands of the expected size primarily in the DRM fraction and in the cytosol (Fig. 7C, top). As controls, the fractions from the tasA mutant showed no signal (Fig. 7C, top). Western blots of the same fractions isolated from WT and ΔtasA strains carrying a FloT-YFP translational fusion with an anti-YFP antibody (Fig. 7C, bottom) confirmed that FloT was mainly present in the DRM of WT cells (Fig. 7C, bottom, lane 1). The signal was barely noticeable in the same fraction from ΔtasA cells (Fig. 7C, bottom, lane 4), mirroring the reduced fluorescence levels observed via microscopy (Fig. 7A), and consistent with the RNA-seq data. These results confirm that TasA is indeed associated with the DRM fraction of the cell membrane. Furthermore, we asked whether the loss of FloT foci is somehow related to the increased cell death observed in the absence of TasA. We used a LIVE/DEAD viability stain in a ΔfloT colony and in a ΔfloTfloA colony, a double mutant for the two flotillin-like proteins in the B. subtilis genome. The results show no significant differences in the proportion of cell death compared to the WT strain at 48 h and 72 h (Supplementary Fig. 11). These experiments demonstrate that the increased cell death is not caused by loss of the FloT distribution pattern that occurs in the tasA mutant. Altogether, these results allow us to conclude that TasA is located in the DRM fraction of the cell membrane where it contributes to membrane stability and fluidity, and that its absence leads to alterations in membrane dynamics and functionality, eventually leading to cell death.

Mature TasA is required to maintain viable bacterial physiology

TasA is a secreted protein located in the ECM and additionally found associated with the DRM fraction of the cell membrane (Fig. 7C). Reaching these sites requires the aid of secretion-dedicated chaperones, the translocase machinery and the membrane-bound signal peptidase SipW75. It is known that TasA processing is required for assembly of the amyloid fibrils and biofilm formation18,76. However, formation of the mature amyloid fibril requires the accessory protein TapA, which is also secreted via the same pathway19, is present in the mature amyloid fibers and is found on the cell surface76. Considering these points, we first wondered whether TapA is involved in the increased cell death observed in the ΔtasA mutant. By applying the BacLight LIVE/DEAD viability stain to a ΔtapA colony, we found a proportion of live to dead cells similar to that found in the WT colony at 72 h (Fig. 8A), suggesting that the tapA mutant does not exhibit the cytological alterations and cellular damage that occur in ΔtasA cells. ΔtapA cells produce a much lower number of TasA fibers but still expose TasA on their surfaces76; thus, we reasoned that mature TasA is necessary for preserving the cell viability levels observed in the WT strain. To test this possibility, we constructed a strain bearing a mutation in the part of the tasA gene that encodes the TasA signal peptide77.
To avoid confounding effects due to expression of the mutated tasA gene in the presence of the endogenous operon, we performed this analysis in a strain in which the entire tapA operon was deleted and in which the modified operon encoding the mutated tasA allele was inserted into the neutral lacA locus. The strain carrying this construct was designated "TasA SiPmutant" (for signal peptide mutant) and carried three amino acid substitutions of the initial lysines of the signal peptide: Lys4Ala, Lys5Ala, and Lys6Ala. The endogenous version of TasA successfully restored biofilm formation (Supplementary Fig. 12A), while the phenotype of the SiP mutant on MSgg medium at 72 h was different from those of both the WT and tasA mutant strains (Fig. 8B and Supplementary Fig. 2B). Immunodetection analysis of TasA in fractionated biofilms confirmed the presence of TasA in the cell and ECM fractions from the WT strain and the strain expressing the endogenous version of tasA (Fig. 8C). However, a faint anti-TasA reactive signal was observed in both fractions of the SiP mutant (Fig. 8C). This result indicates that TasA is not efficiently processed in the SiP mutant and, thus, the protein levels in the ECM were drastically lower. The faint signal detected in the cell fraction might be due to the fact that the pre-processed protein is unstable in the cytoplasm and is eventually degraded over time77. Consistent with our hypothesis, the levels of cell death in the SiP mutant were significantly different from those of the WT strain (Fig. 8A, D). Taken together, these results downplay the relevance of TapA to the increased cell death observed in the absence of TasA and indicate that TasA must be processed to preserve the level of cell viability found in WT colonies.

Fig. 8: Mature TasA is required to stabilize cell viability within the colony. a Representative confocal microscopy images of fields corresponding to LIVE/DEAD-stained WT, ΔtapA or signal peptide mutant (SiPmutant, Lys4Ala, Lys5Ala, Lys6Ala) cells at 72 h. Scale bars = 10 µm. b Colony phenotypes of WT, ΔtasA and the SiPmutant strains on MSgg agar at 72 h. Scale bars = 1 cm. c Western blot of the cell and matrix fractions of the three strains at 72 h exposed to an anti-TasA antibody. Experiments have been repeated at least three times with similar results. Immunoblot images have been cropped and spliced for illustrative purposes. Black lines over the blot images delineate boundaries of immunoblot splicing. The three slices shown are derived from a single blot. Raw images of the blots presented in this figure can be found in the source data file available with this manuscript. d Quantification of the proportion of dead cells in WT (N = 5), ΔtapA (N = 5), or SiPmutant (N = 10) colonies at 72 h. N refers to the number of colonies examined over three independent experiments. The WT data in Fig. 4 is from the same experiment as the data displayed in this figure and has been used as a control for the comparison between the WT, ΔtapA and SiPmutant colonies. Error bars represent the SEM. For each experiment and sample, at least three fields-of-view were measured. Statistical significance was assessed via two-tailed independent t-tests at each time-point (****p < 0.0001). Source data are provided as a Source Data file.
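Pairwise comparisons such as the dead-cell proportions in Fig. 8D are described throughout as two-tailed independent t-tests. As a minimal sketch of how such a comparison could be computed with SciPy (the per-colony values below are placeholders, not the source data), one could write:

```python
from scipy import stats

# Placeholder per-colony dead-cell percentages at 72 h (illustrative values only,
# not the source data; the real analysis used the colony counts described above)
wt_colonies = [2.1, 3.0, 2.6, 3.4, 2.8]                      # e.g., N = 5 WT colonies
sip_colonies = [12.5, 15.2, 13.8, 16.1, 14.0,
                13.2, 15.7, 14.9, 12.9, 15.5]                 # e.g., N = 10 SiP colonies

# Two-tailed independent (two-sample) t-test, as stated in the figure legends
t_stat, p_value = stats.ttest_ind(wt_colonies, sip_colonies)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.3g}")
```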
A TasA variant restores cellular physiological status but not biofilm

The fact that the ΔtapA strain forms fewer and altered TasA fibers but shows normal cell death rates, together with the increased membrane fluidity, the changes in expression, and the loss of the normal distribution pattern of the flotillin-like protein FloT in the ΔtasA strain, led us to hypothesize that the TasA in the DRM, and not that in the ECM, is responsible for maintaining the normal viability levels within the WT colonies. To explore this hypothesis, we performed an alanine scanning experiment with TasA to obtain an allele encoding a stable version of the protein that could support biofilm formation. To produce these constructs, we used the same genetic background described in the above section. The strain JC81, which expresses the TasA (Lys68Ala, Asp69Ala) variant protein, failed to fully restore the WT biofilm formation phenotype (Fig. 9A, Supplementary Figs. 2B and 12A). Immunodetection analysis of TasA in fractionated biofilms confirmed the presence of the mutated protein in the cells and in the ECM (Fig. 9B, left and Supplementary Fig. 12B). Tandem mass spectrometry analysis revealed that the mutated protein found in the ECM corresponded to the mature form of TasA (Supplementary Fig. 13A, left and Supplementary Fig. 13A, right), indicating that the defect lies exclusively in the protein's structural role in proper ECM assembly. Electron microscopy coupled to immunodetection with anti-TasA and immunogold-conjugated secondary antibodies showed the presence of a dense mass of extracellular material in JC81 cells with an absence of well-defined TasA fibers, as opposed to WT cells, in which we also observed a higher number of gold particles, indicative of the higher reactivity of the sample (Supplementary Fig. 13B, left and center panels). The cell membrane fractionation analysis revealed, however, the presence of mutated TasA in the DRM, DSM and cytosolic fractions (Fig. 9B, right). Accordingly, JC81 reverted to a physiological status comparable to that of the WT strain. This feature was demonstrated by similar expression levels of genes encoding factors involved in the production of secondary metabolites (i.e., ppsD, albE, bacB, and srfAA) or acetoin (alsS), indicating comparable metabolic activities between the two strains (Fig. 9C). Further evidence confirmed the restoration of the metabolic status in JC81. First, similar proportions of WT and JC81 cells expressing YFP from the fengycin operon promoter were detected after 72 h of growth via flow cytometry analysis (Fig. 9D, green curve). In agreement with these findings, there were no differences in the proportions of cells respiring or accumulating ROS or in the intracellular pH values between the JC81 and WT strains (Figs. 9E and 10A). Consistently, the population dynamics of JC81 resembled that of the WT strain (Fig. 10B), and, as expected, its level of cell death was comparable to that of the WT strain (Fig. 10C). Finally, there were no differences in any of the examined parameters related to oxidative damage and stress-induced cell death (i.e., DNA damage, membrane potential, susceptibility to lipid peroxidation, and membrane fluidity) between JC81 and WT cells (Supplementary Figs. 14 and 15, respectively), and the mutated allele complemented the sporulation defect observed in the ECM mutants (Supplementary Fig. 4).
To further confirm these results, we performed a viability assay in a mixed ΔtasA and Δeps colony co-inoculated at a 1:1 ratio, and we found that, despite the ability of the mixed colony to rescue the wrinkly phenotype typical of a WT colony (Supplementary Fig. 17A, top), the proportion of cell death was significantly higher than that observed in WT cells at 48 h (10.39 ± 1.20) and 72 h (14.04 ± 0.72) (Supplementary Fig. 17B). In addition, exogenous TasA did not revert the colony morphology phenotype of ΔtasA cells on solid MSgg (Supplementary Fig. 18A) or the increased cell death rate observed in the ΔtasA strain (Supplementary Fig. 18B). These results show that the extracellular TasA provided by the Δeps strain is sufficient to complement the ECM assembly and biofilm formation defects but not to prevent cell death, similar to the effects of exogenous TasA supplementation. Thus, TasA must be produced by the cells to reach the cell membrane and exert this function. Interestingly, a ΔsinI strain, which lacks the SinI anti-repressor that inhibits SinR and, therefore, shows strong repression of ECM genes and is unable to assemble biofilms (Supplementary Fig. 17A, bottom), showed levels of cell death similar to those of the WT strain at all times (1.53 ± 0.25 at 48 h and 2.51 ± 0.50 at 72 h) (Supplementary Fig. 17B). This effect might reflect that even a basal amount of TasA78 in the cell membrane is sufficient to prevent cell death but insufficient to assemble a proper ECM, confirming that, indeed, cells lacking a structured ECM do not exhibit the physiological changes observed in cells lacking TasA.

Fig. 9: A TasA variant rescues cellular physiological status but not biofilm. a Colony phenotypes of the three strains on MSgg agar at 72 h. Scale bars = 1 cm. b Western blot of the cell (left) and membrane (right) fractions at 72 h exposed to an anti-TasA antibody. Experiments have been repeated at least three times with similar results. Immunoblot images have been cropped and spliced for illustrative purposes. Black lines over the blot images delineate boundaries of immunoblot splicing. Two independent immunoblot images are shown. The two slices shown in the left image are derived from a single blot. The two slices shown in the right image are derived from a single blot. Raw images of the blots presented in this figure can be found in the source data file available with this manuscript. c Relative expression levels of ppsD, alsS, albE, bacB, and srfAA genes in JC81 compared to the WT strain. Average values of at least three biological replicates (N = 3 except for ppsD, albE, bacB, and srfAA in ΔtasA, where N = 4) are shown with error bars representing the SEM. d Flow cytometry analysis of cells expressing YFP from the promoter of the fengycin production operon in the WT, ΔtasA, and JC81 strains at 72 h. A non-fluorescent negative control corresponding to the unlabeled WT strain at 72 h is shown (gray). The flow cytometry data shown in Fig. 3 are from the same experiment as the data shown in this figure and are repeated here for comparative purposes with the data from strain JC81. e Density plots of cells double stained with the HPF (Y axis) and CTC (X axis) dyes show that JC81 behaved similarly to the WT strain.

Fig. 10: The expression of the TasA variant prevents the increase in cell death. a Intracellular pH measurements of the WT and JC81 strains. Average values of four biological replicates are shown. Error bars represent the SEM. b Population dynamics (CFU counts) in WT and JC81 colonies.
The WT data in Fig. 4C is from the same experiment as the data shown in this figure and is included here as a control in the comparison between the WT and JC81 strains. Average values of four biological replicates are shown. Error bars represent the SEM. c Left. Quantification of the proportion of dead cells in WT and JC81 colonies. N = 5 colonies of the corresponding strains examined over three independent experiments. Average values are shown. Error bars represent the SEM. For each experiment and sample, at least three fields-of-view were measured. Statistical significance was assessed via two-tailed independent t-tests at each time-point (***p value = 0.0003). The WT data in Fig. 4 is from the same experiment as the data displayed in this figure and has been used as a control for the comparison between the WT and JC81 colonies. Right. Representative confocal microscopy images of fields corresponding to LIVE/DEAD-stained WT or JC81 cells at 72 h. Scale bars = 10 µm. Source data are provided as a Source Data file.

Taken together, these findings assign TasA a complementary function: via its localization to the DRM fraction of the cell membrane, it contributes to cell membrane dynamics and cellular physiology during normal colony development and prevents premature cell death, a role beyond the well-known structural function of amyloid proteins in biofilm ECMs.

The TasA variant impairs B. subtilis fitness on the phylloplane

Our analysis of the intrinsic physiological changes in ΔtasA cells showed how the absence of TasA leads to the accumulation of canonical signs of cellular damage and stress-induced cell death, a physiological condition typical of stationary phase cells. These observations help to reconcile two a priori contradictory features of B. subtilis ecology on plant leaves: the reduced persistence of the ΔtasA mutant on the melon phylloplane versus its ability to efficiently exert biocontrol against the fungus P. xanthii, which occurs via overproduction of fengycin and other antimicrobial molecules. Following this line of thought, we predicted that the JC81 strain, which expresses a version of TasA that is unable to restore biofilm formation but preserves the physiological status of the cells, would show overall signs of reduced fitness on melon leaves. The JC81 cells retained their initial ability to adhere to melon leaves (Fig. 1A); however, their persistence decreased (Fig. 1B) and their colonization showed a pattern somewhat intermediate between those of the WT and ΔtasA strains (Fig. 1C, bottom images). In agreement with our prediction, the reduced fitness of this strain resulted in a failure to manage P. xanthii infection (Fig. 1D). Thus, we conclude that the ECM, by means of the amyloid protein TasA, is required for normal colonization and persistence of B. subtilis on the phyllosphere. These ecological features depend on at least two complementary roles of TasA: one role related to ECM assembly and a new proposed role in the preservation of the physiological status of cells via stabilization of membrane dynamics and the prevention of premature cell death.

The ECM provides cells with a myriad of advantages, such as robustness, colony cohesiveness, and protection against external environmental stressors7,23,24. Studies of B. subtilis biofilms have revealed that the ECM is mainly composed of polysaccharides14 and the proteins TasA and BslA7,15. TasA is a confirmed functional amyloid that provides structural support to the biofilm in the form of robust fibers16.
A recent study demonstrated that there is heterogeneity in the secondary structure of TasA; however, in biofilms, its predominant conformation is in the form of stable fibers enriched in β-sheets17. In this study, we demonstrate that in addition to its structural role in ECM assembly, TasA is also required for normal colony development, two functions that contribute to the full fitness of Bacillus cells on the phylloplane. The physiological alterations observed in the ΔtasA null strain reflect a process of progressive cellular deterioration characteristic of senescence79,80,81, including early activation of secondary metabolism, low energy metabolic activity, and accumulation of damaged molecular machinery that is required for vital functions. Indeed, it has been previously demonstrated that such metabolic changes can trigger cell death in other bacterial species, in which over-acidification of the cytoplasm eventually leads to the activation of cell death pathways55. Interestingly, cytoplasmic acidification due to the production of acetic acid has been linked to higher ROS generation and accelerated aging in eukaryotes82. As mentioned throughout this study, ROS generation leads to ongoing DNA damage accumulation, phospholipid oxidation, and changes in cell membrane potential and functionality, all of which are major physiological changes that eventually lead to declines in cellular fitness and, ultimately, to cell death83,84,85. The fact that we could restore the physiological status of tasA null mutant cells by ectopically expressing a mutated TasA protein incapable of rescuing biofilm formation permitted us to separate two roles of TasA: (i) its structural function, necessary for ECM assembly; and (ii) its cytological functions involved in regulating membrane dynamics and stability and preventing premature cell death. Our data indicate that this previously unreported function does not involve TasA amyloid fibers or its role in ECM assembly, and that it is more likely related to the TasA found in the DRM of the cell membrane, where the FMMs (like the lipid rafts of eukaryotic cells) are located. It is not unprecedented that amyloid proteins interact with functional domains within the cell membrane. In eukaryotic cells, for instance, it has been reported that lipid rafts participate in the interaction between the amyloid precursor protein and the secretase required for the production of the amyloid-β peptide, which is responsible for Alzheimer's disease86. Indeed, our results are supported by evidence that TasA can preferentially interact with model bacterial membranes, which affects fiber assembly87, and that TasA fibers are located and attached to the cell surface via a proposed interaction with TapA, which forms foci that seem to be present on the cell wall76. Interestingly, TapA has been recently characterized as a two-domain, partially disordered protein88. Disordered domains can be flexible enough to interact with multiple partners89,90, suggesting a similar mechanism for TapA: the N-terminal domain might be involved in the interaction with other protein partners, whereas the C-terminal disordered domain might anchor the protein to the cell surface. All of these observations led us to propose that TasA may drive the stabilization of the FMMs in the cell membrane either directly via interactions with certain phospholipids or indirectly via interactions with other proteins.
This model is further supported by the fact that the ΔtasA cells show alteration of the levels of FloT, a protein typically present in the FMMs, and loss of its normal distribution pattern (Fig. 7A), as well as induction of many genes that encode DRM components or other factors that interact with FloT alone or with FloT and FloA (Supplementary Data 1–3). Considering these novel findings, it is tempting to speculate that cells can regulate membrane dynamics by, among other processes, tuning the amount of TasA present in the membrane at a given moment, which would permit a better physiological response to different environmental cues. In particular, our results suggest that the membrane instability caused by the absence of TasA triggers a cascade of malfunctions in biological processes that eventually lead to cell death in a subset of the population. Given the differences in the expression levels of many genes involved in the respiratory process (Supplementary Data 1–3), this alteration in membrane stability might cause the impaired respiration observed in ΔtasA cells, which could increase ROS generation and lead to the full range of transcriptional and cytological alterations found in the tasA mutant over the course of time. The physiological alterations observed in the ΔtasA strain have ecological implications. The intrinsic stress affecting the mutant cells reduced their ability to survive in natural environments; however, paradoxically, their higher induction of secondary metabolism seemed to indirectly and efficiently target fungal pathogens. This could explain why ΔtasA cells, which show clear signs of stress, display efficient biocontrol properties against P. xanthii. However, the sharp time-dependent decrease in the ΔtasA population on leaves suggests that its antifungal production could be beneficial during short-term interactions, but insufficient to support long-term antagonism unless there is efficient colonization and persistence on the plant surface. In this scenario, the deletion of tasA has a strong negative effect on bacterial cells, as we have demonstrated that a ΔtasA strain is more susceptible to ROS-induced damage (Fig. 6A, center panel and 6B, center graph), especially in the phyllosphere, where microbial cells are continuously subjected to different types of stress, including oxidative stress91. We previously speculated that biofilm formation and antifungal production were two complementary tools used by Bacillus cells to efficiently combat fungi. Our current study supports this concept, but also enhances our understanding of the roles of the different ECM components. More specifically, we demonstrated that the amyloid protein TasA is the most important bacterial factor during the initial attachment and further colonization of the plant host, as it is necessary for the establishment of the bacterial cells over the plant leaves and for the maintenance of the normal cellular structure. The fact that the naturally occurring overexpression of the eps genes in the ΔtasA strain is unable to revert the adhesion defect of this strain downplays the importance of the EPS during the early establishment of physical contact. These observations are more consistent with a role for the EPS, along with BslA, in providing biofilms with protection against external stressors14,92. A similar role for a functional amyloid protein in bacterial attachment to plant surfaces was found for the Escherichia coli curli protein.
Transcriptomic studies showed induction of curli expression during the earlier stages of attachment after the cells came into contact with the plant surface, and a curli mutant strain was defective in this interaction93,94. The distinct morphological and biochemical variations typical of amyloids make them perfect candidates for modulating cellular ecology. The observation that ΔtasA cells are incapable of colonization in the rhizosphere95 clearly indicates the need for more in-depth investigation into these two distinctive ecological niches to understand the true roles of specific bacterial components. In addition to demonstrating enhanced production of antifungal compounds, our study revealed additional features that might contribute to the potency of stressed Bacillus cells in arresting fungal growth, in particular the overproduction of acetoin via increased expression of the alsS and alsD genes. Acetoin is a volatile compound produced via fermentative and overflow metabolism, and it has been demonstrated to mediate communication between beneficial bacteria and plants by activating plant defense mechanisms either locally or over long distances in a phenomenon known as induced systemic resistance (ISR)96,97. In summary, we have proven that the amyloid protein TasA participates in the proper maturation of Bacillus colonies, a function that, along with its previously reported role in ECM assembly, contributes to long-term survival, efficient colonization of the phylloplane, and a competitive advantage mediated by antifungal production. The absence of TasA leads to a series of physiological changes, likely triggered by alterations in membrane stability and dynamics, and effects on the FMMs, including an arrest of cell differentiation23 that paradoxically increases the competitiveness of the mutant cells during short-term interactions via their ability to adapt to stress and their cellular response to early maturation. However, lack of TasA reduces cell fitness during mid- to long-term interactions via increased intrinsic cellular stress and the absence of a structured ECM, both of which limit the adaptability of the cells to the stressful phylloplane.

Bacterial strains and culture conditions

The bacterial strains used in this study are listed in Supplementary Table 2. Bacterial cultures were grown at 37 °C from frozen stocks on Luria-Bertani (LB: 1% tryptone (Oxoid), 0.5% yeast extract (Oxoid) and 0.5% NaCl) plates. Isolated bacteria were inoculated in the appropriate medium. The biotrophic fungus Podosphaera xanthii was grown at 25 °C from a frozen stock on cucumber cotyledons and maintained on them until inoculum preparation. Biofilm assays were performed on MSgg medium: 100 mM morpholinepropane sulfonic acid (MOPS) (pH 7), 0.5% glycerol, 0.5% glutamate, 5 mM potassium phosphate (pH 7), 50 μg/ml tryptophan, 50 μg/ml phenylalanine, 50 μg/ml threonine, 2 mM MgCl2, 700 μM CaCl2, 50 μM FeCl3, 50 μM MnCl2, 2 μM thiamine, 1 μM ZnCl2. For the in vitro lipopeptide detection and assays with cell-free supernatants, medium optimized for lipopeptide production (MOLP)98 was used: 30 g/liter peptone, 20 g/liter saccharose, 7 g/liter yeast extract, 1.9 g/liter KH2PO4, 0.001 mg/liter CuSO4, 0.005 mg/liter FeCl3·6H2O, 0.004 mg/liter Na2MoO4, 0.002 mg/liter KI, 3.6 mg/liter MnSO4·H2O, 0.45 g/liter MgSO4, 0.14 mg/liter ZnSO4·7H2O, 0.01 mg/liter H3BO3, and 10 mg/liter citric acid. The pH was adjusted to 7 with 5 M NaOH prior to sterilization.
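As a practical illustration of assembling MSgg from concentrated stocks, the short sketch below applies the usual C1V1 = C2V2 dilution to a subset of the components listed above. The stock concentrations are hypothetical working values chosen for the example, not taken from the text:

```python
# Minimal sketch: volumes of stock solutions needed for 1 L of MSgg.
# Final concentrations follow the recipe above; stock concentrations are assumed.
final_mM = {"MOPS (pH 7)": 100, "potassium phosphate (pH 7)": 5,
            "MgCl2": 2, "CaCl2": 0.7}
stock_mM = {"MOPS (pH 7)": 1000, "potassium phosphate (pH 7)": 100,
            "MgCl2": 1000, "CaCl2": 100}

total_ml = 1000  # preparing 1 L of medium
for component, final_conc in final_mM.items():
    volume_ml = final_conc / stock_mM[component] * total_ml  # C1*V1 = C2*V2
    print(f"{component}: {volume_ml:.1f} ml of {stock_mM[component]} mM stock")
```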
For cloning and plasmid replication, Escherichia coli DH5α was used. Escherichia coli BL21(DE3) was used for protein purification. Bacillus subtilis 168 is a domesticated strain used to transform the different constructs into Bacillus subtilis NCIB3610. The antibiotic final concentrations for B. subtilis were: MLS (1 μg/ml erythromycin, 25 μg/ml lincomycin); spectinomycin (100 μg/ml); tetracycline (10 μg/ml); chloramphenicol (5 μg/ml); and kanamycin (10 μg/ml).

Strain construction

All of the primers used to generate the different strains are listed in Supplementary Table 3. To build the strain YNG001, the promoter of the fengycin operon was amplified with the Ppps-ecoRI.F and Ppps-HindIII.R primer pair. The PCR product was digested with EcoRI and HindIII and cloned into the pKM003 vector cut with the same enzymes. The resulting plasmid was transformed by natural competence into B. subtilis 168 replacing the amyE neutral locus. Transformants were selected via spectinomycin resistance. The same plasmid was used to build the strain YNG002 by transforming a ΔtasA strain of B. subtilis 168. Strain YNG003 was constructed using the primers amyEUP-Fw, amyEUP-Rv, Ppps-Fw, Ppps-Rv, Yfp-Fw, Yfp-Rv, Cat-Fw, Cat-Rv, amyEDOWN-Fw, and amyEDOWN-Rv to separately amplify the relevant fragments. The fragments were then joined using the NEB builder HiFi DNA Assembly Master Mix (New England Biolabs). The construct was made using pUC19 digested with BamHI as the vector backbone. The final plasmid was then transformed into B. subtilis 168 replacing amyE, and transformants were selected via chloramphenicol resistance. Strain JC97 was generated using the primers bslAUP-Fw, bslADOWN-Rv, Spc-Fw, Spc-Rv, bslaUP-Fw and bslADOWN-Rv, and XbaI-digested pUC19 as the vector backbone. The fragments were assembled using NEB Builder HiFi DNA Assembly Master Mix. Strains JC70, JC81, and JC149 were constructed via site-directed mutagenesis (QuikChange Lightning Site-Directed Mutagenesis Kit, Agilent Technologies). Briefly, the tapA operon (tapA-sipW-tasA), including its promoter, was amplified using the primers TasA_1_mutb and YSRI_2, and the resulting product was digested with BamHI and SalI and cloned into the pDR183 vector99. Next, the corresponding primers (Supplementary Table 3) were used to introduce the alanine substitution mutations into the desired positions of the TasA amino acid sequence. The entire plasmid was amplified from the position of the primers using Pfu DNA polymerase. The native plasmid, which was methylated and lacked the mutations, was digested with the DpnI enzyme. The plasmids containing the native version of TasA (JC70) or the mutated versions (JC81 and JC149) were transformed into the B. subtilis 168 Δ(tapA-sipW-tasA) strain replacing the lacA neutral locus. Genetic complementation was observed in strain JC70 as a control. Transformants were selected via MLS resistance. Plasmid pDFR6 (pET22b-tasA), which contains the open reading frame of the tasA gene from B. subtilis NCIB3610 without the signal peptide or the stop codon, was constructed as previously described76. Primers used in the analysis of gene expression by qRT-PCR are listed in Supplementary Table 4. All of the B. subtilis strains generated were constructed by transforming B. subtilis 168 via its natural competence and then using the positive clones as donors for transferring the constructs into B. subtilis NCIB3610 via generalized SPP1 phage transduction100.
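To illustrate the kind of alanine substitution introduced by the mutagenic primers described above, the sketch below swaps the targeted codons of a coding sequence for an alanine codon in silico. The sequence, the positions, and the codon choice are hypothetical placeholders, not the real tasA sequence or the actual primer design:

```python
# Minimal sketch of deriving an alanine-substituted template around which
# mutagenic primers could be designed. All inputs are hypothetical placeholders.
cds = "ATGAAAAAGAAATTATCATTATGTGTTGGA"   # invented 5' end of a coding sequence
targets = [4, 5, 6]                      # residues to substitute (1-based)
ALA_CODON = "GCT"                        # one of the four alanine codons

codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
for pos in targets:
    codons[pos - 1] = ALA_CODON          # replace the targeted codon with Ala
print("".join(codons))
```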
Biofilm assays

To analyze colony morphology under biofilm-inducing conditions101, the bacterial strains were grown on LB plates overnight at 37 °C, and the resulting colonies were resuspended in sterile distilled water at an OD600 of 1. Next, 2-µl drops of the different bacterial suspensions were spotted on MSgg or LB (depending on the assay) agar plates and incubated at 30 °C. Colonies were removed at the appropriate time points (24, 48, and 72 h) for the different analyses. For the Δeps-ΔtasA co-inoculation assay, colonies were resuspended in sterile distilled water and mixed at a final OD600 of 1. Next, the bacterial suspension was inoculated onto MSgg agar plates and incubated as described above. For the external complementation assay using purified TasA, a drop containing 80 µg of protein was spotted onto MSgg agar plates and allowed to dry. Next, ΔtasA cells were inoculated on top of the dried drop and incubated as described above. For the CFU counts of the colonies from the different strains, 24-, 48- and 72-h-old colonies grown on MSgg agar plates were removed, resuspended in 1 ml of sterile distilled water, and subjected to mild sonication (three rounds of 20-s pulses at 20% amplitude). The resulting suspensions were serially diluted and plated to calculate the CFUs per colony (total CFU). To estimate the CFUs corresponding to sporulated cells (CFU endospores), the same dilutions were heated at 80 °C for 10 min and plated. The sporulation percentage was calculated as (CFU endospores/total CFU) × 100.

Biofilm fractionation

To analyze the presence of TasA in the different strains, biofilms were fractionated into cells and ECM101. Both fractions were analyzed separately. In all, 72-h-old colonies grown under biofilm-inducing conditions on MSgg-agar plates were carefully lifted from the plates and resuspended in 10 ml of MS medium (MSgg broth without glycerol and glutamate, which were replaced by water) with a 25 5/8 G needle. Next, the samples were subjected to mild sonication in a Branson 450 digital sonifier (four to five 5-s pulses at 20% amplitude) to ensure bacterial resuspension. The bacterial suspensions were centrifuged at 9000 × g for 20 min to separate the cells from the extracellular matrix. The cell fraction was resuspended in 10 ml of MS medium and stored at 4 °C until further processing. The ECM fraction was filtered through a 0.22-µm filter and stored at 4 °C. For protein precipitation, 2 ml of the cell or ECM fractions were used. The cell fraction was treated with 0.1 mg/ml lysozyme for 30 min at 37 °C. Next, both fractions were treated with a 10% final concentration of trichloroacetic acid and incubated on ice for 1 h. Proteins were collected by centrifugation at 13,000 × g for 20 min, washed twice with ice-cold acetone, and dried in an Eppendorf Concentrator Plus 5305 (Eppendorf).

Cell membrane fractionation

Crude membrane extracts were purified from 50 ml MSgg liquid cultures (with shaking) of the different B. subtilis strains. Cultures were centrifuged at 7000 × g for 10 min at 4 °C and then resuspended in 10 ml of PBS. Lysozyme was added at a final concentration of 20 µg/ml and the cell suspensions were incubated at 37 °C for 30 min. After incubation, the lysates were sonicated on ice with a Branson 450 digital sonifier using a cell disruptor tip and 45-s pulses at 50% amplitude with pauses of 30 s between pulses until the lysates were clear.
Next, the cell lysates were centrifuged at 10,000 × g for 15 min to eliminate cell debris, and the supernatants were separated and passed through a 0.45-µm filter. To isolate the cell membrane, the filtered lysate was ultracentrifuged at 100,000 × g for 1 h at 4 °C. The supernatant, which contained the cytosolic proteins, was separated and kept at −20 °C. The pellet, which contained the crude membrane extract, was washed three times with PBS and processed using the CelLytic MEM protein extraction kit from Sigma. Briefly, the membrane fractions were resuspended in 600 µl of lysis and separation working solution (lysis and separation buffer + protease inhibitor cocktail) until a homogeneous suspension was achieved. Next, the suspension was incubated overnight at 4 °C on a stirring wheel. After this overnight incubation, the suspension was incubated at 37 °C for 30 min and then centrifuged at 3000 × g for 3 min. The DSM (upper phase) was separated and kept at −20 °C, and the DRM (lower phase) was washed three times with 400 µl of wash buffer by repeating the process from the 37 °C incubation step. Three washes were performed to ensure the removal of all hydrophilic proteins. The isolated DRM was kept at −20 °C until use. The DRM, DSM, and cytosolic fractions were used directly for immunodetection.

Protein was expressed and purified as previously described102 with some changes. Briefly, freshly transformed BL21(DE3) E. coli colonies were picked, resuspended in 10 mL of liquid LB with 100 µg/mL of ampicillin and incubated O/N at 37 °C with shaking. The next day, the pre-inoculum was used to inoculate 500 mL of LB supplemented with ampicillin, and the culture was incubated at 37 °C until an OD600 of 0.7–0.8 was reached. Next, the culture was induced with 1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) and incubated O/N at 30 °C with shaking to induce the formation of inclusion bodies. The next day, cells were harvested via centrifugation (5000 × g, 15 min, 4 °C), resuspended in buffer A (50 mM Tris, 150 mM NaCl, pH 8), and then centrifuged again. The pellets were kept at −80 °C until purification or processed after 15 min. After thawing, cells were resuspended in buffer A, sonicated on ice (3 × 45 s, 60% amplitude) and centrifuged (15,000 × g, 60 min, 4 °C). The supernatant was discarded, as proteins were mainly expressed in inclusion bodies. The pellet was resuspended in buffer A supplemented with 2% Triton X-100, incubated at 37 °C with shaking for 20 min and centrifuged (15,000 × g, 10 min, 4 °C). The pellet was extensively washed with buffer A, centrifuged (15,000 × g for 10 min, 4 °C), resuspended in denaturing buffer (50 mM Tris, 500 mM NaCl, 6 M GuHCl), and incubated at 60 °C overnight until complete solubilization occurred. Lysates were clarified via sonication on ice (3 × 45 s, 60% amplitude) and centrifugation (15,000 × g, 1 h, 16 °C) and were then passed through a 0.45-µm filter prior to affinity chromatography. Protein was purified using an AKTA Start FPLC system (GE Healthcare). Soluble inclusion bodies were loaded onto a HisTrap HP 5 mL column (GE Healthcare) previously equilibrated with binding buffer (50 mM Tris, 0.5 M NaCl, 20 mM imidazole, 8 M urea, pH 8). Protein was eluted from the column with elution buffer (50 mM Tris, 0.5 M NaCl, 500 mM imidazole, 8 M urea, pH 8). After the affinity chromatography step, the purified protein was loaded onto a HiPrep 26/10 desalting column (GE Healthcare), and the buffer was exchanged to 20 mM Tris, 50 mM NaCl to perform the corresponding experiments.
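As a quick worked check of the induction step described above (assuming a molecular weight of roughly 238.3 g/mol for IPTG, a standard value not stated in the text), inducing 500 mL of culture at a final concentration of 1 mM requires approximately:

$$0.5\ \mathrm{L} \times 1\ \mathrm{mmol/L} \times 238.3\ \mathrm{mg/mmol} \approx 119\ \mathrm{mg\ of\ IPTG}$$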
SDS-PAGE and immunodetection

Precipitated proteins were resuspended in 1x Laemmli sample buffer (BioRad) and heated at 100 °C for 5 min. Proteins were separated via SDS-PAGE in 12% acrylamide gels and then transferred onto PVDF membranes using the Trans-Blot Turbo Transfer System (BioRad) and PVDF transfer packs (BioRad). For immunodetection of TasA, the membranes were probed with anti-TasA antibody (rabbit) used at a 1:20,000 dilution in Pierce Protein-Free (TBS) blocking buffer (ThermoFisher). For immunodetection of FloT-YFP, a commercial anti-GFP primary antibody (Clontech Living Colors full-length polyclonal antibody) developed in rabbit was used at a 1:1000 dilution in the buffer mentioned above. A secondary anti-rabbit IgG antibody conjugated to horseradish peroxidase (BioRad) was used at a 1:3000 dilution in the same buffer. The membranes were developed using the Pierce ECL Western Blotting Substrate (ThermoFisher).

Mass spectrometry analysis of protein bands

The sequence corresponding to the band of the ECM fraction of JC81 (Supplementary Fig. 13A) was identified via tandem mass spectrometry using a "nano" ion trap system (HPLC-ESI-MS/MS). Briefly, the bands obtained after electrophoresis were cut out, washed, and destained. Subsequently, the disulfide bridges were reduced with DTT, cysteines were alkylated via the use of iodoacetamide, and in-gel trypsin digestion was performed to extract the peptides corresponding to the protein samples. This entire process was carried out automatically using an automatic digester (DigestPro, Intavis Bioanalytical Instruments). The peptides were then concentrated and desalted using a capture column C18 ZORBAX 300SB-C18 (Agilent Technologies, Germany), 5 × 0.3 mm, with 5-µm particle diameter and 300-Å pore size, using a gradient of 98% H2O:2% acetonitrile (ACN)/0.1% formic acid (FA) with a flow rate of 20 μL/min for 6 min. The capture column was connected in line to a ZORBAX 300SB-C18 analytical column (Agilent Technologies), 150 × 0.075 mm, with a 3.5-µm particle diameter and 300-Å pore size, through a 6-port valve. Elution of the samples from the capture column was performed over a gradient using FA 0.1% in water as the mobile phase A and FA 0.1% in ACN 80%/water 20% as the mobile phase B. The LC system was coupled through a nanospray source (CaptiveSpray, Bruker Daltonics) to a 3D ion trap mass spectrometer (amaZon speed ETD, Bruker Daltonics) operating in positive mode with a capillary voltage set to 1500 V and a sweep range of m/z 300–1500. "Data-dependent" acquisition was carried out in automatic mode, which allowed the sequential collection of an MS spectrum in "full scan" (m/z 300–1400) followed by an MS spectrum in tandem via CID of the eight most abundant ions. For identification, the software ProteinScape 3 (Bruker Daltonics) coupled to the search engine Mascot 3.1 (Matrix Science) was used, matching the MS/MS data against the Swiss-Prot and NCBInr databases.

Bioassays on melon leaves

Bacterial strains were grown in liquid LB at 30 °C overnight. The cells in the cultures were washed twice with sterile distilled water. The bacterial cell suspensions were adjusted to the same OD600 and sprayed onto leaves of 4- to 5-week-old melon plants. Two hours later, a suspension of P. xanthii conidia was sprayed onto each leaf at a concentration of 4–10 × 10⁴ spores/ml. The plants were placed in a greenhouse or in a growth chamber at 25 °C with a 16-h photoperiod, 3800 lux, and 85% RH.
The severity of the symptoms in melon leaves was evaluated by the estimation of disease severity103. Disease severity was calculated by quantifying the leaf area covered by powdery mildew using Fiji104 image analysis software and pictures of infected leaves. Briefly, the channels of the image were split and the area covered by powdery mildew was measured in 8-bit images by selecting the powdery mildew damage area (white powdery stains that cover the leaf) through image thresholding, given that the stains caused by the disease have higher pixel intensity values. Total leaf area was determined by manually selecting the leaf outline using the polygon selection tool, and the ratio of infection was calculated using the formula (see Eq. 1):

$$\mathrm{Ratio\ of\ infection} = \frac{\mathrm{damaged\ area}}{\mathrm{total\ leaf\ area}} \times 100$$

The persistence of bacterial strains on plant leaves was calculated via CFU counts performed over the twenty-one days following inoculation. Three different leaves from three different plants were individually placed into sterile plastic stomacher bags and homogenized in a lab blender (Colworth Stomacher-400, Seward, London, UK) for 3 min in 10 ml of sterile distilled water. The leaf extracts were serially diluted and plated to calculate the CFUs at each time point. The plates were incubated at 37 °C for 24 h before counting. The adhesion of bacterial cells to melon leaves was estimated by comparing the number of cells released from the leaf versus the cells attached to the surface. The surfaces of individual leaves were placed in contact with 100 ml of sterile distilled water in glass beakers and, after 10 min of stirring (300 rpm), the water and leaf were plated separately. The leaves were processed as described above. Adhesion was calculated as the ratio: (water CFU/total CFU) × 100. The data from all of the different strains were normalized to the result of the WT strain (100% adhesion).

Antifungal activity of cell-free supernatant against P. xanthii

B. subtilis strains were grown for 72 h at 30 °C in MOLP medium, and the supernatant was centrifuged and filtered (0.22 µm). One-week-old cotyledons were disinfected with 20% commercial bleach for 30 s, then submerged two times in sterile distilled water for 2 min, and then air-dried. Disks (10 mm) were excised with a sterilized cork borer, incubated with cell-free supernatants for 2 h, and then left to dry. Finally, the disks were inoculated with P. xanthii conidia on their adaxial surface with a soft paintbrush105.

Lipopeptide production analysis

For the in vitro lipopeptide detection, bacteria were grown in MOLP for 72 h. The cultures were centrifuged, and the supernatants were filtered (0.22 µm) prior to analysis via MALDI-TOF/TOF mass spectrometry. For the analysis of lipopeptide production in colonies, WT or ΔtasA colonies were grown on MSgg plates for 72 h at 30 °C. For the cell fractions, whole colonies were resuspended as described above in 1 mL of sterile distilled water and centrifuged at 5000 × g for 5 min. The pellets were then resuspended in 1 ml of methanol and sonicated in a bath for 10 min. Cells were harvested via centrifugation at 5000 × g for 5 min, and the supernatant containing the solubilized lipopeptides was filtered through a 0.22-µm filter and stored at 4 °C prior to analysis.
For the agar fraction, after the colonies were removed, a piece of agar of approximately the same surface area was sliced out and introduced into a 2-mL Eppendorf tube containing glass beads. In all, 1 mL of methanol was added, and then the tube was vigorously vortexed until the agar was broken down. Finally, the mixture was sonicated in a bath for 10 min and centrifuged at 5000 × g for 5 min. The supernatant was filtered through a 0.22-µm filter and stored at 4 °C prior to analysis by MALDI-TOF/TOF. For in situ lipopeptide detection on inoculated leaves, leaf disks were taken 21 days post-inoculation with a sterile cork borer and then placed directly on an UltrafleXtreme MALDI plate. A matrix consisting of a combination of CHCA (α-cyano-4-hydroxycinnamic acid) and DHB (2,5-dihydroxybenzoic acid) was deposited over the disks or the supernatants (for the in vitro cultures or the colonies' analysis), and the plates were inserted into an UltrafleXtreme MALDI-TOF/TOF mass spectrometer. The mass spectra were acquired using the Bruker Daltonics FlexControl software and were processed using Bruker Daltonics FlexAnalysis.

Electron microscopy analysis

For the scanning electron microscopy analysis, leaf disks were taken 21 days post-inoculation as previously described and fixed in 0.1 M sodium cacodylate and 2% glutaraldehyde overnight at 4 °C. Three washes were performed with 0.1 M sodium cacodylate and 0.1 M sucrose followed by ethanol dehydration in a series of ethanol solutions from 50% to 100%. A final drying with hexamethyldisilazane was performed as indicated106. The dried samples were coated with a thin layer of iridium using an Emitech K575x turbo sputtering coater before viewing in a Helios Nanolab 650 Scanning Electron Microscope and Focused Ion Beam (SEM-FIB) with a Schottky-type field emission electron gun. For the transmission electron microscopy analysis, bacterial colonies grown on MSgg agar for the appropriate times were fixed directly using a 2% paraformaldehyde-2.5% glutaraldehyde-0.2 M sucrose mix in 0.1 M phosphate buffer (PB) overnight at 4 °C. After three washes in PB, portions were excised from each colony and then post-fixed with 1% osmium tetroxide solution in PB for 90 min at room temperature, followed by PB washes, and 15 min of stepwise dehydration in an ethanol series (30%, 50%, 70%, 90%, and 100% twice). Between the 50% and 70% steps, colonies were incubated en bloc in 2% uranyl acetate solution in 50% ethanol overnight at 4 °C. Following dehydration, the samples were gradually embedded in low-viscosity Spurr's resin: resin:ethanol, 1:1, 4 h; resin:ethanol, 3:1, 4 h; and pure resin, overnight. The sample blocks were embedded in capsule molds containing pure resin for 72 h at 70 °C. For the immunolabeling assays, samples from the corresponding strains were grown under biofilm-inducing conditions at 30 °C. After 48 h of incubation, carbon-coated copper grids were deposited into the wells over the pellicles formed at the interface between the medium and the air (in the case of mutants unable to form a pellicle, copper grids were deposited at the interface) and incubated with the samples at 28 °C for 2 h. After incubation, the grids were washed in PBS for 5 min, and then the samples were fixed with a solution of 2% paraformaldehyde for 10 min, washed in PBS and blocked with Pierce Protein-Free (TBS) blocking buffer (ThermoFisher) for 30 min.
Anti-TasA primary antibody was used at a 1:150 dilution in blocking buffer, and grids were deposited over drops of the antibody solution and incubated for 1 h at room temperature. Samples were washed three times with TBS-T (50 mM Tris-HCl, 150 mM NaCl, 0.1% Tween 20, pH 7.5) for 5 min and then exposed to a 10-nm-diameter immunogold-conjugated secondary antibody (Ted Pella) for 1 h at a 1:50 dilution. The samples were then washed twice with TBS-T and once with water for 5 min each. Finally, the grids were treated with glutaraldehyde (2%) for 10 min, washed in water for 5 min, negatively stained with uranyl acetate (1%) for 20 s and, lastly, washed once with water for 30 s. The samples were left to dry and were visualized under a FEI Tecnai G2 20 TWIN Transmission Electron Microscope at an accelerating voltage of 80 kV. The images were taken using a side-mounted Olympus Veleta CCD camera (2k × 2k pixels).

Whole-transcriptome analysis and qRT-PCR

Biofilms were grown on MSgg agar as described above. 24-, 48-, and 72-h colonies of the corresponding strains (WT or ΔtasA) were recovered and stored at −80 °C. All of the assays were performed in duplicate. The collected cells were resuspended and homogenized via passage through a 25 5/8 G needle in Birnboim A buffer107 (20% sucrose, 10 mM Tris-HCl pH 8, 10 mM EDTA and 50 mM NaCl). Lysozyme (10 mg/ml) was added, and the mixture was incubated for 30 min at 37 °C. After disruption, the suspensions were centrifuged, and the pellets were resuspended in Trizol reagent (Invitrogen). Total RNA extraction was performed as instructed by the manufacturer. DNA removal was carried out via in-column treatment with the rDNase included in the Nucleo-Spin RNA Plant Kit (Macherey-Nagel) following the instructions of the manufacturer. The integrity and quality of the total RNA was assessed with an Agilent 2100 Bioanalyzer (Agilent Technologies) and by gel electrophoresis. To perform the RNA sequencing analysis, rRNA removal was performed using the Ribo-Zero rRNA Removal Kit (Bacteria) from Illumina, and 100-bp single-end read libraries were prepared using the TruSeq Stranded Total RNA Kit (Illumina). The libraries were sequenced using a NextSeq550 instrument (Illumina). The raw reads were pre-processed with SeqTrimNext108 using the specific NGS technology configuration parameters. This pre-processing removes low quality, ambiguous and low complexity stretches, linkers, adapters, vector fragments, and contaminated sequences while keeping the longest informative parts of the reads. SeqTrimNext also discarded sequences below 25 bp. Subsequently, clean reads were aligned to the B. subtilis subsp. subtilis str. 168 reference genome (NC_000964.3) with Bowtie2109, producing BAM files that were then sorted and indexed using SAMtools v1.484110. Uniquely localized reads were used to calculate the read number value for each gene via Sam2counts (https://github.com/vsbuffalo/sam2counts). Differentially expressed genes (DEGs) between WT and ΔtasA were analyzed via DEgenes Hunter111, which provides a combined p value calculated (based on Fisher's method112) using the nominal p values provided by edgeR113 and DESeq2114. This combined p value was adjusted using the Benjamini-Hochberg (BH) procedure (false discovery rate approach)115 and used to rank all the obtained differentially expressed genes. For each gene, a combined p value < 0.05 and a log2 fold change > 1 or < −1 were considered the significance thresholds.
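As an illustration of the DEG-calling logic just described (per-gene combination of the edgeR and DESeq2 nominal p values with Fisher's method, Benjamini–Hochberg adjustment, and the p < 0.05, |log2 fold change| > 1 thresholds), here is a minimal sketch using SciPy and statsmodels; the numbers are placeholders, and the actual analysis was run with DEgenes Hunter as stated above:

```python
import numpy as np
from scipy.stats import combine_pvalues
from statsmodels.stats.multitest import multipletests

# Placeholder nominal p values per gene from the two tools (not real data)
edger_p = np.array([0.001, 0.20, 0.03, 0.80])
deseq2_p = np.array([0.004, 0.15, 0.01, 0.60])
log2_fc = np.array([2.1, 0.3, -1.4, 0.1])        # placeholder log2 fold changes

# Fisher's method per gene, then Benjamini-Hochberg (FDR) adjustment
combined = np.array([combine_pvalues([p1, p2], method="fisher")[1]
                     for p1, p2 in zip(edger_p, deseq2_p)])
reject, adj_p, _, _ = multipletests(combined, alpha=0.05, method="fdr_bh")

# Apply the thresholds used in the text: adjusted p < 0.05 and |log2 FC| > 1
is_deg = (adj_p < 0.05) & (np.abs(log2_fc) > 1)
print(adj_p.round(4), is_deg)
```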
Heatmap and DEGs clusterization was performed using ComplexHeatmap116 in Rstudio. STEM117 was used to model temporal expression profiles independent of the data. Only profiles with a p value < 0.05 were considered in this study. The DEGs annotated with the B. subtilis subsp. subtilis str. 168 genome were used to identify the Gene Ontology functional categories using sma3s118 and TopGo Software119. Gephi software (https://gephi.org) was used to generate the DEG networks, and the regulon list was downloaded from subtiwiki (http://subtiwiki.uni-goettingen.de). The data were deposited in the GEO database (GEO accession GSE124307). Quantitative real-time (qRT)-PCR was performed using the iCycler-iQ system and the iQ SYBR Green Supermix Kit from Bio-Rad. The primer pairs used to amplify the target genes were designed using the Primer3 software (http://bioinfo.ut.ee/primer3/) and Beacon designer (http://www.premierbiosoft.com/qOligo/Oligo.jsp?PID=1), maintaining the parameters described elsewhere120. For the qRT-PCR assays, the RNA concentration was adjusted to 100 ng/µl. Next, 1 µg of DNA-free total RNA was retro-transcribed into cDNA using the SuperScript III reverse transcriptase (Invitrogen) and random hexamers in a final reaction volume of 20 µl according to the instructions provided by the manufacturer. The qRT-PCR cycle was: 95 °C for 3 min, followed by PCR amplification using a 40-cycle amplification program (95 °C for 20 s, 56 °C for 30 s, and 72 °C for 30 s), followed by a third step of 95 °C for 30 s. To evaluate the melting curve, 40 additional cycles of 15 s each starting at 75 °C with stepwise temperature increases of 0.5 °C per cycle were performed. To normalize the data, the rpsJ gene, encoding the 30S ribosomal protein S10, was used as a reference gene121. The target genes fenD, encoding fengycin synthetase D, alsS, encoding acetolactate synthase, albE, encoding bacteriocin subtilosin biosynthesis protein AlbE, bacB, encoding the bacilysin biosynthesis protein BacB, and srfAA encoding surfactin synthetase A, were amplified using the primer pairs given in Supplementary Table 4, resulting in the generation of fragments of 147 bp, 82 bp, 185 bp, 160 bp, and 94 bp, respectively. The primer efficiency tests and confirmation of the specificity of the amplification reactions were performed as previously described122. The relative transcript abundance was estimated using the ΔΔ cyclethreshold (Ct) method123. Transcriptional data of the target genes was normalized to the rpsJ gene and shown as the fold-changes in the expression levels of the target genes in each B. subtilis mutant strain compared to those in the WT strain. The relative expression ratios were calculated as the difference between the qPCR threshold cycles (Ct) of the target gene and the Ct of the rpsJ gene (ΔCt = Ctrgene of interest – CtrpsJ). Fold-change values were calculated as 2−ΔΔCt, assuming that one PCR cycle represents a two-fold difference in template abundance124,125. The qRT-PCR analyses were performed three times (technical replicates) using three independent RNA isolations (biological replicates). Flow cytometry assays Cells were grown on MSgg agar at 30 °C. At different time points, colonies were recovered in 500 μL of PBS and resuspended with a 255/8 G needle. For the promoter expression assays, colonies were gently sonicated as described above to ensure complete resuspension, and the cells were fixed in 4% paraformaldehyde in PBS and washed three times in PBS. 
To evaluate the physiological status of the different B. subtilis strains, cells were stained without fixation for 30 min with 5 mM 5-cyano-2,3-ditolyltetrazolium chloride (CTC) and 15 µM 3-(p-hydroxyphenyl) fluorescein (HPF). The flow cytometry runs were performed with 200 μl of cell suspensions in 800 μL of GTE buffer (50 mM glucose, 10 mM EDTA, 20 mM Tris-HCl; pH 8), and the cells were measured on a Beckman Coulter Gallios™ flow cytometer using 488 nm excitation. YFP and HPF fluorescence were detected with 550 SP or 525/40 BP filters. CTC fluorescence was detected with 730 SP and 695/30BP filters. The data were collected using Gallios™ Software v1.2 and further analyzed using Kaluza Analysis v1.3 and Flowing Software v2.5.1. Negative controls corresponding to unstained bacterial cells (or unlabeled cells corresponding to each strain for the promoter expression analysis) were used to discriminate the populations of stained bacteria in the relevant experiments and for each dye (Supplementary Fig. 19). Intracellular pH analysis Intracellular pH was measured as previously described55. Colonies of the different strains grown on MSgg agar at 30 °C were taken at different time points and recovered in potassium phosphate buffer (PPB) pH 7 and gently sonicated as described above. Next, the cells were incubated in 10 µl of 1 mM 5-(6)carboxyfluorescein diacetate succinimidyl (CFDA) for 15 min at 30 °C. PPB supplemented with glucose (10 mM) was added to the cells for 15 min at 30 °C to remove the excess dye. After two washes with the same buffer, the cells were resuspended in 50 mM PPB (pH 4.5). Fluorescence was measured in a FLUOstar Omega (BMG labtech) microplate spectrofluorometer using 490 nm/525 nm as the excitation and emission wavelengths, respectively. Conversion from the fluorescence arbitrary units into pH units was performed using a standard calibration curve. Confocal laser scanning microscopy Cell death in the bacterial colonies was evaluated using the LIVE/DEAD BacLight Bacterial Viability Kit (Invitrogen). Equal volumes of both components included in the kit were mixed, and 2 µl of this solution was used to stain 1 ml of the corresponding bacterial suspension. Sequential acquisitions were configured to visualize the live or dead bacteria in the samples. Acquisitions with excitation at 488 nm and emission recorded from 499 to 554 nm were used to capture the images from live bacteria, followed by a second acquisition with excitation at 561 nm and emission recorded from 592 to 688 nm for dead bacteria. For the microscopic analysis and quantification of lipid peroxidation in live bacterial samples, we used the image-iT Lipid Peroxidation Kit (Invitrogen) following the manufacturer's instructions with some slight modifications. Briefly, colonies of the different strains were grown on MSgg plates at 30 °C, isolated at different time points, and resuspended in 1 ml of liquid MSgg medium as described in the previous sections. In all, 5 mM cumene hydroperoxide (CuHpx)-treated cell suspensions of the different strains at the corresponding times were used as controls. The cell suspensions were then incubated at 30 °C for 2 h and then stained with a 10-µM solution of the imageIT lipid peroxidation sensor for 30 min. Finally, the cells were washed three times with PBS, mounted, and visualized immediately. Images of the stained bacteria were acquired sequentially to obtain images from the oxidized to the reduced states of the dye. 
The first image (oxidized channel) was acquired by exciting the sensor at 488 nm and recording the emissions from 509 to 561 nm, followed by a second acquisition (reduced channel) with excitement at 561 nm and recording of the emissions from 590 to 613 nm. Membrane potential was evaluated using the image-iT TMRM (tetramethylrhodamine, methyl ester) reagent (Invitrogen) following the manufacturer's instructions. Colonies grown at 30 °C on MSgg solid medium were isolated at different time points and resuspended as described above. Samples treated prior to staining with 20 µM carbonyl cyanide m-chlorophenyl hydrazine (CCCP), a known protonophore and uncoupler of bacterial oxidative phosphorylation, were used as controls (Supplementary Fig. 8). The TMRM reagent was added to the bacterial suspensions to a final concentration of 100 nM, and the mixtures were incubated at 37 °C for 30 min. After incubation, the cells were immediately visualized by confocal laser scanning microscopy (CLSM) with excitation at 561 nm and emission detection between 576 and 683 nm. The amounts of DNA damage in the B. subtilis strains at the different time points were evaluated via terminal deoxynucleotidyl transferase (TdT) dUTP Nick-End Labeling (TUNEL) using the In-Situ Cell Death Detection Kit with fluorescein (Roche) according to the manufacturer's instructions. B. subtilis colonies were resuspended in PBS and processed as described above. The cells were centrifuged and resuspended in 1% paraformaldehyde in PBS and fixed at room temperature for 1 h on a rolling shaker. The cells were then washed twice in PBS and permeabilized in 0.1% Triton X-100 and 0.1% sodium citrate for 30 min at room temperature with shaking. After permeabilization, the cells were washed twice with PBS and the pellets were resuspended in 50 µl of the TUNEL reaction mixture (45 µl label solution + 5 µl enzyme solution), and the reactions were incubated for one hour at 37 °C in the dark with shaking. Finally, the cells were washed twice in PBS, counterstained with DAPI (final concentration 500 nM), mounted, and visualized by CLSM with excitation at 488 nm and emission detection between 497 and 584 nm. Membrane fluidity was evaluated via Laurdan generalized polarization (GP)126. Colonies of the different B. subtilis strains were grown and processed as described above. The colonies were resuspended in 50 mM Tris pH 7.4 with 0.5% NaCl. Laurdan reagent (6-dodecanoyl-N,N-dimethyl-2-naphthylamine) was purchased from Sigma-Aldrich (Merck) and dissolved in N,N-dimethylformamide (DMF). Samples treated prior to staining with 2% benzyl alcohol, a substance known to increase lipid fluidity127,128, were used as positive controls (Supplementary Fig. 10). Laurdan was added to the bacterial suspensions to a final concentration of 100 µM. The cells were incubated at room temperature for 10 min, mounted, and then visualized immediately using two-photon excitation with a Spectraphysics MaiTai Pulsed Laser tuned to 720 nm (roughly equivalent to 360 nm single photon excitation), attached to a Leica SP5 microscope. Emissions between 432 and 482 nm (gel phase) and between 509 and 547 nm (liquid phase) were recorded using the internal PMT detectors. The localization of FloT in B. subtilis cells was evaluated using a FloT-YFP translational fusion in a WT genetic background (see Supplementary Table 2 for full genotype of the strains). Colonies grown at 30 °C on MSgg solid medium were isolated at different time points and resuspended as described above. 
Samples were mounted and visualized immediately with excitation at 514 nm and emission recorded from 518 to 596 nm. All images were obtained by visualizing the samples using an inverted Leica SP5 system with a 63x NA 1.4 HCX PL APO oil-immersion objective. For each experiment, the laser settings, scan speed, PMT or HyD detector gain, and pinhole aperture were kept constant for all of the acquired images. Image processing was performed using Leica LAS AF (LCS Lite, Leica Microsystems) and FIJI/ImageJ104 software. Images of live and dead bacteria from viability experiments were processed automatically, counting the number of live (green) or dead (red) bacteria in their corresponding channels. The percentage of dead cells was calculated dividing the number of dead cells by the total number of bacteria found on a field. For processing the lipid peroxidation images, images corresponding to the reduced and oxidized channels were smoothed and a value of 3 was then subtracted from the two channels to eliminate the background. The ratio image was calculated by dividing the processed reduced channel by the oxidized channel using the FiJi image calculator tool. The ratio images were pseudo-colored using a color intensity look-up table (LUT), and intensity values of min 0 and max 50 were selected. All of the images were batch processed with a custom imageJ macro, in which the same processing options were applied to all of the acquired images. Quantification of the lipid peroxidation was performed in Imaris v7.4 (Bitplane) by quantifying the pixel intensity of the ratio images with the Imaris "spots" tool. The Laurdan GP acquisitions were processed similarly. Images corresponding to the gel phase channel and the liquid phase channel were smoothed and a value of 10 was subtracted to eliminate the background. The Laurdan GP image was then calculated by applying the following formula (see equation 2): $${\mathrm{Laurdan}}\,{\mathrm{GP}} = \frac{{\left( {{\mathrm{gel}}\,{\mathrm{phase}}\,{\mathrm{channel}} - {\mathrm{liquid}}\,{\mathrm{phase}}\,{\mathrm{channel}}} \right)}}{{\left( {{\mathrm{gel}}\,{\mathrm{phase}}\,{\mathrm{channel}} + {\mathrm{liquid}}\,{\mathrm{phase}}\,{\mathrm{channel}}} \right)}}$$ The calculation was performed step by step using the FiJi image calculator tool. Pixels with high Laurdan GP values, typically caused by residual background noise, were eliminated with the "Remove outliers" option using a radius of 4 and a threshold of 5. Finally, the Laurdan GP images were pseudo-colored using a color intensity LUT, and intensity values of min 0 and max 1.5 were selected. This processing was applied to all of the acquisitions for this experiment. To quantify the Laurdan GP, bright field images were used for thresholding and counting to create counts masks that were applied to the Laurdan GP images to measure the mean Laurdan GP value for each bacterium. TUNEL images were analyzed by subtracting a value of 10 in the TUNEL channel to eliminate the background. The DAPI channel was then used for thresholding and counting as described above to quantify the TUNEL signal. The same parameters were used to batch process and quantify all of the images. To quantify the membrane potential, the TMRM assay images were analyzed as described above using the bright field channel of each image for thresholding and counting to calculate the mean fluorescence intensity in each bacterium. Endospores, which exhibited a bright fluorescent signal upon TMRM staining, were excluded from the analysis. 
This processing was applied to all of the acquisitions for this experiment. To quantify the fluorescence of the bacteria expressing the floT-yfp construct, images were analyzed as described above using the bright field channel of each image for thresholding and counting to calculate the mean fluorescence intensity in each bacterium. All of the data are representative of at least three independent experiments with at least three technical replicates. The results are expressed as the mean ± standard error of the mean (SEM). Statistical significance was assessed by performing the appropriate tests (see the figure legends). All analyses were performed using GraphPad Prism version 6. p values < 0.05 were considered significant. Asterisks indicate the level of statistical significance: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001. The RNA-seq data that support the findings of this study have been deposited in GEO database with the accession code GSE124307 [https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE124307]. The source data underlying Figs. 1A, B, D, 3A, C, 4A–D, 5B, 6B, 7B, 8C, D, 9B, C, D, E, and 10A–C; and Supplementary Figs. S1A, C, D, S4, S7B, C, D, S8B, S10B, S11, S12B, S13A, S14B, S15B, S16, S17B, S18B, and S19A, B are provided as a Source Data file. Toyofuku, M. et al. Environmental factors that shape biofilm formation. Biosci. Biotechnol. Biochem. 80, 7–12 (2016). Wenbo, Z. et al. Nutrient depletion in Bacillus subtilis biofilms triggers matrix production. New J. Phys. 16, 015028 (2014). Flemming, H. C. et al. Biofilms: an emergent form of bacterial life. Nat. Rev. Microbiol. 14, 563–575 (2016). van Gestel, J., Vlamakis, H. & Kolter, R. Division of labor in biofilms: the ecology of cell differentiation. Microbiol. Spectr. 3, MB-0002-2014, (2015). Dragos, A. et al. Division of labor during biofilm matrix production. Curr. Biol. 28, 1903–1913 e1905 (2018). Branda, S. S., Vik, S., Friedman, L. & Kolter, R. Biofilms: the matrix revisited. Trends Microbiol. 13, 20–26 (2005). Vlamakis, H., Chai, Y., Beauregard, P., Losick, R. & Kolter, R. Sticking together: building a biofilm the Bacillus subtilis way. Nat. Rev. Microbiol. 11, 157–168 (2013). Kearns, D. B., Chu, F., Branda, S. S., Kolter, R. & Losick, R. A master regulator for biofilm formation by Bacillus subtilis. Mol. Microbiol 55, 739–749 (2005). Chu, F., Kearns, D. B., Branda, S. S., Kolter, R. & Losick, R. Targets of the master regulator of biofilm formation in Bacillus subtilis. Mol. Microbiol. 59, 1216–1228 (2006). Aguilar, C., Vlamakis, H., Guzman, A., Losick, R. & Kolter, R. KinD is a checkpoint protein linking spore formation to extracellular-matrix production in Bacillus subtilis biofilms. MBio 1, e00035-10 (2010). Danhorn, T. & Fuqua, C. Biofilm formation by plant-associated bacteria. Annu. Rev. Microbiol. 61, 401–422 (2007). Castiblanco, L. F. & Sundin, G. W. New insights on molecular regulation of biofilm formation in plant-associated bacteria. J. Integr. Plant Biol. 58, 362–372 (2016). Zeriouh, H., de Vicente, A., Perez-Garcia, A. & Romero, D. Surfactin triggers biofilm formation of Bacillus subtilis in melon phylloplane and contributes to the biocontrol activity. Environ. Microbiol. 16, 2196–2211 (2014). Marvasi, M., Visscher, P. T. & Casillas Martinez, L. Exopolymeric substances (EPS) from Bacillus subtilis: polymers and genes encoding their synthesis. FEMS Microbiol. Lett. 313, 1–9 (2010). Hobley, L. et al. BslA is a self-assembling bacterial hydrophobin that coats the Bacillus subtilis biofilm. Proc. 
Natl Acad. Sci. USA 110, 13600–13605 (2013). ADS CAS PubMed Article PubMed Central Google Scholar Romero, D., Aguilar, C., Losick, R. & Kolter, R. Amyloid fibers provide structural integrity to Bacillus subtilis biofilms. Proc. Natl Acad. Sci. USA 107, 2230–2234 (2010). Diehl, A. et al. Structural changes of TasA in biofilm formation of Bacillus subtilis. Proc. Natl Acad. Sci. USA 115, 3237–3242 (2018). Stover, A. G. & Driks, A. Secretion, localization, and antibacterial activity of TasA, a Bacillus subtilis spore-associated protein. J. Bacteriol. 181, 1664–1672 (1999). Stover, A. G. & Driks, A. Control of synthesis and secretion of the Bacillus subtilis protein YqxM. J. Bacteriol. 181, 7065–7069 (1999). Camara-Almiron, J., Caro-Astorga, J., de Vicente, A. & Romero, D. Beyond the expected: the structural and functional diversity of bacterial amyloids. Crit. Rev. Microbiol. 44, 653–666 (2018). Shank, E. A. & Kolter, R. Extracellular signaling and multicellularity in Bacillus subtilis. Curr. Opin. Microbiol. 14, 741–747 (2011). Steinberg, N. & Kolodkin-Gal, I. The matrix reloaded: probing the extracellular matrix synchronizes bacterial communities. J. Bacteriol., 197, 2092–2103 (2015). Vlamakis, H., Aguilar, C., Losick, R. & Kolter, R. Control of cell fate by the formation of an architecturally complex bacterial community. Genes Dev. 22, 945–953 (2008). Lopez, D., Vlamakis, H. & Kolter, R. Biofilms. Cold Spring Harb. Perspect. Biol. 2, a000398 (2010). PubMed PubMed Central Article CAS Google Scholar Molina-Santiago, C. et al. The extracellular matrix protects Bacillus subtilis colonies from Pseudomonas invasion and modulates plant co-colonization. Nat. Commun. 10, 1919 (2019). ADS PubMed PubMed Central Article CAS Google Scholar Juliano, R. L. & Haskill, S. Signal transduction from the extracellular matrix. J. Cell Biol. 120, 577–585 (1993). Aharoni, D., Meiri, I., Atzmon, R., Vlodavsky, I. & Amsterdam, A. Differential effect of components of the extracellular matrix on differentiation and apoptosis. Curr. Biol. 7, 43–51 (1997). Kim, S. H., Turnbull, J. & Guimond, S. Extracellular matrix and cell signalling: the dynamic cooperation of integrin, proteoglycan and growth factor receptor. J. Endocrinol. 209, 139–151 (2011). Cheresh, D. A. & Stupack, D. G. Regulation of angiogenesis: apoptotic cues from the ECM. Oncogene 27, 6285–6298 (2008). Kular, J. K., Basu, S. & Sharma, R. I. The extracellular matrix: structure, composition, age-related differences, tools for analysis and applications for tissue engineering. J. Tissue Eng. 5, 2041731414557112 (2014). Shi, Y. B., Li, Q., Damjanovski, S., Amano, T. & Ishizuya-Oka, A. Regulation of apoptosis during development: input from the extracellular matrix (review). Int. J. Mol. Med. 2, 273–282 (1998). Frisch, S. M. & Francis, H. Disruption of epithelial cell-matrix interactions induces apoptosis. J. Cell Biol. 124, 619–626 (1994). Gonidakis, S. & Longo, V. D. Assessing chronological aging in bacteria. Methods Mol. Biol. 965, 421–437 (2013). Kolter, R., Siegele, D. A. & Tormo, A. The stationary phase of the bacterial life cycle. Annu. Rev. Microbiol. 47, 855–874 (1993). Dukan, S. & Nystrom, T. Bacterial senescence: stasis results in increased and differential oxidation of cytoplasmic proteins leading to developmental induction of the heat shock regulon. Genes Dev. 12, 3431–3441 (1998). Hecker, M., Pane-Farre, J. & Volker, U. SigB-dependent general stress response in Bacillus subtilis and related gram-positive bacteria. Annu. Rev. Microbiol. 
61, 215–236 (2007). Reder, A., Hoper, D., Gerth, U. & Hecker, M. Contributions of individual sigmaB-dependent general stress genes to oxidative stress resistance of Bacillus subtilis. J. Bacteriol. 194, 3601–3610 (2012). Navarro Llorens, J. M., Tormo, A. & Martinez-Garcia, E. Stationary phase in gram-negative bacteria. FEMS Microbiol. Rev. 34, 476–495 (2010). Gomez-Marroquin, M. et al. Role of Bacillus subtilis DNA glycosylase MutM in counteracting oxidatively induced DNA damage and in stationary-phase-associated mutagenesis. J. Bacteriol. 197, 1963–1971 (2015). Chan, C. M., Danchin, A., Marliere, P. & Sekowska, A. Paralogous metabolism: S-alkyl-cysteine degradation in Bacillus subtilis. Environ. Microbiol. 16, 101–117 (2014). Lopez, D., Vlamakis, H., Losick, R. & Kolter, R. Paracrine signaling in a bacterium. Genes Dev. 23, 1631–1638 (2009). Perez-Garcia, A. et al. The powdery mildew fungus Podosphaera fusca (synonym Podosphaera xanthii), a constant threat to cucurbits. Mol. Plant Pathol. 10, 153–160 (2009). Nakano, M. M., Dailly, Y. P., Zuber, P. & Clark, D. P. Characterization of anaerobic fermentative growth of Bacillus subtilis: identification of fermentation end products and genes required for growth. J. Bacteriol. 179, 6749–6755 (1997). Hartig, E. & Jahn, D. Regulation of the anaerobic metabolism in Bacillus subtilis. Adv. Microb. Physiol. 61, 195–216 (2012). Ongena, M. & Jacques, P. Bacillus lipopeptides: versatile weapons for plant disease biocontrol. Trends Microbiol. 16, 115–125 (2008). Shelburne, C. E. et al. The spectrum of antimicrobial activity of the bacteriocin subtilosin A. J. Antimicrob. Chemother. 59, 297–300 (2007). Rajavel, M., Mitra, A. & Gopal, B. Role of Bacillus subtilis BacB in the synthesis of bacilysin. J. Biol. Chem. 284, 31882–31892 (2009). Patel, P. S. et al. Bacillaene, a novel inhibitor of procaryotic protein synthesis produced by Bacillus subtilis: production, taxonomy, isolation, physico-chemical characterization and biological activity. J. Antibiot. 48, 997–1003 (1995). Niehaus, T. D. et al. Identification of a metabolic disposal route for the oncometabolite S-(2-succino)cysteine in Bacillus subtilis. J. Biol. Chem. 293, 8255–8263 (2018). Thomas, S. A., Storey, K. B., Baynes, J. W. & Frizzell, N. Tissue distribution of S-(2-succino)cysteine (2SC), a biomarker of mitochondrial stress in obesity and diabetes. Obesity 20, 263–269 (2012). McDonnell, G. E., Wood, H., Devine, K. M. & McConnell, D. J. Genetic control of bacterial suicide: regulation of the induction of PBSX in Bacillus subtilis. J. Bacteriol. 176, 5820–5830 (1994). Toyofuku, M. et al. Prophage-triggered membrane vesicle formation through peptidoglycan damage in Bacillus subtilis. Nat. Commun. 8, 481 (2017). Nystrom, T. Stationary-phase physiology. Annu. Rev. Microbiol. 58, 161–181 (2004). PubMed Article CAS PubMed Central Google Scholar Chen, Y., Gozzi, K., Yan, F. & Chai, Y. Acetic acid acts as a volatile signal to stimulate bacterial biofilm formation. MBio 6, e00392 (2015). Thomas, V. C. et al. A central role for carbon-overflow pathways in the modulation of bacterial cell death. PLoS Pathog. 10, e1004205 (2014). Xiao, Z. & Xu, P. Acetoin metabolism in bacteria. Crit. Rev. Microbiol. 33, 127–140 (2007). Larosa, V. & Remacle, C. Insights into the respiratory chain and oxidative stress. Biosci. Rep. 38, BSR20171492 (2018). Teixeira, J. et al. Extracellular acidification induces ROS- and mPTP-mediated death in HEK293 cells. Redox Biol. 15, 394–404 (2018). Redza-Dutordoir, M. 
& Averill-Bates, D. A. Activation of apoptosis signalling pathways by reactive oxygen species. Biochim. Biophys. Acta 1863, 2977–2992 (2016). Gross, A. et al. Biochemical and genetic analysis of the mitochondrial response of yeast to BAX and BCL-X(L). Mol. Cell. Biol. 20, 3125–3136 (2000). Giovannini, C. et al. Mitochondria hyperpolarization is an early event in oxidized low-density lipoprotein-induced apoptosis in Caco-2 intestinal cells. FEBS Lett. 523, 200–206 (2002). Perry, S. W. et al. HIV-1 transactivator of transcription protein induces mitochondrial hyperpolarization and synaptic stress leading to apoptosis. J. Immunol. 174, 4333–4344 (2005). Gaschler, M. M. & Stockwell, B. R. Lipid peroxidation in cell death. Biochem. Biophys. Res. Commun. 482, 419–425 (2017). Wong-Ekkabut, J. et al. Effect of lipid peroxidation on the properties of lipid bilayers: a molecular dynamics study. Biophys. J. 93, 4225–4236 (2007). Bindoli, A., Cavallini, L. & Jocelyn, P. Mitochondrial lipid peroxidation by cumene hydroperoxide and its prevention by succinate. Biochim. Biophys. Acta 681, 496–503 (1982). Heimburg, T. & Marsh, D. in Biological Membranes (eds Kenneth M. MerzJr & Benoît Roux) (Birkhäuser Boston, 1996). van de Vossenberg, J. L., Driessen, A. J., da Costa, M. S. & Konings, W. N. Homeostasis of the membrane proton permeability in Bacillus subtilis grown at different temperatures. Biochim. Biophys. Acta 1419, 97–104 (1999). Rossignol, M., Thomas, P. & Grignon, C. Proton permeability of liposomes from natural phospholipid mixtures. Biochim. Biophys. Acta 684, 195–199 (1982). Bach, J. N. & Bramkamp, M. Flotillins functionally organize the bacterial membrane. Mol. Microbiol. 88, 1205–1217 (2013). Schneider, J. et al. Spatio-temporal remodeling of functional membrane microdomains organizes the signaling networks of a bacterium. PLoS Genet. 11, e1005140 (2015). Yepes, A. et al. The biofilm formation defect of a Bacillus subtilis flotillin-defective mutant involves the protease FtsH. Mol. Microbiol. 86, 457–471 (2012). Mielich-Suss, B., Schneider, J. & Lopez, D. Overproduction of flotillin influences cell differentiation and shape in Bacillus subtilis. MBio 4, e00719–00713 (2013). Lopez, D. & Kolter, R. Functional microdomains in bacterial membranes. Genes Dev. 24, 1893–1902 (2010). Brown, D. A. Lipid rafts, detergent-resistant membranes, and raft targeting signals. Physiology 21, 430–439 (2006). Tjalsma, H., Bolhuis, A., Jongbloed, J. D., Bron, S. & van Dijl, J. M. Signal peptide-dependent protein transport in Bacillus subtilis: a genome-based survey of the secretome. Microbiol. Mol. Biol. Rev. 64, 515–547 (2000). Romero, D., Vlamakis, H., Losick, R. & Kolter, R. An accessory protein required for anchoring and assembly of amyloid fibres in B. subtilis biofilms. Mol. Microbiol. 80, 1155–1168 (2011). Tjalsma, H. et al. Conserved serine and histidine residues are critical for activity of the ER-type signal peptidase SipW of Bacillus subtilis. J. Biol. Chem. 275, 25102–25108 (2000). Chai, Y., Chu, F., Kolter, R. & Losick, R. Bistability and biofilm formation in Bacillus subtilis. Mol. Microbiol. 67, 254–263 (2008). Lewis, K. Programmed death in bacteria. Microbiol. Mol. Biol. Rev. 64, 503–514 (2000). Bayles, K. W. Bacterial programmed cell death: making sense of a paradox. Nat. Rev. Microbiol. 12, 63–69 (2014). Peeters, S. H. & de Jonge, M. I. For the greater good: Programmed cell death in bacterial communities. Microbiol. Res. 207, 161–169 (2018). Murakami, C. et al. 
pH neutralization protects against reduction in replicative lifespan following chronological aging in yeast. Cell Cycle 11, 3087–3096 (2012). Wang, C. & Youle, R. J. The role of mitochondria in apoptosis*. Annu. Rev. Genet. 43, 95–118 (2009). Cao, J. et al. Curcumin induces apoptosis through mitochondrial hyperpolarization and mtDNA damage in human hepatoma G2 cells. Free Radic. Biol. Med. 43, 968–975 (2007). Shi, C. et al. Antimicrobial activity and possible mechanism of action of citral against Cronobacter sakazakii. PLoS ONE 11, e0159006 (2016). Hicks, D. A., Nalivaeva, N. N. & Turner, A. J. Lipid rafts and Alzheimer's disease: protein-lipid interactions and perturbation of signaling. Front. Physiol. 3, 189 (2012). Malishev, R., Abbasi, R., Jelinek, R. & Chai, L. Bacterial model membranes reshape fibrillation of a functional amyloid protein. Biochemistry 57, 5230–5238 (2018). Chai, L. et al. The bacterial extracellular matrix protein TapA is a two-domain partially disordered protein. ChemBioChem 20, 355–359 (2018). Wright, P. E. & Dyson, H. J. Intrinsically disordered proteins in cellular signalling and regulation. Nat. Rev. Mol. Cell Biol. 16, 18–29 (2015). Dunker, A. K., Cortese, M. S., Romero, P., Iakoucheva, L. M. & Uversky, V. N. Flexible nets. The roles of intrinsic disorder in protein interaction networks. FEBS J. 272, 5129–5148 (2005). Vorholt, J. A. Microbial life in the phyllosphere. Nat. Rev. Microbiol. 10, 828–840 (2012). Arnaouteli, S., MacPhee, C. E. & Stanley-Wall, N. R. Just in case it rains: building a hydrophobic biofilm the Bacillus subtilis way. Curr. Opin. Microbiol. 34, 7–12 (2016). Fink, R. C. et al. Transcriptional responses of Escherichia coli K-12 and O157:H7 associated with lettuce leaves. Appl. Environ. Microbiol. 78, 1752–1764 (2012). Carter, M. Q., Louie, J. W., Feng, D., Zhong, W. & Brandl, M. T. Curli fimbriae are conditionally required in Escherichia coli O157:H7 for initial attachment and biofilm formation. Food Microbiol. 57, 81–89 (2016). Lakshmanan, V. et al. Microbe-associated molecular patterns-triggered root responses mediate beneficial rhizobacterial recruitment in Arabidopsis. Plant Physiol. 160, 1642–1661 (2012). Choudhary, D. K. & Johri, B. N. Interactions of Bacillus spp. and plants–with special reference to induced systemic resistance (ISR). Microbiol. Res. 164, 493–513 (2009). Ping, L. & Boland, W. Signals from the underground: bacterial volatiles promote growth in Arabidopsis. Trends Plant Sci. 9, 263–266 (2004). Ahimou, F., Jacques, P. & Deleu, M. Surfactin and iturin A effects on Bacillus subtilis surface hydrophobicity. Enzyme Microb. Technol. 27, 749–754 (2000). Doan, T., Marquis, K. A. & Rudner, D. Z. Subcellular localization of a sporulation membrane protein is achieved through a network of interactions along and across the septum. Mol. Microbiol. 55, 1767–1781 (2005). Yasbin, R. E. & Young, F. E. Transduction in Bacillus subtilis by bacteriophage SPP1. J. Virol. 14, 1343–1348 (1974). Branda, S. S., Chu, F., Kearns, D. B., Losick, R. & Kolter, R. A major protein component of the Bacillus subtilis biofilm matrix. Mol. Microbiol. 59, 1229–1238 (2006). El Mammeri, N. et al. Molecular architecture of bacterial amyloids in Bacillus biofilms. FASEB J. 33, 12146–12163 (2019). Romero, D., Rivera, M. E., Cazorla, F. M., de Vicente, A. & Perez-Garcia, A. Effect of mycoparasitic fungi on the development of Sphaerotheca fusca in melon leaves. Mycol. Res. 107, 64–71 (2003). Schindelin, J. et al. 
Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012). Bellon-Gomez, D., Vela-Corcia, D., Perez-Garcia, A. & Tores, J. A. Sensitivity of Podosphaera xanthii populations to anti-powdery-mildew fungicides in Spain. Pest. Manag. Sci. 71, 1407–1413 (2015). Fischer, E. R., Hansen, B. T., Nair, V., Hoyt, F. H. & Dorward, D. W. Scanning electron microscopy. Curr. Protoc. Microbiol. Chapter 2, Unit 2B 2, (2012). Birnboim, H. C. & Doly, J. A rapid alkaline extraction procedure for screening recombinant plasmid DNA. Nucleic Acids Res. 7, 1513–1523 (1979). Falgueras, J. et al. SeqTrim: a high-throughput pipeline for pre-processing any type of sequence read. BMC Bioinform. 11, 38 (2010). Li, H. et al. The sequence alignment/map format and SAMtools. Bioinformatics 25, 2078–2079 (2009). González Gayte, I., Bautista Moreno, R., Seoane Zonjic, P. & Claros, M. G. DEgenes Hunter - A Flexible R Pipeline for Automated RNA-seq Studies in Organisms without Reference Genome. Genomics and Computational Biology. Vol. 3, No 3. https://doi.org/10.18547/gcb.2017.vol3.iss3.e31 (2017). Fisher, R. A. in Breakthroughs in Statistics: Methodology and Distribution (eds Samuel Kotz & Norman L. Johnson) 66–70 (Springer New York, 1992). Robinson, M. D., McCarthy, D. J. & Smyth, G. K. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics 26, 139–140 (2010). Anders, S. & Huber, W. Differential expression analysis for sequence count data. Genome Biol. 11, R106 (2010). Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B (Methodol.) 57, 289–300 (1995). MathSciNet MATH Google Scholar Gu, Z., Eils, R. & Schlesner, M. Complex heatmaps reveal patterns and correlations in multidimensional genomic data. Bioinformatics 32, 2847–2849 (2016). Ernst, J. & Bar-Joseph, Z. STEM: a tool for the analysis of short time series gene expression data. BMC Bioinform. 7, 191 (2006). Casimiro-Soriguer, C. S., Munoz-Merida, A. & Perez-Pulido, A. J. Sma3s: A universal tool for easy functional annotation of proteomes and transcriptomes. Proteomics 17, 1700071 (2017). Alexa, A. & Rahnenfuhrer, J. Gene Set Enrichment Analysis with topGO http://www.mpi-sb.mpg.de/∼alexa (2007). Thornton, B. & Basu, C. Real-time PCR (qPCR) primer design using free online software. Biochem. Mol. Biol. Educ. 39, 145–154 (2011). Leães, F. L. et al. Expression of essential genes for biosynthesis of antimicrobial peptides of Bacillus is modulated by inactivated cells of target microorganisms. Res. Microbiol. 167, 83–89 (2016). Vargas, P., Felipe, A., Michan, C. & Gallegos, M. T. Induction of Pseudomonas syringae pv. tomato DC3000 MexAB-OprM multidrug efflux pump by flavonoids is mediated by the repressor PmeR. Mol. Plant Microbe Interact. 24, 1207–1219 (2011). Livak, K. J. & Schmittgen, T. D. Analysis of relative gene expression data using real-time quantitative PCR and the 2−ΔΔCT method. Methods 25, 402–408 (2001). Pfaffl, M. W. A new mathematical model for relative quantification in real-time RT-PCR. Nucleic Acids Res. 29, e45 (2001). Rotenberg, D., Thompson, T. S., German, T. L. & Willis, D. K. Methods for effective real-time RT-PCR analysis of virus-induced gene silencing. J. Virol. Methods 138, 49–59 (2006). Strahl, H., Burmann, F. & Hamoen, L. W. The actin homologue MreB organizes the bacterial cell membrane. Nat. Commun. 5, 3442 (2014). Friedlander, G., Le Grimellec, C., Giocondi, M. C. 
& Amiel, C. Benzyl alcohol increases membrane fluidity and modulates cyclic AMP synthesis in intact renal epithelial cells. Biochim. Biophys. Acta 903, 341–348 (1987). Straight, P. D., Fischbach, M. A., Walsh, C. T., Rudner, D. Z. & Kolter, R. A singular enzymatic megacomplex from Bacillus subtilis. Proc. Natl Acad. Sci. USA 104, 305–310 (2007). We thank Josefa Gómez Maldonado from the Ultrasequencing Unit of the SCBI-UMA for RNA sequencing, Juan Félix López Téllez at Bionand for his technical support in the transmission electron microscopy analysis, and the flow cytometry service at Bionand. We wish to thank Daniel López and Julia García-Fernández from the National Center for Biotechnology of the Spanish National Research Council (CNB-CSIC) for critical discussion and for kindly providing some of the bacterial strains used in this study. We would like to thank Ákos T. Kovács for critical reading of the manuscript and experimental suggestions. C.M.S is funded by the program Juan de la Cierva Formación (FJCI-2015-23810). This work was supported by grants from ERC Starting Grant (BacBio 637971) and Plan Nacional de I+D+I of Ministerio de Economía y Competitividad (AGL2016-78662-R). These authors contributed equally: Jesús Cámara-Almirón, Yurena Navarro. Instituto de Hortofruticultura Subtropical y Mediterránea "La Mayora" – Departamento de Microbiología, Universidad de Málaga, Bulevar Louis Pasteur 31 (Campus Universitario de Teatinos), 29071, Málaga, Spain Jesús Cámara-Almirón, Yurena Navarro, Luis Díaz-Martínez, María Concepción Magno-Pérez-Bryan, Carlos Molina-Santiago, Antonio de Vicente, Alejandro Pérez-García & Diego Romero Nano-imaging Unit, Andalusian Centre for Nanomedicine and Biotechnology, BIONAND, Málaga, Spain John R. Pearson Jesús Cámara-Almirón Yurena Navarro Luis Díaz-Martínez María Concepción Magno-Pérez-Bryan Carlos Molina-Santiago Antonio de Vicente Alejandro Pérez-García Diego Romero D.R. conceived the study. D.R., and J.C.A. and Y.N. designed the experiments. J.C.A. and Y.N. performed the main experimental work. M.C.P.B. gave support to some physiological experiments and did q-RT-PCR experiments. C.M.S. and L.D.M. analyzed and processed the whole transcriptomes. J.C.A. and J.P. performed and designed the confocal microscopy work and data analysis. D.R., J.C.A., and Y.N. wrote the manuscript. D.R., J.C.A., C.M.S., A.V, A.P.G., and L.D.M. contributed critically to writing the final version of the manuscript. Correspondence to Diego Romero. The authors declare no competing interests Peer review information Nature Communications thanks Kürsad Turgay, Masanori Toyofuku and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Peer Review File Description of Additional Supplementary Files Supplementary Data 1 Cámara-Almirón, J., Navarro, Y., Díaz-Martínez, L. et al. Dual functionality of the amyloid protein TasA in Bacillus physiology and fitness on the phylloplane. Nat Commun 11, 1859 (2020). https://doi.org/10.1038/s41467-020-15758-z DOI: https://doi.org/10.1038/s41467-020-15758-z High-level biocidal products effectively eradicate pathogenic γ-proteobacteria biofilms from aquaculture facilities Félix Acosta , Daniel Montero , Marisol Izquierdo & Jorge Galindo-Villegas Aquaculture (2021) Biological Functions of Prokaryotic Amyloids in Interspecies Interactions: Facts and Assumptions Anastasiia O. Kosolapova , Kirill S. Antonets , Mikhail V. Belousov & Anton A. 
Nizhnikov International Journal of Molecular Sciences (2020) Emerging Roles of Functional Bacterial Amyloids in Gene Regulation, Toxicity, and Immunomodulation Nir Salinas , Tatyana L. Povolotsky , Meytal Landau & Ilana Kolodkin-Gal Microbiology and Molecular Biology Reviews (2020) Multifunctional Amyloids in the Biology of Gram-Positive Bacteria Ana Álvarez-Mena , Jesús Cámara-Almirón , Antonio de Vicente & Diego Romero Microorganisms (2020) Protein aggregates carry non-genetic memory in bacteria after stresses O. Ye. Kukharenko , V. O. Terzova & G. V. Zubova Biopolymers and Cell (2020)
BMC Medical Imaging Technical advance paraFaceTest: an ensemble of regression tree-based facial features extraction for efficient facial paralysis classification Jocelyn Barbosa1,2, Woo-Keun Seo3,4 & Jaewoo Kang ORCID: orcid.org/0000-0001-6798-91061 BMC Medical Imaging volume 19, Article number: 30 (2019) Facial paralysis (FP) is a neuromotor dysfunction that causes the loss of voluntary muscle movement on one side of the human face. As the face is the basic means of social interaction and emotional expression among humans, afflicted individuals can often be introverted and may develop psychological distress, which can be even more severe than the physical disability. This paper addresses the problem of objective facial paralysis evaluation. We present a novel approach for objective facial paralysis evaluation and classification, which is crucial for deciding the medical treatment scheme. For FP classification, in particular, we propose a method based on an ensemble of regression trees to efficiently extract facial salient points and detect iris or sclera boundaries. We also employ a 2nd-degree polynomial (parabolic function) to improve Daugman's algorithm for detecting occluded iris boundaries, thereby allowing us to efficiently compute the area of the iris. The symmetry score of each face is measured by calculating the ratios of both the iris areas and the distances between key points on the two sides of the face. We build a model by employing a hybrid classifier that discriminates healthy from unhealthy subjects and performs FP classification. Objective analysis was conducted to evaluate the performance of the proposed method. As we explore the effect of data augmentation using publicly available datasets of facial expressions, experiments reveal that the proposed approach performs efficiently. Extraction of the iris and facial salient points from images based on an ensemble of regression trees, along with our hybrid classifier (classification tree plus regularized logistic regression), provides an improved way of addressing the FP classification problem. It addresses the common limiting factor of previous works, i.e. greater sensitivity to subjects with peculiar facial images, whereby improper identification of the initial evolving curve for facial feature segmentation results in inaccurate facial feature extraction. Leveraging an ensemble of regression trees provides accurate salient point extraction, which is crucial for revealing the significant difference between the healthy and the palsy side when performing different facial expressions. Facial paralysis (FP) or facial nerve palsy is a neuromotor dysfunction that causes the loss of voluntary muscle movement on one side of the human face. As a result, this leads to the loss of the person's ability to mimic facial expressions. FP is not an uncommon medical condition: roughly one in every sixty people worldwide can be affected by facial paralysis [1]. Facial paralysis often causes patients to become introverted and eventually suffer from social and psychological distress, which can be even more severe than the physical disability [2]. It is commonly encountered in clinical practice and can be classified into peripheral and central facial palsy [3, 4]. These two categories differ according to the behavior of the upper part of the face. Peripheral facial palsy is a nerve disturbance in the pons of the brainstem, which affects the upper, middle and lower facial muscles of one side of the face.
On the other hand, central facial palsy is a nerve dysfunction in the cortical areas whereby the forehead and eyes are spared, but the lower half of one side of the face is affected, unlike in peripheral FP [3–5]. This scenario has triggered the interest of researchers and clinicians in this field and, consequently, led them to develop objective grading of facial function and methods for monitoring the effect of medical, rehabilitation or surgical treatment. Many computer-aided analysis systems have been introduced to measure the dysfunction of one part of the face and the level of severity, but few address facial paralysis type as a classification problem. Moreover, no method has yet proven efficient enough to be universally accepted [3]. Classification of each case of facial paralysis into central or peripheral plays a critical role in helping physicians decide on the most appropriate treatment scheme [4]. Image processing has been applied in existing objective facial paralysis assessments, but the processing methods used are mostly labor-intensive or suffer from sensitivity to extrinsic facial asymmetry caused by orientation, illumination and shadows. As such, creating a clinically usable and reliable method is still very challenging. Wachtman et al. [6, 7] measured facial paralysis by examining facial asymmetry on static images. Their methods are prone to reporting facial asymmetry even for healthy subjects due to their sensitivity to orientation, illumination and shadows [8]. Other previous works [9–11] were also introduced, but most of them are based solely on finding salient points on the human face using standard edge detection tools (e.g. Canny, Sobel, SUSAN) for image segmentation. The Canny edge detection algorithm may yield inaccurate results as it influences connected edge points; it compares adjacent pixels along the gradient direction to determine whether the current pixel is a local maximum. Improper segmentation will also result in improper generation of key points. Moreover, it may be difficult to find and detect the exact points when the same algorithm is applied to elderly patients. Dong et al. [11] proposed the use of salient points for estimating the degree of facial paralysis; the detected salient points serve as the basis for the estimation. As the salient points may include points unnecessary for describing facial features, edge detection was used to discard them. K-means clustering is then applied to classify the salient points into six categories: 2 eyebrows, 2 eyes, nose, and mouth. About 14 key points are found in the six facial regions, representing the points that may be affected when performing facial expressions. However, this technique falls short when applied to elderly patients, in whom exact points can be difficult to locate [12]. Another method estimates the degree of facial paralysis by comparing multiple regions of the human face: in [13], Liu et al. compare the two sides of the face and compute four ratios, which are then used to represent the severity of the paralysis. Nevertheless, this method suffers from the influence of uneven illumination [8].
In our previous work [4], a technique that generates closed contours for separating the outer boundaries of an object from the background using the Localized Active Contour (LAC) model was employed for feature extraction, which reasonably reduced these drawbacks. However, one limiting factor of that method is its greater sensitivity to subjects with peculiar images in which facial features suffer from different occlusions (e.g. eyeglasses, eyebrows occluded by hair, wrinkles, excessive beard, etc.). Moreover, improper setting or identification of parameters, such as the radius of the initial evolving curve (e.g. the minimum bounding box of the eyes, eyebrows, etc.), may lead to improper feature segmentation, which may in turn give inaccurate detection of key points, as revealed in Fig. 1. Although this limitation was addressed in [4] by applying a window kernel (generated based on Otsu's method [14]) that is run over the binary image (e.g. detected eyes, eyebrows, lip, etc.), Otsu's method has the drawback of assuming uniform illumination. Additionally, it does not use any object structure or spatial coherence, which may sometimes result in inaccurate generation of kernel parameters that may in turn result in improper segmentation. Pre-processing results. (a) eye image with some uneven illumination; (b)-(c) extracted eye contour by the LAC model when the parameters of the initial evolving curves are not properly identified In order to address these drawbacks, we present a novel approach based on the ensemble of regression trees (ERT) model. We leverage the ERT model to extract salient points as the basis for preparing the features, or independent variables. Our target was to keep a natural framework that finds 68 key points on the edges of facial features (e.g. eyes, eyebrows, mouth, etc.) and regresses the locations of the facial landmarks from a sparse subset of intensity values extracted from an input image. As the ERT model provides a simple and intuitive way of performing this task and of generating landmarks that separate the boundaries of each significant facial feature from the background, without requiring the set-up of parameters for initial evolving curves as required by the previous approach [4], we find this technique appropriate for our problem. Furthermore, empirical results [15] reveal that the ERT model has the appealing quality of performing shape-invariant feature selection while minimizing the same loss function during training and test time, which significantly lessens the time complexity of extracting features. In this study, we make three significant contributions. First, we introduce a more robust approach for efficient evaluation and classification of facial paralysis, which is crucial for determining the appropriate medical treatment scheme. Second, we provide a method for feature extraction based on an ensemble of regression trees in a computationally efficient way; and finally, we study in depth the effect of computing the facial landmark symmetry features on the quality of facial paralysis classification. The study performs objective evaluation of facial paralysis, particularly facial palsy classification and grading, in a computationally efficient way. We capture facial images (i.e. still photos) of the patients with a front-view face and with reasonable illumination so that each side of the face receives a roughly similar amount of lighting.
The photo-taking procedure starts with the patient performing the 'at rest' face position, followed by three voluntary facial movements: raising of the eyebrows, screwing-up of the nose, and showing of teeth or smiling. The framework of the proposed objective facial paralysis evaluation system is presented in Fig. 2. Facial images of a patient, taken while the patient is requested to perform these facial expressions, are stored in the image database. In the rest of this section, we describe the individual components of the facial landmark detection and how we perform the evaluation and classification of facial paralysis. To begin the process, raw images are retrieved from the image database. Dimension alignment and image resizing are then performed, followed by pre-processing of the images to enhance contrast and remove undesirable image noise. Framework of the proposed objective facial paralysis evaluation Facial Image Acquisition and Pre-processing Existing literature shows that facial data acquisition methods can be classified into two categories depending on the processing methods they use and on whether images or videos serve as their databases. As image-based systems have the appealing advantages of ease of use and cost effectiveness [16], we utilize still images as inputs to our classification model. Face Normalization Facial feature extraction can be complicated by changes in face appearance caused by illumination and pose variations. Normalizing the acquired facial images prior to objective FP assessment may significantly reduce these complications. It is worth noting, however, that while face normalization may be a good companion to methods for objective facial paralysis assessment, it is optional, so long as the extracted feature parameters are normalized prior to the classification task. Facial Feature Extraction Deformation Extraction Facial feature deformation can be characterized by the changes of texture and shape that lead to high spatial gradients, which are good indicators for tracking facial actions [17]. In turn, these are good indicators of facial asymmetry and may be analyzed either in the image or the spatial frequency domain, which can be computed by high-pass gradient or Gabor wavelet-based filters. Ngo et al. [18] utilized this method in combination with local binary patterns and reported that it performs well for the task of quantitative assessment of facial paralysis. Deformation extraction approaches can be holistic or local. Holistic image-based approaches are those where the face is processed as a whole, while local methods focus on facial features or areas that are prone to change and are able to detect subtle changes in small areas [17]. In this work, we extract the facial salient points and the iris area for geometric-based and region-based feature generation, respectively. Salient points detection Feature extraction starts from the preprocessing of the input image and facial region detection. We find frontal human faces in an image and estimate their pose. The estimated pose takes the form of 68 facial landmarks that lead to the detection of the key points of the mouth, eyebrows and eyes. The face detector is built using the classic Histogram of Oriented Gradients (HOG) feature [19] combined with a linear classifier, an image pyramid, and a sliding window detection scheme. We utilize the pose estimator employed in [15].
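As a minimal illustration of this detection step (a sketch based on the publicly available dlib API, not the authors' exact code; the model file name and image path are placeholders), the HOG-based face detector and a pre-trained 68-point shape predictor can be combined as follows:

```python
import cv2
import dlib

# HOG-based frontal face detector and a pre-trained 68-point landmark
# predictor (the model file name below is a placeholder path).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Read a patient photo and convert it to grayscale for detection.
image = cv2.imread("patient_at_rest.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The second argument upsamples the image once so the sliding-window
# detector can find smaller faces in the image pyramid.
for rect in detector(gray, 1):
    shape = predictor(gray, rect)  # 68 estimated landmark positions
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    # In the standard 68-point scheme, indices 36-47 outline the eyes and
    # 48-67 the mouth; the symmetry key points are taken from these regions.
    print(len(points), points[36], points[48])
```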
In the context of computer vision, a sliding window is a rectangular region of fixed width and height that slides across an image, as described in the subsequent section. Normally, for each window, we take the window region and apply an image classifier to check if the window has an object (i.e. the face as our region of interest). Combined with image pyramids, image classifiers can be created that can recognize objects even if the scales and locations in the image vary. These techniques, in its simplicity, play an absolutely critical role in object detection and image classification. Features are extracted from the different facial expressions that the patient will perform, which include: (a) at Rest; (b) Raising of eyebrows; (c) frowning or screwing up of nose; and (d) Smiling or showing of teeth. In this study, we consider geometric and region-based features as our inputs for modelling a classifier. To extract the geometric-based features, the salient points of the facial features (e.g. eyes, eyebrows and lip) are detected using the concept of ensemble of regression trees [15]. The goal is to find the 68 key points on edges of facial features as shown in Fig. 3a. However, for generating the geometric-based symmetry features of each image, we are interested in the following points: inner canthus (IC), mouth angle (MA), infra orbital (IO), upper eyelids (UE), supra-orbital (SO), nose tip (NT) and nostrils (see Fig. 3b). In order to detect the salient points of the facial image, we leverage the dlib library [15] that has the capability to detect points of each facial features. These points are inputs for calculating the distance of the key points as identified in Fig. 4, and the corresponding distance ratio for both sides of the face. Additionally, we also extract region-based features, which involves detection of iris or sclera boundaries by leveraging Daugman's Integro-Differential Operation [20]. Facial landmarks or key points. a 68 key points, and b salient points for each facial feature utilized in this study Facial expressions used in the study. a at rest; b raising or lifting of eyebrows; c screwing-up of nose; and d smiling or showing of teeth Histogram of Oriented Gradients(HOG) In the field of computer vision and image processing, Histogram of Oriented Gradients (HOG) is a feature descriptor used for object or face detection. It is an algorithm, which takes an image and outputs feature vectors or feature descriptors. Feature descriptors encode interesting information into a series of numbers and act as a sort of numerical fingerprint, which can differentiate one feature from another. Ideally, this information is invariant under image transformation; hence, even if the image is transformed in some way, we can still find the feature again. HOG uses locally normalized histogram of gradient orientation features similar to Scale Invariant Feature Transform (SIFT) descriptor in a dense overlapping grid, which gives very good results in face detection. It is similar to SIFT, except that HOG feature descriptors are computed on a dense grid of uniformly spaced cells and they use overlapping local contrast normalization for improved performance [19, 21]. Implementation of the HOG descriptor algorithm is as follows: Partition the image into small connected regions called cells, then for each cell, calculate the gradient directions histogram for the pixels within the cell. According to the gradient orientation, discretize each cell into angular bins. 
Each pixel of the cell contributes weighted gradient to its corresponding angular bin. Adjacent group of cells are considered as spatial regions, referred to as blocks. The grouping of cells into a block is then the basis for histograms grouping and normalization. Normalized group of histograms represents the block histogram. The set of these block histograms represents the descriptor. The following basic configuration parameters are required for computation of the HOG descriptor: Masks to compute derivatives and gradients. Geometry of splitting an image into cells and grouping cells into a block. Block overlapping Normalizing parameters However, the recommended values for the HOG parameters include: (a)1D centered derivative mask [-1, 0, +1]; (b) Detection window size of 64x128; (c) Cell size of 8x8; and Block size of 16x16 (2x2 cells) [21]. Ensemble of Regression Trees An ensemble of regression tree is a predictive model, which composes a weighted combination of multiple regression trees. Generally, combining multiple regression trees increases the predictive performance. It is with the collection of regression trees that makes a bigger and better regression tree [15]. The core of each regression function rt is the tree-based regressors fit to the residual targets during the gradient boosting algorithm. Shape invariant split tests Based on the approach used by Kazemi et al. [15], we make a decision based on thresholding the difference between the intensities of two pixels at each split node in the regression tree. When defined in the coordinate system of the mean shape, the pixels used in the test are at positions i and k. For facial images having arbitrary shapes, we intend to index the points that have the same position relative to its shape, as i and k have to the mean shape. To accomplish this task, the image can be warped to the mean shape based on the current shape estimate before extracting the features. Warping the location of points is more efficient rather than the whole image, since we only utilize a very sparse representation of the image. In what follows, the details are precisely shown. We let vi be the index of the facial landmark in the mean shape that is nearest to i and its offset from i is defined as: $$ \delta {Y_{i}} = i - {\overline Y_{vi}}\ $$ Then for a shape Sj defined in image Ij, the position in Ij that is qualitatively similar to i in the mean shape image is given by $$ i^{\prime} = {Y_{j,vi}} + \frac{1}{{{s_{j}}}}R_{j}^{\mathrm{T}}\delta {Y_{i}}\ $$ where Sj and Rj are the scale and rotation matrix of the similarity transform which transforms Sj to \(\overline {S}\), the mean shape. The scale and rotation are found to minimize $$ \sum\limits_{m = 1}^{n} {||} {\overline{Y}}_{m} - \left({s_{j} R_{j} Y_{j,m} + t_{j}} \right) {||}^{2} \ $$ the sum of squares between the mean shape's facial landmark points, \(\overline {Y}_{m}\)'s, and those of the warped shape. k′ is similarly defined. Formally each split is a decision involving 3 parameters θ=(τ,i,k) and is applied to each training and test example as $$ h\left({I_{\pi_{j}},{\widehat{S}}_{j}^{(t)},\theta} \right) = \left\{\begin{array}{ll} {1} & {I_{\pi_{j}} \left({i^{\prime}} \right) - I_{\pi_{j}} \left({k^{\prime}} \right) > \tau }\\ {0} & {Otherwise} \end{array}\right. $$ where i′ and k′ are defined using the scale and rotation matrix which best warp \(\hat {S}^{(t)}_{j}\) to \(\overline {S}\) according to equation (2). 
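To make the split test concrete, the following Python sketch applies equations (1), (2) and (4) to a single pair of test pixels; the image, shapes, similarity transform and threshold are hypothetical stand-ins for the quantities that are learned during gradient boosting:

```python
import numpy as np

def warp_pixel(i, mean_landmark_vi, current_landmark_vi, s_j, R_j):
    """Equations (1)-(2): map pixel position i, defined relative to the mean
    shape, to the corresponding position i' in the current image.
    delta = i - mean_landmark_vi                       (eq. 1)
    i'    = current_landmark_vi + (1/s_j) * R_j^T delta (eq. 2)
    """
    delta = i - mean_landmark_vi
    return current_landmark_vi + (1.0 / s_j) * (R_j.T @ delta)

def split_test(image, i_prime, k_prime, tau):
    """Equation (4): binary split based on a thresholded intensity difference."""
    yi = float(image[int(i_prime[1]), int(i_prime[0])])
    yk = float(image[int(k_prime[1]), int(k_prime[0])])
    return 1 if (yi - yk) > tau else 0

# Hypothetical values standing in for quantities obtained during training.
image = np.random.randint(0, 256, size=(128, 128)).astype(np.uint8)
R_j, s_j = np.eye(2), 1.0                                  # similarity transform (eq. 3)
i, k = np.array([40.0, 30.0]), np.array([80.0, 30.0])      # test pixels on the mean shape
mean_vi, mean_vk = np.array([38.0, 32.0]), np.array([78.0, 31.0])  # nearest mean-shape landmarks
cur_vi, cur_vk = np.array([45.0, 35.0]), np.array([86.0, 36.0])    # current shape estimate

i_prime = warp_pixel(i, mean_vi, cur_vi, s_j, R_j)
k_prime = warp_pixel(k, mean_vk, cur_vk, s_j, R_j)
print(split_test(image, i_prime, k_prime, tau=20))
```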
In practice, during the training phase, the assignments and local translations are identified. At test time, computing the similarity transform, the most computationally expensive part of the process, is only done once at every level of the cascade. This method starts from the following: (a) a training set of labeled facial landmarks on an image, where the images are manually labeled by specifying the (x, y)-coordinates of regions surrounding each facial structure; and (b) priors, or more specifically, the probability on the distance between pairs of input pixels. Given this training data, an ensemble of regression trees is trained to estimate the positions or locations of facial landmarks directly from the pixel intensities, that is, no feature extraction is taking place. The final result is a detector of facial landmarks that can be utilized to efficiently detect the salient points of an image. Fig. 5 presents sample results of our proposed approach for facial feature detection based on the ensemble of regression trees (ERT) model when applied to the JAFFE dataset [22, 23]. Sample results of our proposed approach for facial feature extraction based on the Ensemble of Regression Trees (ERT) model as applied to the JAFFE dataset [22]. Facial landmarks (i.e. 68 key points) are detected from a single image, based on the ERT algorithm Region-based Feature Extraction Iris Detection A person who has a symptom of facial paralysis is likely to have asymmetric distances between the infra orbital (IO) and mouth angle (MA) on both sides of the face while performing facial movements such as smiling and frowning. They may also have an unbalanced exposure of the iris when performing different voluntary muscle movements (e.g. screwing of the nose, showing of teeth or smiling, raising of eyebrows with both eyes directed upward) [4]. We utilize the points (e.g. upper eyelid (UE), infra orbital (IO), inner canthus (IC) and outer canthus (OC)) detected by the ensemble of regression trees algorithm and, from there, we generate the parameters of the eye region as inputs to Daugman's Integro-Differential operator [20] to detect the iris-sclera boundaries. However, as some eye images do have eyelid occlusions, an optimization of Daugman's algorithm was performed to ensure proper iris boundary detection. In our previous work, an LAC-based method was employed to optimize Daugman's algorithm and perform a subtraction method. However, the LAC model is quite tedious, as it may result in improper segmentation if the initial evolving curves and iterations are not properly defined. Moreover, it has a greater sensitivity to the eyes of elderly patients due to the presence of excessive wrinkles. In this paper, we implement a curve-fitting scheme to ensure proper detection of the iris-sclera boundaries, thereby providing better results in detecting an occluded iris. By definition, a parabola is a plane curve; it is mirror-symmetrical and approximately U-shaped. To fit a parabolic profile through the data, a second-degree polynomial is used, defined as: $$ y = ax^{2} + bx + c $$ This will exactly fit a simple curve to three constraints, each of which can be a point, an angle, or a curvature. More often, the angle and curvature constraints are added to the end of a curve; in this case, they are called end conditions. To fit a curve through the data, we first obtain the coefficients of the fit (e.g. for a line p(x) = bx + c, the coefficients b and c). To evaluate the fit, values of x (e.g. given by xp) must be chosen; for example, the curve can be evaluated for x in [0, 7] in steps of Δx = 0.1.
The generated coefficients will then be used to generate the y values of the polynomial fit at the desired values of x given by xp. This means that a vector (denoted as yp) can be generated, which contains the parabolic fit to the data evaluated at the x-values xp. In what follows, we describe our approach for detecting the iris-sclera boundaries: (1) detect the eye region (i.e. create a rectangular bounding box from the detected UE, IO, IC and OC, as illustrated in Fig. 3b); (2) detect the upper eyelid edge using a gray threshold level (e.g. threshold <= 0.9); (3) convert the generated grayscale image to binary; (4) find the significant points of the upper eyelid by traversing each column and finding the first row whose pixel value = 1 and whose location variance (i.e. the row address of the last 4 pixels) is minimal within the threshold; we call this vector A; (5) implement curve fitting through the generated data points using a parabolic profile (i.e. a second-degree polynomial); we refer to this as A'; (6) detect the iris using Daugman's integro-differential operator and convert it to binary form; we call this vector B; (7) find the intersections of the two vectors A' and B and take all pixels of vector B below the parabolic curve A'; (8) utilize the points of the lower eyelid detected by the ERT model; we call this vector C; and (9) finally, find the region of the detected iris within the intersection of vectors B and C. A closer look at Figs. 6 and 7 reveals interesting results of this approach. Extracted iris using our proposed approach based on Daugman's algorithm. (a) converted gray scale image; (b) upper edge of the eye with gray thresh level of 0.9; (c) equivalent binary form of the detected upper eyelid; (d) data points of the upper eyelid; (e)-(f) upper eyelid as a result of employing parabolic function (2nd degree polynomial); (g) result of Daugman's Integro-Differential operator iris detection; (h)-(n) eyelid detection results using our optimized Daugman's algorithm; and (o) final results of detecting iris boundaries with superior or upper eyelid occlusion Some more results of extracted iris from the UBIRIS images [24]. a Original image; b Converted gray scale; c upper edge of the eye with gray thresh level of 0.9; d equivalent binary form of the detected upper eyelid; e data points of the upper eyelid; f-g upper eyelid as a result of employing parabolic function (2nd degree polynomial); h result of Daugman's Integro-Differential operator iris detection; i-n results of eyelid detection with our optimized Daugman's algorithm; and o final results of segmented iris occluded by upper and lower eyelids Facial Paralysis Measurement Symmetry Measurement by Iris and Key points In this paper, the symmetry of both sides of the face is measured using the ratios that are obtained from computing the iris area (i.e. generated while the subject performs raising of eyebrows with both eyes directed upward, and screwing of the nose or frowning) and the distance between the two identified points on each side of the face while the subject is asked to perform the different facial expressions (e.g. at rest, raising of eyebrows, screwing of nose, and showing of teeth or smile). Table 1 shows the summary of the salient points used as the basis for extracting features such as the ratio of the iris area as well as the distance ratio between the two sides of the face.
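Before turning to the symmetry features summarized in Table 1, here is a minimal sketch of the curve-fitting and masking steps (5)-(7) described above. It is an illustration under assumed variable names (the eyelid point vectors and the Daugman iris mask are taken as given), not the exact code used in the study:

```python
import numpy as np

def fit_upper_eyelid(cols, rows, dx=0.1):
    """Fit a parabola y = a*x^2 + b*x + c through the detected upper-eyelid
    points (vector A) and evaluate it on a dense grid xp (giving A')."""
    coeffs = np.polyfit(cols, rows, deg=2)              # [a, b, c], second-degree polynomial
    xp = np.arange(cols.min(), cols.max() + dx, dx)     # e.g. steps of delta x = 0.1
    yp = np.polyval(coeffs, xp)                         # parabolic fit evaluated at xp
    return coeffs, xp, yp

def iris_below_eyelid(iris_mask, coeffs):
    """Keep only iris pixels (vector B) lying below the fitted upper-eyelid
    curve A'; image rows grow downwards, so 'below' means a larger row index."""
    rows, cols = np.nonzero(iris_mask)
    eyelid_rows = np.polyval(coeffs, cols)
    keep = rows > eyelid_rows
    cleaned = np.zeros_like(iris_mask)
    cleaned[rows[keep], cols[keep]] = 1
    return cleaned
```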
Table 1 List of facial expressions and the corresponding landmarks used for feature extraction With 'at rest' and 'raising of eyebrows', we calculate the distance between two points: infra orbital (IO) and supra-orbital (SO); and the nose tip (NT) and SO, while with the 'smile' expression, we get the distances between the two identified points: inner canthus (IC) and mouth angle (MA); IO and MA; and NT and MA. Lastly, for the 'frown' expression, we get the distance between the two points: NT and MA; and NT and nostrils. Consequently, the computed distance ratios of both sides of the face are considered as the symmetry features of each subject. Computed distances include: P20PR_IO, P25PL_IO, P20P31 and P25P31 (see Fig. 4a and 4b); P31P33, P31P35, P31P49 and P31P55 (see Figure 4c); and P40P49, P43P55, P31P49, P31P55, P49PR_IO and P55PL_IO (see Figure 4d). Additionally, we calculate the area of the extracted iris. This is followed by the computation of the ratio between the two sides, using the expression below: $$ dRatio = \left\{\begin{array}{ll} {\frac{D_{L}}{D_{R}}} & {D_{R} > D_{L}}\\ {\frac{D_{R}}{D_{L}}} & {\text{otherwise}} \end{array}\right. $$ where dRatio is the ratio of the computed distances DL and DR of the specified key points of each half of the face. We also consider the capability of the patients to raise the eyebrows (i.e. the rate of movement) as one important feature for symmetry measurement, by comparing the two facial images, the 'at rest' position and 'raising of eyebrows', as shown in Fig. 4a and 4b. We compute the distances a1 and b1 (Fig. 4a), where a1 and b1 are the distances from P20 to \(P_{\texttt {R\_IO}}\) and from P20 to P31 of the right eye, respectively. We then compute the ratio of a1 and b1. Similarly, for the second image (Fig. 4b), we get a2 and b2 as well as their ratio. Finally, we compute the difference of these two ratios (i.e. the difference between a1/b1 and a2/b2) and denote it as right_Movement. The same procedure is applied to the two images for finding the ratio difference for the left eye (i.e. the difference between y3/x3 and y4/x4), which we denote as left_Movement. The rate of movement can be computed by finding the ratio between right_Movement and left_Movement. Intuitively, this rate of movement for normal subjects is likely to be higher (usually approaching 1, which signifies the ability to raise both eyebrows) than that of the FP patients. Facial Palsy Type Classification Classification of facial paralysis type involves two phases: (1) discrimination of healthy from unhealthy subjects; and (2) proper facial palsy classification. In this context, we model the mapping of symmetry features (as described in the previous subsection) into each phase as a binomial classification problem. As such, we employ two classifiers to be trained, one for healthy and unhealthy discrimination (0-healthy, 1-unhealthy) and another one for facial palsy type classification (0-peripheral palsy (PP), 1-central palsy (CP)). For each classifier, we consider Random Forest (RF), Regularized Logistic Regression (RLR), Support Vector Machine (SVM), Decision Tree (DT), naïve Bayes (NB) and a hybrid classifier as appropriate classification methods, as they have been successfully applied to pattern recognition and classification on datasets of realistic size [4, 8, 13, 22]. With the hybrid classifier, we apply a rule-based approach prior to employing a machine learning (ML) method.
This is based on empirical results [4, 11, 13], which show that normal subjects are likely to have an average facial measurement ratio closer to 1.0 and central palsy patients are likely to have a distance ratio from the supra-orbital (SO) to the infra-orbital (IO) approaching 1.0. Similarly, the iris exposure ratio usually results in values close to 1. Hence, we find a hybrid classifier (rule-based + ML) appropriate in our work. This process is presented in Fig. 8. If rule number 1 is satisfied, the algorithm continues to move to the case path (i.e. for the second task), testing whether rule number 2 is also satisfied; otherwise, it performs a machine learning task, such as RF, RLR, SVM, or NB. The rules are generated after fitting the training set to the DT model. For example, rule 1 may have conditions like: if fx < 0.95 and fy < 0.95 (where fx and fy are two of the predictors, the mean result of all parameters and the IO_MA ratio, respectively, based on Table 1), then the subject is most likely to be diagnosed with facial paralysis, and therefore can proceed to rule no. 2 (i.e. to predict the FP type); otherwise, a machine learning task is performed. If the classifier returns 0, the algorithm exits from the entire process, as this signifies that the subject is classified as normal/healthy; else it moves to the case path to perform a test on rule number 2 for facial palsy type classification. Hybrid Classifier If rule number 2 is satisfied, the system gives 1 (i.e. 0-PP; 1-CP), else the feature set is fed to another classifier, which can yield either 0 or 1. As with rule number 1, rule number 2 is also generated by the DT model. For example, rule 2 may set conditions like: if fa > 0.95 and fb > 0.95 (where fa and fb are two of the predictors, the SO_IO ratio and the iris area ratio, respectively), then the subject is most likely to be diagnosed as having central palsy (CP); otherwise, the feature set is fed to another classifier, which could return either 0 or 1 (i.e. 0-PP; 1-CP). In our experiments, we used 440 facial images, which were taken from 110 subjects (i.e. 50 patients and 60 healthy subjects). Of the 50 unhealthy subjects, 40 have peripheral palsy (PP) and 10 have central palsy (CP). We used 70% of the dataset as the training set and 30% as the test set. For example, in discriminating healthy from unhealthy subjects, we used 77 subjects (i.e. 35 patients plus 42 normal subjects) as the training set and 33 subjects (i.e. 15 patients plus 18 normal) as the test set, while for the FP type classification problem we used 35 unhealthy cases (i.e. 28 PP and 7 CP) as our training set and 15 (i.e. 12 PP and 3 CP) as our test set. Each subject was asked to perform 4 facial movements. During image pre-processing, resolutions were converted to 960 x 720 pixels. The facial palsy type of each subject was pre-labeled based on the clinicians' evaluation, which was used during the training stage. This was followed by feature extraction, i.e. the calculation of the area of the extracted iris and the distances between the identified points as presented in Fig. 4. Overall, we utilize 11 features to train the classifier. The samples for the healthy and unhealthy classifier were categorized into two labels: 0-healthy, 1-unhealthy. Similarly, samples for unhealthy subjects were classified into two labels: 0-central palsy and 1-peripheral palsy. It can be noted that healthy subjects have very minimal asymmetry in both sides of the face, resulting in a ratio that approaches 1.
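The two-stage logic described above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the 0.95 thresholds and the feature indices come from the hypothetical example rules given in the text, and in practice the rules are read off the fitted decision tree:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class HybridRuleRLR:
    """Rule-first hybrid classifier: a decision-tree-derived rule fires when its
    conditions hold; otherwise a regularized logistic regression makes the call."""

    def __init__(self, rule, C=1.0):
        self.rule = rule                                 # callable: feature vector -> 1 or None
        self.rlr = LogisticRegression(penalty="l2", C=C, max_iter=1000)

    def fit(self, X, y):
        self.rlr.fit(X, y)
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X, dtype=float):
            r = self.rule(x)
            preds.append(r if r is not None else int(self.rlr.predict(x.reshape(1, -1))[0]))
        return np.array(preds)

def rule_unhealthy(x):
    """Illustrative rule 1: features 0 and 1 are assumed to hold the mean ratio of
    all parameters and the IO-MA distance ratio; None defers to the RLR model."""
    return 1 if (x[0] < 0.95 and x[1] < 0.95) else None
```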
Facial paralysis type classification Regularized logistic regression (RLR), support vector machine (SVM), random forest (RF), naïve Bayes (NB), and classification tree (DT) classifiers were also utilized for comparison with our hybrid classifiers. Given that the dataset is not very large, we adopt the k-fold cross-validation test scheme in forming a hybrid model. The procedure involves 2 steps: rule extraction and hybrid model formation, as applied in our previous work [4]. Step 1: rule extraction. If we have a dataset D = ((x1, y1),..., (xn, yn)), we hold out 30% of D and use it as a test set T = ((x1, y1),..., (xt, yt)), leaving 70% as our new dataset D'. We adopt a k-fold cross-validation test scheme over D', i.e. with k = 10. For example, if N = 80 samples, each fold has 8 samples. In each iteration, we leave one fold out and consider it as our validation set and utilize the remaining 9 folds as our training set to learn a model (e.g. rule extraction). Since we have 10 folds, we do this process for 10 repetitions. We extract rules by fitting the training set to a DT model. Step 2: hybrid model formation. In this step, we form a hybrid model by combining the rules generated in each fold and the ML classifier. Each model is tested out over the validation set using different parameters (e.g. lambda for logistic regression and gamma for SVM). For example, to form the first hybrid model, we combine the rule extracted from the first fold and a regularized logistic regression model (i.e. rule + RLR) and test out its performance over the validation set (left-out fold) while applying it to each of the 10 parameters. Therefore, for each fold, we generate 10 performance measures. We repeat this procedure for the succeeding folds, which means performing the steps k times (i.e. with k = 10, since we are using 10-fold cross-validation) will give us 100 performance measures. We calculate the average performance over all folds. This yields 10 average performance measures (i.e. one for each parameter value), each of which corresponds to one specific hybrid model. Then we choose the best hybrid model, i.e. the model with the lambda that minimizes errors. We retrain the selected model on all of D', test it out over the hidden test set T = ((x1, y1),..., (xt, yt)), i.e. the 30% of the dataset D held out earlier, and get the performance of the hybrid model. We evaluate the classifiers by their average performance over 20 repetitions of k-fold cross-validation using k = 10. We repeat this process for the evaluation of the other hybrid models (e.g. rule-based + RF, rule-based + RLR, etc.) and finally choose the hybrid model that performs best. The hybrid classifiers, RF, SVM, RLR, DT and NB were tested, and experiments reveal that the hybrid classifier rule-based + RLR (hDT_RLR) outperformed the other classifiers in discriminating healthy from unhealthy (i.e. with paralysis) subjects. Similarly, for the classification task of facial palsy (PP-peripheral and CP-central palsy), the hDT_RLR hybrid classifier is superior to the other classifiers used in the experiments. Table 2 presents a comparison of the average performance of our hybrid classifier, RF, RLR, SVM, DT and NB based on our proposed approach. For FP type classification, our hDT_RLR hybrid classifier achieves a sensitivity 5.2% higher than RLR, RF, SVM, DT and NB (see Table 2). Other hybrid classifiers also show good results comparable with hDT_RLR.
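A compact sketch of the parameter-selection loop in Step 2 is given below; it assumes NumPy feature arrays, uses sensitivity (recall) as the score, and reuses the illustrative HybridRuleRLR class defined earlier. The helper names and the candidate grid are assumptions, not the authors' code:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.metrics import recall_score

def select_parameter(X, y, candidates, build_model, k=10, seed=0):
    """For each candidate regularization value, average the validation sensitivity
    over k folds and return the candidate with the best average (Step 2 above)."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    averages = []
    for lam in candidates:
        scores = []
        for train_idx, val_idx in skf.split(X, y):
            model = build_model(lam).fit(X[train_idx], y[train_idx])
            scores.append(recall_score(y[val_idx], model.predict(X[val_idx])))
        averages.append(np.mean(scores))                 # k scores per candidate -> 1 average
    return candidates[int(np.argmax(averages))]

# Hypothetical usage: hold out 30% as the hidden test set T and tune on D'.
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
# best = select_parameter(X_tr, y_tr, np.logspace(-3, 1, 10),
#                         lambda lam: HybridRuleRLR(rule_unhealthy, C=1.0 / lam))
```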
However, in the context of our study, we are more concerned with designing a classifier that yields stable results on the sensitivity performance measure without necessarily sacrificing the specificity (the complement of the fall-out, or probability of false alarm). Hence, we employ hDT_RLR. Table 2 Comparison of the performance of different classifiers for facial palsy classification To illustrate the diagnostic ability of our classifier, we create a receiver operating characteristic (ROC) curve, a graphical plot obtained while varying the discrimination threshold. The ROC curve is a plot of the true positive rate (TPR) on the y-axis versus the false positive rate (FPR) on the x-axis for every possible classification threshold. In machine learning, the true-positive rate, also known as sensitivity or recall, answers the question: how often does the classifier predict positive when the actual classification is positive (i.e. not healthy)? On the other hand, the false-positive rate, also known as the fall-out or probability of false alarm, answers the question: how often does the classifier incorrectly predict positive when the actual classification is negative (i.e. healthy)? The ROC curve is therefore the sensitivity as a function of the fall-out. Figures 9 and 10 present the comparison of the area under the ROC curve (AUC) of our hybrid hDT_RLR classifier for healthy and unhealthy discrimination and for FP type classification (i.e. central or peripheral), respectively, using three different feature extraction methods: (a) the Localized Active Contour-based method for key point feature extraction (LAC-KPFE); (b) the Localized Active Contour-based method for geometric and region feature extraction (LAC-GRFE); and (c) the Ensemble of Regression Tree-based method for geometric and region feature extraction (ERT-GRFE). Comparison of the ROC curve of our classifiers using different feature extraction methods (ERT-GRFE, LAC-GRFE and LAC-KPFE) for Healthy and Not Healthy classification Comparison of the ROC curve of our classifiers using different feature extraction methods (ERT-GRFE, LAC-GRFE and LAC-KPFE) for Facial Palsy classification Our proposed approach, ERT-GRFE with the hybrid classifier hDT_RLR, achieves an AUC 2.7%-4.9% higher than the other two methods in discriminating healthy from unhealthy (i.e. with paralysis) subjects, as shown in Fig. 9. Similarly, for the palsy type classification, central palsy (CP) versus peripheral palsy (PP), the ERT-GRFE plus hDT_RLR hybrid classifier outperformed the two feature extraction methods LAC-GRFE and LAC-KPFE used in the experiments by at least 2.5%-7.7%, as shown in Fig. 10. Experiments reveal that our method yields more stable results. Tables 3 and 4 present a comparison of the performance of the three methods for discriminating healthy from unhealthy subjects and for classifying facial palsy type, respectively. Each approach differs according to the features applied and the corresponding methods used for extracting such features, which include: (a) the Localized Active Contour-based method for key point feature extraction (LAC-KPFE); (b) the Localized Active Contour-based method for geometric and region-based features (LAC-GRFE); and (c) the Ensemble of Regression Tree-based method for geometric and region-based feature extraction (ERT-GRFE).
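The ROC curves and AUC values discussed above can be reproduced with a few lines of scikit-learn; a minimal sketch follows, assuming a held-out test set and a model exposing class probabilities (the variable names are placeholders):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc(y_true, y_score, label):
    """Plot one ROC curve (TPR vs. FPR over all thresholds) and report its AUC."""
    fpr, tpr, _ = roc_curve(y_true, y_score)             # y_score: probability of the positive class
    roc_auc = auc(fpr, tpr)
    plt.plot(fpr, tpr, label=f"{label} (AUC = {roc_auc:.3f})")
    return roc_auc

# Hypothetical usage for the healthy vs. unhealthy task:
# plot_roc(y_test, model.rlr.predict_proba(X_test)[:, 1], "ERT-GRFE + hDT_RLR")
# plt.plot([0, 1], [0, 1], linestyle="--")                # chance line
# plt.xlabel("False positive rate"); plt.ylabel("True positive rate"); plt.legend(); plt.show()
```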
Table 3 Comparison of the performance of the three methods for healthy and unhealthy discrimination Table 4 Comparison of the performance of the three methods for facial palsy classification Table 3 shows that in discriminating healthy from unhealthy subjects, our proposed approach outperforms the other methods that use key or salient point-based features extracted with the LAC model, in terms of sensitivity and specificity, with an improvement of 9% and 4.1%, respectively. Similarly, experiments show that our approach outperforms the previous method [4] that used geometric and region-based features (GRFE) extracted with the LAC model, in terms of sensitivity and specificity, with an improvement of 3.1% and 5.9%, respectively. On the other hand, Table 4 reveals that for central and peripheral palsy classification, our proposed ERT-based GRFE performs considerably better than the previous approach that solely used key point-based features, with an improvement of around 12% and 9% in the sensitivity and specificity performance measures, respectively. Furthermore, experiments reveal that our proposed ERT-based GRFE approach yields better performance, particularly in sensitivity and specificity, with an improvement of 6.4% and 6.8%, respectively. Thus, our proposed approach is superior among the three methods. Empirical results [15] reveal that the ensemble of regression trees (ERT) model has the appealing quality that it performs shape-invariant feature selection while minimizing the same loss function during training and test time, which significantly lessens the time complexity of extracting features. True enough, our experiments reveal that the ERT-based method for geometric and region feature extraction (ERT-GRFE) works well. Furthermore, our proposed approach of combining iris segmentation and ERT-based key point detection for feature extraction provides a better discrimination of central and peripheral palsy, especially in the 'raising of eyebrows' and 'screwing of nose' movements. It captures changes in the structure of the eye edges, i.e., the significant difference between the normal side and the palsy side for some facial movements (e.g. eyebrow lifting, nose screwing, and showing of teeth). Also, features based on the combination of iris and key points, obtained by utilizing the ensemble of regression trees technique, can model the typical changes in the eye region. A closer look at the performance of our classifier, as shown in Tables 3 and 4, reveals interesting statistics in terms of the specific abilities of the three methods. Our method proves to make a significant contribution in discriminating central from peripheral palsy patients and healthy from facial palsy subjects. The combination of iris segmentation and the ERT-based key point approach is more suitable for this operation. In this study, we present a novel approach to address the FP classification problem in facial images. Salient point and iris detection based on an ensemble of regression trees are employed to extract the key features. The Regularized Logistic Regression (RLR) plus Classification Tree (CT) classifier provides an efficient quantitative assessment of facial paralysis. Our facial palsy type classifier provides a tool essential to physicians for deciding the medical treatment scheme with which to begin the patient's rehabilitation process. Furthermore, our approach has several merits that are very essential to real application: geometric and iris region features based on an ensemble of regression trees and optimized Daugman's theory (using a parabolic function, i.e.
2nd degree polynomial), respectively, allow efficient identification of the asymmetry of the human face, which reveals the significant difference between the normal and afflicted sides, which the localized active contour model fails to track, especially for peculiar images (e.g. wrinkles, excessive mustache, occlusions, etc.); the ERT model has a very appealing quality of reducing errors in each iteration, which can be very useful in extracting boundaries of the facial features from the background; and the ERT model does not require proper or perfect identification of initial evolving curves to ensure accurate facial feature extraction. Our method significantly lessens the time complexity of extracting features without sacrificing the level of accuracy, making it more suitable for real application. AUC: area under ROC curve CP: Central palsy CT: Classification tree DT: Decision tree ERT: Ensemble of regression trees ERT-GRFE: Ensemble of Regression Tree-based method for geometric and region features extraction FP: Facial paralysis FPR: False Positive Rate HOG: Histogram of Oriented Gradients hDT_RLR: hybrid Decision tree and Regularized Logistic Regression IC: Inner Canthus IO: Infra Orbital IRB: Institution Review Board LAC: Localized Active Contour LAC-GRFE: Localized Active Contour-based method for geometric and region features extraction LAC-KPFE: Localized Active Contour-based method for key points extraction MA: Mouth Angle NB: Naive bayes NT: Nose tip OC: Outer Canthus PP: Peripheral palsy RF: Random forest RLR: Regularized Logistic Regression ROC: Receiver operating characteristic curve ROI: Region of interest SO: Supra-orbital TPR: True positive rate UE: Upper Eyelids Baugh R, Ishii G, Schwartz SR, Drumheller CM, Burkholder R, Deckard N, Dawson C, Driscoll C, Boyd Gillespie M. Clinical practice guideline: Bell's palsy. Otolaryngol-Head Neck Surg. 2013; 149:1–27. Peitersen E. Bell's palsy: the spontaneous course of 2,500 peripheral facial nerve palsies of different etiologies. Acta OtoLaryngol. 2002; 122(7):4–30. Kanerva M. Peripheral facial palsy grading, etiology, and Melkersson–Rosenthal syndrome. PhD thesis. Finland: University of Helsinki; 2008. Barbosa J, Lee K, Lee S, Lodhi B, Cho J, Seo W, Kang J. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier. BMC Med Imaging. 2016; 16:23–40. May M. Anatomy for the clinician. In: May M, Schaitkin B, editors. The Facial Nerve, May's Second Edition. 2nd ed. New York: Thieme: 2000. p. 19–56. Wachtman GS, Liu Y, Zhao T, Cohn J, Schmidt K, Henkelmann TC, VanSwearingen JM, Manders EK. Measurement of asymmetry in persons with facial paralysis. In: Combined Annu Conf. Robert H. Ivy. Ohio Valley: Societies of Plastic and Reconstructive Surgeons: 2002. Liu Y, Schmidt K, Cohn J, Mitra S. Facial asymmetry quantification for expression invariant human identification. Comput Vis Image Underst J. 2003; 91:138–59. He S, Soraghan J, O'Reilly B, Xing D. Quantitative analysis of facial paralysis using local binary patterns in biomedical videos. IEEE Trans Biomed Eng. 2009; 56:1864–70. Wang S, Li H, Qi F, Zhao Y. Objective facial paralysis grading based on pface and eigenflow. Med Biol Eng Comput. 2004; 42:598–603. Anguraj K, Padma S. Analysis of facial paralysis disease using image processing technique. Int J Comput Appl (0975–8887). 2012; 54:1–4. Dong J, Ma L, Li W, Wang S, Liu L, Lin Y, Jian M. An approach for quantitative evaluation of the degree of facial paralysis based on salient point detection.
In: International Symposium on Intelligent Information Technology Application Workshops. New York City: IEEE: 2008. Liu L, Cheng G, Dong J, Qu H. Evaluation of facial paralysis degree based on regions. In: Third International Conference on Knowledge Discovery and Data Mining. Washington, DC: IEEE Computer Science Society: 2010. p. 514–7. Liu L, Cheng G, Dong J, Qu H. Evaluation of facial paralysis degree based on regions. In: Proceedings of the 2010 Third International Conference on Knowledge Discovery and Data Mining. Washington, DC: IEEE Computer Science Society: 2010. p. 514–7. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst, Man Cybern. 1979; 9:62–6. Kazemi V, Sullivan J. One millisecond face alignment with an ensemble of regression trees. In: Proc. IEEE Conf Comput Vis Pattern Recog. New York City: IEEE: 2014. p. 1867–1874. Samsudin W, Sundaraj K. Image processing on facial paralysis for facial rehabilitation system: A review. In: IEEE International Conference on Control System, Computing and Engineering. Malaysia: IEEE: 2012. p. 259–63. Fasel B, Luttin J. Automatic facial expression analysis: Survey. Pattern Recog. 2003; 36:259–75. Ngo T, Seo M, Chen Y, Matsushiro N. Quantitative assessment of facial paralysis using local binary patterns and Gabor filters. In: Proceedings of the 5th International Symposium on Information and Communication Technology (SoICT). New York: ACM: 2014. p. 155–61. Déniz O, Bueno J, Salido F, De la Torre F. Face recognition using histograms of oriented gradients. Pattern Recog Lett. 2011; 32:1598–603. Daugman J. How iris recognition works. IEEE Trans Circ Syst Video Technol. 2004; 14:21–30. Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: Proc IEEE Conf Computer Vision and Pattern Recognition. New York City: IEEE: 2005. Lyons M, Budynek J, Akamatsu S. Automatic classification of single facial images. IEEE Trans Pattern Anal Mach Intell. 1999; 21(12):57–62. Lyons M, Akamatsu S, Kamachi M, Gyoba J. Coding facial expressions with Gabor wavelets. In: Third IEEE Conf. Face and Gesture Recognition. IEEE: 1998. p. 200–5. Proenca H, Alexandre L. Ubiris: A noisy iris image database. In: Proceed. of ICIAP 2005 - Intern. Confer. on Image Analysis and Processing. United States: Springer Link: 2005. p. 970–977. There are no acknowledgments. This research was supported by the National Research Foundation of Korea (NRF-2017M3C4A7065887) and National IT Industry Promotion Agency grant funded by the Ministry of Science and ICT and Ministry of Health and Welfare (NO. C1202-18-1001, Development Project of The Precision Medicine Hospital Information System (P-HIS)); and the scholarship was granted by the Korean Government Scholarship Program - NIIED, Ministry of Education, South Korea. The datasets generated and/or analysed during the current study are not available due to patient privacy. Department of Computer Science and Engineering, Korea University, Seoul, South Korea Jocelyn Barbosa & Jaewoo Kang IT Department, University of Science and Technology of Southern Philippines, Cagayan de Oro, Philippines Jocelyn Barbosa Department of Neurology and Stroke Center, Samsung Medical Center, Seoul, South Korea Woo-Keun Seo Sungkyunkwan University School of Medicine, Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, South Korea Jaewoo Kang Conceived and designed the methodology and analyzed the methods: JB, JK.
Performed experiments and programming: JB. Collected images and performed manual evaluation: WS. All authors wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Jaewoo Kang. This study was approved by the Institution Review Board (IRB) of Korea University, Guro Hospital (reference number MD14041-002). The board waived the requirement for informed consent due to the retrospective design of this study. Barbosa, J., Seo, WK. & Kang, J. paraFaceTest: an ensemble of regression tree-based facial features extraction for efficient facial paralysis classification. BMC Med Imaging 19, 30 (2019). https://doi.org/10.1186/s12880-019-0330-8 Facial paralysis classification Facial paralysis objective evaluation Salient point detection Facial paralysis evaluation framework Head and neck imaging
Standard Model
The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.
The Standard Model of particle physics is a theory concerning the electromagnetic, weak, and strong nuclear interactions, which mediate the dynamics of the known subatomic particles. It was developed throughout the latter half of the 20th century, as a collaborative effort of scientists around the world.[1] The current formulation was finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, discoveries of the top quark (1995), the tau neutrino (2000), and more recently the Higgs boson (2013), have given further credence to the Standard Model. Because of its success in explaining a wide variety of experimental results, the Standard Model is sometimes regarded as a "theory of almost everything". The Standard Model falls short of being a complete theory of fundamental interactions. It does not incorporate the full theory of gravitation[2] as described by general relativity, or account for the accelerating expansion of the universe (as possibly described by dark energy). The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations (and their non-zero masses). Although the Standard Model is believed to be theoretically self-consistent[3] and has demonstrated huge and continued successes in providing experimental predictions, it does leave some phenomena unexplained. The development of the Standard Model was driven by theoretical and experimental particle physicists alike. For theorists, the Standard Model is a paradigm of a quantum field theory, which exhibits a wide range of physics including spontaneous symmetry breaking, anomalies, non-perturbative behavior, etc. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) in an attempt to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.
Historical background
The first step towards the Standard Model was Sheldon Glashow's discovery in 1961 of a way to combine the electromagnetic and weak interactions.[4] In 1967 Steven Weinberg[5] and Abdus Salam[6] incorporated the Higgs mechanism[7][8][9] into Glashow's electroweak theory, giving it its modern form. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons. After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973,[10][11][12][13] the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W and Z bosons were discovered experimentally in 1981, and their masses were found to be as the Standard Model predicted. The theory of the strong interaction, to which many contributed, acquired its modern form around 1973–74, when experiments confirmed that the hadrons were composed of fractionally charged quarks. At present, matter and energy are best understood in terms of the kinematics and interactions of elementary particles. To date, physics has reduced the laws governing the behavior and interaction of all known forms of matter and energy to a small set of fundamental laws and theories. A major goal of physics is to find the "common ground" that would unite all of these theories into one integrated theory of everything, of which all the other known laws would be special cases, and from which the behavior of all matter and energy could be derived (at least in principle).[14] Particle content The Standard Model includes members of several classes of elementary particles (fermions, gauge bosons, and the Higgs boson), which in turn can be distinguished by other characteristics, such as color charge. Fermions The pattern of weak isospin, T3, weak hypercharge, YW, and color charge of all known elementary particles, rotated by the weak mixing angle to show electric charge, Q, roughly along the vertical. The neutral Higgs field (gray square) breaks the electroweak symmetry and interacts with other particles to give them mass. The Standard Model includes 12 elementary particles of spin-½ known as fermions. According to the spin-statistics theorem, fermions respect the Pauli exclusion principle. Each fermion has a corresponding antiparticle. The fermions of the Standard Model are classified according to how they interact (or equivalently, by what charges they carry). There are six quarks (up, down, charm, strange, top quark, bottom quark), and six leptons (electron, electron neutrino, muon, muon neutrino, tau, tau neutrino). Pairs from each classification are grouped together to form a generation, with corresponding particles exhibiting similar physical behavior (see table). The defining property of the quarks is that they carry color charge, and hence, interact via the strong interaction.
A phenomenon called color confinement results in quarks being perpetually (or at least since very soon after the start of the Big Bang) bound to one another, forming color-neutral composite particles (hadrons) containing either a quark and an antiquark (mesons) or three quarks (baryons). The familiar proton and the neutron are the two baryons having the smallest mass. Quarks also carry electric charge and weak isospin. Hence they interact with other fermions both electromagnetically and via the weak interaction. The remaining six fermions do not carry colour charge and are called leptons. The three neutrinos do not carry electric charge either, so their motion is directly influenced only by the weak nuclear force, which makes them notoriously difficult to detect. However, by virtue of carrying an electric charge, the electron, muon, and tau all interact electromagnetically. Each member of a generation has greater mass than the corresponding particles of lower generations. The first generation charged particles do not decay; hence all ordinary (baryonic) matter is made of such particles. Specifically, all atoms consist of electrons orbiting atomic nuclei ultimately constituted of up and down quarks. Second and third generations charged particles, on the other hand, decay with very short half lives, and are observed only in very high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter. Gauge bosons Summary of interactions between particles described by the Standard Model. The above interactions form the basis of the standard model. Feynman diagrams in the standard model are built from these vertices. Modifications involving Higgs boson interactions and neutrino oscillations are omitted. The charge of the W bosons is dictated by the fermions they interact with; the conjugate of each listed vertex (i.e. reversing the direction of arrows) is also allowed. In the Standard Model, gauge bosons are defined as force carriers that mediate the strong, weak, and electromagnetic fundamental interactions. Interactions in physics are the ways that particles influence other particles. At a macroscopic level, electromagnetism allows particles to interact with one another via electric and magnetic fields, and gravitation allows particles with mass to attract one another in accordance with Einstein's theory of general relativity. The Standard Model explains such forces as resulting from matter particles exchanging other particles, known as force mediating particles (strictly speaking, this is only so if interpreting literally what is actually an approximation method known as perturbation theory). When a force-mediating particle is exchanged, at a macroscopic level the effect is equivalent to a force influencing both of them, and the particle is therefore said to have mediated (i.e., been the agent of) that force. The Feynman diagram calculations, which are a graphical representation of the perturbation theory approximation, invoke "force mediating particles", and when applied to analyze high-energy scattering experiments are in reasonable agreement with the data. However, perturbation theory (and with it the concept of a "force-mediating particle") fails in other situations. These include low-energy quantum chromodynamics, bound states, and solitons. The gauge bosons of the Standard Model all have spin (as do matter particles). The value of the spin is 1, making them bosons. 
As a result, they do not follow the Pauli exclusion principle that constrains fermions: thus bosons (e.g. photons) do not have a theoretical limit on their spatial density (number per volume). The different types of gauge bosons are described below. Photons mediate the electromagnetic force between electrically charged particles. The photon is massless and is well-described by the theory of quantum electrodynamics. The W+, W−, and Z gauge bosons mediate the weak interactions between particles of different flavors (all quarks and leptons). They are massive, with the Z being more massive than the W±. The weak interactions involving the W± exclusively act on left-handed particles and right-handed antiparticles. Furthermore, the W± carries an electric charge of +1 and −1 and couples to the electromagnetic interaction. The electrically neutral Z boson interacts with both left-handed particles and antiparticles. These three gauge bosons along with the photons are grouped together, as collectively mediating the electroweak interaction. The eight gluons mediate the strong interactions between color charged particles (the quarks). Gluons are massless. The eightfold multiplicity of gluons is labeled by a combination of color and anticolor charge (e.g. red–antigreen).[nb 1] Because the gluons have an effective color charge, they can also interact among themselves. The gluons and their interactions are described by the theory of quantum chromodynamics. The interactions between all the particles described by the Standard Model are summarized by the diagrams on the right of this section. The Higgs particle is a massive scalar elementary particle theorized by Robert Brout, François Englert, Peter Higgs, Gerald Guralnik, C. R. Hagen, and Tom Kibble in 1964 (see 1964 PRL symmetry breaking papers) and is a key building block in the Standard Model.[7][8][9][15] It has no intrinsic spin, and for that reason is classified as a boson (like the gauge bosons, which have integer spin). The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary particle masses, and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons), are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself. Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010, and were performed at Fermilab's Tevatron until its closure in late 2011. 
Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles become visible at energies above 1.4 TeV;[16] therefore, the LHC (designed to collide two 7 to 8 TeV proton beams) was built to answer the question of whether the Higgs boson actually exists.[17] On 4 July 2012, the two main experiments at the LHC (ATLAS and CMS) both reported independently that they found a new particle with a mass of about 125 GeV/c2 (about 133 proton masses, on the order of 10−25 kg), which is "consistent with the Higgs boson." Although it has several properties similar to the predicted "simplest" Higgs,[18] they acknowledged that further work would be needed to conclude that it is indeed the Higgs boson, and exactly which version of the Standard Model Higgs is best supported if confirmed.[19][20][21][22][23] On 14 March 2013 the Higgs boson was tentatively confirmed to exist.[24]
Full particle count
Counting particles by a rule that distinguishes between particles and their corresponding antiparticles, and among the many color states of quarks and gluons, gives a total of 61 elementary particles:[25]
Particles      Types  Generations  Antiparticle  Colours  Total
Quarks         2      3            Pair          3        36
Leptons        2      3            Pair          None     12
Gluons         1      1            Own           8        8
Photon         1      1            Own           None     1
W boson        1      1            Pair          None     2
Z boson        1      1            Own           None     1
Higgs boson    1      1            Own           None     1
Total                                                     61
Theoretical aspects
Construction of the Standard Model Lagrangian
Parameters of the Standard Model
Symbol    Description                      Renormalization scheme (point)  Value
me        Electron mass                                                    511 keV
mμ        Muon mass                                                        105.7 MeV
mτ        Tau mass                                                         1.78 GeV
mu        Up quark mass                    μMS = 2 GeV                     1.9 MeV
md        Down quark mass                  μMS = 2 GeV                     4.4 MeV
ms        Strange quark mass               μMS = 2 GeV                     87 MeV
mc        Charm quark mass                 μMS = mc                        1.32 GeV
mb        Bottom quark mass                μMS = mb                        4.24 GeV
mt        Top quark mass                   On-shell scheme                 172.7 GeV
θ12       CKM 12-mixing angle                                              13.1°
θ23       CKM 23-mixing angle                                              2.4°
δ         CKM CP-violating phase                                           0.995
g1 or g'  U(1) gauge coupling              μMS = mZ                        0.357
g2 or g   SU(2) gauge coupling             μMS = mZ                        0.652
g3 or gs  SU(3) gauge coupling             μMS = mZ                        1.221
θQCD      QCD vacuum angle                                                 ~0
v         Higgs vacuum expectation value                                   246 GeV
mH        Higgs mass                                                       125.36 ± 0.41 GeV (tentative)
Technically, quantum field theory provides the mathematical framework for the Standard Model, in which a Lagrangian controls the dynamics and kinematics of the theory. Each kind of particle is described in terms of a dynamical field that pervades space-time. The construction of the Standard Model proceeds following the modern method of constructing most field theories: by first postulating a set of symmetries of the system, and then by writing down the most general renormalizable Lagrangian from its particle (field) content that observes these symmetries. The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3)×SU(2)×U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model (see table). Upon writing the most general Lagrangian, one finds that the dynamics depend on 19 parameters, whose numerical values are established by experiment. The parameters are summarized in the table above (note: with the Higgs mass at 125 GeV, the Higgs self-coupling strength λ ~ 1/8).
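As a rough consistency check of the quoted numbers (a back-of-the-envelope calculation, assuming the common normalization in which the physical Higgs mass obeys m_H² = 2λv² and the tree-level relations m_W = gv/2 and m_Z = v√(g² + g'²)/2; note that the Lagrangian written in the Higgs sector below uses a different normalization of the self-coupling):

$$ \lambda \;=\; \frac{m_H^2}{2v^2} \;\approx\; \frac{(125\ \mathrm{GeV})^2}{2\,(246\ \mathrm{GeV})^2} \;\approx\; 0.13 \;\approx\; \tfrac{1}{8}, $$
$$ m_W \;\approx\; \tfrac{1}{2}\, g\, v \;\approx\; \tfrac{1}{2}(0.652)(246\ \mathrm{GeV}) \;\approx\; 80\ \mathrm{GeV}, \qquad m_Z \;\approx\; \tfrac{1}{2}\sqrt{g^2 + g'^2}\; v \;\approx\; 91\ \mathrm{GeV}, $$

in line with the measured W and Z masses quoted in the Tests and predictions subsection below.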
Quantum chromodynamics sector The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, with SU(3) symmetry, generated by Ta. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by $$ \mathcal{L}_{\mathrm{QCD}} = i\overline{U} \left(\partial_\mu - i g_s G_\mu^a T^a\right)\gamma^\mu U + i\overline{D} \left(\partial_\mu - i g_s G_\mu^a T^a\right)\gamma^\mu D, $$ where \(G_\mu^a\) is the SU(3) gauge field containing the gluons, \(\gamma^\mu\) are the Dirac matrices, D and U are the Dirac spinors associated with up- and down-type quarks, and gs is the strong coupling constant. Electroweak sector The electroweak sector is a Yang–Mills gauge theory with the simple symmetry group U(1)×SU(2)L, $$ \mathcal{L}_\mathrm{EW} = \sum_\psi \bar\psi \gamma^\mu \left(i\partial_\mu - g^\prime \tfrac{1}{2} Y_\mathrm{W} B_\mu - g \tfrac{1}{2} \vec\tau_\mathrm{L} \vec{W}_\mu \right)\psi $$ where Bμ is the U(1) gauge field; YW is the weak hypercharge, the generator of the U(1) group; \(\vec{W}_\mu\) is the three-component SU(2) gauge field; and \(\vec{\tau}_\mathrm{L}\) are the Pauli matrices, the infinitesimal generators of the SU(2) group. The subscript L indicates that they only act on left fermions; g′ and g are coupling constants. Higgs sector In the Standard Model, the Higgs field is a complex scalar of the group SU(2)L: $$ \varphi = \frac{1}{\sqrt{2}} \left( \begin{array}{c} \varphi^+ \\ \varphi^0 \end{array} \right), $$ where the indices + and 0 indicate the electric charge (Q) of the components. The weak hypercharge (YW) of both components is 1. Before symmetry breaking, the Higgs Lagrangian is: $$ \mathcal{L}_\mathrm{H} = \varphi^\dagger \left(\partial^\mu - \frac{i}{2} \left( g' Y_\mathrm{W} B^\mu + g \vec\tau \vec{W}^\mu \right)\right) \left(\partial_\mu + \frac{i}{2} \left( g' Y_\mathrm{W} B_\mu + g \vec\tau \vec{W}_\mu \right)\right)\varphi \ - \ \frac{\lambda^2}{4}\left(\varphi^\dagger\varphi - v^2\right)^2, $$ which can also be written as: $$ \mathcal{L}_\mathrm{H} = \left| \left(\partial_\mu + \frac{i}{2} \left( g' Y_\mathrm{W} B_\mu + g \vec\tau \vec{W}_\mu \right)\right)\varphi \right|^2 \ - \ \frac{\lambda^2}{4}\left(\varphi^\dagger\varphi - v^2\right)^2. $$ Tests and predictions The Standard Model (SM) predicted the existence of the W and Z bosons, the gluon, and the top and charm quarks before these particles were observed. Their predicted properties were experimentally confirmed with good precision. To give an idea of the success of the SM, the following table compares the measured masses of the W and Z bosons with the masses predicted by the SM:
Quantity          Measured (GeV)      SM prediction (GeV)
Mass of W boson   80.387 ± 0.019      80.390 ± 0.018
Mass of Z boson   91.1876 ± 0.0021    91.1874 ± 0.0021
The SM also makes several predictions about the decay of Z bosons, which have been experimentally confirmed by the Large Electron-Positron Collider at CERN. In May 2012 the BaBar Collaboration reported that their recently analyzed data may suggest possible flaws in the Standard Model of particle physics.[26][27] These data show that a particular type of particle decay called "B to D-star-tau-nu" happens more often than the Standard Model says it should. In this type of decay, a particle called the B-bar meson decays into a D meson, an antineutrino and a tau-lepton.
While the level of certainty of the excess (3.4 sigma) is not enough to claim a break from the Standard Model, the results are a potential sign of something amiss and are likely to impact existing theories, including those attempting to deduce the properties of Higgs bosons.[28] On December 13, 2012, physicists reported the constancy, over space and time, of a basic physical constant of nature that supports the standard model of physics. The scientists, studying methanol molecules in a distant galaxy, found the change (∆μ/μ) in the proton-to-electron mass ratio μ to be equal to "(0.0 ± 1.0) × 10−7 at redshift z = 0.89" and consistent with "a null result".[29][30] List of unsolved problems in physics What gives rise to the Standard Model of particle physics? Why do particle masses and coupling constants have the values that we measure? Why are there three generations of particles? Why is there more matter than antimatter in the universe? Where does Dark Matter fit into the model? Is it even a new particle? Self-consistency of the Standard Model (currently formulated as a non-abelian gauge theory quantized through path-integrals) has not been mathematically proven. While regularized versions useful for approximate computations (for example lattice gauge theory) exist, it is not known whether they converge (in the sense of S-matrix elements) in the limit that the regulator is removed. A key question related to the consistency is the Yang–Mills existence and mass gap problem. Experiments indicate that neutrinos have mass, which the classic Standard Model did not allow.[31] To accommodate this finding, the classic Standard Model can be modified to include neutrino mass. If one insists on using only Standard Model particles, this can be achieved by adding a non-renormalizable interaction of leptons with the Higgs boson.[32] On a fundamental level, such an interaction emerges in the seesaw mechanism where heavy right-handed neutrinos are added to the theory. This is natural in the left-right symmetric extension of the Standard Model [33][34] and in certain grand unified theories.[35] As long as new physics appears below or around 1014 GeV, the neutrino masses can be of the right order of magnitude. Theoretical and experimental research has attempted to extend the Standard Model into a Unified field theory or a Theory of everything, a complete theory explaining all physical phenomena including constants. Inadequacies of the Standard Model that motivate such research include: It does not attempt to explain gravitation, although a theoretical particle known as a graviton would help explain it, and unlike for the strong and electroweak interactions of the Standard Model, there is no known way of describing general relativity, the canonical theory of gravitation, consistently in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe; Some consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. 
It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters; The Higgs mechanism gives rise to the hierarchy problem if some new physics (coupled to the Higgs) is present at high energy scales. In these cases, in order for the weak scale to be much smaller than the Planck scale, severe fine tuning of the parameters is required; there are, however, other scenarios that include quantum gravity in which such fine tuning can be avoided.[36] There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles. It should be modified so as to be consistent with the emerging "Standard Model of cosmology." In particular, the Standard Model cannot explain the observed amount of cold dark matter (CDM) and gives contributions to dark energy which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model. Currently, no proposed Theory of Everything has been widely accepted or verified. ^ Technically, there are nine such color–anticolor combinations. However, there is one color-symmetric combination that can be constructed out of a linear superposition of the nine combinations, reducing the count to eight. ^ R. Oerter (2006). The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics (Kindle ed.). ^ Sean Carroll, Ph.D., Cal Tech, 2007, The Teaching Company, Dark Matter, Dark Energy: The Dark Side of the Universe, Guidebook Part 2 page 59, Accessed Oct. 7, 2013, "...Standard Model of Particle Physics: The modern theory of elementary particles and their interactions ... It does not, strictly speaking, include gravity, although it's often convenient to include gravitons among the known particles of nature..." ^ In fact, there are mathematical issues regarding quantum field theories still under debate (see e.g. ^ S.L. Glashow (1961). "Partial-symmetries of weak interactions". ^ S. Weinberg (1967). "A Model of Leptons". ^ A. Salam (1968). N. Svartholm, ed. "Elementary Particle Physics: Relativistic Groups and Analyticity". ^ a b F. Englert, R. Brout (1964). "Broken Symmetry and the Mass of Gauge Vector Mesons". ^ a b P.W. Higgs (1964). "Broken Symmetries and the Masses of Gauge Bosons". ^ a b G.S. Guralnik, C.R. Hagen, T.W.B. Kibble (1964). "Global Conservation Laws and Massless Particles". ^ F.J. Hasert et al. (1973). "Search for elastic muon-neutrino electron scattering". ^ F.J. Hasert et al. (1973). "Observation of neutrino-like interactions without muon or electron in the Gargamelle neutrino experiment". ^ D. Haidt (4 October 2004). "The discovery of the weak neutral currents".
^ "Details can be worked out if the situation is simple enough for us to make an approximation, which is almost never, but often we can understand more or less what is happening." from The Feynman Lectures on Physics, Vol 1. pp. 2–7 ^ G.S. Guralnik (2009). "The History of the Guralnik, Hagen and Kibble development of the Theory of Spontaneous Symmetry Breaking and Gauge Particles". ^ B.W. Lee, C. Quigg, H.B. Thacker (1977). "Weak interactions at very high energies: The role of the Higgs-boson mass". ^ "'"Huge $10 billion collider resumes hunt for 'God particle. ^ M. Strassler (10 July 2012). "Higgs Discovery: Is it a Higgs?". Retrieved 2013-08-06. ^ "CERN experiments observe particle consistent with long-sought Higgs boson". ^ "Observation of a New Particle with a Mass of 125 GeV". ^ "ATLAS Experiment". ^ "Confirmed: CERN discovers new particle likely to be the Higgs boson". ^ D. Overbye (4 July 2012). "A New Particle Could Be Physics' Holy Grail". ^ "New results indicate that new particle is a Higgs boson". ^ S. Braibant, G. Giacomelli, M. Spurio (2009). Particles and Fundamental Interactions: An Introduction to Particle Physics. ^ "BABAR Data in Tension with the Standard Model". ^ "BaBar data hint at cracks in the Standard Model". e! Science News. 18 June 2012. Retrieved 2013-08-06. ^ J. Bagdonaite et al. (2012). "A Stringent Limit on a Drifting Proton-to-Electron Mass Ratio from Alcohol in the Early Universe". ^ C. Moskowitz (13 December 2012). "Phew! Universe's Constant Has Stayed Constant". ^ "Particle chameleon caught in the act of changing". ^ S. Weinberg (1979). "Baryon and Lepton Nonconserving Processes". ^ P. Minkowski (1977). "μ → e γ at a Rate of One Out of 109 Muon Decays?". ^ R. N. Mohapatra, G. Senjanovic (1980). "Neutrino Mass and Spontaneous Parity Nonconservation". ^ M. Gell-Mann, P. Ramond and R. Slansky (1979). F. van Nieuwenhuizen and D. Z. Freedman, ed. Supergravity. ^ Salvio, Strumia (2014-03-17). "Agravity". JHEP 1406 (2014) 080. R. Oerter (2006). The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics. B.A. Schumm (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Introductory textbooks I. Aitchison, A. Hey (2003). Gauge Theories in Particle Physics: A Practical Introduction. W. Greiner, B. Müller (2000). Gauge Theory of Weak Interactions. G.D. Coughlan, J.E. Dodd, B.M. Gripaios (2006). The Ideas of Particle Physics: An Introduction for Scientists. D.J. Griffiths (1987). Introduction to Elementary Particles. G.L. Kane (1987). Modern Elementary Particle Physics. Advanced textbooks T.P. Cheng, L.F. Li (2006). Gauge theory of elementary particle physics. Highlights the gauge theory aspects of the Standard Model. J.F. Donoghue, E. Golowich, B.R. Holstein (1994). Dynamics of the Standard Model. Highlights dynamical and phenomenological aspects of the Standard Model. L. O'Raifeartaigh (1988). Group structure of gauge theories. Nagashima Y. Elementary Particle Physics: Foundations of the Standard Model, Volume 2. (Wiley 2013) 920 рапуы Schwartz, M.D. Quantum Field Theory and the Standard Model (Сambridge University Press 2013) 952 pages Langacker P. The standard model and beyond. (CRC Press, 2010) 670 pages Highlights group-theoretical aspects of the Standard Model. E.S. Abers, B.W. Lee (1973). "Gauge theories". M. Baak et al. (2012). "The Electroweak Fit of the Standard Model after the Discovery of a New Boson at the LHC". Y. Hayato et al. (1999). 
"Search for Proton Decay through p → νK+ in a Large Water Cherenkov Detector". S.F. Novaes (2000). "Standard Model: An Introduction". arXiv:hep-ph/0001283 [hep-ph]. D.P. Roy (1999). "Basic Constituents of Matter and their Interactions — A Progress Report". arXiv:hep-ph/9912523 [hep-ph]. F. Wilczek (2004). "The Universe Is A Strange Place". Nuclear Physics B - Proceedings Supplements 134: 3. "The Standard Model explained in Detail by CERN's John Ellis" omega tau podcast. "LHC sees hint of lightweight Higgs boson" "New Scientist". "Standard Model may be found incomplete," New Scientist. "Observation of the Top Quark" at Fermilab. "The Standard Model Lagrangian." After electroweak symmetry breaking, with no explicit Higgs boson. "Standard Model Lagrangian" with explicit Higgs terms. PDF, PostScript, and LaTeX versions. "The particle adventure." Web tutorial. Nobes, Matthew (2002) "Introduction to the Standard Model of Particle Physics" on Kuro5hin: Part 1, Part 2, Part 3a, Part 3b. "The Standard Model" The Standard Model on the CERN web site explains how the basic building blocks of matter interact, governed by four fundamental forces. Particles in physics e− μ− μ+ τ− τ+ Bosons W± · Z Superpartners Gauginos Gluino Higgsino Neutralino Chargino Sfermion (Stop squark) Planck particle Majorana fermion Leptoquark W' Z' Sterile neutrino Baryons / Hyperons Mesons / Quarkonia η′ J/ψ ϒ Diquarks Exotic atoms Muonium Tauonium Superatoms Exotic baryons Dibaryon Pentaquark Skyrmion Exotic mesons Glueball Tetraquark Mesonic molecule Pomeron Quasiparticles Davydov soliton Dropleton Exciton Magnon Plasmaron Polariton Roton Baryons Timeline of particle discoveries Hadronic Matter Particles of the Standard Model Physics portal Quantum field theories Chern–Simons model Chiral model Kondo model Noncommutative quantum field theory Non-linear sigma model Quartic interaction Quantum Yang–Mills theory Schwinger model Sine–Gordon Toda field theory Wess–Zumino model Wess–Zumino–Witten model Yang–Mills theory Yang–Mills–Higgs model Yukawa model Thirring–Wess model Four-fermion interactions Thirring model Gross–Neveu model Nambu–Jona-Lasinio model WorldHeritage articles needing clarification from July 2013 Articles needing additional references from April 2008 Concepts in physics Magnetism, Maxwell's equations, Chemistry, Quantum mechanics, James Clerk Maxwell Quantum mechanics, Standard Model, Condensed matter physics, Quantum electrodynamics, Quantum chromodynamics Large Hadron Collider, Switzerland, World Wide Web, Israel, Italy Quantum mechanics, Standard Model, Physics, Quantum field theory, Proton Quantum field theory, Quantum electrodynamics, Standard Model, Particle physics, Proton Large Hadron Collider, Proton, Standard Model, Cern, W and Z bosons Grand Unified Theory Standard Model, Supersymmetry, String theory, Large Hadron Collider, Quantum chromodynamics Quantum electrodynamics, Electromagnetism, Quantum field theory, Standard Model, Physics Symmetry (physics) Spacetime, Standard model, Space, Time, Particle physics
Preplanned Studies: Estimating the Incidence of Tuberculosis — Shanghai, China, 2025−2050
Shegen Yu1; Jiaqi Ma2; Zhongwei Jia1,3,4
Despite the impressive achievements in eliminating tuberculosis (TB), the TB burden is still heavy in China. By 2010, China halved the prevalence and mortality reported in 1990, but China is still one of 30 high-TB burden countries in the world. A dynamic transmission model including both rifampin resistant TB (RR-TB) and relapse of pulmonary TB was created. The TB incidence of Shanghai in 2025 and 2035 was predicted, and sensitivity analysis of reducing transmission, treating latent TB infection (LTBI), and reducing the recurrence rate was conducted. Screening for latent TB infections should be carried out regularly in high-risk groups and areas using tuberculin skin testing and/or interferon gamma release assays.
Center for Intelligent Public Health, Institute for Artificial Intelligence, Peking University, Beijing, China
Center for Drug Abuse Control and Prevention, National Institute of Health Data Science, Peking University, Beijing, China
Corresponding author: Zhongwei Jia,[email protected]
FIGURE 1. The flow diagram of tuberculosis model considering rifampicin resistance and recurrence.
S referred to people who are not infected with Mycobacterium tuberculosis (M. tb); E referred to people infected with M. tb but were not yet infectious; I referred to patients with infectious pulmonary tuberculosis (TB); IU1 referred to the undetected and drug sensitive population; IU2 referred to the undetected and rifampicin resistant population; IF1 referred to the detected and drug sensitive population; IF2 referred to the detected and rifampicin resistant population; and R referred to TB patients who have been successfully treated. β1 and β2 are the transmission rates of infectious drug-sensitive TB cases and rifampin resistant TB (RR-TB) cases. κ is the progressive rate from the exposed to the infectious; ρ is the progressive rate from drug-sensitive TB to RR-TB; σ is the detection rate of the infectious; γ1 and γ2 are the successful treatment rates of detected patients with infectious drug-sensitive TB and RR-TB, respectively; ω is the disease recurrence rate from the recovered population; τ1 and τ2 are the drug resistance rates of new patients and recurrent patients, respectively; and μ0 is the natural mortality rate, while μ1 and μ2 are the fatality rates of TB in infectious drug-sensitive TB cases and RR-TB cases, respectively.
FIGURE 2. Predictions of the incidence of pulmonary TB in Shanghai with different parameters. (A) Reducing the values of parameters β1, β2 and ω. (B) Reducing the values of parameter κ. (C) Reducing the values of parameter ω.
TABLE 1. Definitions and estimated values of parameters.
Parameter  Definition  Estimated value
Λ  Constant recruitment of the population  486,245
β1  Transmission rate of infectious drug-sensitive TB cases  8.69×10−12
β2  Transmission rate of infectious RR-TB cases  2.52×10−10
κ  Progressive rate from the exposed to the infectious  1.65×10−3
τ1  Drug resistance rate of new patients  2.39×10−2
σ  Detection rate of the infectious  6.20×10−1
ρ  Progressive rate from drug-sensitive TB to RR-TB  8.85×10−2
γ1  Successful treatment rate of infectious drug-sensitive TB cases  9.14×10−1
γ2  Successful treatment rate of infectious RR-TB cases  8.99×10−1
ω  Recurrence rate from the recovered  5.94×10−3
τ2  Drug resistance rate of recurrent patients  1.25×10−1
μ0  Natural mortality rate  7.46×10−3
μ1  Fatality rate of TB in infectious drug-sensitive TB cases  9.32×10−3
μ2  Fatality rate of TB in infectious RR-TB cases  2.26×10−2
Abbreviations: TB=tuberculosis; RR-TB=rifampin resistant TB.
Tuberculosis (TB) is a global public health problem. The World Health Organization (WHO) proposed that the incidence rate of TB should be reduced to less than 55/100,000 population by 2025, less than 10/100,000 by 2035, and to eliminate TB by 2050 (incidence rate <1/100,000) (1). Based on the directly observed treatment, short-course (DOTS) strategy nationwide, China halved the prevalence and mortality of TB in 2010 as compared to 1990 (2), and the cure rate of TB had reached 92.9% by 2013 (3). The TB incidence rate fell 43.1% from 1990 (130/100,000) to 2010 (74/100,000) (4). Despite impressive achievements, China is still one of the 30 high-TB burden countries in the world.
In 2019, there were about 833,000 new TB cases in China with a TB incidence rate of 58/100,000 (4). China also has the highest latent TB infection (LTBI) burden globally with approximately 350 million infections that are at risk for active TB disease (5). Shanghai is one of the areas with the best implementation of TB control measures in China, but the incidence rate was still above 25/100,000 in 2019 (6). Shanghai failing to reach the target by 2035 would indicate a high likelihood of failure for other areas in China. We established a dynamic TB model to estimate the predicted incidence in Shanghai and the impact of different prevention and control measures. According to the natural progressive history of pulmonary TB, the overall population was divided into 7 classes: S referred to people who are not infected with Mycobacterium tuberculosis (M. tb); E referred to people infected with M. tb but were not yet infectious; I referred to patients with infectious pulmonary TB; IU1 referred to the undetected and drug sensitive population; IU2 referred to the undetected and rifampicin resistant population; IF1 referred to the detected and drug sensitive population; IF2 refers to the detected and rifampicin resistant population; and R referred to TB patients who have been successfully treated. In this model, we made the following assumptions: 1) the population was evenly mixed, and contact between all individuals was equally likely; 2) patients were likely to infect susceptible population and the recovered after contact with them; 3) all detected pulmonary TB cases were reported to National Notifiable Disease Reporting System; and 4) the total population of the system was relatively stable. The population supplement was due to births in each year, and population loss was due to natural deaths from each group and deaths due to pulmonary TB from patients. 
The equations of the model are as follows: $${\left\{ {\begin{array}{*{20}{l}} {\dfrac{{{\rm{dS}}}}{{{\rm{dt}}}}{\rm{ = \Lambda - }}{{\rm{\beta }}_{\rm{1}}}{\rm{(}}{{\rm{I}}_{{\rm{U1}}}}{\rm{ + }}{{\rm{I}}_{{\rm{F1}}}}{\rm{)S - }}{{\rm{\beta }}_{\rm{2}}}{\rm{(}}{{\rm{I}}_{{\rm{U2}}}}{\rm{ + }}{{\rm{I}}_{{\rm{F2}}}}{\rm{)S - }}{{\rm{\mu }}_{\rm{0}}}{\rm{S}}}\\ {\dfrac{{{\rm{dE}}}}{{{\rm{dt}}}}{\rm{ = }}{{\rm{\beta }}_{\rm{1}}}{\rm{(}}{{\rm{I}}_{{\rm{U1}}}}{\rm{ + }}{{\rm{I}}_{{\rm{F1}}}}{\rm{)S + }}{{\rm{\beta }}_{\rm{2}}}{\rm{(}}{{\rm{I}}_{{\rm{U2}}}}{\rm{ + }}{{\rm{I}}_{{\rm{F2}}}}{\rm{)S - \kappa E - }}{{\rm{\mu }}_{\rm{0}}}{\rm{E}}}\\ {\dfrac{{{\rm{d}}{{\rm{I}}_{{\rm{U1}}}}}}{{{\rm{dt}}}}{\rm{ = \kappa (1 - }}{{\rm{\tau }}_{\rm{1}}}{\rm{)(1 - \sigma )E + \omega (1 - }}{{\rm{\tau }}_{\rm{2}}}{\rm{)(1 - \sigma )R - \rho }}{{\rm{I}}_{{\rm{U1}}}}{\rm{ - \sigma }}{{\rm{I}}_{{\rm{U1}}}}{\rm{ - }}\left( {{{\rm{\mu }}_{\rm{0}}}{\rm{ + }}{{\rm{\mu }}_{\rm{1}}}} \right){{\rm{I}}_{{\rm{U1}}}}}\!\!\!\!\!\!\\ {\dfrac{{{\rm{d}}{{\rm{I}}_{{\rm{U2}}}}}}{{{\rm{dt}}}}{\rm{ = \kappa }}{{\rm{\tau }}_{\rm{1}}}{\rm{(1 - \sigma )E + \omega }}{{\rm{\tau }}_{\rm{2}}}{\rm{(1 - \sigma )R + \rho }}{{\rm{I}}_{{\rm{U1}}}}{\rm{ - \sigma }}{{\rm{I}}_{{\rm{U2}}}}{\rm{ - }}\left( {{{\rm{\mu }}_{\rm{0}}}{\rm{ + }}{{\rm{\mu }}_{\rm{2}}}} \right){{\rm{I}}_{{\rm{U2}}}}}\\ {\dfrac{{{\rm{d}}{{\rm{I}}_{{\rm{F1}}}}}}{{{\rm{dt}}}}{\rm{ = \kappa (1 - }}{{\rm{\tau }}_{\rm{1}}}{\rm{)\sigma E + \omega (1 - }}{{\rm{\tau }}_{\rm{2}}}{\rm{)\sigma R + \sigma }}{{\rm{I}}_{{\rm{U1}}}}{\rm{ - \rho }}{{\rm{I}}_{{\rm{F1}}}}{\rm{ - }}{{\rm{\gamma }}_{\rm{1}}}{{\rm{I}}_{{\rm{F1}}}}{\rm{ - }}\left( {{{\rm{\mu }}_{\rm{0}}}{\rm{ + }}{{\rm{\mu }}_{\rm{1}}}} \right){{\rm{I}}_{{\rm{F1}}}}}\\ {\dfrac{{{\rm{d}}{{\rm{I}}_{{\rm{F2}}}}}}{{{\rm{dt}}}}{\rm{ = \kappa }}{{\rm{\tau }}_{\rm{1}}}{\rm{\sigma E + \omega }}{{\rm{\tau }}_{\rm{2}}}{\rm{\sigma R + \sigma }}{{\rm{I}}_{{\rm{U2}}}}{\rm{ + \rho }}{{\rm{I}}_{{\rm{F1}}}}{\rm{ - }}{{\rm{\gamma }}_{\rm{2}}}{{\rm{I}}_{{\rm{F2}}}}{\rm{ - }}\left( {{{\rm{\mu }}_{\rm{0}}}{\rm{ + }}{{\rm{\mu }}_{\rm{2}}}} \right){{\rm{I}}_{{\rm{F2}}}}}\\ {\dfrac{{{\rm{dR}}}}{{{\rm{dt}}}}{\rm{ = }}{{\rm{\gamma }}_{\rm{1}}}{{\rm{I}}_{{\rm{F1}}}}{\rm{ + }}{{\rm{\gamma }}_{\rm{2}}}{{\rm{I}}_{{\rm{F2}}}}{\rm{ - \omega R - }}{{\rm{\mu }}_{\rm{0}}}{\rm{R}}} \end{array}} \right.}$$ The model involves 7 classes and 14 parameters. Each equation represents the change rate of the number of people in each class in unit time, and the right side includes the moving in and out of items that lead to the change of class population. The unit time of this model is one year. Λ is the constant recruitment in the system. β1 and β2 are the transmission rates of infectious drug-sensitive TB cases and rifampin resistant TB cases (RR-TB). κ is the progressive rate from the exposed to the infectious; ρ is the progressive rate from drug-sensitive TB to RR-TB; σ is the detection rate of the infectious; γ1 and γ2 are the successful treatment rates of detected patients with infectious drug-sensitive TB and RR-TB, respectively; ω is the disease recurrence rate from the recovered population; τ1 and τ2 are the drug resistance rates of new patients and recurrent patients, respectively; and μ0 is the natural mortality rate, while μ1 and μ2 are the fatality rates of TB in infectious drug-sensitive TB cases and RR-TB cases, respectively. The transmission diagram is shown in Figure 1. 
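Written out as code, the system is a straightforward seven-compartment model. The sketch below is only an illustration of the equations above (it is not the authors' implementation); parameter names follow the text and the function uses the scipy odeint calling convention.

def tb_model(y, t, p):
    # Right-hand side of the 7-compartment TB model described above.
    # y = (S, E, IU1, IU2, IF1, IF2, R); p is a dict with keys Lambda, beta1,
    # beta2, kappa, tau1, tau2, sigma, rho, gamma1, gamma2, omega, mu0, mu1, mu2.
    S, E, IU1, IU2, IF1, IF2, R = y
    infection = p['beta1'] * (IU1 + IF1) * S + p['beta2'] * (IU2 + IF2) * S
    dS = p['Lambda'] - infection - p['mu0'] * S
    dE = infection - p['kappa'] * E - p['mu0'] * E
    dIU1 = (p['kappa'] * (1 - p['tau1']) * (1 - p['sigma']) * E
            + p['omega'] * (1 - p['tau2']) * (1 - p['sigma']) * R
            - p['rho'] * IU1 - p['sigma'] * IU1 - (p['mu0'] + p['mu1']) * IU1)
    dIU2 = (p['kappa'] * p['tau1'] * (1 - p['sigma']) * E
            + p['omega'] * p['tau2'] * (1 - p['sigma']) * R
            + p['rho'] * IU1 - p['sigma'] * IU2 - (p['mu0'] + p['mu2']) * IU2)
    dIF1 = (p['kappa'] * (1 - p['tau1']) * p['sigma'] * E
            + p['omega'] * (1 - p['tau2']) * p['sigma'] * R
            + p['sigma'] * IU1 - p['rho'] * IF1 - p['gamma1'] * IF1
            - (p['mu0'] + p['mu1']) * IF1)
    dIF2 = (p['kappa'] * p['tau1'] * p['sigma'] * E
            + p['omega'] * p['tau2'] * p['sigma'] * R
            + p['sigma'] * IU2 + p['rho'] * IF1 - p['gamma2'] * IF2
            - (p['mu0'] + p['mu2']) * IF2)
    dR = p['gamma1'] * IF1 + p['gamma2'] * IF2 - p['omega'] * R - p['mu0'] * R
    return [dS, dE, dIU1, dIU2, dIF1, dIF2, dR]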
We collected the reported incidence of pulmonary TB in Shanghai from 2004 to 2017 provided by the Public Health Science Data Center, of which 2004−2012 were used as training data and 2013−2017 years were used as test data. The values of the parameters were determined by the reports of earlier studies and adjusted according to TB data, then the incidence of pulmonary TB in Shanghai was estimated for the near future. The incidence of pulmonary TB was numerically defined as the number of new reported cases of pulmonary TB within each year as a proportion of the number of average annual population. The parameters in the model were adjusted to simulate the effect of three different TB prevention and control strategies. We reduced the values of parameters β1, β2, and ω to simulate reducing the probability of infection or reinfection of susceptible and recovered patients (assuming 60% of recurrent patients are due to reinfection) to simulate strengthening personal protection and isolation of active cases during contagious period. The effect of preventive treatment on LTBI cases was evaluated by reducing the rate of progression (κ) of the exposed group to the infectious groups. We reduced the recurrence rate (ω) of the recovered group to study the impact of recurrence rate on the TB epidemic. We set the initial values of the model classes as S (0) = 14,453,131, E (0) = 3,834,319, IU1 (0) = 6,462, IU2 (0) = 333, IF1 (0) = 7,011, IF2 (0) = 361, R (0) = 48,194, and the values of parameters are shown in Table 1. The first curve of each panel in Figure 2 shows our prediction of the incidence of pulmonary TB in Shanghai under current strategies. We predicted that the estimated incidence of pulmonary TB in Shanghai will continue to decline from 2004 to 2050. In 2025, the incidence of TB in Shanghai was estimated to be 24.27/100,000, which will achieve the WHO's goal in 2025 (<55/100,000). However, the incidence was estimated to be 20.81/100,000 in 2035, still far from the goal set for 2035 (<10/100,000). Figure 2 shows the impact of 3 different prevention and control strategies on pulmonary TB in Shanghai. The incidence will decrease slightly with the values of parameters β1, β2 and ω reduced (Figure 2A). The incidence of pulmonary TB in Shanghai in 2035 will be 19.69/100,000 when the parameters dropped by 50%. Reducing the progressing rate (κ) of the exposed group to the infectious groups, the incidence of pulmonary TB in Shanghai will decrease significantly (Figure 2B). In 2035, the incidence will be 11.55/100,000 with the parameters κ dropped by 50%. The incidence of pulmonary TB in Shanghai will be slightly decreased by reducing their recurrence rate (ω) (Figure 2C). With the recurrence rate reduced by 50%, the incidence of pulmonary TB in Shanghai will be 19.08/100,000 in 2035. In this study, a dynamic transmission model concerning both RR-TB and a relapse of TB was established, and the RR-TB rate of recurrent patients was distinguished from that of new patients. Current prevention and control strategy for TB in Shanghai was estimated to be able to achieve the goal set forth by the WHO in the End TB Strategy in 2025 but were not sufficient to achieve the goal in 2035. The target was estimated to be unachievable due to many reasons including the large number of latent infections (7), coinfections with HIV / AIDS (8), and a large migrant population (9). Among the three prevention and control strategies, strengthening preventive treatment for LTBI cases had the best effect on TB epidemic control. 
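With the initial class sizes above and the Table 1 estimates, trajectories can be reproduced approximately by integrating tb_model with a standard ODE solver. The sketch below is again only an illustration; in particular, computing the yearly number of newly detected cases as the inflow into IF1 and IF2 (from E, R and the undetected classes) is an assumption about how the reported incidence was obtained, not the authors' stated computation.

import numpy as np
from scipy.integrate import odeint

# Parameter estimates from Table 1 (per year).
params = dict(Lambda=486245, beta1=8.69e-12, beta2=2.52e-10, kappa=1.65e-3,
              tau1=2.39e-2, sigma=6.20e-1, rho=8.85e-2, gamma1=9.14e-1,
              gamma2=8.99e-1, omega=5.94e-3, tau2=1.25e-1, mu0=7.46e-3,
              mu1=9.32e-3, mu2=2.26e-2)

# Initial class sizes given in the text (year 2004).
y0 = [14453131, 3834319, 6462, 333, 7011, 361, 48194]
years = np.arange(2004, 2051)

def run(scale_kappa=1.0):
    p = dict(params, kappa=params['kappa'] * scale_kappa)
    traj = odeint(tb_model, y0, years - years[0], args=(p,))
    S, E, IU1, IU2, IF1, IF2, R = traj.T
    # Approximate yearly flow of newly detected cases (inflow into IF1 and IF2).
    new_detected = (p['kappa'] * p['sigma'] * E + p['omega'] * p['sigma'] * R
                    + p['sigma'] * (IU1 + IU2))
    return 1e5 * new_detected / traj.sum(axis=1)   # incidence per 100,000

baseline = run()
ltbi_treatment = run(scale_kappa=0.5)   # progression rate kappa reduced by 50%
for y in (2025, 2035):
    i = int(np.where(years == y)[0][0])
    print(y, round(float(baseline[i]), 2), round(float(ltbi_treatment[i]), 2))

Scaling κ by 0.5 mimics the preventive-treatment scenario of Figure 2B; the other scenarios can be mimicked in the same way by scaling β1, β2 and ω.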
The incidence of pulmonary TB in Shanghai was estimated to decrease to 11.55/100,000 when the progressing rate dropped by 50%, which was close to the goal for 2035. The other two strategies were estimated to only reduce the incidence of pulmonary TB slightly. This indicated that the large number of LTBI cases was the reason why the incidence of TB was not decreasing as fast as expected. The WHO estimated that the global LTBI population was close to 2 billion, accounting for 1/3 of the global population (4). Carrying out preventive treatment for latent infections is based on strengthening screening for latent infections. China has previously shown the ability to conduct large-scale screenings for an infectious disease in a large city (10), which indicates that it is possible to strengthen screening. Considering the large scope of consumption in the process of the project, screening for latent TB infections using tuberculin skin testing and/or interferon gamma release assays can be carried out regularly in high-risk groups and areas. This study was subject to some limitations. For example, the parameters used in model operation and prediction were fixed values, but the parameters in the model change dynamically with time in real life. The fixed parameter value could only predict long-term trends for TB but not short-term fluctuations. In addition, considering the difficulty of obtaining the parameters, the model only set 7 classes without further subdivisions, so there was still a gap with real circumstances of TB in Shanghai. Further studies could consider dividing the population of recovered individuals into subgroups with different recurrence rates according to their recurrence risk. Funding: Key Joint Project for Data Center of the National Natural Science Foundation of China and Guangdong Provincial Government (U1611264); and the National Natural Science Foundation of China (91546203, 91846302). [1] World Health Organization. The end TB strategy. Geneva: WHO; 2014. https://www.who.int/tb/strategy/en/. [2] Wang LX, Zhang H, Ruan YZ, Chin DP, Xia YY, Cheng SM, et al. Tuberculosis prevalence in China, 1990-2010; a longitudinal analysis of national survey data. Lancet 2014;383(9934):2057 − 64. https://www.sciencedirect.com/science/article/abs/pii/S0140673613626392. [3] Li XW, Yang Y, Liu JM, Zhou F, Cui W, Guan L, et al. Treatment outcomes of pulmonary tuberculosis in the past decade in the mainland of China: a meta-analysis. Front Med 2013;7(3):354 − 66. http://dx.doi.org/10.1007/s11684-013-0257-3. [4] World Health Organization. Global tuberculosis report 2020. Geneva, Switzerland: WHO; 2020. https://apps.who.int/iris/handle/10665/336069. [5] Houben RMGJ, Dodd PJ. The global burden of latent tuberculosis infection: a re-estimation using mathematical modelling. PLoS Med 2016;13(10):e1002152. http://dx.doi.org/10.1371/journal.pmed.1002152. [6] Shanghai Municipal Health Commission. Epidemic situation of notifiable infectious diseases in Shanghai in 2019. Shanghai; 2020. http://wsjkw.sh.gov.cn/yqxx/20200703/ce47739b555e4b70bea9f704678524ee.html. (In Chinese). [7] Cui XJ, Gao L, Cao B. Management of latent tuberculosis infection in China: Exploring solutions suitable for high-burden countries. Int J Infect Dis 2020;92 Suppl:S37 − 40. http://dx.doi.org/10.1016/j.ijid.2020.02.034. [8] Gao L, Zhou F, Li XW, Jin Q. HIV/TB co-infection in mainland China: a meta-analysis. PLoS One 2010;5(5):e10736. http://dx.doi.org/10.1371/journal.pone.0010736. [9] Tobe RG, Xu LZ, Song PP, Huang Y. 
The rural-to-urban migrant population in China: gloomy prospects for tuberculosis control. Biosci Trends 2011;5(6):226 − 30. http://dx.doi.org/10.5582/bst.2011.v5.6.226. [10] Cao SY, Gan Y, Wang C, Bachmann M, Wei SB, Gong J, et al. Post-lockdown SARS-CoV-2 nucleic acid screening in nearly ten million residents of Wuhan, China. Nat Commun 2020;11(1):5917. http://dx.doi.org/10.1038/s41467-020-19802-w.
What is the physical meaning of the terms in the multipole expansion?

I have a few questions on multipole expansions and I have read about the topic in many places but could not find an answer to my questions, so please be patient with me. The electrostatic potential due to an arbitrary charge distribution $\rho(\mathbf{r}')$ at a given point $\mathbf{r}$ is given (up to a factor of $1/4\pi\epsilon_0$) by $$ V(\mathbf{r})=\int_{V'}\frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, dV'$$ In the case where $r\gg r'$, $V(\mathbf{r})$ can be multipole expanded to give $$V(\mathbf{r})=V(\mathbf{r})_\text{mon}+V(\mathbf{r})_\text{dip}+V(\mathbf{r})_\text{quad}+\cdots$$ \begin{align} V(\mathbf{r})_\text{mon}& =\frac{1}{r}\int_{V'}\rho(\mathbf{r}')\, dV',\\ V(\mathbf{r})_\text{dip}&=\frac{1}{r^2}\int_{V'}\rho(\mathbf{r}')\, \hat{\mathbf{r}}\cdot\mathbf{r}'\,dV', \\ V(\mathbf{r})_\text{quad}&=\frac{1}{r^3}\int_{V'}\rho(\mathbf{r}')\, \frac{3(\hat{\mathbf{r}}\cdot\mathbf{r}')^2-r'^2}{2}\,dV', \end{align} and so on. Now here are my questions:

Is there an intuitive meaning of every one of these terms? For example, I can make sense of the monopole term in the following way: to the 1st approximation the charge distribution will look like a point charge sitting at the origin, which mathematically corresponds to what is called a monopole term, which is nothing but $Q/r$. Is this correct?

Now what is the meaning of the dipole term? I know that the word dipole comes from having 2 opposite charges, and the potential due to that configuration, if the charges are aligned along the $z$ axis symmetrically say, goes like $\frac{\cos\theta}{r^2}$. But from the multipole expansion there is a nonzero dipole term even, say, in the case of a single charge sitting at some distance from the origin. Why is it called a dipole term then? Is there a way to make sense of this term in the same way I made sense of the monopole term?

What is the intuitive meaning of the quadrupole term?

Is the multipole expansion an expansion in powers of $1/r$ only, or of $\cos\theta$ too?

Maybe this is not an independent question but I am wondering if there is something like a geometrical/pictorial meaning of every term in the multipole expansion.

electromagnetism electrostatics multipole-expansion
Emilio Pisanty
RevoRevo

For question 2 ("Why does a single charge away from the origin have a dipole term?"): let's say you have a charge of +3 at point (5,6,7). Using the superposition principle, you can imagine that this is the superposition of two charge distributions.

Charge distribution A: a charge of +3 at point (0,0,0).

Charge distribution B: a charge of -3 at point (0,0,0) and a charge of +3 at (5,6,7).

Obviously, when you add these together, you get the real charge distribution: $$ (\text{real charge distribution}) = (\text{charge distribution A}) + (\text{charge distribution B}). $$ By the superposition principle: $$ (\text{Real }\mathbf E\text{ field}) = (\mathbf E\text{ field of charge distribution A}) + (\mathbf E\text{ field of charge distribution B}). $$ And, since the multipole expansion also obeys the superposition principle: \begin{align} (\text{real monopole term}) & = (\text{monopole term of distribution A}) + (\text{monopole term of distribution B}),\\ (\text{real dipole term}) & = (\text{dipole term of distribution A}) + (\text{dipole term of distribution B}),\\ (\text{real quadrupole term}) & = (\text{quadrupole term of distribution A}) + (\text{quadrupole term of distribution B}), \end{align} and so on.
The field of charge distribution A is a pure monopole field, while the field of charge distribution B has no monopole term, only dipole, quadrupole, etc. Therefore, \begin{align} (\text{real monopole term}) & = (\text{monopole term of distribution A}), \\ (\text{real dipole term}) & = (\text{dipole term of distribution B}),\\ (\text{real quadrupole term}) & = (\text{quadrupole term of distribution B}), \end{align} and so on. Even though it's unintuitive that the real charge distribution has a dipole component, it is not at all surprising that charge distribution B has a dipole component: It is two equal and opposite separated charges! And charge distribution B is exactly what you get after subtracting off the monopole component to look at the subleading terms of the expansion. Steve ByrnesSteve Byrnes $\begingroup$ But this is physically different from having just one charge. You altered the charge distribution. Your new charge distribution will not affect the monopole term, but it will have a physical effect on all the other terms. $\endgroup$ – Revo Nov 16 '11 at 16:56 $\begingroup$ @Revo: I did not alter the charge distribution, I split it into two pieces. I edited, hopefully it's clearer now how this works. $\endgroup$ – Steve Byrnes Nov 16 '11 at 17:32 $\begingroup$ SteveB's explanation is of course perfectly valid but it doesn't directly "attack" the core of Revo's misconception. Revo correctly knows that the "canonical dipole" is a pair of oppositely charged particles. However, that doesn't mean that every object that doesn't have this form carries a zero dipole moment. Quite on the contrary, generic objects (charge distributions) carry a nonzero amount of each multipole moment. One needs some "special" or "very special" distributions for some of these moments to be zero. $\endgroup$ – Luboš Motl Nov 16 '11 at 17:36 $\begingroup$ In particular, it's not true that everything that isn't of the form of "the exact naive children's dipole" carries a vanishing dipole moment - which seems to be the (incorrect) assumption in Revo's question. @Revo, try this analogy: this assumption is analogous to saying that one kilogram is the mass of the platinum prototype in France, or whatever it is. Now, your reasoning is that the mass of a person has to be zero because the person isn't even made of platinum: she can't be the platinum stick. This sounds like a joke but your logic applied to the dipole moment is exactly the same. $\endgroup$ – Luboš Motl Nov 16 '11 at 17:38 $\begingroup$ Just to be sure, the dipole moment is called the dipole because the simplest way to realize a charge distribution with a nonzero dipole moment - but vanishing all other moments - is a pair (therefore "di") of nearby charges. Similarly for quadrupoles, octupoles etc. (powers of two). However, that doesn't mean that the number of charges always has to be a power of two (and they have to be pointlike): almost any charge distribution carries almost all the multipole moments. Just trust the formulae instead of words, and don't misinterpret the words. $\endgroup$ – Luboš Motl Nov 16 '11 at 17:43 To understand the meaning of multipole expansion,firstly we need to ask ourselves about the evaluation of potential of a very random charge distribution.Recall that, if you forget about the multipole expansion you have no any simple device to know the potential of a random charge distribution, either you want potential near the distribution or very far away. 
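To see the same splitting in formulas, take the simplest case the question asks about: a single point charge $q$ at distance $d$ from the origin (placed on the $z$ axis for convenience, so $\rho(\mathbf{r}')=q\,\delta^{3}(\mathbf{r}'-d\hat{\mathbf{z}})$). Plugging this into the integrals quoted in the question gives
$$
V_\text{mon}=\frac{q}{r},\qquad
V_\text{dip}=\frac{q\,d\cos\theta}{r^{2}},\qquad
V_\text{quad}=\frac{q\,d^{2}}{r^{3}}\,\frac{3\cos^{2}\theta-1}{2},\qquad\dots
$$
Distribution A (the same charge moved to the origin) produces only the $q/r$ term, while distribution B ($-q$ at the origin plus $+q$ at $d\hat{\mathbf{z}}$) has zero net charge and produces exactly the remaining terms; its dipole moment is $p=qd$. That is why the $1/r^{2}$ piece of a single off-origin charge is literally the field of a dipole.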
In our basic electrostatic courses we are usually taught about how to evaluate the potential of some symmetric charge distribution, for instance, potential for distributions which are spherically and cylindrically symmetric,and in most toughest cases the distribution would be of some parameter of polar angles or radial distances. Now, the second thing which one always ignores (and that's why got stuck with the confusion about intuition) during the study of multipole expansions is that the dipole has nothing to do with a pair of positive and negative charge, or an octupole has nothing to do with a group of 4 positive and 4 negative charge. There charge distribution only take care of the variation of potential, Kushagra YadavKushagra Yadav Is there an intuitive meaning of every one of these terms? For example, I can make sense of the monopole term in the following way: to the 1st approximation, the charge distribution will look like a point charge sitting at the origin, which mathematically corresponds to what is called a monopole term, which is nothing but $Q/r$. Is this correct? First off, the nomenclature is rather unfortunate (goes as $2^n$, i.e $2^2=\text{quadrupole}$) and can be misleading. Mathematically, the monopole term is the zeroth order Legendre Polynomial ($L_0(x)=1$, $L_1(x)=x$, $L_2(x)=(3x^2-1)/2$) and so on (up to a normalization). Physically speaking, the first term tells us something about the symmetry of the system depending on the location of the observer. Suppose the potential at a point A of the system of charges only has a monopole structure, this means that the charge distribution has full spatial invariance. More importantly, the first term tells you that if we want the total energy of the system, we need to only worry about the Potential at A because the monopole only couples to the electric Potential. Now what is the meaning of the dipole term? I know that the word dipole comes from having 2 opposite charges, and the potential due to that configuration, if the charges are aligned along the z axis symmetrically say, goes like $\cos\theta/r^2$. But from the multipole expansion there is a nonzero dipole term even in the case of a single charge sitting at some distance from the origin say. Why is it called a dipole term then? Is there a way to make sense of this term in the same way I made sense of the monopole term? The dipole term is the 1st order Legendre polynomial ($2^1=\text{dipole}$). It is possible to have higher order terms even when the net charge is zero. This means, the energy of the system depends on the interaction of the dipole moment with the Electric Field of the test charge. $\vec{d} \cdot\vec{E}$ couplings are studied in light-matter interactions. Another interesting point to note is that there is some kind of spatial symmetry breaking that emerges because dipole interactions can setup a preferred spatial axis (ex along the line joining the charges). Maybe this is not an independent question, but I am wondering if there is something like a geometrical/pictorial meaning of every term in the multipole expansion. The quadrupole term is so named because it is the second order Legendre polynomial $2^2=4$. The quadrupole moment couples with the gradient of the electric field. Anyway, this is my personal take on the subject. Mathematically, it is very interesting to ask why do we get Legendre polynomials out of this? It turns out that the Legendre polynomials can be generated by ortho-normalizing monomials (basis for this series expansion). 
Antillar Maximus
Restrictions on rotation sets for commuting torus homeomorphisms Global existence and time-decay estimates of solutions to the compressible Navier-Stokes-Smoluchowski equations October 2016, 36(10): 5267-5285. doi: 10.3934/dcds.2016031 Liouville type theorems for the steady axially symmetric Navier-Stokes and magnetohydrodynamic equations Dongho Chae 1, and Shangkun Weng 2, Department of Mathematics, Chung-Ang University, Seoul 156-756, South Korea Pohang Mathematics Institute, Pohang University of Science and Technology, Pohang, Gyungbuk, 790-784, South Korea Received October 2015 Revised March 2016 Published July 2016 In this paper we study Liouville properties of smooth steady axially symmetric solutions of the Navier-Stokes equations. First, we provide another version of the Liouville theorem of [14] in the case of zero swirl, where we replaced the Dirichlet integrability condition by mild decay conditions. Then we prove some Liouville theorems under the assumption $||\frac{u_r}{r}{\bf 1}_{\{u_r< -\frac {1}{r}\}}||_{L^{3/2}(\mathbb{R}^3)} < C_{\sharp}$ where $C_{\sharp}$ is a universal constant to be specified. In particular, if $u_r(r,z)\geq -\frac1r$ for $\forall (r,z) \in [0,\infty) \times \mathbb{R}$, then ${\bf u}\equiv 0$. Liouville theorems also hold if $\displaystyle\lim_{|x|\to \infty}\Gamma =0$ or $\Gamma\in L^q(\mathbb{R}^3)$ for some $q\in [2,\infty)$ where $\Gamma= r u_{\theta}$. We also established some interesting inequalities for $\Omega := \frac{\partial_z u_r-\partial_r u_z}{r}$, showing that $\nabla\Omega$ can be bounded by $\Omega$ itself. All these results are extended to the axially symmetric MHD and Hall-MHD equations with ${\bf u}=u_r(r,z){\bf e}_r +u_{\theta}(r,z) {\bf e}_{\theta} + u_z(r,z){\bf e}_z, {\bf h}=h_{\theta}(r,z){\bf e}_{\theta}$, indicating that the swirl component of the magnetic field does not affect the triviality. Especially, we establish the maximum principle for the total head pressure $\Phi=\frac {1}{2} (|{\bf u}|^2+|{\bf h}|^2)+p$ for this special solution class. Keywords: axially symmetric solutions., Steady Navier-Stokes equations, Liouville type theorem. Mathematics Subject Classification: Primary: 76D05; Secondary: 35Q3. Citation: Dongho Chae, Shangkun Weng. Liouville type theorems for the steady axially symmetric Navier-Stokes and magnetohydrodynamic equations. Discrete & Continuous Dynamical Systems - A, 2016, 36 (10) : 5267-5285. doi: 10.3934/dcds.2016031 M. Acheritogaray, P. Degond, A. Frouvelle and J.-G. Liu, Kinetic formulation and global existence for the Hall-Magneto-hydrodynamics system,, Kinet. Relat. Models, 4 (2011), 901. doi: 10.3934/krm.2011.4.901. Google Scholar D. Chae, Liouville-type theorem for the forced Euler equations and the Navier-Stokes equations,, Commun. Math. Phys, 326 (2014), 37. doi: 10.1007/s00220-013-1868-x. Google Scholar D. Chae, P. Degond and J.-G. Liu, Well-posedness for Hall-magnetohydrodynamics,, Ann. Inst. H. Poincaré Anal. Non Linéaire, 31 (2014), 555. doi: 10.1016/j.anihpc.2013.04.006. Google Scholar D. Chae and J. Lee, On the regularity of the axisymmetric solutions of the Navier-Stokes equations,, Math. Z., 239 (2002), 645. doi: 10.1007/s002090100317. Google Scholar D. Chae and S. Weng, Singularity formation for the incompressible Hall-MHD equations without resistivity,, Annales de l'Institut Henri Poincare (C) Non Linear Analysis, (2015). doi: 10.1016/j.anihpc.2015.03.002. Google Scholar D. Chae and J. Wolf, On partial regularity for the steady Hall magnetohydrodynamics system,, Commun. Math. 
Phys, 339 (2015), 1147. doi: 10.1007/s00220-015-2429-2. Google Scholar D. Chae and T. Yoneda, On the Liouville theorem for the stationary Navier-Stokes equations in a critical space,, J. Math. Anal. Appl., 405 (2013), 706. doi: 10.1016/j.jmaa.2013.04.040. Google Scholar H. Choe and B. Jin, Asymptotic properties of axi-symmetric D-solutions of the Navier-Stokes equations,, J. Math. Fluid. Mech., 11 (2009), 208. doi: 10.1007/s00021-007-0256-8. Google Scholar G. P. Galdi, An Introduction to the Mathematical Theory of the Navier-Stokes Equations. In: Steady State problems,, $2^{nd}$ edition, (2011). doi: 10.1007/978-0-387-09619-3. Google Scholar D. Gilbarg and H. F. Weinberger, Asymptotic properties of steady plane solutions of the Navier-Stokes equations with bounded Dirichlet integral,, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 5 (1978), 381. Google Scholar C. He and Z. Xin, On the regularity of weak solutions to the magnetohydrodynamic equations,, J. Differential Equations, 213 (2005), 235. doi: 10.1016/j.jde.2004.07.002. Google Scholar T. Hou, Z. Lei and C. Li, Global regularity of the 3D axi-symmetric Navier-Stokes equations with anisotropic data,, Comm. Partial Differential Equations, 33 (2008), 1622. doi: 10.1080/03605300802108057. Google Scholar G. Koch, N. Nadirashvili, G. Seregin and V. Sverak, Liouville theorems for the Navier-Stokes equations and applications,, Acta. Math., 203 (2009), 83. doi: 10.1007/s11511-009-0039-6. Google Scholar M. Korobkov, K. Pileckas and R. Russo, The Liouville theorem for the steady-state Navier-Stokes problem for axially symmetric 3D solutions in absence of swirl,, J. Math. Fluid Mech., 17 (2015), 287. doi: 10.1007/s00021-015-0202-0. Google Scholar M. Korobkov, K. Pileckas and R. Russo, Solution of Leray's problem for the stationary Navier-Stokes equations in plane and axially symmetric spatial domains,, Annals of Mathematics, 181 (2015), 769. doi: 10.4007/annals.2015.181.2.7. Google Scholar Z. Lei, On axially symmetric incompressible magnetohydrodynamics in three dimensions,, J. Differential Equations, 259 (2015), 3202. doi: 10.1016/j.jde.2015.04.017. Google Scholar J. Liu and W. Wang, Characterization and regularity for axisymmetric solenoidal vector fields with application to Navier-Stokes equation,, SIAM J. Math. Anal., 41 (2009), 1825. doi: 10.1137/080739744. Google Scholar S. Weng, Decay properties of smooth axially symmetric D-solutions to the steady Navier-Stokes equations,, preprint, (). Google Scholar S. Weng, Existence of axially symmetric weak solutions to steady MHD with non-homogeneous boundary conditions,, preprint, (). Google Scholar Y. Zhou, Remarks on regularities for the 3D MHD equations,, Discrete Contin. Dyn. Syst., 12 (2005), 881. doi: 10.3934/dcds.2005.12.881. Google Scholar Zijin Li, Xinghong Pan. Some Remarks on regularity criteria of Axially symmetric Navier-Stokes equations. Communications on Pure & Applied Analysis, 2019, 18 (3) : 1333-1350. doi: 10.3934/cpaa.2019064 Dong Li, Xinwei Yu. On some Liouville type theorems for the compressible Navier-Stokes equations. Discrete & Continuous Dynamical Systems - A, 2014, 34 (11) : 4719-4733. doi: 10.3934/dcds.2014.34.4719 Yoshikazu Giga. A remark on a Liouville problem with boundary for the Stokes and the Navier-Stokes equations. Discrete & Continuous Dynamical Systems - S, 2013, 6 (5) : 1277-1289. doi: 10.3934/dcdss.2013.6.1277 Hamid Bellout, Jiří Neustupa, Patrick Penel. 
On a $\nu$-continuous family of strong solutions to the Euler or Navier-Stokes equations with the Navier-Type boundary condition. Discrete & Continuous Dynamical Systems - A, 2010, 27 (4) : 1353-1373. doi: 10.3934/dcds.2010.27.1353 Wenjun Wang, Lei Yao. Spherically symmetric Navier-Stokes equations with degenerate viscosity coefficients and vacuum. Communications on Pure & Applied Analysis, 2010, 9 (2) : 459-481. doi: 10.3934/cpaa.2010.9.459 Vittorino Pata. On the regularity of solutions to the Navier-Stokes equations. Communications on Pure & Applied Analysis, 2012, 11 (2) : 747-761. doi: 10.3934/cpaa.2012.11.747 Siegfried Maier, Jürgen Saal. Stokes and Navier-Stokes equations with perfect slip on wedge type domains. Discrete & Continuous Dynamical Systems - S, 2014, 7 (5) : 1045-1063. doi: 10.3934/dcdss.2014.7.1045 Fei Jiang, Song Jiang, Junpin Yin. Global weak solutions to the two-dimensional Navier-Stokes equations of compressible heat-conducting flows with symmetric data and forces. Discrete & Continuous Dynamical Systems - A, 2014, 34 (2) : 567-587. doi: 10.3934/dcds.2014.34.567 Pavel I. Plotnikov, Jan Sokolowski. Compressible Navier-Stokes equations. Conference Publications, 2009, 2009 (Special) : 602-611. doi: 10.3934/proc.2009.2009.602 Jan W. Cholewa, Tomasz Dlotko. Fractional Navier-Stokes equations. Discrete & Continuous Dynamical Systems - B, 2018, 23 (8) : 2967-2988. doi: 10.3934/dcdsb.2017149 Peter E. Kloeden, José Valero. The Kneser property of the weak solutions of the three dimensional Navier-Stokes equations. Discrete & Continuous Dynamical Systems - A, 2010, 28 (1) : 161-179. doi: 10.3934/dcds.2010.28.161 Joanna Rencławowicz, Wojciech M. Zajączkowski. Global regular solutions to the Navier-Stokes equations with large flux. Conference Publications, 2011, 2011 (Special) : 1234-1243. doi: 10.3934/proc.2011.2011.1234 Peter Anthony, Sergey Zelik. Infinite-energy solutions for the Navier-Stokes equations in a strip revisited. Communications on Pure & Applied Analysis, 2014, 13 (4) : 1361-1393. doi: 10.3934/cpaa.2014.13.1361 Tomás Caraballo, Peter E. Kloeden, José Real. Invariant measures and Statistical solutions of the globally modified Navier-Stokes equations. Discrete & Continuous Dynamical Systems - B, 2008, 10 (4) : 761-781. doi: 10.3934/dcdsb.2008.10.761 Peixin Zhang, Jianwen Zhang, Junning Zhao. On the global existence of classical solutions for compressible Navier-Stokes equations with vacuum. Discrete & Continuous Dynamical Systems - A, 2016, 36 (2) : 1085-1103. doi: 10.3934/dcds.2016.36.1085 Jochen Merker. Strong solutions of doubly nonlinear Navier-Stokes equations. Conference Publications, 2011, 2011 (Special) : 1052-1060. doi: 10.3934/proc.2011.2011.1052 Reinhard Racke, Jürgen Saal. Hyperbolic Navier-Stokes equations II: Global existence of small solutions. Evolution Equations & Control Theory, 2012, 1 (1) : 217-234. doi: 10.3934/eect.2012.1.217 Rafaela Guberović. Smoothness of Koch-Tataru solutions to the Navier-Stokes equations revisited. Discrete & Continuous Dynamical Systems - A, 2010, 27 (1) : 231-236. doi: 10.3934/dcds.2010.27.231 Yukang Chen, Changhua Wei. Partial regularity of solutions to the fractional Navier-Stokes equations. Discrete & Continuous Dynamical Systems - A, 2016, 36 (10) : 5309-5322. doi: 10.3934/dcds.2016033 Zhilei Liang. Convergence rate of solutions to the contact discontinuity for the compressible Navier-Stokes equations. Communications on Pure & Applied Analysis, 2013, 12 (5) : 1907-1926. 
Spectacle: fast chromatin state annotation using spectral learning Jimin Song1 & Kevin C Chen1 Epigenomic data from ENCODE can be used to associate specific combinations of chromatin marks with regulatory elements in the human genome. Hidden Markov models and the expectation-maximization (EM) algorithm are often used to analyze epigenomic data. However, the EM algorithm can have overfitting problems in data sets where the chromatin states show high class-imbalance and it is often slow to converge. Here we use spectral learning instead of EM and find that our software Spectacle overcame these problems. Furthermore, Spectacle is able to find enhancer subtypes not found by ChromHMM but strongly enriched in GWAS SNPs. Spectacle is available at https://github.com/jiminsong/Spectacle. Identifying regulatory elements in the human genome is a challenging problem that is important for understanding many fundamental aspects of biology, including the molecular mechanisms of disease, development and evolution. Cell-type-specific gene regulation clearly cannot be explained by genome sequence alone because the genome is essentially identical in almost all cell types. The epigenome refers to the complete set of chromatin modifications across the entire genome, including DNA methylation marks and post-translational histone modifications, and it has received great interest in recent years for its potential to elucidate gene regulation. It has been called the 'second dimension of the genome' [1], and we use the term here as commonly done with no requirement for the epigenetic marks to be heritable. Epigenetic marks are known to be correlated with fundamental biological processes such as mRNA transcription, splicing, DNA replication and DNA damage response (reviewed in [1-3]). Although it is debated whether epigenetic marks are mechanistically required for these processes, genome-wide studies have nonetheless been highly successful in using epigenetic marks to identify important genomic features that were often previously very difficult to find by other methods, including enhancers, promoters, transcribed regions, repressed regions of the genome and non-coding RNAs (e.g. [4,5]). There is also the potential to use epigenome maps to identify subclasses of functional elements, such as promoters, that are active in a cell type versus those that are poised for activation at a later time in development [6]. Importantly, functional elements identified by epigenetic marks have been shown to overlap significantly with disease-associated SNPs found by genome-wide association studies (GWASs) [7,8]. Since approximately 90% of GWAS SNPs are thought to be located in non-coding regions [9], such results give hope that one might be able to fine-map the causal disease variants of many GWASs or other disease gene mapping studies using epigenome maps. Recently, the ENCODE project [4] produced a wealth of epigenomic data from many different human cell types using a combination of stringent biochemical assays and high-throughput sequencing technologies. In addition, the International Human Epigenome Consortium [10] also aims to produce reference maps of 1000 human epigenomes and it includes several major projects, such as BLUEPRINT and the Roadmap Epigenomics Project [11,12], which is producing epigenome maps from multiple primary human tissues. Finally, individual research labs are also producing epigenome maps for related species such as the mouse and pig [13], and for different human individuals [14]. 
As the number of human epigenomic data sets grows, the need for fast and robust computational methods for analyzing these data will increase. One successful computational approach for analyzing epigenomic data is to build a unified statistical model to decipher the patterns of multiple chromatin modifications in a cell type, rather than analyzing each chromatin modification individually. Several computational methods have been developed to annotate chromatin states from epigenomic data, not only in the human genome but also in the Drosophila, mouse, yeast and Arabidopsis thaliana genomes [15-27]. Among these methods, hidden Markov models (HMMs) have been commonly used as the underlying probabilistic model of the sequence of chromatin states along the genome. Currently, a detailed understanding of the specific chromatin modifications associated with different classes of regulatory elements, such as enhancers and promoters, is lacking, so many researchers have taken the approach of performing unsupervised estimation of the HMM parameters (i.e. inferring the relevant subclasses of chromatin states directly from the data without access to existing biological examples of such subclasses). To perform unsupervised learning, the expectation-maximization (EM) algorithm has been the standard algorithm used in practice for a long time [28,29]. The EM algorithm is a maximum likelihood approach that iteratively converges to a local optimum in the likelihood. However, it suffers from several well-known issues. It is often slow to converge since the likelihood is not convex in general and EM is a first-order optimization method, and deciding when to stop the iterations is somewhat arbitrary. EM is not guaranteed to find a global optimum, so often multiple parameter initializations are needed to achieve good practical performance [30]. Finally, the maximum likelihood approach is known to be prone to overfitting [31] and to perform poorly on classification problems that suffer from class imbalance [32]. Importantly, the observation about class imbalance implies that for a data set where the majority of the genome is in the background null chromatin state without any chromatin immunoprecipitation sequencing (ChIP-seq) peaks for chromatin marks, the EM algorithm will tend to devote more parameters to modeling the large background class, at the expense of modeling other types of biologically important functional classes. All of these issues lower the biological significance of the solutions obtained from the algorithm. To address all of these issues, here we investigated the feasibility of using spectral learning, an approach that is currently being developed in the theoretical machine learning community (e.g. [33,34]) to perform chromatin state annotation instead of the EM algorithm. Spectral learning fits in the overall framework of the method of moments or the plug-in estimator approach, which predates the maximum likelihood approach [35,36]. Broadly speaking, method of moments estimators are different by nature from maximum likelihood estimators. Instead of attempting to find the maximum likelihood solution, in the method of moments one expresses various unobservable moments of the model as functions of the parameters, sets these moments equal to the sample moments estimated from the data and solves these equations to estimate the parameters. Method of moments estimators are often simpler, faster and more efficient to compute. 
They also often do not suffer from local optima issues, so they do not depend on the parameter initialization. On the other hand, maximum likelihood and method of moments estimators are complementary approaches and there are important advantages to the maximum likelihood approach. Maximum likelihood approaches are generally more sample efficient (i.e. they do not require as much data to produce the same quality of solution). Thus a common way to combine the advantages of the two methods is to use the method of moments estimator as an initializer and to run a few iterations of local optimization of the likelihood (e.g. using the EM algorithm) [37]. We hypothesized that the human genome might be large enough to provide a sufficient number of samples to compute a method of moments estimator accurately. Thus, in this work, our main technical contribution is to develop a practical implementation of spectral learning for HMMs for the specific biological application of annotating chromatin states in the human genome. Although there are a few practical implementations of spectral learning in the natural language processing and computer vision communities, as well as a few previous implementations for specific biological problems, we believe Spectacle is one of the first practical implementations for a commonly studied biological problem. We stress that many technical issues remain to be studied in spectral learning, as described in this paper. We found that our method, Spectacle (Spectral Learning for Annotating Chromatin Labels and Epigenomes), was much faster than a previous state-of-the-art method, ChromHMM [15], for commonly used numbers of chromatin states and epigenetic marks in a number of ENCODE cell types. We believe this speed-up will become more important as the number of new epigenomic data sets increases. In addition, we observed empirically that Spectacle was more robust to the class imbalance problem and produced fewer null chromatin states and more functional chromatin states than the EM algorithm. We believe this is a novel observation for spectral learning, which may be significant for future algorithmic work in spectral learning. Most importantly, we demonstrate the biological significance of our chromatin state annotations by showing a higher enrichment for disease-associated SNPs in the additional functional chromatin states found by Spectacle compared to ChromHMM. We analyzed selective constraints on chromatin states, which suggests that epigenome-based enhancer predictions are more informative for fine-mapping disease-associated SNPs than evolutionary conservation. Finally the faster run time of our program enabled us to compare epigenomic data sets across cell types by stacking the chromatin mark data from different cell types. This analysis suggested that most enhancer states are cell-type specific and that this set of enhancers is important for fine-mapping disease-associated SNPs. We implemented Spectacle in Java and Python by modifying the ChromHMM code, so it should be easily portable to most desktops and the source code is freely available online. We use HMMs to represent the chromatin states as hidden states and all possible combinations of epigenetic marks as observations. We compare our software, Spectacle (see the details in the Materials and methods), which implements a spectral learning algorithm for parameter estimation, to a previous state-of-the-art method using the EM algorithm, ChromHMM. 
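To make the idea concrete, the following Python sketch shows what a spectral estimator for a discrete-observation HMM looks like, in the style of the observable-operator construction of Hsu, Kakade and Zhang. This is a generic illustration, not Spectacle's code; it uses small dense matrices, whereas real chromatin data require the sparse routines discussed below.

import numpy as np

def spectral_hmm(obs, n_symbols, n_states):
    # obs: 1-D integer array of observed symbols in {0, ..., n_symbols - 1}.
    # Returns observable-operator parameters (b1, binf, B) estimated purely
    # from low-order moments of the data: no EM iterations and no dependence
    # on a parameter initialization.
    T = len(obs)
    P1 = np.zeros(n_symbols)                               # Pr[x1]
    P21 = np.zeros((n_symbols, n_symbols))                 # Pr[x2, x1]
    P3x1 = np.zeros((n_symbols, n_symbols, n_symbols))     # Pr[x3, x2, x1], indexed [x2][x3, x1]
    for t in range(T):
        P1[obs[t]] += 1.0
        if t + 1 < T:
            P21[obs[t + 1], obs[t]] += 1.0
        if t + 2 < T:
            P3x1[obs[t + 1]][obs[t + 2], obs[t]] += 1.0
    P1 /= T
    P21 /= T - 1
    P3x1 /= T - 2

    # Rank-k SVD of the bigram matrix; its top left-singular vectors span the
    # subspace needed to recover the dynamics of an HMM with n_states states.
    U, _, _ = np.linalg.svd(P21)
    U = U[:, :n_states]

    # Observable operators: one matrix per symbol, plus start/stop vectors.
    pinv_UtP21 = np.linalg.pinv(U.T @ P21)
    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    B = [U.T @ P3x1[x] @ pinv_UtP21 for x in range(n_symbols)]
    return b1, binf, B

def sequence_prob(b1, binf, B, seq):
    # Estimated probability of an observation sequence; on finite samples the
    # value can come out slightly negative, a known caveat of moment estimators.
    v = b1
    for x in seq:
        v = B[x] @ v
    return float(binf @ v)

In this construction the only expensive step is a single truncated SVD of the (in practice sparse) co-occurrence matrix, so training does not require repeated passes over the genome the way iterative EM does.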
Spectral learning is much faster than expectation-maximization for common numbers of chromatin states and epigenetic marks

In our tests, Spectacle had significantly faster training times than ChromHMM (23.9 to 124.1 times faster) for all tested numbers of chromatin states (Figure 1). Indeed, our implementation of spectral learning takes much less compute time than just two iterations of the EM algorithm, which is itself far less than the time required for convergence. For most biological analyses, the number of chromatin states is usually set lower than 100 and for biological interpretability it is often set to around 20 states (e.g. [22]). Thus for the number of chromatin states most relevant to biological data, Spectacle was 97 times (37 vs 3599.5 s) faster for 15 states and 110 times (71.8 vs 7915.3 s) faster for 20 states than ChromHMM. The faster training times are important in allowing individual users to annotate the large numbers of epigenomic maps being produced for different species, cell types, individuals and disease states, and we expect our method to be even more valuable in the future as the number of such data sets increases with decreasing sequencing costs.

Figure 1. Training time of Spectacle vs ChromHMM for the GM12878 cell line.

All of these reported results used eight chromatin marks from three ENCODE Tier 1 cell types that have been used in other method development studies as well. This is also roughly the number of epigenetic marks used in other studies from individual labs. It is important to note that our method considers all possible combinations of epigenetic marks, unlike other methods that assume independence between the epigenetic marks, and therefore the theoretical space and time complexity of our method grows exponentially with the number of epigenetic marks. We consider this a biologically significant feature of our approach because given the current knowledge of all the possible chromatin marks and their interactions, we feel it is important to be open to the possibility of interactions between any set of epigenetic marks. Thus we need to consider all possible combinations of epigenetic marks, which makes the (theoretical) exponential complexity unavoidable. On the other hand, it is also important in practice that the exponential dependence is only theoretical and in fact an analysis of 16 chromatin marks from a public human epigenomics data set shows that the actual number of combinations of epigenetic marks that appear in real human epigenomics data is much smaller and scales as only a low-order polynomial with exponent in the range 1 to 2 (Additional file 1: Figures S1, S2 and S3). We found that although there are theoretically 2¹⁶ = 65,536 possible combinations of epigenetic marks, in actual practice only 9,954 combinations (approximately 15%) of the chromatin marks appear in the genome. Only 409 combinations (approximately 0.6%) of epigenetic marks accounted for more than 99% of the genome. We downloaded narrow peak calls for 29 chromatin marks in the human embryonic stem cell line H9 from the Roadmap Epigenomics Project [12]. In theory, there can be up to 2²⁹ combinations of chromatin marks, but we observed that there were only 186,814 combinations appearing in the genome. After eliminating singleton combinations that are likely due to noise, only 56,750 combinations appeared at least twice in the genome.
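This kind of sparsity can be checked directly from a binarized mark matrix. The sketch below (with a simulated input; the array name, sizes and mark frequency are hypothetical) counts how many of the 2^M possible mark combinations actually occur and how many combinations are needed to cover 99% of segments.

```python
# Counting observed combinations of binarized chromatin marks across segments.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n_segments, n_marks = 1_000_000, 16                  # hypothetical sizes
marks = (rng.random((n_segments, n_marks)) < 0.05).astype(np.uint8)

# Encode each segment's presence/absence vector as a single integer code.
powers = 1 << np.arange(n_marks)
codes = marks @ powers

counts = Counter(codes.tolist())
observed = len(counts)
print(f"{observed} of {2 ** n_marks} possible combinations observed "
      f"({100 * observed / 2 ** n_marks:.1f}%)")

# How many of the most frequent combinations cover 99% of the genome segments.
top = sorted(counts.values(), reverse=True)
cum = np.cumsum(top) / n_segments
k99 = int(np.searchsorted(cum, 0.99)) + 1
print(f"{k99} combinations cover 99% of segments")
```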
This phenomenon of data sparsity is both very common in modern statistical research and entirely consistent with our biological intuition that not all possible combinations of epigenetic marks are likely to be biologically significant for directing cellular processes. Having sparse input data allows us to use sparse linear algebra routines in our method. Indeed our computational experiments with our Python implementation using sparse singular value decomposition (SVD) methods based on power iteration show that the method is very efficient in practice on the largest data sets in the Roadmap Epigenomics Project (Additional file 1). Spectral learning appears to be more robust to class imbalance than expectation-maximization We examined the chromatin states inferred by Spectacle and ChromHMM for the GM12878, H1-hESC and K562 cell types, setting the number of chromatin states to 20. We observed that the EM algorithm always assigned more hidden states to modeling the background null chromatin state (Figures 2 and 3, Additional file 1: Figures S14 to S17). Figures 2 and 3 show an example of this phenomenon where states 1, 15 and 20 were all assigned the null state (i.e. all epigenetic marks with close to no signal) in the EM solution (Figure 3 left panel) whereas only state 1 was assigned to be the null state in the spectral learning solution (Figure 2 left panel). ChromHMM used the three chromatin states to cover most null segments in the genome (approximately 88%) whereas Spectacle covered about the same number of null segments (approximately 86%) with only one chromatin state. Spectacle results for cell line GM12878 for 20 chromatin states. (Left) Emission matrix and state annotation (values between 0 and 1 are colored from white to black). (Middle) Number of genomic segments and distance from TSS of the segments for each state. The distribution bars are shown only when at least 10% of the segments are within 5 kb from the TSS (the percentage of the segments within 5 kb from the TSS is shown by the color of the distribution bars, i.e., the more segments within 5 kb from the TSS, the bluer the distribution bars are). (Right) Enrichment of validated biological features (enrichment is colored red and depletion is colored blue). CpGI, CpG island; DRM, gene-distal regulatory module; Enh, strong enhancer; EnhP, poised enhancer; EnhW, weak enhancer; kb, kilobase; Num, number; Pol2, RNA polymerase II binding; PRM, promoter-proximal regulatory module; Prom, strong promoter; PromF, promoter flanking; PromP, poised promoter; PromW, weak promoter; Repr, repressed region; TSS, transcription start site; Txn, transcribed region. ChromHMM results for cell line GM12878 for 20 chromatin states. Shown are the emission matrix, state annotation and enrichment of biological features. Refer to caption of Figure 2. Art, artificial; CpGI, CpG island; DRM, gene-distal regulatory module; Enh, strong enhancer; EnhP, poised enhancer; EnhW, weak enhancer; kb, kilobase; Num, number; Pol2, RNA polymerase II binding; PRM, promoter-proximal regulatory module; Prom, strong promoter; PromF, promoter flanking; PromP, poised promoter; PromW, weak promoter; Repr, repressed region; TSS, transcription start site; Txn, transcribed region. We believe that the difference between the two approaches is due to their robustness for data with a high class imbalance. 
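As a stand-in for the sparse SVD computation mentioned above, the sketch below applies SciPy's iterative svds solver to a sparse co-occurrence matrix. The matrix here is random and svds uses a Krylov-subspace method rather than the exact power-iteration routine described, so this only illustrates that the top K singular vectors can be obtained without ever forming a dense matrix.

```python
# Top-K singular vectors of a large, sparse co-occurrence matrix (illustrative sizes).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

N, K = 10_000, 20                      # observation alphabet size, chromatin states
density = 1e-3                         # most pairs of observations never co-occur

P31 = sp.random(N, N, density=density, format="csr", random_state=42)
P31 = P31 / P31.sum()                  # normalize counts to joint probabilities

# svds is iterative and works directly on the sparse matrix.
U, s, Vt = svds(P31, k=K)
order = np.argsort(s)[::-1]            # svds returns singular values in ascending order
U, s = U[:, order], s[order]
print(U.shape, s[:5])
```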
We found that using the Poisson binarization method [7] for the ChIP-seq data, the fraction of the genome in the null state was approximately 90% in all ten ENCODE cell types we examined (Table 1). It is well known that for data with a class imbalance, the maximum likelihood criterion will tend to devote more parameters to modeling the large background class (e.g. [32]). This in turn causes lower quality modeling of other biologically more interesting states, since there are fewer parameters devoted to them. For instance, state 20 is unique to the Spectacle solution and does not appear in the ChromHMM solution, yet has the hallmarks of an active enhancer (Figure 2, left panel). Similarly, for another cell type K562, Spectacle found a strong enhancer state that does not appear in the ChromHMM solution (state 20 in Additional file 1: Figure S16, left panel). We have not yet analyzed the theoretical robustness of spectral learning to class imbalance and leave it as a theoretical question for future work but speculate that it may come from the orthogonality condition in the SVD computation; the first singular vector accounts for the null state and the other vectors that account for other chromatin states are constrained to be orthogonal to it. Instead here we performed extensive empirical testing of spectral learning and EM/maximum likelihood on a more balanced epigenomics data set (Additional file 1: Figures S4 to S13, Tables S1, S2 and S3). This data set was more balanced in the sense that the fraction of the genome in the null state was approximately 52%(Table 1). For this data set, we found that maximum likelihood was indeed a good criterion that corresponds to better performance for predicting biological features, and we show that spectral learning was useful as an initializer to a local optimization method for the likelihood, a common use of method of moments estimators in statistics. Table 1 Fraction of null segments for ten ENCODE cell types We note that simple approaches to addressing class imbalance, such as subsampling from the dominant class, which are used in simple classification problems (e.g. using naive Bayes classifiers), are not directly applicable in the HMM context because the length distribution of the chromatin states is important for the model and cannot simply be changed by sampling. Finally, we note that for the Poisson binarization data, when we used the spectral learning parameters to initialize the EM algorithm and ran it to convergence, in all cases the parameters converged to those found by ChromHMM. This indicates that the difference between the chromatin state annotations of ChromHMM and Spectacle are not due to the initialization method used in ChromHMM but rather a difference between the spectral learning and maximum likelihood approaches. Comparison of chromatin state annotations Next we studied the biological significance of the chromatin state annotations produced by the two methods. To do so, we did an in-depth study for three ENCODE Tier 1 cell types: GM12878, H1-hESC and K562. We ran both Spectacle and ChromHMM using eight epigenetic marks for 20 states and annotated the predicted chromatin states from the eight epigenetic marks as promoters (Prom), enhancers (Enh), transcribed regions (Txn) and repressed regions (Repr). We then validated the predicted chromatin states using external biological data sets. 
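For reference, the kind of segment-level validation against external data sets reported below can be computed with boolean masks over the 200-bp segments. The exact overlap rules used in the paper may differ, so the sketch below (with simulated masks and hypothetical sizes) is only illustrative.

```python
# Segment-level precision and recall of a predicted chromatin state against
# an external feature set (e.g. active TSSs), both as boolean masks.
import numpy as np

def precision_recall(predicted_mask, feature_mask):
    tp = np.logical_and(predicted_mask, feature_mask).sum()
    precision = tp / max(predicted_mask.sum(), 1)
    recall = tp / max(feature_mask.sum(), 1)
    return precision, recall

rng = np.random.default_rng(9)
n_segments = 1_000_000
promoter_state = rng.random(n_segments) < 0.01           # hypothetical state calls
active_tss = np.zeros(n_segments, dtype=bool)
active_tss[rng.choice(n_segments, 5_000, replace=False)] = True
print(precision_recall(promoter_state, active_tss))
```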
In the literature, promoters are characterized by high H3K4me3/H3K4me2 and low H3K4me1, while H3K9ac is associated with promoter activity (reviewed in [39,40]). Promoter states are categorized into three states depending on enrichment of H3K4me3/H3K4me2 and H3K9ac, or H3K36me (signal for flanking regions): Prom (strong promoter), PromW (weak promoter) and PromF (promoter flanking). For the GM12878, H1-hESC and K562 ENCODE Tier 1 cell types, we found 5, 3 and 5 Spectacle promoter states and 5, 3 and 4 ChromHMM promoter states, respectively. These segments were associated with active transcription start sites (TSSs) (Table 2). Also, we found that these were highly enriched with relevant biological features including RNA polymerase II binding (Pol2), promoter-proximal regulatory modules (PRMs), CpG islands (CpGIs), DNase 1 hypersensitive sites (DNase) and exonic regions (exons), and tend to be close to active TSSs (Figures 2 and 3, Additional file 1: Figures S14 to S17). These results confirmed the accuracy of our promoter annotations with both Spectacle and ChromHMM. Table 2 Precision and recall for predicting active transcription start sites Next we examined the differences between the ChromHMM and Spectacle annotations. We found that Spectacle predicted an interesting pattern of epigenetic marks, which did not appear in the ChromHMM solution. Specifically, Spectacle state 19 for GM12878 (Figure 2) had a pattern of high H3K4me3, high H3K9ac and low H3K27ac, and was highly enriched with active TSSs (log2 of fold enrichment: 5.7). ChromHMM state 12 (Figure 3) had a similar pattern but slightly lower H3K9ac, and was less enriched with active TSSs (log2 of fold enrichment: 2.6). There was 1, 1 and 0 poised promoter state (PromP; defined by high enrichment of H3K27me3) in Spectacle, and 1, 1 and 0 poised promoter state in ChromHMM for GM12878, H1-hESC and K562, respectively. It has been shown that these bivalent states are mostly found in embryonic stem cells [6] but are found in other cell types as well and are associated with genes involved in development in other cell types [3,41]. In fact, these states are enriched with gene ontology (GO) terms related to development and differentiation according to the GREAT software [42] (see Materials and methods for detailed parameter settings). For GM12878, Spectacle chromatin state 18 was associated with several development and differentiation-related GO terms such as 'digestive tract mesoderm development' (q<4×10−7), whereas ChromHMM chromatin state 19 was not associated with any such GO terms. For H1-hESC, both Spectacle and ChromHMM poised promoter states were associated with development and differentiation-related GO terms, e.g. Spectacle state 18: 'myeloid progenitor cell differentiation' (q<2×10−4) and ChromHMM state 13: 'central nervous system neuron differentiation' (q<5×10−50). Taken together, both methods effectively identified active and inactive promoter states. However, Spectacle found certain biologically relevant chromatin states not found by ChromHMM and we believe this is because maximum likelihood methods tend to use more parameters to model the null chromatin state, which makes up the vast majority of the genome. Enhancers are characterized in the literature by high H3K4me1/H3K4me2 and low H3K4me3, while H3K27ac is associated with enhancer activity (reviewed in [39,40]). 
Enhancer states are categorized into three states depending on enrichment of H3K4me1/H3K4me2 and H3K27ac: Enh (strong enhancer), EnhW (weak enhancer) and EnhP (poised enhancer; with high H3K27me3). There were 9, 9 and 10 enhancer states in Spectacle and 6, 7 and 8 enhancer states in ChromHMM for GM12878, H1-hESC and K562, respectively. These enhancers were associated with distal P300 peaks, gene-distal regulatory modules (DRMs) and VISTA enhancers (Materials and methods) as well (Table 3). They were also highly enriched with Pol2, distal P300, DRM and DNase signals (Figures 2 and 3, Additional file 1: Figures S14 to S17). Table 3 Precision and recall for predicting distal P300, gene-distal regulatory modules and VISTA enhancers Importantly, we found that Spectacle could subclassify active enhancers better than ChromHMM. For instance, for GM12878, most segments in ChromHMM state 9 were labeled with either state 9 or state 20 in Spectacle according to H3K9ac activity. For H1-hESC, some enhancers were poised enhancers with high H3K27me3. There were five Spectacle and one ChromHMM poised enhancer states. Poised enhancer states were also highly enriched with Pol2, distal P300, DRM and DNase signals, similar to active enhancer states (Additional file 1: Figures S14 and S15) [44]. All poised enhancer states found by the two methods were associated with several development and differentiation related GO terms, e.g. Spectacle state 12: 'sensory organ development' (q<1×10−66), Spectacle state 13: 'enteric nervous system development' (q<4×10−6), Spectacle state 16: 'positive regulation of neuron differentiation' (q<6×10−3), Spectacle state 19: 'cell differentiation in spinal cord' (q<3×10−19), Spectacle state 20: 'tissue development' (q<2×10−23) and ChromHMM state 12: 'skeletal system development' (q<1×10−70). We found that Spectacle subclassified enhancers depending on their distance from the TSS. Spectacle states 12, 13 and 19 were closer to the TSS but Spectacle states 16 and 20 were farther from the TSS (Additional file 1: Figures S14 and S15). Taken together, Spectacle found more biologically relevant subclasses of enhancers than ChromHMM. We will discuss the biological significance of these enhancer subtypes further in the next section with regards to GWAS SNPs. Transcribed regions We define transcribed regions or gene bodies based on enrichment with H3K36me3. There were 2, 2 and 1 transcribed region states (Txn) in Spectacle and 2, 1 and 2 transcribed region states in ChromHMM for GM12878, H1-hESC and K562, respectively. We found that all these states were highly enriched with annotated exons. Repressive and quiescent regions The majority of the genome is considered to be in a repressed state (Repr) since there is very low enrichment of histone marks. There are a few segments (usually <0.03% of the genome) that have high enrichment of almost all epigenetic marks. These segments are considered artificial (Art) and are also treated as repressed regions. There were 3, 5 and 4 repressed states in Spectacle and 6, 8 and 6 null (Repr and Art) states in ChromHMM for GM12878, H1-hESC and K562, respectively. Thus ChromHMM devoted more of the parameters to modeling the repressed state than Spectacle, consistent with our observations about class imbalance. Taken together, our chromatin annotation shows that ChromHMM devoted more states to modeling the background repressed chromatin state whereas Spectacle devoted more states to modeling other states, which we have argued are biologically significant. 
Genome-wide association study SNPs are enriched in strong enhancer states Next we investigated if any of the chromatin enhancer states discovered by Spectacle were significantly enriched in disease-associated SNPs discovered in GWASs. To do so, we downloaded SNPs from the NHGRI GWAS catalogue [9]. Since approximately 90% of GWAS SNPs are found in non-coding regions of the genome, epigenomic data has been suggested as a way to give functional interpretation for non-coding GWAS SNPs [45-47]. Chromatin state annotation tools can be used to annotate strong enhancers in the genome and predict possible functions for the disease-causing SNPs as disrupting regulation of specific genes [7,22]. After excluding SNPs in unmapped chromosomes or chrY and phenotypes (diseases or traits) with less than ten SNPs, our final data set consisted of 10,604 non-coding SNPs associated with 360 phenotypes. Overall, we found that the non-coding SNPs were highly enriched in the strong enhancer states predicted by either Spectacle or ChromHMM (χ 2 test: P=0 for all three cell lines). The enrichment in Spectacle strong enhancer states was similar to but slightly higher than in ChromHMM strong enhancer states (log2 of fold enrichment for Spectacle vs ChromHMM: GM12878: 1.35 vs 1.32; H1-hESC: 1.56 vs 1.41; K562: 1.17 vs 1.11; all P>0.05). Next we computed the fold enrichment of the SNPs for each phenotype in each chromatin state separately, since the common use of chromatin state annotations is to consider one GWAS data set at a time. In total, we found 102 phenotypes whose SNPs were most highly enriched in one of the strong enhancer states predicted by either Spectacle or ChromHMM for GM12878. Of these states, Spectacle chromatin state 20, which did not appear in ChromHMM, had the largest number of phenotypes and largest total number of GWAS SNPs, which were most highly enriched in it (Table 4). This result suggests that spectral learning may be able to produce more biologically meaningful chromatin state annotations for mapping GWAS SNPs. Methodologically, this is presumably because the enhancer subclasses account for only a small fraction of the genome, and therefore are not influential in the maximum likelihood solution, but distinctive in terms of their pattern of chromatin marks, and therefore are found by the spectral learning algorithm. More importantly, it suggests that we may have found a biologically meaningful enhancer subtype using our spectral learning approach. Table 4 Most highly enriched phenotypes for GM12878 Further, we looked into phenotypes related to autoimmune diseases. GM12878 is infected with Epstein–Barr virus and it has been shown that infection with the virus is related to the risk of certain autoimmune diseases [48,49]. We found that 16 out of the 21 autoimmune disease-related phenotypes were most highly enriched in strong enhancers predicted by either Spectacle or ChromHMM, and in particular all of the 16 phenotypes in strong enhancer states in Spectacle but not in ChromHMM. Notably, we found that chromatin state 20 in Spectacle (Figure 2) had the highest enrichment of GWAS SNPs for ten out of the sixteen autoimmune diseases including all three phenotypes that were shown to be enriched in strong enhancer states in a previous work [7] (Figure 4 for the three phenotypes studied previously, Additional file 1: Figure S18 for all 16 phenotypes). 
This suggests that this chromatin state, which was found only by Spectacle, might be particularly interesting for mapping causing variants of disease phenotypes in this cell type. Chromatin states enriched with GWAS SNPs for autoimmune diseases for GM12878. Art, artificial; Enh, strong enhancer; EnhW, weak enhancer; GWAS, genome-wide association study; Prom, strong promoter; PromP, poised promoter; PromW, weak promoter; Repr, repressed region; SNP, single nucleotide polymorphism; Txn, transcribed region. The genomic segments labeled with Spectacle chromatin state 20 were mostly found in ChromHMM chromatin state 9. Most genomic segments annotated as enhancer state 9 in ChromHMM were divided into two enhancer states, 9 and 20, in Spectacle according to their enrichment of H3K9ac (Figure 2). This result is biologically significant as it shows that an unbiased analysis of the epigenomic data for this cell type found this particular distinction of chromatin states to be statistically important for explaining the patterns of chromatin marks and it is consistent with previous reports that H3K9ac separates stronger enhancers from weaker enhancers [50]. It would be expected that if a chromatin state were randomly split in two then one of the states might have higher enrichment by chance, but we observed that Spectacle state 20 had much more enriched phenotypes and SNPs than Spectacle state 9 (e.g. 1284 vs 290 SNPs). Our results were robust whether or not we controlled for the well-studied major histocompatibility region (e.g. following [51]), which can skew the statistics we computed. We found similar results on GWAS SNP enrichment analysis for the K562 cell line to those for the GM12878 cell line. There were 98 phenotypes whose SNPs were most highly enriched in one of the strong enhancer states predicted by either Spectacle or ChromHMM for K562, an erythroleukemic cell line. Of these states, chromatin state 20 from Spectacle, which was not found by ChromHMM, had the largest number of phenotypes and largest total number of GWAS SNPs, which were most highly enriched in it over all strong enhancer states found by either method (Additional file 1: Table S5). Like previous analysis, we took the union of all the SNPs for all the phenotypes to account for correlations between the phenotypes (for example, two phenotypes could be similar). Most segments labeled with ChromHMM enhancer state 7 were divided into two Spectacle enhancer states, 7 and 20, according to the enrichment of H3K36me3 (Additional file 1: Figure S16). Notably, five out of the six phenotypes related to erythrocytes were most highly enriched in Spectacle state 20 (Additional file 1: Figure S19). Specifically the enrichment in Spectacle state 20 was higher than the closest strong enhancer state annotated by ChromHMM for the five erythrocyte-related phenotypes related to the cell type we are studying. Additional file 1: Table S5 shows that there are many more phenotypes and SNPs more highly enriched in Spectacle state 20 compared to Spectacle state 7 (e.g. 1,614 SNPs in Spectacle state 20 vs 436 SNPs in Spectacle state 7). This suggests that Spectacle state 20 may be better for identifying potential causal variants in this cell type than other annotated strong enhancers. We repeated the GWAS analysis for the H1-hESC cell type but there was no difference between the strong enhancer state annotations of Spectacle and ChromHMM, so we did not investigate the enrichment of GWAS SNPs further (Additional file 1: Table S4). 
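The per-state fold-enrichment statistic used throughout this section can be sketched as follows: the fraction of GWAS SNPs whose segments carry a given state, divided by the fraction of the genome assigned to that state, on a log2 scale. The exact normalization used in the paper may differ slightly, so the function and toy inputs below are only illustrative.

```python
# log2 fold enrichment of SNPs per chromatin state relative to genomic background.
import numpy as np

def log2_fold_enrichment(state_per_segment, snp_segment_ids, n_states):
    """state_per_segment: state label for each 200-bp segment.
    snp_segment_ids: indices of the segments containing the SNPs of interest."""
    genome_frac = np.bincount(state_per_segment, minlength=n_states)
    genome_frac = genome_frac / genome_frac.sum()
    snp_states = state_per_segment[snp_segment_ids]
    snp_frac = np.bincount(snp_states, minlength=n_states)
    snp_frac = snp_frac / max(snp_frac.sum(), 1)
    with np.errstate(divide="ignore"):
        return np.log2(snp_frac / genome_frac)

# Toy example: 3 states, SNPs concentrated in state 2.
states = np.array([0] * 90 + [1] * 5 + [2] * 5)
snps = np.array([92, 95, 96, 97, 98])
print(log2_fold_enrichment(states, snps, n_states=3))
```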
Analysis of selective constraint at the population genetic and cross-species levels Next we quantified the levels of selective constraint acting on the Spectacle chromatin states at the population genetic and cross-species levels. For each chromatin state we computed the fraction of SNPs with minor allele frequency (MAF) less than 0.1 at the population level, and the mean of phastCons scores at the cross-species level (see more details in Materials and methods). We used these data to analyze the levels of conservation of the different chromatin states (Additional file 1: Tables S7 and S8). In general, we found a reasonable concordance between the population genetic and cross-species measures. We found that promoter and transcribed states tended to be under the strongest negative selection while repressive and quiescent (null) states tended to be under the weakest negative selection, consistent with our biological intuition. Enhancer states tended to be under intermediate levels of negative selection, at least at the SNP level. Among enhancer states, those with epigenomic marks associated with either promoters (H3K4me3) or transcribed regions (H3K36me3) tended to be under stronger negative selection than the other enhancers. Enhancer state 20 in Spectacle, which showed high enrichment for GWAS SNPs for GM12878, was under less negative selection than enhancer state 9 (Mann–Whitney U test: Spectacle state 20 vs Spectacle state 9: P<4.1×10−10). This is a biologically plausible result since disease-causing alleles might be under intermediate levels of selective constraint compared to neutral (i.e. non-functional) alleles and highly constrained (i.e. lethal) alleles. In addition, we noticed that one transcribed state (Spectacle state 15 of GM12878) was one of the most selectively constrained states at the population genetic level but one of the least selectively constrained at the cross-species level. This pattern is consistent with the action of recent species-specific selection. Since this chromatin state does not strongly correspond to cell-type-specific protein-coding exons or long non-coding RNAs (log2 fold enrichment: 1.7 for exons and 1.5 for long non-coding RNAs), it presumably corresponds to a class of lowly expressed and/or poorly understood non-coding transcripts. The action of recent selection on this class of transcripts is again consistent with our biological intuition. Overall, our results also suggest more general principles about the informativeness of different types of functional annotation for fine-mapping GWAS SNPs. We have shown that enhancer states annotated from global epigenomics data are highly enriched for GWAS SNPs despite not being under strong negative selection. We compared Spectacle chromatin state 20 of GM12878 with the most conserved phastCons intergenic regions in terms of GWAS SNP enrichment (we selected the same number of conserved genomic segments as in chromatin state 20). The log2 fold enrichment in Spectacle state 20 for GM12878 vs most conserved regions for all SNPs was 1.87 vs 0.20. Spectacle state 20 of GM12878 had 40 phenotypes with positive log2 fold enrichment and 14 out of the 40 phenotypes were autoimmune-related phenotypes, whereas the most conserved regions had 16 phenotypes with positive log2 fold enrichment and none of them were autoimmune-related phenotypes. Similarly, we compared Spectacle chromatin state 20 of K562 with the most conserved phastCons regions. 
The log2 fold enrichment in Spectacle state 20 of K562 vs most conserved regions for all SNPs was 1.59 vs −0.32. Spectacle state 20 of K562 had 34 phenotypes with positive log2 fold enrichment and 6 out of the 34 phenotypes were erythrocyte-related phenotypes, whereas the most conserved regions had nine phenotypes with positive log2 fold enrichment and none of them were erythrocyte-related phenotypes. These results suggest that epigenomics-based enhancer predictions may be more informative for fine-mapping GWAS SNPs than looking for evolutionarily conserved regions. A similar result was previously found for microRNA binding sites, where predicted conserved microRNA binding sites were shown to be more informative of selective constraint than searching for conserved 3′ UTR regions [52].

Cell-type specificity of chromatin states

Finally, we analyzed the cell-type specificity of chromatin states by performing a combined analysis of epigenomics data from multiple cell types. In a previous approach, epigenomics data from nine human cell types were concatenated and the same HMM parameters were learned for the nine human cell types by running ChromHMM on the virtual long genome [7]. However, there are some disadvantages to characterizing cell-type specificity of chromatin states in this way. First, this approach produces a single set of parameters for all cell types, which might not characterize chromatin states existing in a single cell type (e.g. a bivalent state in an embryonic stem cell [6]). Second, it is non-trivial to interpret the chromatin states of the genomic segments from multiple cell types from the concatenated data sets in a post-processing step since the genome segmentations of the different cell types need to be aligned and compared. The TreeHMM paper [21] suggested a more complicated model that utilized the lineage information between multiple cell types instead of naively concatenating cell types. However, since the model is more complicated, a variational approximation was used for parameter learning, which is technically an unbounded approximation of the EM algorithm and thus may not always give accurate biological results. The model also requires that the relationships between cell types are well described by a tree and more importantly that the tree is known beforehand. We took a different approach of stacking epigenome data sets from multiple cell types to perform a joint analysis of multiple cell types, similar to [24]. By doing so, we can learn a single set of chromatin states with a uniform genome segmentation for the different cell types. We stacked three cell types with eight histone marks each for GM12878 and H1-hESC and seven histone marks for HepG2 (the H3K4me1 mark for HepG2 was not available). There were 79,839 combinations of the 23 histone marks in the genome. If each cell type had seven chromatin states, there would be a total of 7³ = 343 possible combined chromatin states for the three cell types. To improve biological interpretability and running time, we ran Spectacle with 50 combined chromatin states (Additional file 1: Figure S22), which is still a large number that would be slow to run using the EM algorithm. Consistent with previous results [7,21], we found that most enhancers were cell-type specific while promoter states could be cell-type specific or constitutive across cell types. We confirmed that the cell-type-specific enhancer states had a distal P300 signal for only the corresponding cell type and lacked the TSS signal.
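The stacking construction described above amounts to concatenating, for each 200-bp segment, the binarized mark calls of all cell types into one longer binary vector, so that a single HMM and a single segmentation are learned jointly. A minimal sketch with hypothetical array names and sizes:

```python
# "Stacking" binarized mark calls from several cell types column-wise.
import numpy as np

rng = np.random.default_rng(4)
n_segments = 500_000
gm12878 = (rng.random((n_segments, 8)) < 0.05).astype(np.uint8)   # 8 marks
h1hesc  = (rng.random((n_segments, 8)) < 0.05).astype(np.uint8)   # 8 marks
hepg2   = (rng.random((n_segments, 7)) < 0.05).astype(np.uint8)   # 7 marks

stacked = np.hstack([gm12878, h1hesc, hepg2])      # (n_segments, 23)
print("stacked observation dimension:", stacked.shape[1])

# The observation alphabet is again the set of mark combinations that actually
# occur, which stays far below 2**23 in real data.
codes = stacked @ (1 << np.arange(stacked.shape[1], dtype=np.int64))
print("distinct combinations:", np.unique(codes).size)
```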
Promoter states that were either constitutive or cell-type specific had a corresponding enrichment of the TSS signal. Importantly, we found that higher enrichment of non-coding GWAS SNPs in strong enhancer states was specific to the cell-type-specific enhancer states of the GM12878 cell line (Additional file 1: Figure S22, Table S9). Among 132 phenotypes whose SNPs were most highly enriched in one of the 13 enhancer states, 14 phenotypes were for autoimmune diseases and all of them were most highly enriched in one of the enhancer states that are specific to GM12878. We note that running the EM algorithm (e.g. with ChromHMM) with such a large number of hidden chromatin states is computationally inefficient. It took approximately 1.6 hr for Spectacle and approximately 8.5 hr for ChromHMM. This demonstrates that our spectral learning approach facilitates the analysis of epigenome data from multiple cell types. We have developed a practical, robust implementation of a published spectral learning algorithm for HMMs, which we have specially tuned to specific features of epigenomic data. We have implemented our method in a software tool called Spectacle (Spectral Learning for Annotating Chromatin Labels and Epigenomes), which we have tested extensively on human epigenomic data sets though it should be useful for many model organisms as well. We made a number of technical modifications to a previously published algorithm [33], which improved its accuracy and numerical stability on epigenomics data sets. Using the commonly used Poisson binarization of the ENCODE epigenetic mark data, we showed that Spectacle is much faster than ChromHMM for commonly used numbers of chromatin states and epigenetic marks. Furthermore, the overall statistical approach appears to be more robust to class imbalance than the usual maximum likelihood approach, which is an observation we believe to be novel for spectral learning. We support our empirical observations on class imbalance by testing our method extensively on a set of broad peak epigenetic mark data (Additional file 1: Figures S4 to S13, Tables S1, S2 and S3). These data are not commonly used in biological applications but they are useful for demonstrating the utility of our program on a data set that exhibits more class balance. For this data set, we showed that the likelihood appears to be a good optimization criterion for retrieving biologically relevant features. In this case, the spectral learning approach and local likelihood optimization methods, such as EM, are complementary. Spectral learning can be used to initialize the EM algorithm, which is a common use of method of moments estimators in statistics. In this context, we show that the spectral learning initializer usually outperforms a previously published initialization heuristic in terms of finding higher likelihoods, faster running time and higher accuracy for several independent biological data sets. This observation may be useful for chromatin state annotation in model organisms that have proportionally more coding DNA and non-coding functional regions than humans and therefore might have epigenome data sets that are more class balanced. Overall, our software implementation is freely available online and is lightweight and easy to use on a regular desktop without the need for specialized computer hardware. Our code modifies the ChromHMM code, which has been used by several experimental groups, so we believe it will be user-friendly and accessible. 
Importantly, we show that although the chromatin state annotations produced by Spectacle are similar overall to those produced by ChromHMM for two out of three ENCODE Tier 1 cell types, Spectacle found enhancer subtypes that were significantly more enriched in GWAS SNPs for relevant diseases than the enhancer states found by ChromHMM. Furthermore, the fast running time of Spectacle for high numbers of chromatin states facilitated the analysis of combined epigenome data sets from multiple cell types and we found that GWAS SNPs were enriched in these cell-type-specific enhancer states. Such a refinement of the chromatin state annotations will be important for downstream biological applications as the field moves towards fine-mapping the causal variants of complex human diseases. Our population genetic and cross-species analyses of selective constraint on the Spectacle chromatin states not only revealed interesting evolutionary patterns but also suggested that enhancer predictions from epigenomics data may be more informative for fine-mapping GWAS SNPs than evolutionary conservation. We note that GWAS SNPs tend to be common in the population so it is not surprising that they are not highly enriched in evolutionarily conserved regions. For rare disease variants, the results could be different. The refinements to the spectral learning algorithm we have described can be applied to multiple problems in computational biology and should be of broader interest. To our knowledge we have used spectral learning for one of the first times to learn HMM parameters explicitly for a problem in computational biology. A recent work used spectral learning for poly(A) motif prediction [53]. The authors did not try to recover the HMM parameters explicitly but instead learned them up to an unknown invertible linear transformation and used the transformed parameters as features for classification by a support vector machine. Another recent paper [54] applied a spectral learning algorithm for contrastive learning to a problem involving epigenome maps. This is a more restricted version of the problem we study here in which one is specifically contrasting two data sets instead of annotating a single data set. HMMs and the EM algorithm are used in many other problems in computational biology for problems as diverse as gene finding, modeling linkage disequilibrium, predicting enhancers or microRNA binding sites, and detecting copy number variation from array CGH data. Thus, it is very possible that the methods described here may be useful in those settings as well. We provide an implementation of our method. It uses Python sparse matrix libraries to allow users to analyze a large number of chromatin marks, for example in the Roadmap Epigenomics Project. It is also possible to reduce the dimension of the observation space in the HMM using principal component analysis in a similar way to [27]. For further technical improvement, one might utilize recent developments in approximate, randomized linear algebra (e.g. using random linear projections and QR decompositions for fast SVD computations) and we leave a full exploration for future work. In addition, we stress that our overall modeling approach is fundamentally different from previous methods such as ChromHMM, which assumed independence between chromatin marks. Instead we consider all possible combinations of epigenetic marks and discover arbitrary interactions between chromatin marks for which there is statistical support. 
Since researchers are only at an early stage of exploring the full biological complexity of the epigenetic code, we believe this is a useful and important aspect of our approach. Here we have presented a new software tool, Spectacle, for annotating chromatin states in the genome from epigenome maps, such as those produced by ENCODE. We have developed a practical implementation of a spectral learning approach for HMMs that was previously mainly discussed in the theoretical literature and should be of broad interest for other computational biology problems. We have also demonstrated that the approach is faster and more robust to class imbalance than the more commonly used EM approach. In particular, for two out of three ENCODE Tier 1 cell types, we show that the highest enrichment of cell-type-related GWAS SNPs is in an enhancer state only found in Spectacle and not in ChromHMM. Furthermore, for both of these cell types, the enrichment of GWAS SNPs in these enhancer subtypes inferred from epigenome maps was much higher than in evolutionary conserved intergenic regions. This result may suggest a more general principle for future fine-mapping of GWAS SNPs. We also found that most enhancers appear to be cell-type specific based on our analysis of combined epigenomics data from multiple cell types and we found that GWAS SNPs were highly enriched in these cell-type-specific enhancer states. In the future, we believe that having a faster and more robust chromatin state annotation tool should be useful for annotating multiple epigenomic maps. Previous works found associations between disease-causing variants and epigenomic marks [8,14,55,56], suggesting that a better understanding of the epigenome might help interpret the variants underlying human disease. For example, [14] used ChromHMM to infer variation in chromatin states across individuals, so we expect Spectacle to be useful for other similar types of data sets in the future. Indeed there are currently several major epigenomics projects (e.g., BLUEPRINT and the Roadmap Epigenomics Project) producing chromatin mark data for many cell types and human populations [11,12]. Given the rapid decrease in the cost of sequencing, we also expect that many more epigenomic maps will be produced for different cell types, human populations [14], species [13], environmental conditions and developmental contexts [7,57]. Thus we expect that the need for fast and robust tools for processing this type of data will continue to grow in the future. To identify functional elements from epigenomic data, there are two broad classes of approaches: supervised and unsupervised learning. Supervised learning associates patterns of epigenetic modifications with known classes of functional elements, such as enhancers, promoters and non-coding RNAs. It generally has good performance for known classes of functional elements but it requires the availability of independently validated examples and it cannot discover new classes of chromatin states [5,16,25,43,58]. In contrast, unsupervised learning discovers patterns of epigenetic modifications for each chromatin state directly from the data [15,18,59-62]. Here we take an unsupervised approach as we believe that the current state of the field is such that more research is needed to understand better the variety of possible chromatin states and the specific epigenetic marks underlying them. Indeed the results presented in this paper suggest that additional enhancer subtypes remain to be discovered. 
One method, ChromHMM [15,62], uses a HMM to model epigenomic data where each chromosome is segmented into non-overlapping regions of 200 bp and each segment has a binary value representing the presence or absence of each epigenetic mark. Each segment is assumed to be in a hidden chromatin state, which defines a distribution over combinations of epigenetic marks. To learn the HMM parameters, ChromHMM uses the EM algorithm [28], also called the Baum–Welch algorithm in the context of HMMs. To run the EM algorithm, instead of initializing the parameters randomly, ChromHMM provides a heuristic initialization method called the information method [62]. It then runs the EM algorithm to convergence, that is, until the difference between the likelihood of the current iteration and that of the previous iteration is less than 0.001. Regardless of convergence, the maximum number of iterations was set to 200. We ran ChromHMM for training using four multiprocessors in parallel on a desktop. Another method, Segway [18], uses a generalization of HMMs called dynamic Bayesian networks to model chromatin states. For example, Segway models the length distribution of each chromatin state by adding a hidden state called a countdown variable. Segway has high spatial resolution since it uses a segment size of 1 bp [22]. However, it is much slower than ChromHMM because of the high resolution and the number of parameters in the model. Nevertheless, the performance of Segway on biological data sets is similar to ChromHMM [22]. Since the state space of Segway is much larger than ChromHMM, it might be harder to find the global optimum of the likelihood. Currently, Segway cannot be run on a desktop but needs to be run on a compute cluster. The self-organizing map (SOM) model [24] gives a finer resolution analysis of the patterns of chromatin marks. It takes a genome segmentation as input (e.g. using stacked chromatin marks from multiple cell types and running ChromHMM), discovers at least 1,000 distinct chromatin states, and visualizes the chromatin states in a two-dimensional map. Similar chromatin states are located close to each other in the map and so the SOM method is able to find interesting clusters of chromatin states. The SOM model does not need to fix the number of chromatin states ahead of time but allows a larger number of chromatin states. This fine-resolution analysis was able to find clusters enriched in specific GO terms. It is also not obvious how to focus on particular chromatin states in the map since there are so many chromatin states. Instead the user is encouraged to use the map interactively. In general, the SOM model is complementary to the Spectacle software, and Spectacle can be used instead of ChromHMM in the initial genomic segmentation. In addition, there are a few other methods that are more focused on different aspects of analyzing epigenomic data, such as enabling joint analysis of multiple data sets (jMOSAiCS [26]) and incorporating lineage information between cell types (TreeHMM [21]). As described above, many methods use HMMs to model the chromatin states and these methods differ mainly in the way they model the epigenetic mark data. For instance, some methods discretize the data while others fit Gaussian distributions to the data. Here we did not explore different approaches to modeling the underlying ChIP-seq mark data but rather focus on exploring the properties of spectral learning in the context of chromatin state annotation. 
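For readers who want a concrete reference point for the EM training loop described above for ChromHMM, the sketch below runs Baum–Welch on a categorical-observation HMM with the same kind of stopping rule (likelihood improvement below 0.001 or at most 200 iterations). It uses the hmmlearn package (recent versions provide CategoricalHMM) purely as a generic stand-in; it is not ChromHMM, and the random input is only a placeholder for binarized mark combinations.

```python
# Generic Baum-Welch (EM) training of a categorical-observation HMM.
import numpy as np
from hmmlearn.hmm import CategoricalHMM

rng = np.random.default_rng(5)
n_states, n_obs, T = 20, 256, 20_000
obs = rng.integers(0, n_obs, size=(T, 1))          # observation codes per segment
lengths = [T]                                       # one concatenated "chromosome"

model = CategoricalHMM(n_components=n_states, n_iter=200, tol=1e-3,
                       random_state=0)
model.fit(obs, lengths)
print("converged:", model.monitor_.converged,
      "iterations:", model.monitor_.iter)
```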
We present our results comparing Spectacle with ChromHMM in the main text and our results comparing with other methods in Additional file 1.

Hidden Markov models and spectral learning

We discuss spectral learning from the viewpoint of theoretical machine learning and the few attempts to apply spectral learning in other applied fields in Additional file 1. Here we describe our practical implementation of spectral learning for the chromatin state annotation problem.

Description of the hidden Markov model

We use HMMs to represent the chromatin states as hidden states and all possible combinations of (binarized) epigenetic marks as observations. The whole genome is divided into segments of size 200 bp following [15,62]. We define the HMM in matrix form as follows. Note the following conventions for matrix and vector indices in this paper: for a matrix M, M[i,j] denotes the element in the ith row and the jth column, and for a vector v, v[i] denotes the ith element. Let K be the number of hidden chromatin states and N be the number of possible combinations of epigenetic marks (i.e. \(N = 2^M\) where M is the number of epigenetic marks). Let A be the state transition matrix where A[i,j] is the probability of transition from state j to state i for 1≤i,j≤K. Let O be the emission matrix where O[i,j] is the probability of observing the ith combination of the epigenetic marks in state j, where 1≤i≤N and 1≤j≤K. Let π be the initial state distribution vector where π[i] is the probability of state i in the first segment of each chromosome for 1≤i≤K. To simplify the description of the method, the whole genome is considered to be one chromosome by concatenating all chromosomes. Let T be the number of segments in the genome. Let \(x_t\) be the observation at the tth segment and let \(x_{t_1:t_2}\) represent \(x_{t_1},x_{t_1+1},\ldots,x_{t_2}\) for \(t_1 \leq t_2\). Given the HMM parameters, θ=(A,O,π), the likelihood of an observed sequence is:
$$\begin{aligned} P(x_{1:T}|\theta) &= \sum_{h_{1},h_{2},\ldots,h_{T}}{P\left(x_{1:T},h_{1:T}|\theta\right)} \\ &= \mathbf{1}_{K}^{T}{AO}_{x_{T}}\ldots {AO}_{x_{2}}{AO}_{x_{1}}\pi \\ &= \mathbf{1}_{K}^{T}B_{x_{T}}\ldots B_{x_{2}}B_{x_{1}}\pi \end{aligned}$$
where \(h_t\) in the first equation is the hidden state at the tth segment and the summation is taken over all possible sequences of hidden states. Here \(O_i = \text{diag}(O[i,1],O[i,2],\ldots,O[i,K])\), i.e. \(O_i\) is the matrix diagonalizing the ith row of O, with \(O_i[j,j]=O[i,j]\) for 1≤i≤N and 1≤j≤K and all off-diagonal elements of \(O_i\) equal to zero. \(B_i\) is called the observable operator [63], defined as \(B_i = AO_i\), and \(\mathbf{1}_{K}^{T}\) is the row vector [1,1,…,1] of size K.
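As a concrete reading of the matrix expression above, the likelihood can be evaluated by repeatedly applying the observable operators \(B_x = AO_x\) to π, with running renormalization to avoid numerical underflow on a genome-length sequence. The sketch below uses the same index conventions (columns of A and O sum to one) and random toy parameters:

```python
# Evaluating the HMM likelihood with observable operators B_x = A @ diag(O[x, :]).
import numpy as np

def log_likelihood(obs, A, O, pi):
    """A: (K, K) with A[i, j] = P(state i | state j); O: (N, K) emissions;
    pi: (K,) initial distribution; obs: sequence of observation indices."""
    v = pi.copy()
    loglik = 0.0
    for x in obs:
        v = A @ (O[x] * v)           # apply B_x = A @ diag(O[x, :])
        s = v.sum()                  # scale to prevent underflow
        loglik += np.log(s)
        v /= s
    return loglik                    # log of 1_K^T B_{x_T} ... B_{x_1} pi

# Tiny random example with K = 3 states and N = 4 observation symbols.
rng = np.random.default_rng(6)
A = rng.random((3, 3)); A /= A.sum(axis=0, keepdims=True)   # columns sum to 1
O = rng.random((4, 3)); O /= O.sum(axis=0, keepdims=True)
pi = np.full(3, 1 / 3)
obs = rng.integers(0, 4, size=1000)
print(log_likelihood(obs, A, O, pi))
```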
Estimating the hidden Markov model parameters using spectral learning

The original paper [33] does not attempt to estimate the parameters directly as this can be unstable. We use a method adapted from [64] for inferring general phylogenetic tree models (which include HMMs as a special case) and improve the method in a deterministic and principled way using major observations (see further discussion in Additional file 1). Given all observed consecutive triples of observations in the genome, \((x_t,x_{t+1},x_{t+2})\), 1≤t≤T−2, the marginal probabilities of observing the counts of singletons, pairs and triples in the data (the moments in the method of moments) are defined in vector and matrix form as follows:
$$\begin{aligned} P_{1}[i] &= \text{Pr}[x_{t} = i], \quad 1 \leq i \leq N \\ P_{2,1}[i,j] &= \text{Pr}[x_{t+1}=i,x_{t}=j], \quad 1 \leq i,j \leq N \\ P_{3,x,1}[i,j] &= \text{Pr}[x_{t+2}=i,x_{t+1}=x,x_{t}=j], \quad 1 \leq i,x,j \leq N \\ P_{3,1}[i,j] &= \sum_{1 \leq x \leq N}{P_{3,x,1}[i,j]} \\ &= \text{Pr}[x_{t+2}=i,x_{t}=j], \quad 1 \leq i,j \leq N. \end{aligned}$$
Note that the collection of triple counts (the third moment) is actually a third-order tensor, but for computational reasons we represent it by a collection of matrices indexed by the middle observation x, where each matrix corresponds to a slice through the tensor. Hsu et al. [33] showed that we can infer the HMM parameters from these marginal probabilities as follows. Let U be the N×K matrix of the top K left singular vectors (computed by the SVD) of \(P_{3,1}\). Intuitively, U is a surrogate for the observation matrix O. We computed the following matrix \(C_x\) for each observation x by expressing the sample moments in terms of the parameters, \(P_{3,x,1}=OAO_{x}A\,\text{diag}(\pi)O^{T}\) and \(P_{3,1}=OAA\,\text{diag}(\pi)O^{T}\):
$$C_{x} := \left(U^{T}P_{3,x,1}\right)\left(U^{T}P_{3,1}\right)^{+} = \left(U^{T}OA\right)O_{x}\left(U^{T}OA\right)^{-1} \qquad (1)$$
where \(M^{+}\) denotes the Moore–Penrose pseudoinverse of M. Since \(O_x\) is a diagonal matrix, it is easily seen that the columns of \(U^{T}OA\) are the eigenvectors of \(C_x\) and the diagonal elements of \(O_x\) are exactly the eigenvalues. We will discuss how to compute the eigenvectors for epigenomic data below. For now, suppose that the eigenvectors, \(U^{T}OA\), are given. Then for each observation x,
$$\left(U^{T}OA\right)^{-1}C_{x}\left(U^{T}OA\right) = \left(U^{T}OA\right)^{-1}\left(U^{T}OA\right)O_{x}\left(U^{T}OA\right)^{-1}\left(U^{T}OA\right) = O_{x}. \qquad (2)$$
Thus we can infer the emission matrix elements for x, and the emission matrix O is inferred by combining all the \(O_x\)'s. Note that we cannot infer the emission matrix elements directly from the eigenvalues computed from the matrix \(C_x\) above because we do not know the order of the eigenvalues. We circumvent this problem by using the fact that the eigenvectors are the same for all observations. Given the emission probabilities, the remaining parameters of the HMM, π and A, are easily computed by expressing the sample moments in terms of the parameters, \(P_{1}=O\pi\) and \(P_{2,1}=OA\,\text{diag}(\pi)O^{T}\), together with the assumption that A, O and diag(π) are rank K (i.e. full rank), as follows:
$$\pi = O^{+}P_{1}. \qquad (3)$$
$$\begin{aligned} A &= \left(O^{+}O\right)A\left(\text{diag}(\pi)\,\text{diag}(\pi)^{-1}\right) \\ &= \left(O^{+}O\right)A\,\text{diag}(\pi)\left(O^{+}O\right)^{T}\text{diag}(\pi)^{-1} \\ &= O^{+}\left(OA\,\text{diag}(\pi)O^{T}\right)\left(O^{+}\right)^{T}\text{diag}(\pi)^{-1} \\ &= O^{+}P_{2,1}\left(O^{+}\right)^{T}\text{diag}(\pi)^{-1}. \end{aligned} \qquad (4)$$
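In practice, the moment matrices above are estimated simply by counting consecutive singletons, pairs and triples along the concatenated genome. The sketch below uses sparse matrices, since most pairs of observations never co-occur; the alphabet size and the random sequence are placeholders.

```python
# Empirical estimates of P1, P21, P31 and the slices P3x1 from one long sequence.
import numpy as np
import scipy.sparse as sp

def empirical_moments(obs, N):
    T = len(obs)
    P1 = np.bincount(obs, minlength=N) / T
    # P21[i, j] = Pr[x_{t+1} = i, x_t = j]
    P21 = sp.coo_matrix((np.ones(T - 1), (obs[1:], obs[:-1])),
                        shape=(N, N)).tocsr() / (T - 1)
    # P3x1[x][i, j] = Pr[x_{t+2} = i, x_{t+1} = x, x_t = j], one slice per x
    P3x1 = {}
    for x in np.unique(obs[1:-1]):
        mask = obs[1:-1] == x
        rows, cols = obs[2:][mask], obs[:-2][mask]
        P3x1[x] = sp.coo_matrix((np.ones(mask.sum()), (rows, cols)),
                                shape=(N, N)).tocsr() / (T - 2)
    P31 = sum(P3x1.values())          # Pr[x_{t+2} = i, x_t = j]
    return P1, P21, P31, P3x1

obs = np.random.default_rng(7).integers(0, 50, size=200_000)
P1, P21, P31, P3x1 = empirical_moments(obs, N=50)
print(P31.shape, len(P3x1))
```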
The original algorithm of [33] assumed that we have access to many short samples from the HMM. When adapting the algorithm to a few long samples (e.g. the chromosomes in the epigenomic data), we found that the distribution of initial observations was quite different from the distribution of all observations. Estimating the initial state distribution π from the distribution of all observations \(P_1\) therefore introduces a significant amount of noise. Therefore, we modified \(P_1\) to \(P^{\text{init}}_{1}\), which is the distribution over the first segment of all the chromosomes, and we slightly modified Equation 3 as
$$\pi = O^{+}P^{\text{init}}_{1}.$$
More significantly, we empirically found that most entries of π are close to zero, so taking the inverse of π in the calculation of A would introduce noise. Thus, we modified the computation of A to remove the dependence on π as follows:
$$\begin{aligned} A &= A\left(A\,\text{diag}(\pi)O^{T}\right)\left(A\,\text{diag}(\pi)O^{T}\right)^{+} \\ &= \left(\left(O^{+}O\right)A\left(A\,\text{diag}(\pi)O^{T}\right)\right)\left(\left(O^{+}O\right)\left(A\,\text{diag}(\pi)O^{T}\right)\right)^{+} \\ &= \left(O^{+}P_{3,1}\right)\left(O^{+}P_{2,1}\right)^{+}. \end{aligned}$$

Computing the eigenvectors using major observations

It is not easy to compute the eigenvectors correctly because of noise in the data. If we compute \(U^{T}OA\) from the matrix \(C_x\) for each observation, we will generally get a different \(U^{T}OA\) for each observation in practice. Thus, we used a method based on our empirical observations about the epigenomic data. For each state, we identified the major (i.e. most frequently occurring) observation in the genome segments labeled with that state by taking the corresponding singular vectors from the matrix U and using it to perform the eigendecomposition. Note that this approach differs from that of [65], which required a much stronger anchor word assumption, where for each chromatin state there is some epigenetic mark combination that occurs only in that state and in none of the others. For each hidden state i, we picked the major observation \(x' = \text{argmax}_{x}\,U[x,i]^2\), analogously to a previous method for learning topic models from text documents [34]. Empirically we found that the major observation tends to appear in its chromatin state much more often than in the other chromatin states, but we stress that this is not required by our method. Then we computed the eigenvectors from \(C_{x'}\) and extracted the single eigenvector corresponding to the largest eigenvalue, which in practice was usually well separated from the other eigenvalues. We did this for all states separately and combined the eigenvectors into a single matrix. In addition to picking a major observation for each state i, we also tried computing a weighted summation of the \(C_x\)'s, weighted by \(U[x,i]^2\), for computing the eigenvectors. If the major observation makes \(U[x,i]^2\) close to 1, the result of taking the sum will be very similar to our method of taking the max, which is what we indeed observed.

Handling noisy data

Since the observation data are noisy, some numerical issues can occur when computing the SVD and eigendecomposition. Previously, [66] described implementation issues in using spectral learning for a different latent variable model in natural language processing. We implemented several optimizations to solve the analogous issues for HMMs.

Handling negative probabilities

The estimated probabilities should be non-negative, but the estimated parameters can have negative values because the signs can be flipped while performing the numerical computations. Therefore, we took the absolute value of the estimated parameters and normalized the parameters following [66].
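Putting the pieces of this section together, a simplified sketch of the recovery steps is given below: form \(C_x\), pick a major observation per state from U, take its leading eigenvector to assemble an estimate of \(U^TOA\), read off the emissions, and then compute π and A with the modified formulas above, applying only the absolute-value and normalization step just described. This is a bare-bones illustration, not the Spectacle implementation; it expects dense moment matrices (e.g. the sparse estimates from the previous sketch converted with .toarray()) and omits the smoothing and other safeguards discussed elsewhere.

```python
# Simplified spectral recovery of HMM parameters from dense moment matrices.
import numpy as np
from numpy.linalg import pinv, inv, eig

def spectral_hmm(P1_init, P21, P31, P3x1, K):
    """P3x1 maps an observation index x to its dense N x N slice.
    P1_init is the empirical distribution of first-segment observations."""
    U, _, _ = np.linalg.svd(P31)
    U = U[:, :K]                                    # top-K left singular vectors of P31
    proj = pinv(U.T @ P31)
    C = {x: (U.T @ M) @ proj for x, M in P3x1.items()}

    # One "major observation" per state, then the leading eigenvector of its C_x.
    majors = [max(C, key=lambda x: U[x, i] ** 2) for i in range(K)]
    R = np.zeros((K, K))                            # estimate of U^T O A, one column per state
    for i, x in enumerate(majors):
        w, V = eig(C[x])
        R[:, i] = np.real(V[:, np.argmax(np.real(w))])
    Rinv = inv(R)

    # Emissions: O[x, j] is the j-th diagonal entry of R^{-1} C_x R.
    N = P31.shape[0]
    O = np.zeros((N, K))
    for x, Cx in C.items():
        O[x] = np.abs(np.diag(Rinv @ Cx @ R))
    O /= np.maximum(O.sum(axis=0, keepdims=True), 1e-12)

    # pi and A from the modified formulas, with absolute values and normalization.
    Opinv = pinv(O)
    pi = np.abs(Opinv @ P1_init)
    pi /= pi.sum()
    A = np.abs((Opinv @ P31) @ pinv(Opinv @ P21))
    A /= A.sum(axis=0, keepdims=True)
    return A, O, pi
```

The recovered states are identifiable only up to permutation, so downstream annotation (as in Figures 2 and 3) is what assigns biological labels to them.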
Parameter adjustment

Although we estimated the HMM parameters using spectral learning (Equation 1), these estimates can sometimes be improved slightly by using the estimates from spectral learning as an initializer to the EM algorithm. We did not use this approach for the results reported in the main text but did so for the results with the Scripture binarization reported in Additional file 1, and this option is provided for users of the software.

Smoothing of observation matrices

Finally, although we did not use this optimization for the biological results reported here, our software allows for smoothing the observed pairs and triples matrices to address noise and data sparsity. Intuitively, our smoothing method is similar to adding pseudocounts to sparse data matrices except that it uses the marginal frequencies of the observations instead of a uniform pseudocount. We provide this option for users who have noisier data than the ENCODE data.

Data sets and experimental settings

For the epigenetic modification data, we used eight histone marks (H3K4me1, H3K4me2, H3K4me3, H3K9ac, H3K27ac, H3K27me3, H3K36me3 and H4K20me1) for eight ENCODE cell types (GM12878, H1-hESC, HMEC, HSMM, HUVEC, K562, NHEK and NHLF) [4] from the processed data from ENCODE [67]. For two additional ENCODE cell types, HeLa-S3 and HepG2, the processed data for the histone mark H3K4me1 were not available and so we used seven histone marks for those two cell types. We note that H3K4me2 can be used instead of H3K4me1 for identifying enhancers, so we expect all of the ten cell types we tested to give meaningful biological results. The processed ChIP-seq data from ENCODE were binarized following the Poisson binarization method of [7]. From the binarization, our final data set consisted of presence/absence calls for each histone mark for each segment, where the human reference genome was divided into 15,181,508 segments of size 200 bp. The observation space consisted of all non-zero combinations of epigenetic marks and in our case the number of possible observations was 2⁸ = 256. We fixed the number of chromatin states to 20 unless stated otherwise, following the suggestion of previous studies (e.g. [7]). This number of chromatin states is readily interpretable biologically and allows us to compare our annotations with previously published annotations. For further biological analysis, we focused on three ENCODE Tier 1 cell types (GM12878, H1-hESC and K562) and we used external biological data sets to validate our inferred chromatin states. GM12878 is a B-lymphocyte cell line infected by the Epstein–Barr virus, H1-hESC is a human embryonic stem cell line and K562 is an erythrocytic leukemia cell line. We used active TSS data from Cap Analysis of Gene Expression (CAGE) and polyA RNA-seq data, both from ENCODE [68] (version 10, May 2012). RNA polymerase II (Pol2), P300 ChIP-seq and DNase I hypersensitivity data were also used. We used long non-coding RNAs from [69], and conserved enhancer regions from the VISTA enhancer browser [70]. For distal P300 peaks, we removed peaks within ±2.5 kb of the above TSS data. Also, we used PRMs and DRMs derived from transcription-related factor binding data [43]. All of the data sets except the VISTA enhancer data were specific to the cell type that we used for training the HMM. We excluded genomic segments (approximately 0.3% of the genome) in the Duke excludable regions following [22].

Gene ontology term analysis

We used the program GREAT [42] to identify over-represented GO terms for regulatory elements.
For promoters, we tested segments in a promoter state against all segments in all promoter states as background (parameters: proximal was 1 kb upstream and 1 kb downstream and no distal). For enhancers, we tested segments with distal P300 signal in an enhancer state against all segments with distal P300 signal in all enhancer states as background (parameters as default: proximal was 5 kb upstream and 1 kb downstream and distal was up to 1,000 kb). Conservation analysis We quantified the levels of selective constraint acting on the Spectacle chromatin states at the population genetic and cross-species levels. To analyze population genetic levels of constraint, we computed the MAFs of all SNPs from the European populations in the 1000 Genomes Project [71], processed as previously described [72]. We chose to use the MAF statistic instead of rooting the SNPs (e.g. with the chimpanzee allele) to avoid complications with multiple mutations at highly mutable sites (e.g. CpG sites) and because the extent of positive selection in humans is expected to be low due to the low effective population size. Thus the MAF is expected to be a reasonable indicator of the strength of negative selection on the SNPs. To minimize the effects of very rare variations, which are likely to reflect primarily the nucleotide mutational process, we removed SNPs with MAF less than 0.01, leaving 16,455,987 SNPs. For each chromatin state, we quantified the level of selective constraint by computing the fraction of low-frequency SNPs in that chromatin state, specifically those with MAF less than 0.1. We compared this with a background set of SNPs consisting of the 11,015,766 SNPs not contained in genes annotated by RefSeq, following [72]. To quantify the level of evolutionary conservation across species, we downloaded phastCons scores for the primate subset of the 46 vertebrate species from the UCSC Genome Browser (phastCons46way.primates) [73]. For each segment in the genome, we computed the mean phastCons score of its 200 bases as the conservation score of the segment. For each chromatin state, we quantified the cross-species level of selective constraint by computing the mean of phastCons scores of all bases for all segments in that chromatin state.
bp: base pair; ChIP-seq: chromatin immunoprecipitation sequencing; CpGI: CpG island; DRM: gene-distal regulatory module; Enh: strong enhancer; EnhP: poised enhancer; EnhW: weak enhancer; GWAS: genome-wide association study; HMM: hidden Markov model; kb: kilobase; MAF: minor allele frequency; Pol2: RNA polymerase II binding; PRM: promoter-proximal regulatory module; Prom: strong promoter; PromF: promoter flanking; PromP: poised promoter; PromW: weak promoter; Repr: repressed region; SNP: single nucleotide polymorphism; SOM: self-organizing map; SVD: singular value decomposition; TSS: transcription start site; Txn: transcribed region; UTR: untranslated region
Rivera CM, Ren B. Mapping human epigenomes. Cell. 2013; 155:39–55. Maze I, Noh KM, Soshnev AA, Allis CD. Every amino acid matters: essential contributions of histone variants to mammalian development and disease. Nat Rev Genet. 2014; 15:259–71. Chen T, Dent SYR. Chromatin modifiers and remodellers: regulators of cellular differentiation. Nat Rev Genet. 2014; 15:83–106. ENCODE Project Consortium. An integrated encyclopedia of DNA elements in the human genome. Nature. 2012; 489:57–74. Guttman M, Amit I, Garber M, French C, Lin MF, Feldser D, et al. Chromatin signature reveals over a thousand highly conserved large non-coding RNAs in mammals. Nature.
2009; 458:223–7. Article PubMed Central CAS PubMed Google Scholar Bernstein B, Mikkelson A, Xie X, Kamal M, Huebert D, Cuff J, et al. A bivalent chromatin structure marks key developmental genes in embryonic stem cells. Cell. 2006; 125:315–26. Ernst J, Kheradpour P, Mikkelsen TS, Shoresh N, Ward LD, Epstein CB, et al. Mapping and analysis of chromatin state dynamics in nine human cell types. Nature. 2011; 473:43–9. Maurano MT, Humbert R, Rynes E, Thurman RE, Haugen E, Wang H, et al. Systematic localization of common disease-associated variation in regulatory DNA. Science. 2012; 337:1190–5. Hindorff LA, Sethupathy P, Junkins HA, Ramos EM, Mehta JP, Collins FS, et al. Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proc Nat Acad Sci USA. 2009; 106:9362–7. International Human Epigenome Consortium. http://ihec-epigenomes.org/. Adams D, Altucci L, Antonarakis SE, Ballesteros J, Beck S, Bird A, et al. BLUEPRINT to decode the epigenetic signature written in blood. Nat Biotechnol. 2012; 30:224–6. Bernstein BE, Stamatoyannopoulos JA, Costello JF, Ren B, Milosavljevic A, Meissner A, et al. The NIH Roadmap Epigenomics Mapping Consortium. Nat Biotechnol. 2010; 28:1045–8. Xiao S, Xie D, Cao X, Yu P, Xing X, Chen CC, et al. Comparative epigenomic annotation of regulatory DNA. Cell. 2012; 149:1381–92. Kasowski M, Kyriazopoulou-Panagiotopoulou S, Grubert F, Zaugg JB, Kundaje A, Liu Y, et al. Extensive variation in chromatin states across humans. Science. 2013; 342:750–2. Ernst J, Kellis M. Discovery and characterization of chromatin states for systematic annotation of the human genome. Nat Biotechnol. 2010; 28:817–25. Filion GJ, van Bemmel JG, Braunschweig U, Talhout W, Kind J, Ward LD, et al. Systematic protein location mapping reveals five principal chromatin types in Drosophila cells. Cell. 2010; 143:212–24. Kharchenko PV, Alekseyenko AA, Schwartz YB, Minoda A, Riddle NC, Ernst J, et al. Comprehensive analysis of the chromatin landscape in Drosophila melanogaster. Nature. 2011; 471:480–5. Hoffman MM, Buske OJ, Wang J, Weng Z, Bilmes JA, Noble WS. Unsupervised pattern discovery in human chromatin structure through genomic segmentation. Nat Methods. 2012; 9:473–6. Shen Y, Yue F, McCleary DF, Ye Z, Edsall L, Kuan S, et al. A map of the cis-regulatory sequences in the mouse genome. Nature. 2012; 488:116–20. Wang J, Lunyak VV, Jordan IK. Chromatin signature discovery via histone modification profile alignments. Nucleic Acids Res. 2012; 40:10642–56. Biesinger J, Wang Y, Xie X. Discovering and mapping chromatin states using a tree hidden Markov model. BMC Bioinformatics. 2013; 14:S4. PubMed Central PubMed Google Scholar Hoffman MM, Ernst J, Wilder SP, Kundaje A, Harris RS, Libbrecht M, et al. Integrative annotation of chromatin elements from ENCODE data. Nucleic Acids Res. 2013; 41:827–41. Lai WKM, Buck MJ. An integrative approach to understanding the combinatorial histone code at functional elements. Bioinformatics. 2013; 29:2231–7. Mortazavi A, Pepke S, Jansen C, Marinov GK, Ernst J, Kellis M, et al. Integrating and mining the chromatin landscape of cell-type specificity using self-organizing maps. Genome Res. 2013; 23:2136–48. Won KJ, Zhang X, Wang T, Ding B, Raha D, Snyder M, et al. Comparative annotation of functional regions in the human genome using epigenomic data. Nucleic Acids Res. 2013; 41:4423–32. Zeng X, Sanalkumar R, Bresnick EH, Li H, Chang Q, Keles S. jMOSAiCS joint analysis of multiple ChIP-seq datasets. Genome Biol. 2013; 14:R38. 
Article PubMed Central PubMed Google Scholar Sequeira-Mendes J, Aragüez I, Peiró R, Mendez-Giraldez R, Zhang X, Jacobsen SE, et al. The functional topography of the Arabidopsis genome is organized in a reduced number of linear motifs of chromatin states. Plant Cell. 2014; 26:2351–66. Dempster A, Laird N, Rubin D. Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc. 1977; 39:1–38. Rabiner LR. A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE. 1989; 77:257–86. Huang X, Acero A, Hon HW. Spoken language processing. Upper Saddle River, NJ: Prentice-Hall; 2001. Bishop CM. Pattern Recognition and Machine Learning (Information Science and Statistics). Secaucus, NJ, USA: Springer-Verlag New York, Inc.; 2006. García V Sánchez JS, Mollineda RA, Alejo R, Sotoca JM. The class imbalance problem in pattern classification and learning. In: II Congreso Español de Informática (CEDI 2007). ISBN:978-84-9732-602-5 2007. Hsu D, Kakade S, Zhang T. A spectral algorithm for learning hidden Markov models. J Comput Syst Sci. 2012; 78:1460–80. Anandkumar A, Hsu D, Kakade SM. A method of moments for mixture models and hidden Markov models. In: Proceedings of the 25th Conference on Learning Theory (COLT); 2012 June 25–27; Scotland, Edinburgh. MLR Workshop and Conference Proceedings;: 2012. p. 1–33. 34. Pearson K. Contributions to the Mathematical Theory of Evolution. Philos Trans R Soc London, A. 1895; 186:343–414. Rice JA. Mathematical statistics and data analysis. Boston, MA: Cengage Learning; 2006. Zhang Y, Chen X, Zhou D, Jordan MI. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. In: Advances in Neural Information Proceeding Systems (NIPS). Red Hook, NY, USA: Curran Associates, Inc.: 2014. Guttman M, Garber M, Levin JZ, Donaghey J, Robinson J, Adiconis X, et al. Ab initio reconstruction of cell type-specific transcriptomes in mouse reveals the conserved multi-exonic structure of lincRNAs. Nat Biotechnol. 2010; 28:503–10. Hon GC, Hawkins RD, Ren B. Predictive chromatin signatures in the mammalian genome. Hum Mol Genet. 2009; 18:R195–R201. Zhou VW, Goren A, Bernstein BE. Charting histone modifications and the functional organization of mammalian genomes. Nat Rev Genet. 2011; 12:7–18. Roh TY, Cuddapah S, Cui K, Zhao K. The genomic landscape of histone modifications in human T cells. Proc Nat Acad Sci USA. 2006; 103:15782–7. McLean CY, Bristor D, Hiller M, Clarke SL, Schaar BT, Lowe CB, et al. GREAT improves functional interpretation of cis-regulatory regions. Nat Biotechnol. 2010; 28:495–501. Yip KY, Cheng C, Bhardwaj N, Brown JB, Leng J, Kundaje A, et al. Classification of human genomic regions based on experimentally determined binding sites of more than 100 transcription-related factors. Genome Biol. 2012; 13:R48. Rada-Iglesias A, Bajpai R, Swigut T, Brugmann SA, Flynn RA, Wysocka J. A unique chromatin signature uncovers early developmental enhancers in humans. Nature. 2011; 470:279–83. ENCODE Project Consortium. A user's guide to the encyclopedia of DNA elements (ENCODE). PLoS Biol. 2011; 9:e1001046. Hardison RC. Genome-wide epigenetic data facilitate understanding of disease susceptibility association studies. J Biol Chem. 2012; 287:30932–40. Schaub MA, Boyle AP, Kundaje A, Batzoglou S, Snyder M. Linking disease associations with regulatory information in the human genome. Genome Res. 2012; 22:1748–59. Pender MP. Infection of autoreactive B lymphocytes with EBV, causing chronic autoimmune diseases. Trends Immunol. 
2003; 24:584–88. Toussirot E, Roudier J. Epstein–Barr virus in autoimmune diseases. Best Pract Res Clin Rheumatol. 2008; 22:883–96. Karmodiya K, Krebs AR, Oulad-Abdelghani M, Kimura H, Tora L. H3K9 and H3K14 acetylation co-occur at many gene regulatory elements, while H3K14ac marks a subset of inactive inducible promoters in mouse embryonic stem cells. BMC Genomics. 2012; 13:424. Gusev A, Bhatia G, Zaitlen N, Vilhjalmsson BJ, Diogo D, Stahl EA, et al. Quantifying missing heritability at known GWAS loci. PLoS Genetics. 2013; 9:e1003993. Chen K, Rajewsky N. Natural selection on human microRNA binding sites inferred from SNP data. Nat Genet. 2006; 38:1452–6. Xie B, Jankovic B, Bajic V, Song L, Gao X. Poly(A) motif prediction using spectral latent features from human DNA sequences. Bioinformatics. 2013; 29:i316–25. Zou J, Hsu D, Parkes D, Adams R. Contrastive learning using spectral methods. In: Advances in Neural Information Proceeding Systems (NIPS). Red Hook, NY, USA: Curran Associates, Inc.: 2013. Kilpinen H, Waszak SM, Gschwind AR, Raghav SK, Witwicki RM, Orioli A, et al. Coordinated effects of sequence variation on DNA binding, chromatin structure, and transcription. Science. 2013; 342:744–7. McVicker G, van de Geijn B, Degner JF, Cain CE, Banovich NE, Raj A, et al. Identification of genetic variants that affect histone modifications in human cells. Science. 2013; 342:747–9. Zhu J, Adli M, Zou JY, Verstappen G, Coyne M, Zhang X, et al. Genome-wide chromatin state transitions associated with developmental and environmental cues. Cell. 2013; 152:642–54. Heintzman ND, Hon GC, Hawkins RD, Kheradpour P, Stark A, Harp LF, et al. Histone modifications at human enhancers reflect global cell-type-specific gene expression. Nature. 2009; 459:108–12. Lian H, Thompson WA, Thurman R, Stamatoyannopoulos JA, Noble WS, Lawrence CE. Automated mapping of large-scale chromatin structure in ENCODE. Bioinformatics. 2008; 24:1911–6. Jaschek R, Tanay A. Spatial clustering of multivariate genomic and epigenomic information. Res Comput Mol Biol (RECOMB.), LNCS. 2009; 5541:170–83. Ucar D, Hu Q, Tan K. Combinatorial chromatin modification patterns in the human genome revealed by subspace clustering. Nucleic Acids Res. 2011; 39:4063–75. Ernst J, Kellis M. ChromHMM: automating chromatin state discovery and characterization. Nat Methods. 2012; 9:215–16. Jaeger H. Observable operator models for discrete stochastic time series. Neural Comput. 2000; 12:1371–98. Mossel E, Roch S. Learning nonsingular phylogenies and hidden Markov models. Ann Appl Probabil. 2006; 16:583–614. Arora S, Ge R, Moitra A. Learning topic models – Going beyond SVD. In: IEEE 53rd Annual Symposium on Foundations of Computer Science (FOCS). Washington, DC, USA: IEEE Computer Society: 2012. Cohen S, Stratos K, Collins M, Foster D, Ungar L. Experiments with spectral learning of latent variable PCFGs. In: Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). Stroudsburg, PA, USA: Association for Computational Linguistics: 2013. Wiggler. https://sites.google.com/site/anshulkundaje/projects/wiggler. Djebali S, Davis CA, Merkel A, Dobin A, Lassmann T, Mortazavi A, et al. Landscape of transcription in human cells. Nature. 2012; 489:101–8. Kelley D, Rinn J. Transposable elements reveal a stem cell-specific class of long noncoding RNAs. Genome Biol. 2012; 13:R107. Visel A, Minovitsky S, Dubchak I, Pennacchio LA. 
VISTA Enhancer Browser – a database of tissue-specific human enhancers. Nucleic Acids Res. 2007; 35:D88–D92. The 1000 Genomes Project Consortium. A map of human genome variation from population-scale sequencing. Nature. 2010; 467:1061–73. Article PubMed Central Google Scholar Friedländer MR, Lizano E, Houben AJ, Bezdan D, Báne~z-Coronel M, Kudla G, et al. Evidence for the biogenesis of more than 1,000 novel human microRNAs. Genome Biol. 2014; 15:R57. Pollard KS, Hubisz MJ, Rosenbloom KR, Siepel A. Detection of nonneutral substitution rates on mammalian phylogenies. Genome Res. 2010; 20:110–21. We thank Kamalika Chaudhuri, Alexander Schliep and Mona Singh for helpful comments and discussions. We also thank Jason Ernst for making the ChromHMM code freely available and for helpful comments on a previous version of this manuscript. Department of Genetics and BioMaPS Institute, Rutgers University, 145 Bevier Road, Piscataway, NJ, USA Jimin Song & Kevin C Chen Jimin Song Kevin C Chen Correspondence to Kevin C Chen. JS and KCC designed the project and wrote the paper. JS implemented the method and analyzed the data. Both authors read and approved the final manuscript. Additional file 1 Supplementary note. Supplementary results, figures and tables. Song, J., Chen, K.C. Spectacle: fast chromatin state annotation using spectral learning. Genome Biol 16, 33 (2015). https://doi.org/10.1186/s13059-015-0598-0 Accepted: 26 January 2015 Epigenetic Mark Chromatin State Histone Mark Class Imbalance Chromatin Mark
Honeybee communication during collective defence is shaped by predation
Andrea López-Incera, Morgane Nouvian (ORCID: orcid.org/0000-0003-4882-2372), Katja Ried, Thomas Müller & Hans J. Briegel
Social insect colonies routinely face large vertebrate predators, against which they need to mount a collective defence. To do so, honeybees use an alarm pheromone that recruits nearby bees into mass stinging of the perceived threat. This alarm pheromone is carried directly on the stinger; hence, its concentration builds up during the course of the attack. We investigate how bees react to different alarm pheromone concentrations and how this evolved response pattern leads to better coordination at the group level. We first present a dose-response curve to the alarm pheromone, obtained experimentally. This data reveals two phases in the bees' response: initially, bees become more likely to sting as the alarm pheromone concentration increases, but aggressiveness drops back when very high concentrations are reached. Second, we apply Projective Simulation to model each bee as an artificial learning agent that relies on the pheromone concentration to decide whether to sting or not. Individuals are rewarded based on the collective performance, thus emulating natural selection in these complex societies. By also modelling predators in a detailed way, we are able to identify the main selection pressures that shaped the response pattern observed experimentally. In particular, the likelihood to sting in the absence of alarm pheromone (starting point of the dose-response curve) is inversely related to the rate of false alarms, such that bees in environments with low predator density are less likely to waste efforts responding to irrelevant stimuli. This is compensated for by a steep increase in aggressiveness when the alarm pheromone concentration starts rising. The later decay in aggressiveness may be explained as a curbing mechanism preventing worker loss. Our work provides a detailed understanding of alarm pheromone responses in honeybees and sheds light on the selection pressures that brought them about. In addition, it establishes our approach as a powerful tool to explore how selection based on a collective outcome shapes individual responses, which remains a challenging issue in the field of evolutionary biology. Whoever has delighted in honey knows how much of a valuable food source a honeybee colony can be. To fend off the many predators attracted by this nutrient trove, honeybees have evolved stingers and a powerful venom efficient against vertebrates and invertebrates alike. But arguably their most important weapon is number: honeybees build a collective defence against intruders, stinging, threatening and harassing them in dozens or hundreds. Central to this response is the alarm pheromone carried directly on their stingers, whose banana-like scent is well known to beekeepers. When released, the sting alarm pheromone (SAP) alerts and attracts other bees, recruiting them to the site of the disturbance and priming them to sting. It is a chemically complex blend of over 40 compounds, but its main component, isoamyl acetate (IAA), is sufficient to trigger most of the behavioural response. The SAP has been extensively studied, both from the releaser end (production, dispersal) and from the recipient end (detection, reaction, role of the different compounds, role of the context in which it is perceived; reviewed in [1]).
Despite this wealth of knowledge on the SAP, two important aspects of its function remain elusive: its quantitative action and the evolution of this response. In this study, we use a combination of in vivo and in silico methods to better understand how honeybees react to different concentrations of alarm pheromone, how this impacts the organisation of the collective response, and which selection pressures might have driven the evolution of this defensive strategy. If they detect a threat, guard bees can disperse the SAP actively by raising their abdomen, extruding their stinger and fanning their wings. Alternatively, since the SAP is carried on the stinger itself, it is automatically released upon stinging. Thus, the SAP potentially carries information about the presence and location of a threat, but also about the magnitude of the attack already mounted against it. Several studies already demonstrated that the intensity of the response is correlated with the amount of alarm pheromone in the atmosphere [2–5]. While these studies provided valuable information, they all tested bees in groups — and often in field conditions; hence, they could not establish an individual dose-response curve. Furthermore, in many cases, the behavioural readout was rather indirect (e.g. attraction or fanning); only one study [5] actually measured stinging frequency. To comprehend how each bee makes the decision to sting, and thus how the colony as a whole coordinates actions during a defensive event, an individually resolved dose-response curve is necessary. Here, we took advantage of an assay developed a few years ago [6], which measures stinging responses under controlled conditions, to fill this knowledge gap. We found that, indeed, there is a steep ramp-up phase at low to medium alarm pheromone concentrations, in which the stinging likelihood of a bee increases together with the concentration. In addition, we show for the first time the existence of a second, decreasing phase at high pheromone concentrations. This is consistent with an anecdotal report that a high dose of IAA became repellent to bees [7]. How to interpret this experimental curve? More precisely, what could have driven the evolution of such a response pattern? In social insect colonies, individuals coordinate their actions to reach fitness goals set at the colony level, effectively functioning as a single evolutionary unit. Thus, individual responses can only be understood through the collective outcome that they contribute to. Making the link between individual and collective behaviour has been the focus of a large body of work. Modern tracking technologies [8] combined with physics-inspired modelling of individuals have proven that collective movement, for example of marching locusts and schooling fish [9–11], could arise from simple interaction rules between members of a group [12]. To explain the more complex division of labour of social insects, threshold models for stimulus-response reactions of individual agents have been used, but they usually do not give an account of the underlying mechanisms or of their evolution [13]. Such an evolutionary perspective on self-organisation has been provided using neural networks [14, 15] but while being one of the most powerful tools of machine learning, neural networks are typically hard to interpret because of their high dimensionality. 
Evolutionary game theory has also recently been applied to study task allocation in social insects [16], modelling behavioural changes on the scale of colony lifetime under certain imposed pay-off relations for the individual behaviour. Game-theoretic approaches provide interesting novel perspectives on the dynamics of task distribution in a population, but usually give no account of the agent-based mechanisms that underlie this dynamics [17], in stark contrast to the approach we will follow in this paper. To address this evolutionary question, we resort to Projective Simulation [18], which is a simple model of agency that integrates a notion of episodic memory with a reinforcement learning mechanism. Projective Simulation (PS) allows for a realistic encoding of the sensory apparatus and motor abilities of the agents, which can perceive, make decisions and act as individuals. An important distinction between PS and classic neural network approaches is the straightforward interpretability of the decision process, because of its restricted dimensionality. When interacting with other agents, individual actions may influence the perceptions and responses of the rest of the ensemble, which in turn leads to the emergence of collective behaviour. Crucially, neither the individual responses nor the interaction rules among agents are fixed in advance. Instead, they are developed throughout a learning process in which the agents' decisions are reinforced if they are beneficial under certain evolutionary pressures (note that here "learning" is therefore a process of selection between generations rather than happening within an individual's lifetime). All of these features make Projective Simulation a suitable framework to model behavioural experiments like the one presented above, since it allows us to analyse the observed responses from both the individual and the collective perspectives. Furthermore, we are able to draw conclusions about the possible evolutionary pressures that may have led to such behaviour by means of the reinforcement learning process. In this work, we model each bee as a learning PS agent and the colony as an ensemble of agents that undergoes repeated encounters with predators. During each encounter, bees can die from stinging but this also participates in deterring the predator, or they can be directly killed by the predator. Since the success of a bee colony is highly dependent on its available workforce, the overall performance of the colony is then evaluated by counting the number of bees that are still alive at the end of the encounter, and the individual response pattern is rewarded accordingly. Hence, the whole simulation is similar to an evolutionary process, in which the behaviour of each generation of bees is passed on depending on its reproductive success. By investigating systematically the parameters of a simple but realistic predator model, we found that the initial ramp-up in stinging responsiveness was mainly driven by uncertainty on the predator detection (frequency of false alarms) and that the pheromone concentration at peak aggressiveness was dependent on the most resistant predator encountered. The second, decreasing phase could be interpreted as the combination of a self-limiting mechanism to avoid over-stinging (i.e. stinging far beyond what would be necessary to deter the predator) and of a return to baseline due to lack of experience in this range of concentrations. 
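To make the encounter model sketched above concrete before the formal description in the "Methods" section, a minimal simulation of a single defensive event might look as follows. This is illustrative Python only: the parameter names mirror the text (s_th, k, t_att, Δt_v), but the exact ordering of events and the handling of the escape signal in the published model may differ.

```python
import numpy as np

def defensive_event(p_sting, s_th=20, k=1, t_att=0, dt_v=10, n_bees=100, rng=None):
    """One simulated encounter; returns the number of bees left alive.

    p_sting : callable mapping the current pheromone level (number of stings
              released so far) to an individual probability of stinging
    s_th    : stings needed to deter the predator (0 models a false alarm)
    k       : bees killed per time step while the predator is attacking
    t_att   : time step at which the predator starts killing
    dt_v    : delay before remaining bees visually notice the predator left
    """
    rng = rng or np.random.default_rng()
    alive, stings, t_deterred = n_bees, 0, None
    for t in range(n_bees):                       # bees decide one per time step
        if t_deterred is not None and t - t_deterred >= dt_v:
            break                                 # visual escape signal: stop stinging
        if t >= t_att and stings < s_th:
            alive -= k                            # predator still attacking
        if rng.random() < p_sting(stings):        # decision from pheromone level
            stings += 1                           # one more unit of alarm pheromone
            alive -= 1                            # the stinging bee is lost
            if stings >= s_th and t_deterred is None:
                t_deterred = t                    # predator deterred, starts leaving
    return max(alive, 0)

# example with a hypothetical ramp-up dose-response curve as the shared strategy
survivors = defensive_event(lambda s: min(0.2 + 0.05 * s, 1.0), s_th=25)
```

In the learning setting, p_sting would be the strategy encoded in the agents' memory, and the returned survivor count would set the colony-level reward used to update it.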
The native range of the Western honeybee (Apis mellifera) spans a large part of Europe, Africa and Middle East Asia [19] and thus includes a wide diversity of ecosystems. As a consequence, multiple subspecies exist that have adapted to local conditions. In particular, the African honeybees are known for having stronger defensive reactions than their European counterparts: they recruit more bees, do so more quickly and are more persistent [20, 21]. Part of the explanation resides in their higher sensitivity to the SAP [4]. As a final test of our model, we tune the parameters to represent the constraints that were hypothesised to drive the evolution of this striking difference in behaviour (mainly a higher predation rate in Africa). We show that, with this input, our model accurately predicts the different strategies adopted by each subspecies. Thus, this novel application of Projective Simulation [22] to a group of agents with a common goal is promising for the study of social insects in general, and of the honeybee defensive behaviour in particular. Experimental results To better understand if and how bees use the alarm pheromone concentration as a source of information during a defensive event, we first sought to establish a dose-response curve to the SAP. This requires to precisely control the pheromone concentration inside the testing arena. We used two methods to create a range of alarm pheromone concentrations: (1) pulling out a defined number of stingers from live, cold-anaesthetised bees, or (2) diluting synthetic IAA. To verify that the final concentrations scaled linearly with either the number of stingers or the dilution factor, we measured odour concentrations inside the arena using a photoionization detector (PID). As shown on Fig. 1, both methods reliably created linear series of concentrations (IAA: Pearson's r=0.9989,p<0.001; stingers: Pearson's r=0.9922,p<0.001). The absolute concentrations reached by using stingers appear to be much lower than the ones obtained with synthetic IAA, but one should keep in mind that the delivery method was also very different (stingers placed on the dummy vs. IAA carried in by the air flow) and that only a subset of the SAP compounds can be detected by the PID, making a direct comparison difficult. Stinging response as a function of the pheromone concentration. a Stinging frequency of a bee as a function of the number of pulled stingers on the dummy. a' Schematic of the behavioural assay (top view). The bee is facing a rotating dummy inside the arena, on which pulled stingers (red arrowheads) have been placed. b Stinging frequency of a bee as a function of the IAA concentration inside the arena. b' Schematic of the behavioural assay (top view). The bee is facing a rotating dummy inside the arena, while the alarm pheromone is carried in by three air flows (red arrows). In both graphs, the shaded area corresponds to the 95% confidence interval, estimated from a binomial distribution with our sample size (126 and 72 bees for each point in a and b, respectively). Insets show photoionization detector (PID) measurements of the SAP and IAA concentrations, which are linearly correlated with both the number of stingers and the dilution factor We first evaluated the stinging response of single bees faced with a rotating dummy on which a certain number of freshly pulled stingers were placed, to mimic previous attacks by other bees. The stinging likelihood of an individual bee increased linearly with the number of stingers (see Fig. 
1a), from about 20% of the bees reacting to the dummy alone to 60% of them stinging when 7 stingers — and hence 7 "units" of SAP — were added (n=126 bees per data point so 756 bees in total; Pearson's r=0.9845,p<0.001). Three colonies contributed equally to this dataset, which allowed us to check for variations in this response pattern (see Fig. 2). We found no significant difference on the regression slope between colonies (ANOCOVA, interaction term: F(2,754)=0.9,p=0.432), indicating that the effect of the SAP was similar on all bees. However, bees from colony 1 were overall more likely to sting than bees from colony 2 (ANOCOVA followed by Tukey's HSD on intercepts, p=0.024), while bees from colony 3 showed intermediate aggressiveness (ANOCOVA followed by Tukey's HSD on intercepts, 1 vs. 3: p=0.150, 2 vs. 3: p=0.550). Such behavioural variability is not surprising, as it is known that the aggressiveness of honeybees is strongly influenced by genetic factors [23]. Stinging frequency of a bee as a function of the number of pulled stingers on the dummy, for different colonies of origin (numbered from 1 to 3), evaluated using n = 42 bees for each data point, i.e. a total of 756 bees across the 3 colonies. An ANOCOVA detected no significant difference in alarm pheromone responsiveness between colonies (similar slopes), but bees from colony 1 were more aggressive than bees from colony 2 (different means) The advantage of getting the SAP from stingers is that we could work with the full pheromonal blend, which is otherwise difficult to obtain. Its main inconvenience, however, is that only limited concentrations can be reached. To get at these higher concentrations, we repeated the experiment with different dilutions of IAA, the main active component of the SAP. The results are shown in Fig. 1b. Consistent with our previous results, we observed a linear increase in stinging responsiveness between 0 and 25% IAA (Pearson's r=0.9127,p=0.011). The two higher concentrations that we sampled (50% and 100%) revealed a decay in aggressiveness. While these 3 points were not sufficient to test for a correlation, the decrease in stinging frequency between 25 and 100% was significant (χ2(1,96)=7.3612,p=0.007). Modelling the honeybees' collective response The agent-based learning framework of PS [18] was used to model the defensive behaviour of honeybee colonies. PS agents possess an explicit episodic memory (ECM), which is a network of clips representing how sensory information encountered by the agent is connected with possible action outputs (Fig. 3). The state of the agent's memory (i.e. the weight of the connections) at a given time is represented by a real-valued matrix, h. In our case, this memory structure is not acquired through individual experience, but rather selected across generations. This evolutionary process is represented by the parameters g, R and γ, which model, respectively, the past behaviour of the population, the success of the current strategy and "forgetting" (imperfect selection). To represent different environmental pressures that may shape the resulting collective strategy, we also consider that bee populations may vary in how early they detect a predator approaching (tatt), and in the time lag Δtv with which the bees visually perceive the predator's desistance, enabling them to terminate their defence independently of pheromone percepts. Finally, our model includes several variables describing the predation pressure experienced by bee colonies. 
These are the killing rate k of a predator (number of killed bees per time step), a threshold value sth quantifying the number of stings after which a predator stops its attack, and the rate of false alarms rf. A more detailed description of the model and its parameters is given in the "Methods" section and in Fig. 3. Theoretical model of one defensive event. We model the colony as an ensemble of N=100 identical bees, which are artificial learning agents that decide whether to sting ("Sting" and S) or not ("Chill" and ∙) based on their sensory perception. These percepts include the alarm pheromone concentration (binned logarithmically from 0 to 8) and a visual signal that the predator is leaving, vESC. The 100 bees act sequentially, so there are 100 time steps, and each stinging bee releases one unit of alarm pheromone (small ticks on middle panel's y axis), so that the sensory environment of a bee is defined by the behaviour of previous bees. A predator attacks the colony and kills k bee per time step from the time it reaches the colony, tatt, until it receives a certain number of stings sth. At this point, it stops killing, but is still in the vicinity for Δtv time steps before truly escaping, modelled as the activation of vESC for the remaining bees after time Δtv. Once every bee has made a decision, the outcome of the defensive event is evaluated and the individual decision process is updated based on the colony performance (reward factor R, proportional to the number of remaining live bees) for each percept (glow matrix g), with some forgetting (γ). The upper panels show the internal structure of bees and the predator's parameters, the middle panel the time course of the bees' perception and the bottom panel the behaviour of both bees and predator during an example trial In the following, we describe the variations of the model through which we tested the influence of different environmental pressures on the resulting collective strategy, with a view to uncovering a plausible causal explanation for the empirically observed reaction of bees to the alarm pheromone. Interval between predator detection and the start of its attack A small proportion of honeybees, termed guards, sit at the nest entrance and monitor its surroundings [1]. They may detect predators early enough to start the defensive response before the intruder reaches the colony. We varied the time tatt at which a predator starts killing bees after it was detected: high values for tatt thus represent colonies that invest heavily in guards or monitor large areas, whereas low values can be taken to represent colonies that only get alerted once the predator is already close by. As shown in Fig. ??a, we find that the probabilities of stinging are lower when the bees detect the predator early (tatt=60) than when they do not have guards (tatt=0). Nonetheless, populations with guards actually fare better than their counterparts, as the predator has less time to kill bees before being deterred (Fig. ??a'). Stinging response as a function of alarm pheromone concentration (first column) and performance (second column) in modelled colonies facing different environmental pressures. Triangles denote colonies trained for 80,000 trials with parameters: N=100,sth∈(16,40),k=1,tatt=0,rf=0 and Δtv=10. 
They are compared to colonies that a invest in guards to detect the predator earlier (tatt=60); b have higher false alarm rates (rf=0.3,0.6); c face only weak predators (sth∈(7,16)); d face a non-uniform (n-u) distribution of predators, in which weak predators (sth∈(16,26)) appear 4 times more often than strong ones (sth∈(27,40)); and e need more time to visually detect that the predator is escaping (Δtv=20). Only one parameter is varied in each comparison. Panel f compares colonies that defend small and large territory areas, which we model by setting tatt=Δtv=10 and tatt=Δtv=40, respectively. In all plots of the first column, percepts in which the probability of stinging remained as initialised (ps=0.5) because they were never reached are not included. Shaded areas indicate the sth range. Markers are at the end of each percept's bin. Probabilities ps for percept vESC are given in Table 1. Average ± one standard deviation for 50 independently trained populations. In the second column, panels a', b', d' and f' display the percentage of live bees at the end of encounters, depending on the predator resistance sth. The upper bound indicates the optimal performance for the given scenario. Panels c' and e' show the total number of bees stinging as a function of predator resistance, and the dashed lines indicate the maximum number of times bees could sting before percept vESC is activated. Numbers below this boundary indicate self-limitation based on pheromone concentration. Average ± one standard deviation in the last 500 trials of 50 independently trained populations Predation rate In our model, each trial corresponds to a defensive event, which we assume is started by a non-specified disturbance close to the colony (for example, the visual perception of an object moving in the vicinity). In reality, most of these stimuli are likely to be unrelated to predation: they could be animals just passing by, falling branches and so on. Reacting to these stimuli would mean that bees waste time and effort that could be better invested (and, at worst, even die from stinging unnecessarily). The frequency of these false alarms depends on the environment considered: a high density of predators and/or the presence of specialised predators may be translated by a high predation rate for colonies, and hence fewer false alarms. To explore this, we include in our model trials in which there is no predator, which appear with probability rf. Formally, for these trials, sth is set to 0, since there is no actual need to sting. We observe that, as the percentage of false alarms in the learning process grows larger, the initial probability of stinging gets lower but the maximum reached stays similar (Fig. ??b). This ramp-up is consistent with the experimental data and can be interpreted as follows: since overreaction has a cost for the colony, few bees would sting in the absence of alarm pheromone, which prevents many bees from leaving the hive after every minimal signal of danger. However, the pronounced increase in the probability of stinging (ps) enables a fast recruitment when there actually is a predator. Hence, the steepness of the ramp-up phase gives us insights into the predation rate to which populations were subjected. Table 1 Probability of stinging (ps) for percept vESC at the end of learning processes with the parameters specified that are analysed in Fig. ??. Learning process #1 (triangles in Fig. ??) 
is used as a baseline for comparison Diversity of predators Honeybee colonies attract a wide array of vertebrate predators, from mice and toads to humans and bears. Obviously some are easier to deter than others, and their distribution may vary depending on the ecosystem considered. We evaluate the influence of predator diversity by changing the range of sth. Each population adapted primarily to the most resistant predator encountered, with bees stinging with very high probability until they reach an alarm pheromone concentration at which all predators should be gone (Fig. ??c). While it makes sense that honeybee colonies need to be able to cope with the worst predator in their environment, it is interesting to note that "weak" predators have little impact on the defensive strategy adopted. Predators can also vary in the relative frequency at which they are encountered. We investigate this by comparing a scenario with uniform distribution of predators to one in which "small" (sth∈(16,26)) predators are 4 times more likely than "large" (sth∈(27,40)) ones. We observe (Fig. ??d) that the bees that more often encounter weak predators have a lower probability of stinging at high pheromone concentrations, most likely because these are not often reached. As a result, these populations are less efficient against large predators (Fig. ??d'). Hence, the relative abundance of the different predators in a given environment also influences the defensive strategy adopted. Defence termination Finally, we consider how bees determine when to stop stinging. Realistically, a predator does not disappear as soon as its stinging threshold is reached: it needs time to move outside of the defended area. This delay is implemented as the parameter Δtv in our model. Agents trained with Δtv=20 have lower probabilities of stinging than those with Δtv=10 for high pheromone concentrations (Fig. ??e). Since it takes more time for the predator to leave, these populations are at a higher risk of "wasting" bees (i.e. bees stinging even though sth was already reached), as is indeed observed for weak predators. However, they seem to compensate this effect when dealing with strong predators by relying on high pheromone concentrations to signal that an efficient defensive response has already been achieved. Thanks to this adaptation, they are able to curb their number of stings to value close to sth for strong predators (Fig. ??e'). This could explain the decay in alarm pheromone responsiveness observed in the experimental data. Alternatively, this decay may be a simple return to baseline, as is observed when the agents do not have to address this issue (Δtv=0, the predator is immediately removed, see Additional file 1: Figure S1). In any case, measuring when this decay starts would provide information about the range of predators encountered by real populations. Territory size Both tatt and Δtv are linked to the time needed for the predator to move from the edge of the defended territory to the nest itself. Hence, we could expect that for a given population, tatt=Δtv and that this single value represents the territory radius. It is interesting to note that an optimal behaviour would require a high value for tatt (to get a better chance to cope with predators with high sth, Fig. ??a') but a low value for Δtv (to avoid over-stinging predators with small sth, Fig. ??e'). This is indeed what we observe in Fig. 
??f': colonies defending a smaller area perform better when confronted with weak predators, but lose more bees when faced with resistant predators. Thus, defending a large area seems to be an adaptation to predators that are on average quite resistant, whereas defending a small area is a better strategy when most predators can be easily deterred. Case study: European vs. African bees The native range of the Western honeybee (Apis mellifera) includes a wide variety of ecosystems [19]. As a consequence, multiple subspecies exist that have adapted to local conditions. Among them, A. m. scutellata (which we call here "African bees" for simplicity) are well known for their fierce attacks, although most of the experimental data was gathered on "Africanized" bees, a hybrid subspecies which retained the defensive behaviour of their African ascendants. When faced with a predator, these bees recruit more bees, do so more quickly and are more persistent [20, 21]. Part of the explanation could reside in their higher sensitivity to the SAP, as shown in Fig. 5a (data from [4]). It has been hypothesised that several traits of African bees, including high defensiveness, evolved in response to higher predation rates in the tropics [24, 25]. From our model perspective, this would mean fewer false alarms. African bees may also face specialised predators such as ratels (Mellivora capensis [26–28]), which are more difficult to deter. In our model, we translate this into a higher range of sth for African bees than for European bees. Comparison of European and African(ized) honeybees. a Comparison of the aggressive score as a function of the alarm pheromone concentration for Africanized (AHB, black triangles) and European (EHB, blue circles) honeybees. Experimental data modified from [4] (see the "Comparison between Africanized and European bees" section). Stars indicate significant differences in responsiveness according to the original paper (* p<0.05, ** p<0.01). b Comparison of the learned probabilities of stinging between European and African populations, from modelling. Shaded areas indicate the range of predators that European (blue, sth∈(15,25)) and African (grey, sth∈(15,70)) colonies faced during the learning process. Average ± one standard deviation from 50 independently trained populations, at the end of a learning process with 105 trials. For clarity, percepts for which the probability of stinging remains at the initialisation values (ps=0.5) are not shown. Visual percept vESC: ps=0.09±0.01 for EHB and ps=0.05±0.01 for AHB. Parameters: \(N=200, \gamma =0.003, k=1, t_{{att}}=0, \Delta t_{v}=10, r_{f}^{E}=0.6, r_{f}^{A}=0.3\). c Performance of each colony when faced with predators of sizes sth=20,55, from modelling. European colonies go extinct when they encounter a predator that is more resistant than the ones they faced during the learning process, whereas African colonies are able to survive We modelled two types of bee populations that differed only with respect to these properties: frequency of false alarms (rf) and range of sth. Consistent with experimental data, the populations which have learned to defend against larger and more frequent predators develop a stronger reaction to the pheromone. The higher frequency of attacks in the African case drives the agents to react more strongly at low concentrations. In addition, the probability of stinging remains high over a much larger range of alarm pheromone concentrations due to the larger sth interval (Fig. 5b). 
As a result, African populations are able to survive attacks from very resistant predators, whereas European populations would go nearly extinct (Fig. 5c). These results support the hypothesis that the difference in aggressiveness at low pheromone concentrations [4] is due to higher attack rates in Africa. Furthermore, under the assumption that African bees also encounter more resistant predators, we observe that they keep stinging over a broader range of alarm pheromone concentrations. It would be interesting to test African(ized) bees at these higher pheromone concentrations, as we did for European bees, in order to verify this prediction. We use experimental and theoretical work to better understand how the collective defensive behaviour of honeybees is organised. We show that not only the presence, but also the concentration of alarm pheromone provides important information to individual bees. We explore the meaning of this regulation of stinging behaviour by alarm pheromone concentration via a relatively simple model of collective defence and characterise the shaping role of several environmental factors such as predation rate, predator resistance and predator diversity. The architecture of our agent's decision-making process is very simple: it only postulates that the agent can discriminate between different concentrations of pheromone and acts accordingly. Honeybees are capable of assessing and even learning absolute odour concentrations [29]; hence, it would be tempting to see our model as a simplified view of the processing being implemented by an actual bee brain. However, we have to keep in mind that our experimental results were obtained by testing each bee only once and can thus be interpreted in another way: the gradation in response could arise at the population level, from varying thresholds of response between bees [30] (rather than within each individual bee). It is important to note that both interpretations are valid within the scope of our model, because our reward scheme is based on the population response. In the first interpretation, the internal structure of each agent could thus directly represent the decision-making process of a real bee, hard-wired in the brain. In the alternative interpretation, the probability of stinging would be more accurately described as the proportion of bees with a stinging threshold lower than the given alarm pheromone concentration, and hence, the internal structure of the agents would rather represent the decision process of an ersatz "average bee" given this population structure. In this case, selection should act on the mechanisms controlling this individual variability, to ensure that an appropriate distribution of thresholds is maintained across generations [31]. Further experiments would be necessary to decide between these two interpretations, for example by repeatedly testing individuals at different concentrations. Independently of this interpretation, our data suggests that each individual has at least two internal thresholds: one to start stinging and one to stop. This is consistent with an anecdotal field observation that high concentrations of alarm pheromone become repulsive to bees [7]. Indeed, without a stopping threshold, the dose-response curve would only plateau at high concentrations instead of decreasing as we observed. The ecological function of this newly discovered second threshold, according to our theoretical results, could be to avoid over-stinging (i.e. 
stinging and chasing of a predator already in the process of moving away from the colony). Alternatively, it could be that these high concentrations are never reached in the wild and hence that no adaptive response could evolve. Of course, our model is by no means exhaustive: there may be other environmental factors influencing the honeybees' defensive behaviour that we did not investigate. We did try, however, to include the most common descriptors of predation. It is also important to note that the factors that we did include are sufficient to provide plausible explanations regarding the adaptive value of the alarm pheromone dose-response curve. Moreover, each of these factors only had a narrow effect on alarm pheromone responsiveness, such that the features of the experimental curve could not be obtained in many different ways. As a consequence, our model could successfully identify the most pertinent factors. Detailed ecological surveys of the interactions between bees and predators are difficult to conduct, and indeed, we could not find any quantitative information about the prevalence of specific predators or about predation rates in a given environment. Our results could serve as an alternative tool to gather information on this topic. We provide an example of this using previously collected data comparing African(ized) and European bees and found that our model accurately predicted the experimental data when considering that African bees were subjected to higher predation rates. Hence, measuring alarm pheromone responses could inform about the natural history and environment of given populations where this knowledge is missing. Nonetheless, our results should first be validated by gathering accurate field data before they could be used in this way. From what we already know, African bees defend a much larger area around their nest [20]. According to our model, this strategy is better suited to tackle resistant predators. Future studies could also test this hypothesis by experimentally measuring the SAP concentration at which individual stinging responsiveness is maximum, since we found this to be representative of the most resistant predator. On the theoretical side, our model could be expanded into a more fine-grained analysis of collective defence. For example, it would be interesting to allow some heterogeneity between agents, since we know that at least two types of bees participate in the defence: guards and soldiers [1]. Another interesting challenge would be to let the agents learn about the optimal size of the territory to defend (tatt and Δtv). Finally and most importantly, we believe that this agent-based modelling approach is an exciting tool to study the evolution of collective behaviour in general, which could be applied to other tasks and species. For example, trail building and following in ants is also a process that relies on pheromone accumulation and in which individuals make decisions based on pheromone concentration [32, 33]. It could be interesting to adapt our framework to this alternative type of collective decision-making. Social insect colonies are often called "superorganisms" because of how some tasks are distributed between colony members, which is reminiscent of the different functions of cells and organs in multi-cellular organisms. Understanding how such coordination is achieved and how selection shaped the behavioural responses of individual group members is a fascinating and complex question. 
Here, we contribute to this field of research by combining experimental work and a novel computational approach to better comprehend the collective defensive behaviour of honeybees. In particular, we focused on responsiveness to the sting alarm pheromone, as this signalling mechanism is at the core of the bees' communication during a defensive event. First, we show experimentally that the stinging likelihood of individual bees varies depending on the concentration of SAP in the atmosphere. This response pattern exhibits at least two phases: an initial, ramp-up phase from low to intermediate concentrations of pheromone, followed by a decrease at high concentrations. To interpret these results, we built a relatively simple agent-based model of the honeybee defensive behaviour. The novelty of our approach resides in adapting Projective Simulation to a group of agents with a common goal (and hence a common reward scheme). We also added the constraint that all agents inherit the same decision process, as this better represents the heritability of aggressive traits across generations. This agent model allowed us to explore the impact of different evolutionary pressures on individual responsiveness to the alarm pheromone. From these insights, we postulate that the existence of the first phase (ramp-up) in SAP responsiveness results from a trade-off between avoiding false alarms and quickly recruiting nestmates in the presence of real predators. The SAP intensity at which the stinging probability peaks depends on the most resistant predator in a given environment. Finally, the decrease in stinging likelihood at high SAP concentrations could be due to a self-limiting mechanism to avoid unnecessary stings, or simply the consequence of a return to baseline because such high concentrations are never encountered in the wild, and hence no specific response had the chance to evolve. Altogether, our work provides new insights into the defensive behaviour of honeybees and establishes PS as a promising tool to explore how selection on a collective outcome drives the evolution of individual responses. Experimental material and methods The experiments were conducted at the University of Konstanz, with honeybees (Apis mellifera) from freely foraging colonies hosted on the roof. The experiment in which the alarm pheromone was obtained by pulling out stingers (see the "Alarm pheromone" section below) was conducted between May and August 2018. Three colonies contributed equally to this experiment (n=252 bees per colony, 756 bees in total). The experiment with synthetic alarm pheromone was conducted in May/June 2019. The bees were taken from 4 colonies (including one from 2018, the other colonies were lost during the winter), again in equal numbers (n=96 bees per colony, 384 bees in total). To catch the bees, a black ostrich feather was waved in front of the hive entrance. The bees attacking the feather (and thus involved in colony defence) were collected in a plastic bag and cooled in ice for a few minutes, until they stopped moving. They were then placed alone in a modified syringe with ad libitum sugar water (50% vol/vol) and given at least 15 min to recover from the cold anaesthesia before testing. Alarm pheromone The alarm pheromone of honeybees includes over 40 compounds [34, 35], making it difficult to synthesise. Hence, in a first experiment, we pulled stingers out of cold-anaesthetised bees to get the full alarm pheromone blend. 
This manipulation was done as fast as possible and just before the start of the trial. The stingers were placed on the dummy (the stinging target) to mimic other bees stinging it before the start of the trial. The range of alarm pheromone concentration was obtained by varying the number of stingers: 0, 1, 2, 3, 5 or 7 stingers. A clean air flow entered the arena from 3 holes on the sides, equally spaced, and the arena lid was drilled with an array of small holes to allow the air out. The advantage of pulling stingers was to obtain the full odour blend, but the inconvenience was that the concentrations we reached were limited. To cope with this issue, we performed a second set of tests in which we only used the main component of the alarm pheromone, iso-amylacetate (IAA, Merck Ⓡ). IAA is sufficient to reproduce most of the action of the full blend [36, 37]. In this case, IAA was diluted in mineral oil (Merck Ⓡ) to a final concentration (vol/vol) of 0% (control), 0.1%, 1%, 5%, 10%, 25%, 50% or 100% (pure). To deliver the odour, 10μl of solution were put on a small filter paper which was then placed inside the air flow entering the arena. To verify that the odour concentration was linearly correlated to either the number of stingers or the dilution ratio, measurements were made inside the arena with a photoionization detector (PID). The measures were taken every 0.01 s, and the data was smoothed on a sliding 1 s (101 points) time window centred on each point. The amplitude of the odour signal was then calculated by subtracting the baseline (average of the 5 s just before the stingers were inserted or the air flow was started) from the peak value (average of the 5 s centred on the maximum value reached). For tests with IAA, a single measure was taken for each concentration. In tests with stingers, the concentrations were close to the limit of the PID sensitivity; hence, we repeated the measures 3 times and averaged the results to increase reliability. Stinging assay The protocol for the stinging assay has been described in detail in [6]. Briefly, the bee was introduced into a cylindrical testing arena where it faced a rotating dummy coated in black leather. The stinging behaviour was first scored visually and defined by the bee adopting the characteristic stinging posture: arched with the tip of the abdomen pressed on the dummy. This was further confirmed at the end of the trial by the presence of the stinger, embedded in the leather. Comparison between Africanized and European bees It has been shown that Africanized bees are more sensitive to SAP [4]. In this previous study, the responses of caged honeybees to different concentrations of alarm pheromone were classified into 5 categories according to their intensity: "no response" (N), "weak response" (W), "moderate response" (M), "strong response" (S) or "very strong response" (V). To better visualise this data and to be able to compare it to our model results, we transformed this data by calculating an "Aggressive score" (As) for each alarm pheromone concentration and for each ecotype, which was defined as As=N×0+W×1+M×2+S×3+V×4, with each letter corresponding to the percentage of reactions that fell into the corresponding category. Theoretical model: Projective Simulation Projective Simulation (PS) [18, 38–42] is a model for artificial agency that combines a notion of episodic memory with a simple reinforcement learning mechanism. 
It allows an agent to adapt its internal decision-making processes and improve its performance in a given environment. PS has a transparent structure that can be analysed and interpreted throughout the learning process. This feature is of particular importance in this work, since we aim to explain the experimentally observed individual responses to certain stimuli. In the context of behavioural biology, the model of PS offers the possibility of enriching the description of the entities' sensorimotor abilities to get closer to the real mechanisms, which can help gain new insight into phenomena that overly simplified or abstract models cannot account for. Honeybees offer an interesting opportunity for PS since they exhibit complex behaviours at both the individual and the collective level despite their relatively small brain. In addition, Projective Simulation can be used to model collective behaviour [22, 43] by considering ensembles of PS agents that interact with each other. In the present work, this interaction is determined by the olfactory perception of the pheromone that bees release when stinging. The fact that each agent has an individual deliberation process allows us not only to explain the experimental results but also to study how the individual responses to alarm pheromone are combined into an appropriate defensive reaction for the colony. In this section, we describe the general features of Projective Simulation and we further specify how we model the scenario of colony defence in the "Details of the model I: the bee and the learning process" and "Details of the model II: the predator" sections.

The individual interaction of a PS agent with its surroundings starts with the agent perceiving some input information, which triggers a deliberation process that ends with the agent performing a certain action. The deliberation process is carried out by the main internal structure of the agent, called the episodic and compositional memory (ECM) [18], which is a network consisting of nodes, termed clips, connected by edges. Clips represent snippets (or "episodes") of the agent's experience and can encode information from basic percepts, like a colour or an odour, to compositions of short sequences of sensory information. Each clip is connected to its neighbouring clips by directed, weighted edges. The weights, termed h values, are stored in the so-called h matrix and in turn determine the transition probability from one clip to another. The deliberation process is thus modelled as a random walk through the clip network. The ECM has a flexible structure that may consist of several layers and that can change over time by, for instance, the creation of new clips and their addition to the existing network (see e.g. [41]). However, for the purpose of this work, it is sufficient to consider the basic two-layered structure (see Fig. 2), where one layer of percept-clips (or just "percepts", for brevity) encodes the perceptual information that the agent gets from its surroundings, and another layer of action-clips (or just "actions") encodes the information about the possible actions the agent can take. The interaction round of an individual PS agent goes as follows: first, it perceives certain input information that activates the corresponding percept-clip in the ECM, which triggers a random walk through the network that ends when an action-clip, and subsequently its corresponding actuator, are activated, leading the agent to actually perform the action.
Therefore, the final action that the agent will execute depends on the transition probabilities from clip to clip in the ECM, which are determined by the h values as, $$ p_{{ij}}=\frac{h_{{ij}}}{\sum_{k} h_{{ik}}}, $$ where the transition probability from clip i to clip j, pij, is given by the weight hij of the edge that connects them, normalised over all the possible outgoing transitions to clips k connected to i. In this work, we consider the two-layered PS, where each percept clip is only connected to all the action clips, so pij is simply the transition probability from percept i to action j. A reinforcement learning mechanism can be implemented by updating the h values at the end of an interaction round. If the agent's choice is good, it receives a reward R that increases the h value of the traversed percept-action edge, so that the agent has higher probability of performing that action the next time the same percept clip is activated. At the beginning of the learning process, we consider that the agent chooses one of the possible actions at random. Therefore, all edges are initialised with the same h value, \(h_{{ij}}^{(0)}=1\), which leads to a uniform probability distribution over the actions. In addition to an increase of the edge weights throughout the learning process, noise can be added by introducing a forgetting parameter γ (0≤γ<1) that quantifies how much the h values are damped towards their initial value. This can be interpreted as the agent forgetting part of its past experience. The specific update rule at the end of the interaction round for an edge connecting percept i to action j has the form, $$ h_{{ij}}^{(t+1)} \longleftarrow h_{{ij}}^{(t)} - \gamma (h_{{ij}}^{(t)}-h_{{ij}}^{(0)}) + R, $$ where \(h_{{ij}}^{(t)}\) denotes the current h value, \(h_{{ij}}^{(0)}\) the initialised h value at the beginning of the learning process and R≥0 the reward. If the transition from percept i to action j is rewarded, then R has a value R>0, whereas if it is not rewarded, R=0 and the edge weight is only damped. Note that this update rule increases the h value at the end of each interaction round, depending on whether a reward is given for that round. If one considers a scenario where the agent interacts for several rounds and only gets a reward at the end of the last one, then only the percept-action edge that is traversed in that round is enhanced. In order to reinforce all the percept-action pairs that led to a reward, a mechanism called glow is introduced as part of the model. The idea is to keep track of which edges are traversed during the interaction rounds before the reward is given. To do so, once an edge is traversed, a certain level of excitation or "glow" is associated to it with the effect that, when the next reward is given, all the "glowing" edges are enhanced in proportion to their glow level. The glow for each edge i→j is stored in the element gij of the so-called glow matrix g, and the update rule of Eq. (2) takes the form, $$ h_{{ij}}^{(t+1)} \longleftarrow h_{{ij}}^{(t)} - \gamma (h_{{ij}}^{(t)}-h_{{ij}}^{(0)}) + g_{{ij}} R. $$ Therefore, if that edge is glowing (gij=1) at the end of the interaction round where the reward is given, it will be enhanced. For the purpose of this work, it is sufficient for the reader to consider that edges get a glow value gij=1 if they are traversed and gij=0 if they are not. For more details on how to assign and update glow values in different scenarios, we refer the reader to [38]. 
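To make these update rules concrete, the following is a minimal sketch of a two-layered PS agent in Python. It is our illustration, not the authors' released CollectiveStinging code: the class name, the toy dimensions, and the parameter and reward values are ours, and only the ingredients defined in the text (the h matrix initialised to 1, the transition probabilities of Eq. (1), the glow matrix and the damped, glow-weighted update of Eq. (3)) are taken from the description above.

```python
import numpy as np

class TwoLayerPSAgent:
    """Illustrative two-layered Projective Simulation agent (not the paper's code)."""

    def __init__(self, n_percepts, n_actions, gamma=0.0, rng=None):
        self.h0 = np.ones((n_percepts, n_actions))  # h^(0): all edges start at 1
        self.h = self.h0.copy()                     # current h matrix
        self.glow = np.zeros_like(self.h)           # g matrix: marks traversed edges
        self.gamma = gamma                          # forgetting parameter, 0 <= gamma < 1
        self.rng = rng or np.random.default_rng()

    def act(self, percept):
        # Eq. (1): probability of action j given percept i is h_ij / sum_k h_ik
        probs = self.h[percept] / self.h[percept].sum()
        action = self.rng.choice(len(probs), p=probs)
        self.glow[percept, action] = 1.0            # this edge was traversed
        return action

    def update(self, reward):
        # Eq. (3): damping towards h^(0) plus glow-weighted reward (reward = 0 if none)
        self.h = self.h - self.gamma * (self.h - self.h0) + self.glow * reward
        self.glow[:] = 0.0                          # clear glow once the reward is applied

# Toy usage: 10 percepts (pheromone bins), 2 actions (0 = do not sting, 1 = sting).
agent = TwoLayerPSAgent(n_percepts=10, n_actions=2, gamma=0.001)
chosen = agent.act(percept=3)
agent.update(reward=0.8)
```

In the collective setting described below, all agents share a single h matrix and the glow entries count how many agents took each percept-action pair during a trial, but the form of the update stays the same.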
So far, we have described the main processes that a single PS agent carries out when interacting with its surroundings and learning via reinforcement. In this work, we model a scenario with an ensemble of PS agents, each of which has its own deliberation process and makes decisions independently from the rest of the ensemble. The precise details of how the agents interact with each other and how the collective performance is evaluated are given in the "Details of the model I: the bee and the learning process" section. There are a few remarks still to be made regarding the learning process of such an ensemble and its interpretation from a biological point of view. In this work, we do not assume that the individual biological entities have the capacity to learn, but we consider the learning process from an evolutionary perspective. Hence, the improvement of the collective performance of the ensemble of agents throughout the learning process can be interpreted as the adaptation of a given species to certain pressures throughout its entire evolutionary history. In the context of reinforcement learning, the selection pressure is encoded in the reward function, in such a way that we can test hypotheses about which environmental factors may have influenced the resultant behaviour we currently observe in the real organisms. In this view, the forgetting parameter would capture one aspect of genetic drift. Details of the model I: the bee and the learning process We consider a population of N bees, where each bee is modelled as a PS agent that perceives, decides, acts and learns according to the model explained in the previous section. Unless specified otherwise, we always use N=100. The population is confronted with the pressure of a predator that attacks the colony and kills agents until it is scared away, i.e. until it is stung a certain number of times. In this section, we describe how we model the colony and the learning process. We give further details on the model of the predator in the "Details of the model II: the predator" section. Honeybees release an alarm pheromone when their stinger is exposed, which allows them to alert and recruit nearby bees into a collective defensive response. Since we are interested in explaining the experimentally observed response of bees to the sting alarm pheromone, we consider that the agents decide whether to sting or not based on the pheromone concentration they perceive. The alarm pheromone concentration is discretised in our model and it increases by one unit every time an agent decides to sting. This is the only mechanism for SAP release that we consider. The honeybee SAP disperses very fast when the stinger is extruded [1], so we considered this increase to be immediate. In addition, this pheromone has been shown to accumulate within the time-frame relevant for an attack [44]. Hence, the minimum concentration is 0 units, when no agent stings, and the maximum is N units, if all agents sting. Note that each agent can only sting once, like a real bee (when a bee stings an animal with elastic skin such as a mammal or bird, its stinger remains hooked into the predator thanks to its barbs, and tears loose from the bee's abdomen). In order to define the structure of the ECM and the percepts of the PS model, we group the range of N pheromone units into bins of logarithmic size, where each bin corresponds to one percept (see Fig. 3). There are two reasons for this choice, exposed thereafter. 
Nonetheless, we also explored other types of binning, as presented in Additional file 2. First, the logarithmic sensing implies that the agents are able to resolve with high precision low values of pheromone concentration but that precision decreases as the concentration increases. On the one hand, with a range of pheromone units extending to N∼102, it is plausible to assume that the agent can distinguish between 0 and 1 pheromone units but a change by 1 pheromone unit becomes harder to sense when the concentration is of the order of e.g. 30 units. Furthermore, several species of animals present a logarithmic relationship between the stimuli and their perception, as it is the case for instance with human perception, which is quantified by the Weber-Fechner law. The second reason for this choice is related to the structure of the PS agents' interaction. We consider the collective interaction to be sequential, i.e. the agents perceive, decide and act one after the other until all of the N agents have made their decision and acted. This a priori artificial condition becomes more realistic with the logarithmic binning since several agents perceive the same percept and hence act based on the same input information, which is effectively as if they would react in parallel. An illustrative example of the sequential interaction, named trial, is given in Fig. 3. Each trial is always a defensive event that is triggered by an unspecified disturbance outside the nest. Each of the agents decides only once whether to sting or not; hence, the trial comprises N time steps, where a "time step" is defined to be the decision time of one agent. Since we want to model a situation as close as possible to the experimental one, we assume that our agents also have a visual perception of the predator during the whole process (the bees see a rotating dummy, as explained in the "Experimental material and methods" section). As in the experimental setup, we consider that this visual perception remains unchanged while the agent perceives the increase in the pheromone concentration (percepts 0 to 8, see Fig. 3). However, in our model, we add an additional percept (labelled as vESC in Fig. 3) that is activated when the visual perception of the predator changes and the agents see that the predator is already escaping and is no longer a threat to the colony. In this case, we assume that the visual input overrides the olfactory one so that the agents decide based on the visual information only. This is consistent with reports that show that while the alarm pheromone recruits bees to the location of the disturbance, a moving visual stimulus is then necessary to release the stinging behaviour [45]. In order to study the interplay between the visual and the olfactory information and their influence on the self-limitation of the stinging response, we consider the percept vESC to not be activated immediately after the predator stops attacking, but after some time delay Δtv. This time delay is more realistic than an immediate cut-off: it corresponds to the time needed for the bees to perceive that the predator is retreating. In addition, the predator may need time to move outside of the perimeter defended by the resident bees. In order to avoid unnecessary losses, we could expect the agents to learn to stop stinging already during the interval Δtv, based on the pheromone concentration (i.e. on the number of bees that already stung, which may be sufficient to deter any predator). 
We can thus analyse the two extreme cases with Δtv=0 and Δtv=∞, which correspond respectively to when bees are able to immediately distinguish that the predator is leaving (Δtv=0) or to when they completely ignore this visual cue (Δtv=∞). In the former case, we expect no self-limiting mechanism to develop, since the end of the attack is always immediately signalled by the visual stimulus. In the latter scenario however, unusually high alarm pheromone concentrations may start playing the role of this "stop signal". Note that in the experimental setup the dummy is always rotating; hence, the bees' reactions are modulated by the pheromone level exclusively. With respect to the learning process, the performance of the population is evaluated at the end of one sequential interaction (or trial) and the individuals are rewarded or not depending on the final state of the colony. Thus, the agents' ECMs do not change during the trial. At the end of it, a reward that is proportional to the number of surviving agents is given and the ECMs of the agents are updated accordingly. Importantly, the same h matrix is used for every agent of the population. The reason for this choice is that genetic mixing during reproduction means that, effectively, the average individual responses to the alarm pheromone are transmitted to the next generation [46]. One could easily imagine that a scenario in which certain agents always sting while others never do may lead to a viable collective strategy, but such a population structure could not be stably maintained across generations. As explained in the "Theoretical model: Projective Simulation" section, the update rule of Eq. (3), now given in matrix form, reads, $$ h^{(t+1)}=h^{(t)}-\gamma \left(h^{(t)}-h^{(0)}\right) + g R. $$ where h(t) is the h matrix at the current trial and h(0) denotes the initial h matrix (a 2×10 matrix with ones in all its entries). Note that agents start with a probability of stinging ps=0.5 for all the percepts. The already learned responses are damped by a factor γ at the end of each trial. The influence of this parameter on the learning process is further studied in Additional file 2. In this work, we adapt the notion of a glow matrix g presented in the "Theoretical model: Projective Simulation" section to take into account the choices of all agents and distribute the reward depending not on the individual performance but on the collective one. We remark that the learning process is, in our case, interpreted as the evolutionary history of honeybees. Therefore, even though there exist fluctuations at the individual level, we are interested in the average effect on the population. From this perspective, we consider a glow matrix g that stores how many agents chose an action given a certain percept. In the example of Fig. 3, the second column of g — the one corresponding to percept 1 — in that trial is (9,3), which indicates that 9 agents decided not to sting and 3 decided to sting. If the population is rewarded at the end of the trial, the individual responses that lead to a good collective performance are enhanced. For instance, if the optimal defence is that all the bees sting from the beginning, the individual probability of stinging for low pheromone concentrations will converge to a high value. As to the reward, it is determined by the percentage of bees that remain alive at the end of the trial, $$ R=\frac{a}{N}, $$ where a denotes the number of live bees. 
This number is evaluated at the end of each trial, by subtracting the number of dead bees from the total number of agents N. Bees die after stinging (s) or because they are killed by the predator (q), $$ a=N-s-q. $$ Note that the number of agents does not change during the trial (all N agents get to decide and act, bees killed by the predator are only counted at the end of the trial). This choice of reward system reflects the fact that honeybee colonies with a larger workforce are more successful ecologically: they have a better chance of surviving winter [47], and most importantly, they are more likely to be able to invest in reproduction in spring [48]. The reward function presented here is linear. We also explored a different scaling of this linear function and non-linear functions, and the results are included in Additional file 3: Figure S3. In the simulations reported here, the entire learning process consists of 80,000 trials, which is sufficient for the population to converge to a stable behaviour. We remark that the learning processes that the populations of PS agents undergo are interpreted as processes of adaptation to given evolutionary pressures. In this work, we focus on the defence behaviour against predators, so, by changing the parameters of our predator model, we are effectively testing how different pressures affect the final behaviour. This allows us to analyse possible causal explanations for the responses observed in present-day real bees. Details of the model II: the predator The predator has an active role in our model, since it attacks the hive and kills bees at a given killing rate of k bees per time step (for an exploration of the role of this parameter, see Additional file 3: Figure S4). Therefore, the colony needs to build up the defence as fast as possible to reduce the number of bees killed by the predator. Of particular importance is the time needed for the bees to detect the presence of the predator, which we parametrise as the time step at which the predator starts its attack tatt (see Fig. 3). A low value for this parameter simulates cases in which the predator is only detected close to the colony and hence starts killing bees quickly. At the opposite, a high value for tatt represents an early detection by the bees, when they have more time to fly out and build up the defensive response before the predator reaches the nest itself. The predator stops killing bees when the number of total stings reaches a threshold sth. By changing this parameter, we model the type of predator that the colony may encounter. As an example, one may consider that small predators such as mice can be killed or scared away with fewer stings than bigger or thick-skinned animals like bears and honey badgers. Thus, different sth can be interpreted as differences in the predator's resistance to bee stings. In the wild, bees regularly encounter a wide variety of predators, and they need to be able to cope with all of them. We model this situation by introducing a range of sth. Instead of being faced with only one type of predator (same sth in all trials), the colony is attacked by a predator with a different sth for different trials, which is chosen from a uniform distribution over a certain range. Therefore, the parameter sth gives us the flexibility to model different environmental conditions. For instance, we can model colonies of bees that are usually attacked by small/less resistant predators and observe the defensive strategy that they adopt. 
We can then study how they respond when suddenly faced with bigger/more resilient predators, thus mimicking their introduction into a novel environment. Since the agents can only develop one strategy (i.e. a set of probabilities of stinging for the various percepts) per learning process, they have to optimise it to accommodate the whole range of sth. Note that the activation of the visual percept vESC — which happens when the visual information changes and the predator is seen leaving — allows them to stop the defence behaviour at different points of the trial depending on the specific sth of each predator. In particular, percept vESC is activated after Δtv time steps from the point where the number of stings reaches the corresponding sth. The theoretical results were obtained using the code CollectiveStinging, available at https://github.com/qic-ibk/CollectiveStinging. The experimental data are available upon request to MN ([email protected]). ECM: Episodic and compositional memory Isoamyl acetate Photoionization detector Projective Simulation SAP: Sting alarm pheromone Nouvian M, Reinhard J, Giurfa M. The defensive response of the honeybee Apis mellifera. J Exp Biol. 2016; 219(Pt 22):3505–17. https://doi.org/10.1242/jeb.143016. Ghent RL, Gary NE. A chemical alarm releaser in honey bee stings (Apis mellifera L.)Psyche. 1962; 69:1–6. https://doi.org/10.1155/1962/39293. Southwick EE, Moritz RFA. Metabolic response to alarm pheromone in honey bees. J Insect Physiol. 1985; 31(5):389–92. https://doi.org/10.1016/0022-1910(85)90083-6. Collins AM, Rinderer TE, Tucker KW, Pesante DG. Response to alarm pheromone by European and Africanized honeybees. J Apic Res. 1987; 26(4):217–23. https://doi.org/10.1080/00218839.1987.11100763. Lensky Y, Cassier P, Tel-Zur D. The setaceous membrane of honey bee (Apis mellifera L.) workers' sting apparatus: structure and alarm pheromone distribution. J Insect Physiol. 1995; 41(7):589–95. https://doi.org/10.1016/0022-1910(95)00007-H. Nouvian M, Hotier L, Claudianos C, Giurfa M, Reinhard J. Appetitive floral odours prevent aggression in honeybees. Nat Commun. 2015; 6:10247. doi:10.1038/ncomms10247. Boch R, Shearer DA, Petrasovits A. Efficacies of two alarm substances of the honey bee. J Insect Physiol. 1970; 16:17–24. https://doi.org/10.1016/0022-1910(70)90108-3. Dell AI, Bender JA, Branson K, Couzin ID, de Polavieja GG, Noldus LPPJ, et al. Automated imaging-based tracking and its application to ecology. Trends Ecol Evol. 2014; 29:417–28. https://doi.org/10.1016/j.tree.2014.05.004. Buhl J, Sumpter DJT, Couzin ID, Hale J, Despland E, Miller E, et al. From disorder to order in marching locusts. Science. 2006; 312:1402–6. https://doi.org/10.1126/science.1125142. Lopez U, Gautrais J, Couzin ID, Theraulaz G. From behavioural analyses to models of collective motion in fish schools. Interf Focus. 2012; 2(6):693–707. https://doi.org/10.1098/rsfs.2012.0033. Herbert-Read JE. Understanding how animal groups achieve coordinated movement. J Exp Biol. 2016; 219(Pt 19):2971–83. https://doi.org/10.1242/jeb.129411. Vicsek T, Czirók A, Ben-Jacob E, Cohen I, Shochet O. Novel type of phase transition in a system of self-driven particles. Phys Rev Lett. 1995; 75:1226–9. https://doi.org/10.1103/PhysRevLett.75.1226. Duarte A, Weissing FJ, Pen I, Keller L. An evolutionary perspective on self-organized division of labor in social insects. Annu Rev Ecol Evol Syst. 2011; 42:91–110. https://doi.org/10.1146/annurev-ecolsys-102710-145017. Duarte A, Scholtens E, Weissing FJ. 
Implications of behavioral architecture for the evolution of self-organized division of labor. PLoS Comput Biol. 2012; 8:e1002430. https://doi.org/10.1371/journal.pcbi.1002430. Lichocki P, Tarapore D, Keller L, Floreano D. Neural networks as mechanisms to regulate division of labor. Am Nat. 2012; 179:391–400. https://doi.org/10.1086/664079. Chen R, Meyer B, Garcia J. A computational model of task allocation in social insects: ecology and interactions alone can drive specialisation. Swarm Intell. 2020; 14:143–70. https://doi.org/10.1007/s11721-020-00180-4. Izquierdo LR, Izquierdo SS, Vega-Redondo F. Learning and evolutionary game theory In: Seel NM, editor. Encyclopedia of the sciences of learning. Boston, MA: Springer: 2012. p. 1782–8. Briegel HJ, De las Cuevas G. Projective simulation for artificial intelligence. Sci Rep. 2012; 2:400. https://doi.org/10.1038/srep00400. Requier F, Garnery L, Kohl PL, Njovu HK, Pirk CWW, Crewe RM, et al. The conservation of native honey bees is crucial. Trends Ecol Evol. 2019; 34(9):789–98. https://doi.org/10.1016/j.tree.2019.04.008. Breed MD, Guzman-Novoa E, Hunt GJ. Defensive behavior of honey bees: organization, genetics, and comparisons with other bees. Ann Rev Entomol. 2004; 49:271–98. https://doi.org/10.1146/annurev.ento.49.061802.123155. Guzman-Novoa E, Hunt GJ, Uribe-Rubio JL, Prieto-Merlos D. Genotypic effects of honey bee (Apis mellifera) defensive behavior at the individual and colony levels: the relationship of guarding, pursuing and stinging. Apidologie. 2004; 35(1):15–24. https://doi.org/10.1051/apido:2003061. López-Incera A, Ried K, Müller T, Briegel HJ. Development of swarm behavior in artificial learning agents that adapt to different foraging environments. PLoS ONE. 2020; 15(12):e0243628. https://doi.org/10.1371/journal.pone.0243628. Hunt GJ. Flight and fight: a comparative view of the neurophysiology and genetics of honey bee defensive behavior. J Insect Physiol. 2007; 53(5):399–410. https://doi.org/10.1016/j.jinsphys.2007.01.010. Winston ML. Killer bees. The Africanized honey bee in the Americas. Cambridge, Massachusetts: Harvard University Press; 1992. Roubik DW. Ecology and natural history of tropical bees. Cambridge, UK: Cambridge University Press; 1989. https://doi.org/10.1017/CBO9780511574641. Carter S, du Plessis T, Chwalibog A, Sawosz E. The honey badger in South Africa: biology and conservation. Int J Avian Wildl Biol. 2017; 2(2):00091. https://doi.org/10.15406/ijawb.2017.02.00016. Gebretsadik T. Survey on honeybee pests and predators in Sidama and Gedeo zones of Southern Ethiopia with emphasis on control practices. Agric Biol J N Am. 2016; 7(4):173–81. https://doi.org/10.5251/abjna.2016.7.4.173.181. Schmidt JO. Evolutionary responses of solitary and social Hymenoptera to predation by primates and overwhelmingly powerful vertebrate predators. J Hum Evol. 2014; 71:12–9. https://doi.org/10.1016/j.jhevol.2013.07.018. Sandoz JC. Behavioral and neurophysiological study of olfactory perception and learning in honeybees. Front Syst Neurosci. 2011; 5:98. https://doi.org/10.3389/fnsys.2011.00098. Robinson GE. Modulation of alarm pheromone perception in the honey bee: evidence for division of labor based on hormonally regulated response thresholds. J Comp Physiol A. 1987; 160(5):613–9. https://doi.org/10.1007/BF00611934. Ayroles JF, Buchanan SM, O'Leary C, Skutt-Kakaria K, Grenier JK, Clark AG, et al. Behavioral idiosyncrasy reveals genetic control of phenotypic variability. Proc Natl Acad Sci U S A. 2015; 112(21):6706–11. 
https://doi.org/10.1073/pnas.1503830112. Billen J. Signal variety and communication in social insects. In: Proceedings of the Netherlands Entomological Society Meeting: 2006. p. 17. Dussutour A, Fourcassie V, Helbing D, Deneubourg JL. Optimal traffic organization in ants under crowded conditions. Nature. 2004; 428(6978):70–3. https://doi.org/10.1038/nature02345. Collins AM, Blum SM. Bioassay of compounds derived from the honeybee sting. J Chem Ecol. 1982; 8(2):463–9. https://doi.org/10.1007/BF00987794. Collins AM, Blum MS. Alarm responses caused by newly identified compounds derived from the honeybee sting. J Chem Ecol. 1983; 9(1):57–65. https://doi.org/10.1007/BF00987770. Boch R, Shearer DA, Stone BC. Identification of isoamyl acetate as an active component in the sting pheromone of the honey bee. Nature. 1962; 195:1018–20. https://doi.org/10.1038/1951018b0. Free JB, Simpson J. The alerting pheromones of the honeybee. Z Vergleichende Physiol. 1968; 61:361–5. https://doi.org/10.1007/BF00428008. Mautner J, Makmal A, Manzano D, Tiersch M, Briegel HJ. Projective simulation for classical learning agents: a comprehensive investigation. New Gener Comput. 2015; 33(1):69–114. https://doi.org/10.1007/s00354-015-0102-0. Makmal A, Melnikov AA, Dunjko V, Briegel HJ. Meta-learning within projective simulation. IEEE Access. 2016; 4:2110–22. https://doi.org/10.1109/access.2016.2556579. Melnikov AA, Makmal A, Briegel HJ. Benchmarking projective simulation in navigation problems. IEEE Access. 2018; 6:64639–48. https://doi.org/10.1109/ACCESS.2018.2876494. Ried K, Eva B, Müller T, Briegel HJ. How a minimal learning agent can infer the existence of unobserved variables in a complex environment. preprint arXiv:191006985v1. 2019. Available from: https://arxiv.org/pdf/1910.06985.pdf. Hangl S, Dunjko V, Briegel HJ, Piater J. Skill learning by autonomous robotic playing using active learning and exploratory behavior composition. Front Robot AI. 2020; 7:42. https://doi.org/10.3389/frobt.2020.00042. Ried K, Müller T, Briegel HJ. Modelling collective motion based on the principle of agency: general framework and the case of marching locusts. PLoS ONE. 2019; 14:e0212044. https://doi.org/10.1371/journal.pone.0212044. Millor J, Pham-Delegue M, Deneubourg JL, Camazine S. Self-organized defensive behavior in honeybees. Proc Natl Acad Sci U S A. 1999; 96(22):12611–5. https://doi.org/10.1073/pnas.96.22.12611. Wager BR, Breed MD. Does honey bee sting alarm pheromone give orientation information to defensive bees?Ann Entomol Soc Am. 2000; 93:1329–32. https://doi.org/10.1603/0013-8746(2000)093[1329:DHBSAP]2.0.CO;2. Avalos A, Fang M, Pan H, Ramirez Lluch A, Lipka AE, Zhao SD, et al. Genomic regions influencing aggressive behavior in honey bees are defined by colony allele frequencies. Proc Natl Acad Sci U S A. 2020; 117(29):17135–41. https://doi.org/10.1073/pnas.1922927117. Doke MA, Frazier M, Grozinger CM. Overwintering honey bees: biology and management. Curr Opin Insect Sci. 2015; 10:185–93. https://doi.org/10.1016/j.cois.2015.05.014. Smith ML, Ostwald MM, Loftus JC, Seeley TD. A critical number of workers in a honeybee colony triggers investment in reproduction. Naturwissenschaften. 2014; 101:783–90. https://doi.org/10.1007/s00114-014-1215-x. The experimental data was gathered with the help of Maxime Pocher and Karoline Weich. MN was supported by a DFG research grant (project number 414260764) and by the Zukunftskolleg (University of Konstanz). 
AL and HJB were supported by the Austrian Science Fund (FWF) through grant SFB BeyondC F7102. HJB was supported by the Ministerium für Wissenschaft, Forschung, und Kunst Baden-Württemberg (AZ:33-7533-30-10/41/1). HJB and TM were supported by Volkswagen Foundation (Az:97721) and a BlueSky project of the University of Konstanz. Open Access funding enabled and organized by Projekt DEAL. Andrea L\'{o}pez-Incera and Morgane Nouvian contributed equally to this work. Institute for Theoretical Physics, Universität Innsbruck, Innsbruck, 6020, Austria Andrea López-Incera, Katja Ried & Hans J. Briegel Department of Biology, Universität Konstanz, Konstanz, 78457, Germany Morgane Nouvian Zukunftskolleg, Universität Konstanz, Konstanz, 78457, Germany Centre for the Advanced Study of Collective Behavior, Universität Konstanz, Konstanz, 78457, Germany Department of Philosophy, Universität Konstanz, Konstanz, 78457, Germany Thomas Müller & Hans J. Briegel Andrea López-Incera Katja Ried Hans J. Briegel All authors participated in the design and conception of the study, as well as in writing and editing the manuscript. MN collected and analysed the experimental results. AL wrote the script for the theoretical model and performed all simulations. All authors read and approved the final manuscript. Correspondence to Morgane Nouvian. This study was carried out in accordance with the recommendations of the "3Rs" principles as stated in the Directive 2010/63/EU governing animal use within the European Union. The use of (non-transgenic) honeybees for research purposes has been reported to the "Regierungspräsidium" as required. Learned probabilities of stinging and total number of stings for the limiting cases with Δtv=0,∞. Study of the influence of the forgetting parameter (hyperparameter optimisation) and the sensing resolution in the learning process. Study of the influence of the reward function used and of the killing rate. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. López-Incera, A., Nouvian, M., Ried, K. et al. Honeybee communication during collective defence is shaped by predation. BMC Biol 19, 106 (2021). https://doi.org/10.1186/s12915-021-01028-x Collective behaviour
Existence of periodic solutions for a higher-order neutral difference equation

Jiao-Ling Zhang and Gen-Qiang Wang. Published: 5 December 2018

Based on a continuation theorem of Mawhin, the existence of a periodic solution for a higher-order nonlinear neutral difference equation is studied. Our conclusion is new and interesting.

Keywords: Periodic solutions, Neutral difference equation, Continuation theorem

The theory of periodic solutions of differential equations and difference equations has important academic value and application background. It has attracted considerable attention, and many good results have been achieved. For example, see articles [1–8] and the references therein. However, as far as we know, results on the periodic solutions of neutral difference equations are relatively few (see [7, 8]). In this paper, we study the periodic solutions of a higher-order nonlinear neutral difference equation of the form $$ \Delta ^{k} [ u_{n}+cu_{n-\sigma } ] =g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}},\ldots,u_{n-\tau _{l}}), \quad {n\in Z}, $$ where k is a positive integer, c is a real number different from −1 and 1, σ and \(\tau _{i}\) are integers for \(i\in \{ 1,2,\ldots,l \} \), \(g_{n}\in C(R^{l},R)\) for \(n\in Z\) and \(g_{n}=g_{n + \omega }\), where ω is a positive integer which satisfies \(\omega \geq 2\). We use a continuation theorem to give some criteria for the existence of a periodic solution of (1), and our conclusion is new and interesting. A solution of (1) is a real sequence of the form \(x= \{ x_{n} \} _{n\in Z}\) which renders (1) into an identity after substitution. As usual, a solution of (1) of the form \(x= \{ x_{n} \} _{n\in Z}\) is said to be ω-periodic if \(x_{n+\omega }=x_{n}\) for \(n\in Z\).

We also state Mawhin's continuation theorem (see [1]). Let X and Y be two Banach spaces, let \(L : \operatorname{Dom} L\subset X \rightarrow Y\) be a linear mapping and let \(N:X \rightarrow Y\) be a continuous mapping. The mapping L is called a Fredholm mapping of index zero if \(\operatorname{dim} \operatorname{Ker} L = \operatorname{codim} \operatorname{Im} L <+\infty \), and ImL is closed in Y. If L is a Fredholm mapping of index zero, then there exist continuous projectors \(P : X\rightarrow X\) and \(Q:Y\rightarrow Y\) such that \(\operatorname{Im} P = \operatorname{Ker} L\) and \(\operatorname{Im} L = \operatorname{Ker} Q =\operatorname{Im} (I-Q)\). It follows that \(L_{|\operatorname{Dom} L \cap \operatorname{Ker} P} : (I-P) X\rightarrow \operatorname{Im} L\) has an inverse which is denoted by \(K_{p}\). If Ω is an open and bounded subset of X, then the mapping N is called L-compact on Ω̅ when \(QN(\overline{\varOmega })\) is bounded and \(K_{P} (I-Q) N:\overline{\varOmega } \rightarrow X\) is compact. Since ImQ is isomorphic to KerL, there exists an isomorphism \(J:\operatorname{Im}Q\rightarrow \operatorname{Ker} L\).

Theorem A (Mawhin's continuation theorem [1]) Let L be a Fredholm mapping of index zero, and let N be L-compact on Ω̅. Suppose for each \(\lambda \in (0,1)\), \(x\in \partial \varOmega \), \(Lx\neq \lambda Nx \); and for each \(x \in \partial \varOmega \cap \operatorname{Ker} L\), \(QNx\neq 0\) and \(\deg (JQN,\varOmega \cap \operatorname{Ker} L,0)\neq 0\). Then the equation \(Lx=Nx\) has at least one solution in \(\overline{\varOmega }\cap \operatorname{Dom} L\).

2 Main result

The main result of this paper is as follows:

Theorem 2.1 Let \(|c|\neq 1\).
Assume that there exist constants \(D>0\), \(\alpha \geq 0\), and \(\beta \geq 0\) such that \(|g_{n}(x_{1},x_{2},\ldots,x_{l})|\leq \beta \max_{1\leq i\leq l}|x_{i}|+\alpha \) for \(n\in Z\) and \((x_{1},x_{2},\ldots,x_{l})^{T}\in R^{l}\), \(g_{n}(x_{1},x_{2},\ldots,x_{l})>0\) for \(n\in Z\) and \(x_{1},x_{2},\ldots,x_{l}\geq D\), \(g_{n}(x_{1},x_{2},\ldots,x_{l})<0\) for \(n\in Z\) and \(x_{1},x_{2},\ldots,x_{l}\leq -D\). Then the higher-order neutral difference equation (1) has an ω-periodic solution when \(\omega ^{k}\beta <2^{k}|1-|c||\). Remark 2.1 When \(g_{n}\) in (1) is replaced by \(-g_{n}\),the result of Theorem 2.1 still holds. Next, some preparations are presented to prove our theorems. Let \(X_{\omega } \) be the Banach space of all real ω-periodic sequences of the form \(x=\{x_{n}\}_{n\in Z}\) and endowed with the usual linear structure as well as the norm \(\vert x \vert _{\infty }=\max_{1\leq i\leq \omega } \vert x_{i} \vert \). Define the mappings \(L:X_{\omega }\rightarrow X_{\omega }\) and \(N:X_{\omega }\rightarrow X_{\omega }\) respectively by $$ ( Lx ) _{n}=\Delta ^{k} [ x_{n}+cx_{n-\sigma } ] ,\quad n\in Z, $$ $$ ( Nx ) _{n}=g_{n}(x_{n-\tau _{1}},x_{n-\tau _{2}}, \ldots,x_{n-\tau _{l}}),\quad n\in Z. $$ It is easy to see that L is a linear mapping. Similar to the paper [8], in case \(\vert c \vert \neq 1\), direct calculation shows that \(\operatorname{Ker} L= \{ x\in X_{\omega } \vert x_{n}=x_{0},n\in Z \} \). Since \(\operatorname{dim} X_{\omega }=\omega \) and \(L:X_{\omega }\rightarrow X_{\omega }\) is a linear mapping, by the knowledge of linear algebra, we know that \(\operatorname{dim}\operatorname{Ker} L\bigoplus \dim \operatorname{Im} L=\dim X_{\omega }\). It is easy to see that \(\operatorname{dim}\operatorname{Ker} L = \operatorname{codim} \operatorname{Im} L=1\) and \(\operatorname{dim} \operatorname{Im} L=\omega -1\). It follows that ImL is closed in \(X_{\omega }\). Thus L is a Fredholm mapping of index zero. Now, we assert that $$ \operatorname{Ker}L\bigoplus \operatorname{Im}L=X_{\omega }. $$ To do that, we just have to prove that $$ \operatorname{Ker}L\cap \operatorname{Im}L=0. $$ Indeed, if \(y=\{y_{n}\}_{n\in Z}\in \operatorname{Im} L\), then there is \(x=\{x_{n}\}_{n\in Z}\in X_{\omega }\) such that $$ y_{n}=\Delta ^{k} [ x_{n}+cx_{n-\sigma } ] , \quad n\in Z. $$ Thus $$ \sum_{i=1}^{\omega }y_{i}=\sum _{i=1}^{\omega }\Delta ^{k} [ x_{i}+cx_{i-\sigma } ] . $$ Note that \(x=\{x_{n}\}_{n\in Z} \in X_{\omega }\). It follows that \(\{\Delta ^{k-1}x_{n}\}_{n\in Z} \in X_{\omega }\). Furthermore, direct calculation shows that $$\begin{aligned} \sum_{i=1}^{\omega }\Delta ^{k}x_{i} =& \bigl[ \Delta ^{k-1}x_{2}-\Delta ^{k-1}x_{1} \bigr] + \bigl[ \Delta ^{k-1}x_{3}-\Delta ^{k-1}x_{2} \bigr] +\cdots \\ &{}+ \bigl[ \Delta ^{k-1}x_{\omega }-\Delta ^{k-1}x_{\omega -1} \bigr] + \bigl[ \Delta ^{k-1}x_{\omega +1}-\Delta ^{k-1}x_{\omega } \bigr] \\ =&\Delta ^{k-1}x_{\omega +1}-\Delta ^{k-1}x_{1}= \Delta ^{k-1}x_{1}-\Delta ^{k-1}x_{1}=0. \end{aligned}$$ By (7) and (8), we have $$ \sum_{i=1}^{\omega }y_{i}=\sum _{i=1}^{\omega }\Delta ^{k} [ x_{i}+cx_{i-\sigma } ] =\sum_{i=1}^{\omega } \Delta ^{k}x_{i}+c\sum_{i=1}^{\omega } \Delta ^{k}x_{i-\sigma }=0+0=0. $$ We see that for any \(u= \{ u_{n} \} _{n\in Z}\in \operatorname{Ker}L\cap \operatorname{Im}L\), then \(u= \{ u_{n} \} _{n\in Z}\in \operatorname{Ker}L\) and \(u= \{ u_{n} \} _{n\in Z}\in \operatorname{Im}L\). 
Because of \(u= \{ u_{n} \} _{n\in Z}\in \operatorname{Ker}L\) and \(\operatorname{Ker} L= \{ x\in X_{\omega } \vert x_{n}=x_{0},n\in Z \} \), thus for any \(n\in Z\), we have $$ u_{n}=\frac{1}{\omega }\sum_{i=1}^{\omega }u_{i}. $$ On the other hand, since \(u= \{ u_{n} \} _{n\in Z}\in \operatorname{Im}L\), by (9), we have $$ \sum_{i=1}^{\omega }u_{i}=0. $$ By (10) and (11), we see that, for any \(n\in Z\), \(u_{n}=0\). This implies that (5) is true, that (4) is true. Now, for any \(u= \{ u_{n} \} _{n\in Z}\in X_{\omega }\), if $$u=x+y, $$ where \(x=\{x_{n}\}_{n\in Z}\in \operatorname{Ker} L\) and \(y=\{y_{n}\}_{n\in Z}\in \operatorname{Im} L\), then $$ x_{n}=\frac{1}{\omega }\sum_{i=1}^{\omega }u_{i}, \quad n\in Z, $$ $$y_{n}=u_{n}-\frac{1}{\omega }\sum _{i=1}^{\omega }u_{i},\quad n\in Z. $$ As in paper [8], we define \(P=Q:X_{\omega }\rightarrow X_{\omega }\) by $$ ( Px ) _{n}= ( Qx ) _{n}=\frac{1}{\omega }\sum _{i=1}^{\omega }x_{i},\quad n\in Z. $$ The operators P and Q are projections. We have \(\operatorname{Im} P=\operatorname{Ker} L\), \(\operatorname{Ker} Q=\operatorname{Im} L\), and \(X_{\omega }=\operatorname{Ker} P\bigoplus \operatorname{Ker} L=\operatorname{Im} L\bigoplus \operatorname{Im} Q\). It follows that \(L_{|\operatorname{Dom}L \cap \operatorname{Ker} P}: (I-P) X_{\omega }\rightarrow \operatorname{Im} L\) has an inverse which is denoted by \(K_{p}\). By (3) and (13), we see that, for any \(x=\{x_{n}\}_{n\in Z}\in X_{\omega }\), $$ ( QNx ) _{n}=\frac{1}{\omega }\sum_{i=1}^{\omega }g_{i}(x_{i-\tau _{1}},x_{i-\tau _{2}}, \ldots,x_{i-\tau _{l}}),\quad n\in Z, $$ $$ \bigl( ( I-Q ) Nx \bigr) _{n}=g_{n}(x_{n-\tau _{1}},x_{n-\tau _{2}}, \ldots,x_{n-\tau _{l}})-\frac{1}{\omega }\sum_{i=1}^{\omega }g_{i}(x_{i-\tau _{1}},x_{i-\tau _{2}}, \ldots,x_{i-\tau _{l}}),\quad n\in Z. $$ Since the Banach space \(X_{\omega }\) is finite dimensional, \(K_{p }\) is linear. By relations (14) and (15), we see that QN and \(K_{p} ( I-Q ) N\) are continuous on \(X_{\omega }\) and take bounded sets into bounded sets respectively. Thus, we know that if Ω is an open and bounded subset of \(X_{\omega }\), then the mapping N is called L-compact on Ω̅. Lemma 2.1 (see [7]) Let \(\{ u_{n } \} _{n\in Z}\) be a real ω-periodic sequence, then we have $$ \max_{1\leq s,t\leq \omega } \vert u_{s}-u_{t} \vert \leq \frac{1}{2}\sum_{n=1}^{\omega } \vert \Delta u_{n} \vert , $$ where the constant factor \(1/2\) is the best possible. Let \(\{ u_{n } \} _{n\in Z}\) be a real ω-periodic sequence, then $$ \sum_{n=1}^{\omega } \vert \Delta u_{n} \vert \leq \frac{\omega }{2}\sum_{n=1}^{\omega } \bigl\vert \Delta ^{2}u_{n} \bigr\vert . $$ If \(|c|\neq 1\) and \(\{ u_{n } \} _{n\in Z}\) is a real ω-periodic sequence, then $$ \max_{1\leq s,t\leq \omega } \vert u_{s}-u_{t} \vert \leq \frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert \Delta ^{k} [ u_{n}+cu_{n-\sigma } ] \bigr\vert . $$ $$\begin{aligned} \sum_{n=1}^{\omega } \bigl\vert \Delta ^{k} [ u_{n}+cu_{n-\sigma } ] \bigr\vert \geq & \Biggl\vert \sum_{n=1}^{\omega } \bigl\vert \Delta ^{k}u_{n} \bigr\vert - \vert c \vert \sum _{n=1}^{\omega } \bigl\vert \Delta ^{k}u_{n-\sigma } \bigr\vert \Biggr\vert \\ =& \bigl\vert 1- \vert c \vert \bigr\vert \sum_{n=1}^{\omega } \bigl\vert \Delta ^{k}u_{n} \bigr\vert . \end{aligned}$$ It follows that $$ \sum_{n=1}^{\omega } \bigl\vert \Delta ^{k}u_{n} \bigr\vert \leq \frac{1}{ \vert 1- \vert c \vert \vert }\sum _{n=1}^{\omega } \bigl\vert \Delta ^{k} [ u_{n}+cu_{n-\sigma } ] \bigr\vert . 
$$ If \(k=1\), by Lemma 2.1 and (19), then $$\max_{1\leq s,t\leq \omega } \vert u_{s}-u_{t} \vert \leq \frac{1}{2 \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert \Delta [ u_{n}+cu_{n-\sigma } ] \bigr\vert . $$ If \(k\geqslant 2\), by Lemma 2.2, then $$\begin{aligned} \sum_{i=1}^{\omega } \vert \Delta u_{i} \vert \leq &\frac{\omega }{2}\sum _{n=1}^{\omega } \bigl\vert \Delta ^{2}u_{n} \bigr\vert \\ \leq &\frac{\omega ^{2}}{2^{2}}\sum_{n=1}^{\omega } \bigl\vert \Delta ^{3}u_{n} \bigr\vert \leq \cdot \cdot \cdot \\ \leq &\frac{\omega ^{k-1}}{2^{k-1}}\sum_{n=1}^{\omega } \bigl\vert \Delta ^{k}u_{n} \bigr\vert . \end{aligned}$$ In view of (19) and (20), we have $$\begin{aligned} \max_{1\leq s,t\leq \omega } \vert u_{s}-u_{t} \vert \leq &\frac{\omega ^{k-1}}{2^{k}}\sum_{n=1}^{\omega } \bigl\vert \Delta ^{k}u_{n} \bigr\vert \leq \frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert \Delta ^{k} [ u_{n}+cu_{n-\sigma } ] \bigr\vert . \end{aligned}$$ The proof is completed. □ Proof of Theorem 2.1 Consider the system $$ ( Lx ) _{n}=\lambda ( Nx ) _{n},\quad n\in Z, $$ where \(\lambda \in ( 0,1 ) \) is a parameter. Let \(u\in X_{\omega }\) be a solution of (21). By (2), (3), and (21), $$ \Delta ^{k} [ u_{n}+cu_{n-\sigma } ] =\lambda g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}},\ldots,u_{n-\tau _{l}}),\quad n\in Z. $$ Let \(u_{\xi }= \max_{1\leq n\leq \omega } u_{n }\) and \(u_{\eta }=\min_{1\leq n\leq \omega } u_{n}\). By Lemma 2.3 and (21), $$\begin{aligned} u_{\xi }-u_{\eta } =&\max_{1\leq s,t\leq \omega } \vert u_{s}-u_{t} \vert \\ \leq &\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert \Delta ^{k} [ u_{n}+cu_{n-\sigma } ] \bigr\vert \\ \leq &\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau _{l}}) \bigr\vert . \end{aligned}$$ If there exists a constant \(m\in \{ 1,2,\ldots,\omega \} \) such that \(\vert u_{m } \vert < D\), by (23), for any \(n\in Z\), then $$\begin{aligned} \vert u_{n } \vert \leq & \vert u_{m } \vert + \vert u_{n}-u_{m } \vert \\ \leq &D+\max_{1\leq s,t\leq \omega } \vert u_{s}-u_{t} \vert \\ \leq &D+\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau _{l}}) \bigr\vert . \end{aligned}$$ Otherwise by (22), $$ \sum_{n=1}^{\omega }g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau_{l}})=0 . $$ In view of conditions (II), (III) and (25), we know \(u_{\xi }\geq D\) and \(u_{\eta }\leq -D\). By (23), $$\begin{aligned} u_{\xi } \leq &u_{\eta }+\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum _{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau _{l}}) \bigr\vert \\ \leq &-D+\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau _{l}}) \bigr\vert , \end{aligned}$$ $$\begin{aligned} u_{\eta } \geq &u_{\xi }-\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum _{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau _{l}}) \bigr\vert \\ \geq &D-\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau _{l}}) \bigr\vert . 
\end{aligned}$$ By (26) and (27), for any \(n\in Z\), $$\begin{aligned} D-\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}},\ldots,u_{n-\tau _{l}}) \bigr\vert \leq u_{\eta }\leq u_{n}\leq u_{\xi } \\ \leq -D+\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum_{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau _{l}}) \bigr\vert . \end{aligned}$$ From (24) and (28), for any \(n\in Z\), we have $$\vert u_{n} \vert \leq D+\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum _{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau _{l}}) \bigr\vert . $$ $$ \vert u \vert _{\infty }\leq D+\frac{\omega ^{k-1}}{2^{k} \vert 1- \vert c \vert \vert }\sum _{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}}, \ldots,u_{n-\tau _{l}}) \bigr\vert . $$ By condition (I), $$\begin{aligned} \sum_{n=1}^{\omega } \bigl\vert g_{n}(u_{n-\tau _{1}},u_{n-\tau _{2}},\ldots,u_{n-\tau _{l}}) \bigr\vert \leq &\omega \beta \max_{1\leq i\leq l} \vert u_{n-\tau _{i}} \vert +\omega \alpha \\ \leq &\omega \beta \vert u \vert _{\infty }+\omega \alpha . \end{aligned}$$ By (29) and (30), $$ \vert u \vert _{\infty }\leq \frac{C}{1-\rho }, $$ where \(C=D+\frac{\omega ^{k}\alpha }{2^{k}|1-|c||}\) and \(\rho =\frac{\omega ^{k}\beta }{2^{k}|1-|c||}\). $$\varOmega = \bigl\{ u\in X_{\omega }: \vert u \vert _{\infty }< \overline{D} \bigr\} , $$ where D̅ is a fixed number which satisfies \(\overline{D}>D+\frac{C}{1-\rho }\). We have that Ω is an open and bounded subset of \(X_{\omega }\). By (31), for each \(\lambda \in ( 0,1 ) \), \(u\in \partial \varOmega \), \(Lu\neq \lambda Nu\). If \(u\in \partial \varOmega \cap \operatorname{Ker} L\), then \(u= \{ \overline{D} \} _{n\in Z}\) or \(u= \{ -\overline{D} \} _{n\in Z}\). By (13), $$( QNu ) _{n}=\frac{1}{\omega }\sum_{i=1}^{\omega }g_{i}(u_{i-\tau _{1}},u_{i-\tau _{2}}, \ldots,u_{i-\tau _{l}})\neq 0,\quad n\in Z. $$ In particular, we see that if \(u= \{ \overline{D} \} _{n\in Z}\), then $$( QNu ) _{n}=\frac{1}{\omega }\sum_{i=1}^{\omega }g_{i}( \overline{D},\overline{D},\ldots,\overline{D})>0,\quad n\in Z, $$ and if \(u= \{ -\overline{D} \} _{n\in Z}\), then $$( QNu ) _{n}=\frac{1}{\omega }\sum_{i=1}^{\omega }g_{i}(- \overline{D},-\overline{D},\ldots,\overline{-D})< 0,\quad n\in Z. $$ This indicates $$\deg ( QN,\varOmega \cap \operatorname{Ker} L,0 ) \neq 0. $$ By Theorem A, we see that the equation \(Lx=Nx\) has at least one solution in \(\overline{\varOmega }\cap \operatorname{Dom} L\). In other words, (1) has an ω-periodic solution. The proof is completed. □ Example 3.1 The difference equation $$ \Delta ^{4} \biggl[ u_{n}+\frac{1}{3}u_{n-2} \biggr] =\frac{10}{243} \biggl[ 2-\sin \biggl( \frac{2n\pi }{3} \biggr) \biggr] ( u_{n-2} ) ^{\frac{1}{3}} ( u_{n-1} ) ^{\frac{1}{3}} ( u_{n} ) ^{\frac{1}{3}} $$ is one of the form (1), where \(k=4\), \(c=\frac{1}{3}\), \(\sigma =2\), \(l=3\), \(\tau _{1}=2\), \(\tau _{2}=1\), \(\tau _{3}=0\), and $$g_{n}(u_{1},u_{2},u_{3})= \frac{10}{243} \biggl[ 2-\sin \biggl( \frac{2n\pi }{3} \biggr) \biggr] ( u_{1} ) ^{\frac{1}{3}} ( u_{2} ) ^{\frac{1}{3}} ( u_{3} ) ^{\frac{1}{3}}. $$ We can prove that (32) has a 3-periodic nontrivial solution. Indeed, let \(D=1\), \(\beta =\frac{10}{81}\), and \(\alpha =1\). Then the conditions of Theorem 2.1 are satisfied. Therefore (32) has a 3-periodic solution. Furthermore, the solution is nontrivial since \(g_{n}(0,0,0)\) is not identically zero. 
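For the reader's convenience, the smallness condition of Theorem 2.1 can be checked explicitly for this example (this verification is ours and is not part of the original argument): with \(\omega =3\), \(k=4\), \(\beta =\frac{10}{81}\) and \(c=\frac{1}{3}\),

$$ \omega ^{k}\beta =3^{4}\cdot \frac{10}{81}=10< \frac{32}{3}=2^{4}\cdot \biggl\vert 1-\frac{1}{3} \biggr\vert =2^{k} \bigl\vert 1- \vert c \vert \bigr\vert , $$

and condition (I) holds with \(\beta =\frac{10}{81}\), \(\alpha =1\), because \(\vert 2-\sin ( \frac{2n\pi }{3} ) \vert \leq 3\) and \(\vert ( u_{1}u_{2}u_{3} ) ^{\frac{1}{3}} \vert \leq \max_{1\leq i\leq 3} \vert u_{i} \vert \).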
The authors would like to thank the referees for invaluable comments and insightful suggestions. This work was supported by GDSFC project (No. 9151008002000012). JLZ and GQW worked together in the derivation of the mathematical results. Both authors read and approved the final manuscript.

School of Mathematics and Systems Science, Guangdong Polytechnic Normal University, Guangzhou, P.R. China
Department of General Education, Tianhe College, Guangdong Polytechnic Normal University, Guangzhou, P.R. China

Gaines, R.E., Mawhin, J.L.: Coincidence Degree and Nonlinear Differential Equations. Lecture Notes in Math., vol. 568 (1977)
Gopalsamy, K., He, X., Wen, L.: On a periodic neutral logistic equation. Glasg. Math. J. 33, 281–286 (1991)
Wang, G.Q., Yan, J.R.: Existence of periodic solutions for nth order neutral differential equations with multiple variable lags. Acta Math. Sci. 26(2A), 306–313 (2006) (in Chinese)
Elaydi, S.: Periodic solutions of difference equations. J. Differ. Equ. Appl. 6(2), 203–232 (2000)
Wang, G.Q., Chen, S.S.: Periodic solutions of higher order nonlinear difference equations via a continuation theorem. Georgian Math. J. 12(3), 539–550 (2005)
Wang, G.Q., Chen, S.S.: Positive periodic solutions for nonlinear difference equations via a continuation theorem. Adv. Differ. Equ. 4, 311–320 (2004)
Wang, G.Q., Chen, S.S.: Periodic solutions of a neutral difference system. Bol. Soc. Parana. Mat. 22(2), 117–126 (2004)
Wang, G.Q.: Steady State Solutions of Neural Networks and Periodic Solutions of Difference Equations. Jinan University Press, Guangzhou (2012) (in Chinese)
CommonCrawl
Square root of 2 is irrational The statement we are going to discuss and prove is known as Theorem of Theaetetus due to its appearance in Plato's Theaetetus dialog: Theaetetus: Theodorus was proving to us a certain thing about square roots, I mean the square roots of three square feet and five square feet, namely, that these roots are not commensurable in length with the foot-length, and he proceeded in this way, taking each case in turn up to the root of seventeen square feet; at this point for some reason he stopped. Now it occurred to us, since the number of square roots appeared to be unlimited, to try to gather them into one class, by which we could henceforth describe all the roots. Socrates: And did you find such a class? Theaetetus: I think we did; but see if you agree. Socrates: Speak on. Theaetetus: We divided all numbers into two classes. The one, consisting of numbers that can be represented as the product of equal factors, we likened in shape to the square and called them square or equilateral numbers. Socrates: And properly so. Theaetetus: The numbers between these, among which are three and five and all that cannot be represented as the product of equal factors, but only as the product of a greater by a less or a less by a greater, and are therefore contained by greater and less sides, we likened to oblong shape and called oblong numbers. Socrates: Excellent. And what after this? Theaetetus: Such lines as form the sides of equilateral plane numbers we called lengths, and such as form the oblong numbers we called roots, because they are not commensurable with others in length, but only with the plane areas which they have the power to form. And similarly in the case of solids. Below we shall concentrate on the one root -- that of 2 -- that Theaetetus has not mentioned and at times suggest an extension to a more general result. But first let's see how Richard Dedekind, one of the people most responsible for the current definition and understanding of irrational numbers, treated Theorem of Theaetetus: Proof 1 Let $D$ be a positive integer but not the square of an integer, then there exists a positive integer $\lambda$ such that $\lambda^{2} \lt D \lt (\lambda + 1)^{2}.$ ... If there exists a rational number whose square is $D,$ then there exist two positive integers $t,$ $u,$ that satisfy the equation: $t^{2} - Du^{2} = 0,$ and we may assume that $u$ is the least positive integer possessing the property that its square, by multiplication by $D,$ may be converted into the square of an integer $t.$ Since evidently $\lambda u \lt t \lt (\lambda + 1)u,$ the number $v = t - \lambda u$ is a positive integer certainly less than $u.$ If further we put $s = Du - \lambda t,$ $s$ is likewise a positive integer, and we have $s^{2} - Dv^{2} = (\lambda^{2} - D)(t^{2} - Du^{2}) = 0,$ which is contrary to the assumption respecting $u.$ Hence the square of every rational number is either less than $D$ or greater than $D$ ... A standard proof (e.g., [Rademacher and Toeplitz, Ch. 4]) of this result does not differ from the proof that $\sqrt{5}$ is irrational. Still, the argument can be modified a little. The premise $p^{2} = 2q^{2}$ tells us that $p$ is even. Assuming $p$ and $q$ mutually prime, $q$ is bound to be odd. However, the square of an even number is divisible by $4,$ which leads us to conclude that $q$ is even. A contradiction. Following is another intuitive one [Laczkovich, p. 4, Davis & Hersh, p. 299]. 
This one is based on the assertion that every number is uniquely (up to the order of factors) representable as a product of primes. (A prime is a number divisible only by itself and $1.)$ The fact which is known as The Fundamental Theorem of Arithmetic. Assuming it as an axiom (or a given fact), let $(p/q)^{2} = 2$ for some integers p and q. Then $p^{2} = 2q^{2}.$ Factor both $p$ and $q$ into a product of primes. $p^{2}$ is factored into a product of the very same primes as $p$ each taken twice. Therefore, $p^{2}$ has an even number of prime factors. So does $q^{2}.$ Therefore, $2q^{2}$ has an odd number of prime factors. Contradiction. Proof 3' Here's a modification that only counts the factors equal to $2$ [Automatic Sequences, p. 40]. $p^{2}$ has an even number of such factors, while $2q^{2}$ is bound to have an odd number of them. This is equivalent to saying that in the binary expansion of $p^{2}$ the number of unit digits is even, whereas on the right, in $2q^{2},$ the number of units is odd. Following is yet another illuminating proof. As in the standard proof, assume $p$ and $q$ are mutually prime (numbers with no common factors). Their squares are still mutually prime for they are built from the same factors. Therefore, the fraction $p^{2}/q^{2}$ cannot cancel out. In particular, $p^{2}/q^{2}$ cannot cancel down to equal 2. Therefore, $p^{2}/q^{2}\ne 2.$ J. L. Lagrange in his Lectures on Elementary Mathematics (1898) argues that "... it's impossible to find a whole number which multiplied by itself will give $2.$ It cannot be found in fractions, for if you take a fraction reduced to its lowest terms, the square of this fraction will again be a fraction reduced to its lowest terms, and consequently cannot be equal to the whole number $2.$" More than half a century earlier (1831), Augustus De Morgan explained that "... $7$ is not made by the multiplication either of any whole number or fraction by itself. The first is evident; the second cannot be readily proved to the beginner, but he may, by taking a number of instances, satisfy himself of this, that no fraction which is really such, that is whose numerator is not measured by its denominator, will give a whole number when multiplied by itself, thus $4/3\times 4/3 = 16/9$ is not a whole number, and so on." The latter proof makes it entirely obvious that unless a square root of an integer is an integer itself, it is bound to be irrational. Furthermore, the same argument applies to roots other than square. Unless it's an integer itself, a fifth root of an integer is an irrational number! The proofs above, directly or indirectly, appeal to the Fundamental Theorem of Arithmetic. In an article Irrationality Without Number Theory (Am. Math. Monthly, 1991), Richard Beigel set out to check how much of Number Theory is actually needed. He showed that for any integer $k$ and $t,$ $k^{1/t}$ is either integer or irrational using only the divisibility algorithm and the floor (whole part) function [n]. Following is his proof for $t = 2.$ (A more general result is shown to follow from the Rational Root Theorem and Gauss' Lemma. The latter has a proof that does not even mention the divisibility.) Let $x = k^{1/2}$ and assume that $x$ is rational but not integer. Then there exists a minimal positive integer $n$ such that $xn$ is an integer. Consider $m = n(x - \lfloor x\rfloor).$ Since the fractional part of $x,$ $0 \le x - \lfloor x\rfloor \lt 1,$ $0 \le m \lt n.$ Note that $m$ is an integer for $m = nx - n\lfloor x\rfloor$ which is an integer. 
Furthermore, $mx = nx^{2} - (nx)\lfloor x\rfloor = nk - (nx)\lfloor x\rfloor$ which is also an integer. Due to the minimality of $n,$ $m = 0.$ In other words, $x = \lfloor x\rfloor$ and is, thus, an integer in contradiction to our assumption. Richard's argument can be modified to invoke an infinite regression which is impossible for positive integers. Assuming $x$ to be rational, there exists an integer $n$ such that $nx$ is also an integer. As before, we can find an integer $m$ less than $n$ with the same property. $mx = 0$ gives an immediate contradiction. Otherwise, applying the same reasoning to $m$ and so on, we potentially have an infinite set of integers with no minimal element which is impossible. This is akin to the following geometric proof [Barbin, Gardner]. If $x = \sqrt{2}$ were rational, there would exist a quantity $s$ commensurable both with $1$ and $x:$ $1 = sn$ and $x = sm.$ (It's the same as assuming that $x = m/n$ and taking $s = 1/n.)$ The same will be true of their difference $x - 1,$ which is smaller than $x.$ And the process could continue indefinitely in contradiction with the existence of a minimal element. The game Euclid might have played always ends! As it does not, the algorithm leads to a continued fraction representation of $\sqrt{2}.$ The geometric form of the process is known as anthyphairesis which means (in Greek) continually subtract the smaller from the larger. And here's another geometric proof I came across in an article by Tom Apostol (The American Mathematical Monthly, v 107, N 9, pp 841-842.) This one is so simple it may pass as a proof without words. But this is what his author had to say: This note presents a remarkably simple proof of the irrationality of $\sqrt{2}$ that is a variation of the classical Greek geometric proof. By the Pythagorean theorem, an isosceles right triangle of edge-length $1$ has hypotenuse of length $\sqrt{2}.$ If $\sqrt{2}$ is rational, some positive integer multiple of this triangle must have three sides with integer lengths, and hence there must be a smallest isosceles right triangle with this property. But inside any isosceles right triangle whose three sides have integer lengths we can always construct a smaller one with the same property, as shown below. Therefore $\sqrt{2}$ cannot be rational. Construction. A circular arc with center at the uppermost vertex and radius equal to the vertical leg of the triangle intersects the hypotenuse at a point, from which a perpendicular to the hypotenuse is drawn to the horizontal leg. Each line segment in the diagram has integer length, and the three segments with double tick marks have equal lengths. (Two of them are tangents to the circle from the same point.) Therefore the smaller isosceles right triangle with hypotenuse on the horizontal base also has integer sides. The reader can verify that similar arguments establish the irrationality of $\sqrt{n^{2} + 1}$ and $\sqrt{n^{2} - 1}$ for any integer $n \gt 1.$ For $\sqrt{n^{2} + 1}$ use a right triangle with legs of lengths $1$ and $n.$ For $\sqrt{n^{2} - 1}$ use a right triangle with hypotenuse $n$ and one leg of length $1.$ Essentially the same diagram has been used in a Russian geometry textbook by A. P. Kiselev, p. 121. The book, first published in 1892, has been in a systematic use up to the late 1950s with practically no competition, and frequently in the ensuing years. A proof to the same effect but with a paper folding interpretation is due to [Conway and Guy, pp. 183-185] [Rademacher and Toeplitz, Ch. 
4] gave two proofs of the irrationality of $\sqrt{2}$ of which the first was illustrated by practically the same diagram, without mention of the paperfolding background. As the starting point of the argument, they simply laid a side of a square on the diagonal and then proved the emergence of the smaller right isosceles triangle. Assume [Laczkovich, Gardner] that $\sqrt{2} = p/q$ where $p$ and $q$ are positive integers and $q$ is the smallest possible. Then $p \gt q$ and also $q \gt p - q,$ so that we have $\begin{align} (2q - p)/(p - q) &= (2 - p/q)/(p/q - 1)\\ &= (2 - \sqrt{2})/(\sqrt{2} - 1)\\ &= (2 - \sqrt{2})(\sqrt{2} + 1)\\ &= \sqrt{2}. \end{align}$ But this contradicts the minimality of $q.$ (Prof. Claus I. Doering, Instituto de Matemática - UFRGS, Brazil, has pointed to a much earlier reference. The proof appeared in a footnote of the classic A Course of Modern Analysis by E. T. Whittaker and G. N. Watson, 4th Edition, Cambridge University Press, 1927. The footnote had also been included in the 3rd edition (1920).) Obviously, the proof can be restated as $(2q - p)^{2} = 2(p - q)^{2},$ which, too, is easily shown to hold, provided $p^{2} = 2q^{2}$ holds. $\begin{align} (p - q)^{2}&= p^{2} - 2pq + q^{2}\\ &= 3q^{2} - 2pq,\\ (2q - p)^{2}&= 4q^{2} - 4pq + p^{2}\\ &= 3p^{2} - 4pq\\ &= 2(3q^{2} - 2pq)\\ &= 2(p - q)^{2}. \end{align}$ (Gary Davis has suggested an alternative formulation: If there is an integer $n > 0$ with $n\sqrt{2}$ an integer then $m = n(\sqrt{2} -1)$ has the same property, but is smaller than $n$.) Proof 8'' A kindred proof has been published by Edwin Halfar (Am Math Monthly, Vol. 62, No. 6 (Jun-Jul 1955), p. 437). Suppose $\sqrt{2} = n/m,$ where $n$ and $m$ are positive integers. Then $n \gt m,$ and there is an integer $p \gt 0$ such that $n = m + p,$ and $2m^{2} = m^{2} + 2mp + p^{2}.$ This implies $m \gt p.$ Consequently, for some integer $a \gt 0,$ $m = p + a,$ $n = 2p + a$ and $(2p + a)^{2} = 2(p + a)^{2}.$ The last equality implies $a^{2} = 2p^{2},$ so that the entire process may be repeated indefinitely giving $n \gt m \gt a \gt p \ldots,$ but since every non-null set of positive integers has a smallest element, this is a contradiction and $\sqrt{2}$ is irrational. In plain English this asserts that given two squares with integer sides, one having twice the area of the other, there exists a pair of smaller squares with the same properties. Proof 8''' A superb graphic illustration for the latter has been popularized by J. Conway around 1990, see [Hahn, ex. 37 for Ch. 1]. Conway discussed the proof at a Darwin Lecture at Cambridge. The lecture appears alongside other Darwin lectures in the book Power published by Cambridge University Press. Conway's contribution is included as the chapter titled "The Power of Mathematics". The text can be found online. Conway attributes the proof to the Princeton mathematician Stanley Tennenbaum (1927 - 2006) who made the discovery in the early 1950s while a student at the University of Chicago. Given two squares with integer sides, one twice the other, move the smaller squares into opposite corners of the bigger square. The intersection of the two forms a square at the center of the diagram. Their union leaves two squares at the free corners of the diagram. By the Carpets Theorem, the two areas are equal. (Obviously, these squares also have integer sides.) Proof $8^{IV}$ Another illustration by Grant Cairns appeared in Math. Mag. 85 (2012), p.
123: The first two steps of the infinite discent are suggestive of the continuation There are extensions of the geometric arguments to regular polygons other than the square that illustrate the irrationality of $\sqrt{3},$ $\sqrt{5},$ $\sqrt{6},$ and - in principle - of other square roots. Proof $8^{V}$ Here's a variant from Am Math Monthly (120 (August/September) 2013, p 674) by Samuel G. Moreno and Esther M. García-Caballero: If $\sqrt{2}$ is a rational number, then $x_{0}=\sqrt{2}+1$ is also rational, so $x_{0}=p/q,$ for some positive integers $p$ and $q$ with $q$ the smallest possible. Since $1\lt\sqrt{2}\lt 2,$ then $2\lt x_{0}\lt 3,$ which implies $2q\lt p\lt 3q.$ Clearly, $x_{0}(x_{0}-2)=(\sqrt{2}+1)(\sqrt{2}-1)=1,$ and thus $\displaystyle\frac{1}{x_0}=x_{0}-2=\frac{p}{q}-2=\frac{p-2q}{q},$ so that $x_{0}=q/(p-2q).$ The bound $p-2q\lt q$ contradicts the minimality of $q.$ This proof is due to Alex Healy and was once available online at The Braden Files site. I owe the deepest thanks to Rick Mabry (Software Developer turned Professor) for pointing me in the right direction.) Consider the set $W = {a + b \sqrt{2}},$ $a, b$ integers. Clearly $W$ is closed under multiplication and addition. Define $\alpha = (\sqrt{2} - 1),$ an element of $W.$ Obviously, $0 \lt \alpha \lt 1,$ so that $\alpha ^{k} \rightarrow 0$ as $k\rightarrow\infty.$ Assume $\sqrt{2} = p/q.$ Since $W$ is closed, $\alpha ^{k} = e + f \sqrt{2} = (eq + fp)/q \ge 1/q$ contradicting (1). (This proof has also appeared in an article Irrationality of Square Roots by P. Ungar, Math Magazine, v. 79, n. 2, April 2006, pp. 147-148, with an extension to roots of more general polynomials with integer coefficients, and [Laczkovich, pp. 4-5]) Proof 10 This proof too is by D. Kalman et al (Variations on an Irrational Theme-Geometry, Dynamics, Algebra, Mathematics Magazine, Vol. 70, No. 2 (Apr., 1997), pp. 93-104). Let $A$ be the matrix $A = \begin{pmatrix} -1 & \space 2\\ \space 1 & -1\\ \end{pmatrix}$ By the definition, $ \begin{pmatrix} -1 & \space 2\\ \space 1 & -1\\ \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 2b-a \\ a-b \end{pmatrix} $ Two facts are worth noting: (a) matrix $A$ maps integer lattice onto itself, (b) the line with the equation $a = \sqrt{2}b$ is an eigenspace $L,$ say, corresponding to the eigenvalue $\sqrt{2} - 1:$ $ \begin{pmatrix} -1 & \space 2\\ \space 1 & -1\\ \end{pmatrix} \begin{pmatrix} \sqrt{2} \\ 1 \end{pmatrix} = \begin{pmatrix} 2-\sqrt{2} \\ \sqrt{2}-1 \end{pmatrix} =(\sqrt{2}-1) \begin{pmatrix} \sqrt{2} \\ 1 \end{pmatrix}. $ Since $0 \lt \sqrt{2} - 1 \lt 1,$ the effect of $A$ on $L$ is that of a contraction operator. So that the repeated applications of matrix $A$ to a point on $L$ remain on $L$ and approach the origin. On the other hand, if the starting point was on the lattice, the successive iteration would all remain on the lattice, meaning that there are no lattice points on $L.$ Conway and Guy (pp. 184-185) argue that if $\sqrt{N}$ is not an integer but a rational number $B/A,$ then $B/A = NA/B.$ Assuming that $B/A$ in lowest terms, we observe that the fractional parts of $B/A$ and $NA/B$ have the form $a/A$ and $b/B,$ where $a,$ $b$ are positive integers smaller than $A,$ $B.$ But if two numbers are equal, their fractional parts are also equal: $a/A = b/B$ $b/a = B/A = \sqrt{N}.$ This gives a simpler form for $\sqrt{N},$ contrary to our assumption. Proof 11' Geoffrey C. Berresford (Am Math Monthly, Vol. 115, No. 6 (June-July 2008), p. 
524) offered a different route from the assumption $\sqrt{N} = B/A = NA/B,$ with $B/A$ in lowest terms. If two fractions are equal, with one in lowest terms, the numerator and denominator of the other are a common integer (say $c$) multiples of the numerator and denominator of the first. Therefore, $B = Ac,$ so that $B/A = c,$ and hence $\sqrt{N}$ is an integer, so $N$ is a perfect square. (I am grateful to Prof. Claus I. Doering from Instituto de Matemática - UFRGS, Brazil for bringing this proof to my attention.) Alan Cooper found what I would call a common-sense proof of the fact at hand. He based his proof on an observation that squaring a finite decimal fraction, say $m.n,$ never removes the decimal part. This becomes clear on considering the very last (non-zero) digit. This is the only digit responsible for the last digit in the decimal expansion of the square. Thus the latter is never zero. In case the fraction is not a finite decimal, we may switch to another number system in which the fraction is finite. The same argument now applies. Alan notes that the above is a way of interpreting Proof 4. Following Nick Lord (Math Gazette, v 91, n 521, July 2007, p. 256) we shall show that, for an integer $N \gt 1,$ $\sqrt{N}$ is either an integer or irrational. The proof is based on the assertion that, for integers $a, m, n$ with $n \gt 1$ and $m$ and $n$ coprime, the expression $m/n + an/m$ is never an integer. Indeed, if say $m/n + an/m = k,$ for an integer $k,$ then $m^{2} = n\cdot (km - an),$ which would imply that $n$ divides $m$ - in contradiction with either $n \gt 1$ or coprimality of $n$ and $m.$ Assume then that $\sqrt{N}$ is rational but not an integer. Then, since $\sqrt{N} + 1$ is not an integer, we must have $\sqrt{N} + 1 = m / n,$ for coprime $m$ and $n,$ with $n \gt 1.$ But the observation above with $a = 1 - N$ then leads to the contradiction that $2 = \sqrt{N} + 1 + (1 - N) / (\sqrt{N} + 1)$ is not an integer. Gustave Robson (Am Math Monthly, Vol. 63, No. 4 (Apr., 1956), p. 247) published a short proof preceded by a remark: "The following proof was invented by Robert James Gauntt, in 1952, while he was a freshman at Purdue. I was unable to induce him to write up his proof." $a^{2} = 2b^{2}$ cannot have a non-zero solution in integers because the last non-zero digit of a square, written in the base three, must be $1,$ whereas the last non-zero digit of twice a square is $2.$ This proof is by Stuart Anderson. In $\mathbb{Z}_{3},$ the field of residues modulo $3$, $0^{2} = 0,$ $1^{2} = 1$ and $2^{2} = 1,$ so there is no element whose square is $2.$ Now suppose $\sqrt{2}$ is a rational number $a = p/q.\;$ Then $a$ maps to $p\;\mbox{(mod }3)$ / $q\;\mbox{ (mod }3),$ which is either $p$ or $2p\;\mbox{ (mod }3).\;$ It follows that $a^{2} = p^{2} \ne 2\;\mbox{ (mod }3).\;$ But since reduction $\mbox{mod}\space 3$ respects all the arithmetic operations, $a^{2} = 2\;$ implies $a^{2} = 2\;\mbox{ (mod }3)\;$ in $\mathbb{Z}_{3},$ a contradiction. (It can be shown that the equation $x^{2} = 2\;(\mbox{mod}\space p),\;$ where $p\;$ is an odd prime, is solvable if $p = 1, 7 \;(\mbox{mod}\space 8)$ and is unsolvable if $p = 3, 5\space(\mbox{mod}\space 8),$ see [Stark, pp. 311-313].) In general, it is easy to see that $n^{1/m}$ is irrational if there exists at least one prime $p$ such that $n$ is not a perfect $m^{th}$ power in $\mathbb{Z}_{p}.$ Proof 14'' This proof was found by Sergey Markelov while still in high school.
In the decimal system, the square of an integer may only end in one of the following digits: $0,1, 4, 5, 6, 9\;$ whereas twice a square may only end with $0,2,8.\;$ Thus assuming $a^2=2b^2,\;$ both $a\;$ and $b\;$ may only end with $0.\;$ This triggers an infinite descent which proves that this is also impossible, as further explained here. 2-proofs-in-1 from The American Mathematical Monthly 114 (May 2007), p. 416. The proof is by Xinyun Zhu of Central Michigan University. Starting as in the proof from Conway and Guy, suppose $\sqrt{N}$ is not an integer but a rational number $B/A;$ then $B/A = NA/B.$ Assume $B/A$ is in lowest terms so that $A$ is minimal. Recollect that $x/y = w/z$ implies $x/y = (x + tw) / (y + tz),$ for any $t.$ Since $B$ is not divisible by $A,$ there is a $q \gt 0$ satisfying $0 \lt B - qA \lt A.$ From $B/A = NA/B$ it then follows that $\sqrt{N} = NA/B = (N\cdot A - qB) / (B - qA),$ which contradicts the minimality of $A$ because $0 \lt B - qA \lt A.$ Alternatively, we may use the fact that for mutually prime $A$ and $B$ there are integers $r$ and $s$ such that $rA + sB = 1.$ Then $\sqrt{N} = (sN\cdot A + rB) / (sB + rA) = sN\cdot A + rB,$ which is an integer. A contradiction. (N. C. Ferreño, Yet Another Proof of the Irrationality of $\sqrt{2}$, Am Math Monthly, v 116, n 1 (Jan 2009), pp. 68-69.) Consider a linear mapping $f:\space\mathbb{R}\rightarrow \mathbb{R}$ given by $f(x) = (\sqrt{2} - 1)x$ as a dynamical system. For each point $x_{0}\in\mathbb{R}$ let us define its orbit $O(x_{0})$ as the sequence of iterates starting at $x_{0},$ namely $O(x_{0}) = \{(\sqrt{2} - 1)^{n}x_{0}:\space n \ge 0\}.$ Since $0 \lt \sqrt{2} - 1 \lt 1,$ it is clear that for each $x_{0}\in \mathbb{R}$ the orbit $O(x_{0})$ converges to zero. Now suppose $\sqrt{2} = p/q$ is a rational number with $p, q \in \mathbb{N}.$ Then, for all $n\in \mathbb{N},$ it holds that $(\sqrt{2})^{n}q = \begin{cases} 2^{n/2}q, &\mbox{if}\space n \space\mbox{is even}\space\\ 2^{(n-1)/2}p, &\mbox{if}\space n \space\mbox{is odd}\space. \end{cases}$ In either case, $(\sqrt{2})^{n}q$ is a natural number, and from the binomial theorem it follows that $\displaystyle 0 \lt (\sqrt{2} - 1)^{n}q = \sum_{k}\bigg[{n\choose k}\sqrt{2}^{k}(-1)^{n-k}q\bigg]\in\mathbb{Z},$ $k = 0, \ldots, n.$ Therefore $O(q)\subset\mathbb{N},$ which contradicts the fact that $O(q)$ tends to zero. Yoram Sagher gave a modification of Dedekind's argument (Am Math Monthly, Vol. 95, No. 2. (Feb., 1988), p. 117): Suppose $\sqrt{k} = m/n,$ where $m$ and $n$ are integers with $n > 0.$ If $k$ is not a square, there is an integer $q$ so that $q < m/n < q + 1.$ Now $m^{2} = kn^{2}$ implies $m(m - qn) = n(kn - qm)$ and, hence, $m/n = (kn - qm)/(m - qn).$ From $q < m/n < q + 1$ we get $0 < m - qn < n.$ Therefore we have: $\sqrt{k} = (kn - qm)/(m - qn),$ where the denominator is positive and smaller than the one in the original fraction. Continuing, we get an infinite decreasing sequence of positive integers, an impossibility. This proof does not use any properties of primes and would thus be fully accessible to Pythagoras and to Theodorus. Following up on Y. Sagher's proof, Robert W. Floyd published (Am Math Monthly, Vol. 96, No. 1 (Jan., 1989), p. 67) an extension: Assuming Pythagoras understood Euclid's algorithm, the following proofs show how he could have demonstrated that any integer root of an integer is either irrational or an integer, and even that the cube root of an integer either is not the root of a quadratic (i.e., not of the form $(a + b\sqrt{N}) / c$) or is an integer.
I placed the proof into a separate file. This is a proof by D. Kalman et al (Variations on an Irrational Theme-Geometry, Dynamics, Algebra, Mathematics Magazine, Vol. 70, No. 2 (Apr., 1997), pp. 93-104). $ABC$ is an isosceles right triangle. $D$ on $AB$ is such that $BD = BC.$ $DE\perp BC;$ $F$ and $G$ are such that $CEGF$ is a square. Observe that $CG = DG.$ Indeed, by the construction of $D,$ $\Delta BCD$ is isosceles so that $\angle BDC = \angle BCD.$ Also, $\angle ECG = \angle BDE.$ Subtracting equals from equals, we see that $\angle DCG = \angle CDG,$ implying $CG = CD.$ Let $H$ on $AC$ be such as to form a parallelogram $ADGH.$ Then $\angle FHG = \angle CAB = 45^{\circ}.$ Hence, $\Delta CGH$ is isosceles and $CG = GH.$ But then in the parallelogram $ADGH$ all sides are equal. The following diagram only shows three equal segments that are important for the proof. Assume $BC$ and $AB$ are commensurable, i.e. assume that the two have a common unit. Their difference $AD$ is then also wholly measured by the same unit. The unit therefore measures two legs $(GH$ and $CG)$ of the isosceles right $\Delta CGH.$ The unit measures $AC$ and $AH$ and so also $CH.$ We obtain an isosceles right $\Delta CGH$ all of whose sides are measured by the same unit. But $\Delta CGH$ is smaller than $\Delta ABC.$ Since the process could be continued, we obtain a contradiction. This proof too does not use any properties of primes and would thus be fully accessible to Pythagoras and to Theodorus. The proof admits an algebraic equivalent. Suppose $e$ is the unit common to $BC$ and $AC:$ $BC = ne$ and $AC = me.$ We then produce successively $AD = (n - m)e,$ $AH = (n - m)e,$ $CH = m - (n - m)e = (2m - n)e;$ and since triangles $ABC$ and $CHG$ are similar, $n/m = (2m - n)/(n - m).$ The latter identity is equivalent to $n^{2} = 2m^{2},$ meaning that $\sqrt{2} = n/m.$ But then also $\sqrt{2} = (2m - n)/(n - m)$ which again leads to an infinite descent. The proof appears to be a geometric equivalent of the short Proof 8. I am grateful to Aharon Meyerowitz from Florida Atlantic University for bringing to my attention the geometric arguments from The Elements of Dynamic Symmetry by Jay Hambidge (the book is available online.) Cut off of a $\sqrt{2}\times 1$ rectangle a square of side 1. The rectangle left over will have dimensions $(\sqrt{2} - 1)\times 1.$ Removing from the latter a square as shown in the diagram leaves a rectangle $(\sqrt{2} - 1)\times (2 - \sqrt{2}).$ This rectangle is similar to the one we started with: $(2 - \sqrt{2}) / (\sqrt{2} - 1) = \sqrt{2} / 1.$ which shows that the standard "infinite descent" argument applies. Were $\sqrt{2}$ rational, it would be possible to select the smallest rectangle with dimensions proportional to $(\sqrt{2} - 1)\times 1.$ As the argument shows, the existence of such a smallest rectangle would lead to a contradiction. This argument may serve as an illustration to Proof 8. The book contains another approach illustrated by the following diagram: Here two $1\times 1$ squares are drawn at the opposite ends of the rectangle. This splits the original $\sqrt{2}\times 1$ rectangle into one big and two small squares and three rectangles of dimensions $(\sqrt{2} - 1)\times (2 - \sqrt{2}).$ The same argument applies again. The irrationality of $\sqrt{2}$ is a consequence of the $p$-adic Local-Global Principle. $\sqrt{2}$ is irrational because $2$ is not a quadratic residue modulo $5$! Also $\sqrt{2}\;$ is irrational because it is represented by an infinite continued fraction. 
Indeed, $\displaystyle\sqrt{2}+1=2+\frac{1}{\sqrt{2}+1}$ makes it clear that the process of converting $\sqrt{2}+1\;$ into a continued fraction never ends; the same holds for $\sqrt{2}.$ The irrationality of $\sqrt{2}$ can be rephrased in a way that appears quite paradoxical: a cover of the unit interval by open intervals centered on rational numbers, with infinite total length, does not cover $\sqrt{2}/2$. The details are on a separate page. A modified argument leads to a criterion of irrationality via a limit. I placed the proof on a separate page. The concept of limit is central in the following proof that is based on solving a simple difference equation: $x_{n+2}=-2x_{n+1}+x_n,$ where $n=0,1,2,\ldots.$ The details are on a separate page. Samuel G. Moreno and Esther M. García-Caballero (The Mathematical Gazette, July 2013) derive the irrationality of $\sqrt{2}$ from one of the formulas that accompany a figure in Ramanujan's Notebooks. The details are in a separate file. Samuel G. Moreno and Esther M. García-Caballero also proved irrationality of $k$-th roots, for $k\ge 2,$ of integers that are not $k$-th powers. Their proof by induction that employs Bézout's Lemma can be found in a separate file. Moreno and García-Caballero sent me an unpublished proof where they employed - in a playful manner - the well known formula for the sum of the first odd numbers $\displaystyle n^{2}=\sum_{k=1}^{n}(2k-1);$ this can be found in a separate file. A purely number theoretical proof - the last for 2014, with a generalization to the irrationality of $\sqrt{a},$ for a square-free $a\in\mathbb{N}.$ Details are in a separate file. A proof by M. Jacobson and H. Williams exploits the behavior of two sequences defined in terms of each other: $d_{n+1}=d_n+2s_n,\;d_1=1,\\ s_{n+1}=s_n+d_n,\;s_1=1.$ The details can be found in a separate file. It's edifying to recall an estimate of approximation of irrational numbers with rational ones. The irrationality of $\sqrt{2}/2$ leads to an interesting investigation. In case you are curious, $\sqrt{2}$ is the length of the diagonal of the unit square. Ludmila Duchêne and Agnès Leblanc put together an enchanting literary tribute to the question of irrationality of $\sqrt{2}$ (in French).
J-P Allouche & J. Shallit, Automatic Sequences, Cambridge University Press, 2003
E. Barbin, The Meanings of Mathematical Proof, in In Eves' Circles, J. M. Anthony, ed., MAA, 1994
J. H. Conway, R. K. Guy, The Book of Numbers, Copernicus, 1996
P. J. Davis and R. Hersh, The Mathematical Experience, Houghton Mifflin Company, Boston, 1981
J. W. R. Dedekind, On Irrational Numbers, in A Source Book in Mathematics by D. E. Smith, Dover, 1959, pp. 38-40
A. De Morgan, On the Study and Difficulties of Mathematics, Dover, 2005, p. 130
M. Gardner, Gardner's Workout, A K Peters, 2001
A. Hahn, Basic Calculus: From Archimedes to Newton to its Role in Science, Springer Verlag & Key College, 1998 (also available online)
M. Laczkovich, Conjecture and Proof, MAA, 2001
H. Rademacher, O. Toeplitz, The Enjoyment of Mathematics, Dover, 1990
S. K. Stein, Mathematics: The Man-Made Universe, 3rd edition, Dover, 2000
H. M. Stark, An Introduction to Number Theory, MIT Press, 1970
I. Thomas, Greek Mathematical Works, v1, Harvard University Press, 2006
D. Wells, You are a Mathematician, John Wiley & Sons, 1995
CommonCrawl
Del as a Differential Operator: (Matrix times Del) cross vector [duplicate of: Creating the Nabla operator (also known as Del operator) as an operator]
I tried to reply to this answer, but don't have enough reputation points yet. Basically the poster constructed Del (i.e. $\nabla = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right)$) as a differential operator, which works in some cases. However, it doesn't seem to work for what I need (Mathematica doesn't understand how to evaluate the expression I input).
I would like to define Del as an operator, with the purpose of evaluating an expression like this: $(A \cdot \nabla)\times \vec{v}$, where $A$ is say a 3x3 matrix, and $\vec{v}(x,y,z) = (u(x,y,z),v(x,y,z),w(x,y,z))$ is an arbitrary vector in $\mathbb{R}^3$. That is, I need to take the product of the matrix $A$ from the left with the vector operator $\nabla$, and then take the cross product of the resulting differential operator with an arbitrary vector in (either $\mathbb{R}^2$ or) $\mathbb{R}^3$. (This would be a separate question, but a solution using tensors to construct $\varepsilon_{ijk} A_{jl}\partial_lv_k$ would also work, if that's easier). Tags: tensors, vector-calculus, operators (asked by Fishy)
Comment by AccidentalFourierTransform: Sum[LeviCivitaTensor[3][[i, j, k]] A[[j, l]] D[v[[k]], x[[l]]], {j, 1, 3}, {k, 1, 3}, {l, 1, 3}]
Comment by xzczd: I've added support for the rule you mentioned. BTW, using tensors to code these calculations is easier of course, as shown by @AccidentalFourierTransform.
Comment by Fishy: Thank you xzczd and AccidentalFourierTransform! I will use one of these methods, and consider the case closed.
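A minimal way to evaluate the tensor form $\varepsilon_{ijk} A_{jl}\partial_l v_k$ suggested in the comments is sketched below. It is only an illustration, not part of the question: the names vars, A (a generic symbolic 3x3 matrix built with Array), and the component functions vx, vy, vz are placeholders chosen here.

(* Sketch: component i of (A . Del) cross v is eps_{ijk} A_{jl} d v_k / d vars_l *)
vars = {x, y, z};
A = Array[a, {3, 3}];                         (* arbitrary symbolic 3x3 matrix *)
v = {vx[x, y, z], vy[x, y, z], vz[x, y, z]};  (* arbitrary vector field on R^3 *)
aDelCrossV = Table[
  Sum[LeviCivitaTensor[3][[i, j, k]] A[[j, l]] D[v[[k]], vars[[l]]],
    {j, 3}, {k, 3}, {l, 3}],
  {i, 3}];

The explicit Table/Sum form keeps the index structure of $\varepsilon_{ijk} A_{jl}\partial_l v_k$ visible; wrapping it in a function of the matrix and the vector field would give a reusable operator if that is preferred.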
CommonCrawl
Optimized Support Vector Machines for Nonstationary Signal Classification Davy, M., Gretton, A., Doucet, A., Rayner, P. IEEE Signal Processing Letters, 9(12):442-445, December 2002 (article) This letter describes an efficient method to perform nonstationary signal classification. A support vector machine (SVM) algorithm is introduced and its parameters optimised in a principled way. Simulations demonstrate that our low complexity method outperforms state-of-the-art nonstationary signal classification techniques. PostScript Web DOI [BibTex] Davy, M., Gretton, A., Doucet, A., Rayner, P. Optimized Support Vector Machines for Nonstationary Signal Classification IEEE Signal Processing Letters, 9(12):442-445, December 2002 (article) A New Discriminative Kernel from Probabilistic Models Tsuda, K., Kawanabe, M., Rätsch, G., Sonnenburg, S., Müller, K. Neural Computation, 14(10):2397-2414, October 2002 (article) Tsuda, K., Kawanabe, M., Rätsch, G., Sonnenburg, S., Müller, K. A New Discriminative Kernel from Probabilistic Models Neural Computation, 14(10):2397-2414, October 2002 (article) Functional Genomics of Osteoarthritis Aigner, T., Bartnik, E., Zien, A., Zimmer, R. Pharmacogenomics, 3(5):635-650, September 2002 (article) Aigner, T., Bartnik, E., Zien, A., Zimmer, R. Functional Genomics of Osteoarthritis Pharmacogenomics, 3(5):635-650, September 2002 (article) Constructing Boosting algorithms from SVMs: an application to one-class classification. Rätsch, G., Mika, S., Schölkopf, B., Müller, K. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1184-1199, September 2002 (article) We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm—one-class leveraging—starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, it returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach. Rätsch, G., Mika, S., Schölkopf, B., Müller, K. Constructing Boosting algorithms from SVMs: an application to one-class classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1184-1199, September 2002 (article) Co-Clustering of Biological Networks and Gene Expression Data Hanisch, D., Zien, A., Zimmer, R., Lengauer, T. Bioinformatics, (Suppl 1):145S-154S, 18, July 2002 (article) Motivation: Large scale gene expression data are often analysed by clustering genes based on gene expression data alone, though a priori knowledge in the form of biological networks is available. The use of this additional information promises to improve exploratory analysis considerably. Results: We propose constructing a distance function which combines information from expression data and biological networks. Based on this function, we compute a joint clustering of genes and vertices of the network. This general approach is elaborated for metabolic networks. We define a graph distance function on such networks and combine it with a correlation-based distance function for gene expression measurements. 
A hierarchical clustering and an associated statistical measure is computed to arrive at a reasonable number of clusters. Our method is validated using expression data of the yeast diauxic shift. The resulting clusters are easily interpretable in terms of the biochemical network and the gene expression data and suggest that our method is able to automatically identify processes that are relevant under the measured conditions. Hanisch, D., Zien, A., Zimmer, R., Lengauer, T. Co-Clustering of Biological Networks and Gene Expression Data Bioinformatics, (Suppl 1):145S-154S, 18, July 2002 (article) Confidence measures for protein fold recognition Sommer, I., Zien, A., von Ohsen, N., Zimmer, R., Lengauer, T. Bioinformatics, 18(6):802-812, June 2002 (article) Sommer, I., Zien, A., von Ohsen, N., Zimmer, R., Lengauer, T. Confidence measures for protein fold recognition Bioinformatics, 18(6):802-812, June 2002 (article) The contributions of color to recognition memory for natural scenes Wichmann, F., Sharpe, L., Gegenfurtner, K. Journal of Experimental Psychology: Learning, Memory and Cognition, 28(3):509-520, May 2002 (article) The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. Subjects performed 5-10% better for colored than for black-and-white images independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: The improvement in recognition memory depends on the color congruence of presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework. Wichmann, F., Sharpe, L., Gegenfurtner, K. The contributions of color to recognition memory for natural scenes Journal of Experimental Psychology: Learning, Memory and Cognition, 28(3):509-520, May 2002 (article) Training invariant support vector machines DeCoste, D., Schölkopf, B. Machine Learning, 46(1-3):161-190, January 2002 (article) Practical experience has shown that in order to obtain the best possible performance, prior knowledge about invariances of a classification problem at hand ought to be incorporated into the training procedure. We describe and review all known methods for doing so in support vector machines, provide experimental results, and discuss their respective merits. One of the significant new results reported in this work is our recent achievement of the lowest reported test error on the well-known MNIST digit recognition benchmark task, with SVM training times that are also significantly faster than previous SVM methods. DeCoste, D., Schölkopf, B. Training invariant support vector machines Machine Learning, 46(1-3):161-190, January 2002 (article) Model Selection for Small Sample Regression Chapelle, O., Vapnik, V., Bengio, Y. 
Machine Learning, 48(1-3):9-23, 2002 (article) Model selection is an important ingredient of many machine learning algorithms, in particular when the sample size in small, in order to strike the right trade-off between overfitting and underfitting. Previous classical results for linear regression are based on an asymptotic analysis. We present a new penalization method for performing model selection for regression that is appropriate even for small samples. Our penalization is based on an accurate estimator of the ratio of the expected training error and the expected generalization error, in terms of the expected eigenvalues of the input covariance matrix. Chapelle, O., Vapnik, V., Bengio, Y. Model Selection for Small Sample Regression Machine Learning, 48(1-3):9-23, 2002 (article) Contrast discrimination with sinusoidal gratings of different spatial frequency Bird, C., Henning, G., Wichmann, F. Journal of the Optical Society of America A, 19(7), pages: 1267-1273, 2002 (article) The detectability of contrast increments was measured as a function of the contrast of a masking or "pedestal" grating at a number of different spatial frequencies ranging from 2 to 16 cycles per degree of visual angle. The pedestal grating always had the same orientation, spatial frequency and phase as the signal. The shape of the contrast increment threshold versus pedestal contrast (TvC) functions depend of the performance level used to define the "threshold," but when both axes are normalized by the contrast corresponding to 75% correct detection at each frequency, the (TvC) functions at a given performance level are identical. Confidence intervals on the slope of the rising part of the TvC functions are so wide that it is not possible with our data to reject Weber's Law. Bird, C., Henning, G., Wichmann, F. Contrast discrimination with sinusoidal gratings of different spatial frequency Journal of the Optical Society of America A, 19(7), pages: 1267-1273, 2002 (article) A Bennett Concentration Inequality and Its Application to Suprema of Empirical Processes C. R. Acad. Sci. Paris, Ser. I, 334, pages: 495-500, 2002 (article) We introduce new concentration inequalities for functions on product spaces. They allow to obtain a Bennett type deviation bound for suprema of empirical processes indexed by upper bounded functions. The result is an improvement on Rio's version \cite{Rio01b} of Talagrand's inequality \cite{Talagrand96} for equidistributed variables. Bousquet, O. A Bennett Concentration Inequality and Its Application to Suprema of Empirical Processes C. R. Acad. Sci. Paris, Ser. I, 334, pages: 495-500, 2002 (article) Numerical evolution of axisymmetric, isolated systems in general relativity Frauendiener, J., Hein, M. Physical Review D, 66, pages: 124004-124004, 2002 (article) We describe in this article a new code for evolving axisymmetric isolated systems in general relativity. Such systems are described by asymptotically flat space-times, which have the property that they admit a conformal extension. We are working directly in the extended conformal manifold and solve numerically Friedrich's conformal field equations, which state that Einstein's equations hold in the physical space-time. Because of the compactness of the conformal space-time the entire space-time can be calculated on a finite numerical grid. We describe in detail the numerical scheme, especially the treatment of the axisymmetry and the boundary. Frauendiener, J., Hein, M. 
Numerical evolution of axisymmetric, isolated systems in general relativity Physical Review D, 66, pages: 124004-124004, 2002 (article) Marginalized kernels for biological sequences Tsuda, K., Kin, T., Asai, K. Bioinformatics, 18(Suppl 1):268-275, 2002 (article) Tsuda, K., Kin, T., Asai, K. Marginalized kernels for biological sequences Bioinformatics, 18(Suppl 1):268-275, 2002 (article) Support Vector Machines and Kernel Methods: The New Generation of Learning Machines Cristianini, N., Schölkopf, B. AI Magazine, 23(3):31-41, 2002 (article) Cristianini, N., Schölkopf, B. Support Vector Machines and Kernel Methods: The New Generation of Learning Machines AI Magazine, 23(3):31-41, 2002 (article) Stability and Generalization Bousquet, O., Elisseeff, A. Journal of Machine Learning Research, 2, pages: 499-526, 2002 (article) We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classification. Bousquet, O., Elisseeff, A. Stability and Generalization Journal of Machine Learning Research, 2, pages: 499-526, 2002 (article) Subspace information criterion for non-quadratic regularizers – model selection for sparse regressors Tsuda, K., Sugiyama, M., Müller, K. IEEE Trans Neural Networks, 13(1):70-80, 2002 (article) Tsuda, K., Sugiyama, M., Müller, K. Subspace information criterion for non-quadratic regularizers – model selection for sparse regressors IEEE Trans Neural Networks, 13(1):70-80, 2002 (article) Modeling splicing sites with pairwise correlations Arita, M., Tsuda, K., Asai, K. Bioinformatics, 18(Suppl 2):27-34, 2002 (article) Arita, M., Tsuda, K., Asai, K. Modeling splicing sites with pairwise correlations Bioinformatics, 18(Suppl 2):27-34, 2002 (article) Perfusion Quantification using Gaussian Process Deconvolution Andersen, IK., Szymkowiak, A., Rasmussen, CE., Hanson, LG., Marstrand, JR., Larsson, HBW., Hansen, LK. Magnetic Resonance in Medicine, (48):351-361, 2002 (article) The quantification of perfusion using dynamic susceptibility contrast MR imaging requires deconvolution to obtain the residual impulse-response function (IRF). Here, a method using a Gaussian process for deconvolution, GPD, is proposed. The fact that the IRF is smooth is incorporated as a constraint in the method. The GPD method, which automatically estimates the noise level in each voxel, has the advantage that model parameters are optimized automatically. The GPD is compared to singular value decomposition (SVD) using a common threshold for the singular values and to SVD using a threshold optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as using data from healthy volunteers. It is shown that GPD is comparable to SVD variable optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion. GPD provides a better estimate of the entire IRF. 
As the signal to noise ratio increases or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes. Andersen, IK., Szymkowiak, A., Rasmussen, CE., Hanson, LG., Marstrand, JR., Larsson, HBW., Hansen, LK. Perfusion Quantification using Gaussian Process Deconvolution Magnetic Resonance in Medicine, (48):351-361, 2002 (article) Tracking a Small Set of Experts by Mixing Past Posteriors Bousquet, O., Warmuth, M. Journal of Machine Learning Research, 3, pages: 363-396, (Editors: Long, P.), 2002 (article) In this paper, we examine on-line learning problems in which the target concept is allowed to change over time. In each trial a master algorithm receives predictions from a large set of n experts. Its goal is to predict almost as well as the best sequence of such experts chosen off-line by partitioning the training sequence into k+1 sections and then choosing the best expert for each section. We build on methods developed by Herbster and Warmuth and consider an open problem posed by Freund where the experts in the best partition are from a small pool of size m. Since k >> m, the best expert shifts back and forth between the experts of the small pool. We propose algorithms that solve this open problem by mixing the past posteriors maintained by the master algorithm. We relate the number of bits needed for encoding the best partition to the loss bounds of the algorithms. Instead of paying log n for choosing the best expert in each section we first pay log (n choose m) bits in the bounds for identifying the pool of m experts and then log m bits per new section. In the bounds we also pay twice for encoding the boundaries of the sections. Bousquet, O., Warmuth, M. Tracking a Small Set of Experts by Mixing Past Posteriors Journal of Machine Learning Research, 3, pages: 363-396, (Editors: Long, P.), 2002 (article) A femoral arteriovenous shunt facilitates arterial whole blood sampling in animals Weber, B., Burger, C., Biro, P., Buck, A. Eur J Nucl Med Mol Imaging, 29, pages: 319-323, 2002 (article) Weber, B., Burger, C., Biro, P., Buck, A. A femoral arteriovenous shunt facilitates arterial whole blood sampling in animals Eur J Nucl Med Mol Imaging, 29, pages: 319-323, 2002 (article) Contrast discrimination with pulse-trains in pink noise Henning, G., Bird, C., Wichmann, F. Detection performance was measured with sinusoidal and pulse-train gratings. Although the 2.09-c/deg pulse-train, or line gratings, contained at least 8 harmonics all at equal contrast, they were no more detectable than their most detectable component. The addition of broadband pink noise designed to equalize the detectability of the components of the pulse train made the pulse train about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments, with a pedestal or masking grating of the same form and phase as the signal and 15% contrast, the noise did not affect the discrimination performance of the pulse train relative to that obtained with its sinusoidal components. We discuss the implications of these observations for models of early vision in particular the implications for possible sources of internal noise. Henning, G., Bird, C., Wichmann, F. Contrast discrimination with pulse-trains in pink noise Journal of the Optical Society of America A, 19(7), pages: 1259-1266, 2002 (article) Choosing Multiple Parameters for Support Vector Machines Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S. 
Machine Learning, 46(1):131-159, 2002 (article) The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVM) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing parameters, based on exhaustive search become intractable as soon as the number of parameters exceeds two. Some experimental results assess the feasibility of our approach for a large number of parameters (more than 100) and demonstrate an improvement of generalization performance. Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S. Choosing Multiple Parameters for Support Vector Machines Machine Learning, 46(1):131-159, 2002 (article) White matter glucose metabolism during intracortical electrostimulation: a quantitative [(18)F]Fluorodeoxyglucose autoradiography study in the rat Weber, B., Fouad, K., Burger, C., Buck, A. Neuroimage, 16, pages: 993-998, 2002 (article) Weber, B., Fouad, K., Burger, C., Buck, A. White matter glucose metabolism during intracortical electrostimulation: a quantitative [(18)F]Fluorodeoxyglucose autoradiography study in the rat Neuroimage, 16, pages: 993-998, 2002 (article) Unexpected and anticipated pain: identification of specific brain activations by correlation with reference functions derived form conditioning theory Ploghaus, A., Clare, S., Wichmann, F., Tracey, I. 29, 29th Annual Meeting of the Society for Neuroscience (Neuroscience), October 1999 (poster) Ploghaus, A., Clare, S., Wichmann, F., Tracey, I. Unexpected and anticipated pain: identification of specific brain activations by correlation with reference functions derived form conditioning theory 29, 29th Annual Meeting of the Society for Neuroscience (Neuroscience), October 1999 (poster) Lernen mit Kernen: Support-Vektor-Methoden zur Analyse hochdimensionaler Daten Schölkopf, B., Müller, K., Smola, A. Informatik - Forschung und Entwicklung, 14(3):154-163, September 1999 (article) We describe recent developments and results of statistical learning theory. In the framework of learning from examples, two factors control generalization ability: explaining the training data by a learning machine of a suitable complexity. We describe kernel algorithms in feature spaces as elegant and efficient methods of realizing such machines. Examples thereof are Support Vector Machines (SVM) and Kernel PCA (Principal Component Analysis). More important than any individual example of a kernel algorithm, however, is the insight that any algorithm that can be cast in terms of dot products can be generalized to a nonlinear setting using kernels. Finally, we illustrate the significance of kernel algorithms by briefly describing industrial and academic applications, including ones where we obtained benchmark record results. Schölkopf, B., Müller, K., Smola, A. Lernen mit Kernen: Support-Vektor-Methoden zur Analyse hochdimensionaler Daten Informatik - Forschung und Entwicklung, 14(3):154-163, September 1999 (article) Input space versus feature space in kernel-based methods Schölkopf, B., Mika, S., Burges, C., Knirsch, P., Müller, K., Rätsch, G., Smola, A. IEEE Transactions On Neural Networks, 10(5):1000-1017, September 1999 (article) This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. 
In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data. Schölkopf, B., Mika, S., Burges, C., Knirsch, P., Müller, K., Rätsch, G., Smola, A. Input space versus feature space in kernel-based methods IEEE Transactions On Neural Networks, 10(5):1000-1017, September 1999 (article) p73 and p63 are homotetramers capable of weak heterotypic interactions with each other but not with p53. Davison, T., Vagner, C., Kaghad, M., Ayed, A., Caput, D., CH, .. Journal of Biological Chemistry, 274(26):18709-18714, June 1999 (article) Mutations in the p53 tumor suppressor gene are the most frequent genetic alterations found in human cancers. Recent identification of two human homologues of p53 has raised the prospect of functional interactions between family members via a conserved oligomerization domain. Here we report in vitro and in vivo analysis of homo- and hetero-oligomerization of p53 and its homologues, p63 and p73. The oligomerization domains of p63 and p73 can independently fold into stable homotetramers, as previously observed for p53. However, the oligomerization domain of p53 does not associate with that of either p73 or p63, even when p53 is in 15-fold excess. On the other hand, the oligomerization domains of p63 and p73 are able to weakly associate with one another in vitro. In vivo co-transfection assays of the ability of p53 and its homologues to activate reporter genes showed that a DNA-binding mutant of p53 was not able to act in a dominant negative manner over wild-type p73 or p63 but that a p73 mutant could inhibit the activity of wild-type p63. These data suggest that mutant p53 in cancer cells will not interact with endogenous or exogenous p63 or p73 via their respective oligomerization domains. It also establishes that the multiple isoforms of p63 as well as those of p73 are capable of interacting via their common oligomerization domain. Davison, T., Vagner, C., Kaghad, M., Ayed, A., Caput, D., CH, .. p73 and p63 are homotetramers capable of weak heterotypic interactions with each other but not with p53. Journal of Biological Chemistry, 274(26):18709-18714, June 1999 (article) Single-class Support Vector Machines Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J. Dagstuhl-Seminar on Unsupervised Learning, pages: 19-20, (Editors: J. Buhmann, W. Maass, H. Ritter and N. Tishby), 1999 (poster) Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J. Single-class Support Vector Machines Dagstuhl-Seminar on Unsupervised Learning, pages: 19-20, (Editors: J. Buhmann, W. Maass, H. Ritter and N. 
Tishby), 1999 (poster) Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots Balakrishnan, K., Bousquet, O., Honavar, V. Adaptive Behavior, 7(2):173-216, 1999 (article) Balakrishnan, K., Bousquet, O., Honavar, V. Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots Adaptive Behavior, 7(2):173-216, 1999 (article) SVMs for Histogram Based Image Classification Chapelle, O., Haffner, P., Vapnik, V. IEEE Transactions on Neural Networks, (9), 1999 (article) Traditional classification approaches generalize poorly on image classification tasks, because of the high dimensionality of the feature space. This paper shows that Support Vector Machines (SVM) can generalize well on difficult image classification problems where the only features are high dimensional histograms. Heavy-tailed RBF kernels of the form $K(\mathbf{x},\mathbf{y})=e^{-\rho\sum_i |x_i^a-y_i^a|^{b}}$ with $a\leq 1$ and $b \leq 2$ are evaluated on the classification of images extracted from the Corel Stock Photo Collection and shown to far outperform traditional polynomial or Gaussian RBF kernels. Moreover, we observed that a simple remapping of the input $x_i \rightarrow x_i^a$ improves the performance of linear SVMs to such an extent that it makes them, for this problem, a valid alternative to RBF kernels. Chapelle, O., Haffner, P., Vapnik, V. SVMs for Histogram Based Image Classification IEEE Transactions on Neural Networks, (9), 1999 (article) Pedestal effects with periodic pulse trains Henning, G., Wichmann, F. Perception, 28, pages: S137, 1999 (poster) It is important to know for theoretical reasons how performance varies with stimulus contrast. But, for objects on CRT displays, retinal contrast is limited by the linear range of the display and the modulation transfer function of the eye. For example, with an 8 c/deg sinusoidal grating at 90% contrast, the contrast of the retinal image is barely 45%; more retinal contrast is required, however, to discriminate among theories of contrast discrimination (Wichmann, Henning and Ploghaus, 1998). The stimulus with the greatest contrast at any spatial-frequency component is a periodic pulse train which has 200% contrast at every harmonic. Such a waveform cannot, of course, be produced; the best we can do with our Mitsubishi display provides a contrast of 150% at an 8-c/deg fundamental thus producing a retinal image with about 75% contrast. The penalty of using this stimulus is that the 2nd harmonic of the retinal image also has high contrast (with an emmetropic eye, more than 60% of the contrast of the 8-c/deg fundamental) and the mean luminance is not large (24.5 cd/m2 on our display). We have used standard 2-AFC experiments to measure the detectability of an 8-c/deg pulse train against the background of an identical pulse train of different contrasts. An unusually large improvement in detectability was measured, the pedestal effect or "dipper," and the dipper was unusually broad. The implications of these results will be discussed. Henning, G., Wichmann, F. Pedestal effects with periodic pulse trains Perception, 28, pages: S137, 1999 (poster) Implications of the pedestal effect for models of contrast-processing and gain-control Understanding contrast processing is essential for understanding spatial vision. Pedestal contrast systematically affects slopes of functions relating 2-AFC contrast discrimination performance to pedestal contrast.
The slopes provide crucial information because only full sets of data allow discrimination among contrast-processing and gain-control models. Issues surrounding Weber's law will also be discussed. Wichmann, F., Henning, G. Implications of the pedestal effect for models of contrast-processing and gain-control OSA Conference Program, pages: 62, 1999 (poster) Comparing support vector machines with Gaussian kernels to radial basis function classifiers Schölkopf, B., Sung, K., Burges, C., Girosi, F., Niyogi, P., Poggio, T., Vapnik, V. IEEE Transactions on Signal Processing, 45(11):2758-2765, November 1997 (article) The support vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and threshold that minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by X-means clustering, and the weights are computed using error backpropagation. We consider three machines, namely, a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the United States postal service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system. The SV approach is thus not only theoretically well-founded but also superior in a practical application. Schölkopf, B., Sung, K., Burges, C., Girosi, F., Niyogi, P., Poggio, T., Vapnik, V. Comparing support vector machines with Gaussian kernels to radial basis function classifiers IEEE Transactions on Signal Processing, 45(11):2758-2765, November 1997 (article) ATM-dependent telomere loss in aging human diploid fibroblasts and DNA damage lead to the post-translational activation of p53 protein involving poly(ADP-ribose) polymerase. Vaziri, H., MD, .., RC, .., Davison, T., YS, .., CH, .., GG, .., Benchimol, S. The European Molecular Biology Organization Journal, 16(19):6018-6033, 1997 (article) Vaziri, H., MD, .., RC, .., Davison, T., YS, .., CH, .., GG, .., Benchimol, S. ATM-dependent telomere loss in aging human diploid fibroblasts and DNA damage lead to the post-translational activation of p53 protein involving poly(ADP-ribose) polymerase. The European Molecular Biology Organization Journal, 16(19):6018-6033, 1997 (article)
CommonCrawl
High-pressure study of thermodynamic parameters of diamond-type structured crystals using interatomic Morse potentials Duc Ba Nguyen ORCID: orcid.org/0000-0001-6190-0588 & Hiep Phi Trinh In this work, we have determined the mean square relative displacement, elastic constant, anharmonic effective potential, correlated function, local force constant, and other thermodynamic parameters of diamond-type structured crystals under high pressure up to 14 GPa. The parameters are calculated from theoretical interatomic Morse potential parameters, by using the sublimation energy, the compressibility, and the lattice constant in the extended X-ray absorption fine structure spectrum. Numerical results agree well with the experimental values and other theories. High-pressure research is a very active research field. Recent progress has been made in characterizing elastic, mechanical, and other physical properties of materials [1,2,3]. The use of interatomic Morse potentials in Extended X-ray Absorption Fine Structure (EXAFS) theory to study thermodynamic parameters under high pressure also attracts the attention of materials scientists. In EXAFS spectra with anharmonic effects, the anharmonic Morse potential [4] is suitable for describing the interaction and oscillations of atoms in crystals [5]. In EXAFS theory, photoelectrons emitted by the absorbing atom are scattered by the surrounding vibrating atoms. This thermal oscillation of atoms contributes to the EXAFS spectra, especially the anharmonic EXAFS [6, 7], and thereby affects the physical information extracted from these spectra. In EXAFS spectrum analysis, the parameters of the interatomic Morse potential are usually extracted from experiment. Because experimental data are not available in many cases, a theory is necessary to deduce the interatomic Morse potential parameters. So far, such a calculation has been carried out only for cubic crystals using the anharmonic correlated Einstein model [8]. The results have been used actively for calculating EXAFS thermodynamic parameters [9] and are consistent with those extracted from EXAFS data [10]. Therefore, a calculation of the anharmonic interatomic Morse interaction potential due to thermal disorder for other structures is essential. The purpose of this study is to extend a method to calculate the interatomic Morse potential parameters using the energy of sublimation, the compressibility, and the lattice constant, including the effect of thermal disorder. The resulting interatomic Morse potential parameters are used to calculate the mean square relative displacement (MSRD), mean square displacement (MSD), elastic constants, anharmonic interatomic effective potential, and effective local force constant for diamond-type (DIA) structure crystals such as silicon (Si), germanium (Ge), and the SiGe semiconductor. Numerical results are in agreement with the experimental values and other theories [10,11,12,13,14]. Diamond's cubic structure is in the Fd3m space group, which follows the face-centered cubic Bravais lattice (Fig. 1). The lattice describes the repeat pattern; for diamond cubic crystals, the basis consists of two tetrahedrally bonded atoms in each primitive cell, separated by 1/4 of the width of the unit cell in each dimension. The diamond lattice can be viewed as a pair of interpenetrating face-centered cubic lattices, each displaced from the other by 1/4 of the width of the unit cell in each dimension.
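To make this geometry concrete, the following minimal sketch (not taken from the paper; the number of enumerated cells is an arbitrary choice) builds the two interpenetrating fcc sublattices just described and lists the first few neighbour shells around an origin atom, which match the neighbour distances quoted in the next paragraph:

```python
import numpy as np

# Diamond structure = fcc lattice + 2-atom basis at (0,0,0) and (1/4,1/4,1/4),
# all positions expressed in units of the cubic lattice constant a.
fcc = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
basis = np.array([[0, 0, 0], [.25, .25, .25]])

cells = range(-2, 3)                                   # block of conventional cells
translations = np.array(np.meshgrid(cells, cells, cells)).T.reshape(-1, 3)
sites = np.array([t + f + b for t in translations for f in fcc for b in basis])

# Distances (in units of a) from the atom at the origin, grouped into shells
d = np.linalg.norm(sites, axis=1)
d = np.round(np.sort(d[d > 1e-9]), 6)
shells, counts = np.unique(d, return_counts=True)
for r, n in list(zip(shells, counts))[:5]:
    print(f"shell at {r:.4f} a with {n} atoms")
# -> 0.4330 a (4 atoms), 0.7071 a (12), 0.8292 a (12), 1.0000 a (6), 1.0897 a (12)
```

The shell radii are the values √3/4, √2/2, √11/4, 1 and √19/4 of the cubic lattice constant, and the shell populations (4, 12, 12, 6, 12, ...) give the number of equal terms contributed by each Mj in the lattice sums over j introduced below.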
The atomic packing factor of the diamond cubic structure is π√3/16, significantly smaller (indicating a less dense structure) than the packing factors for the face-centered-cubic lattices. The first-, second-, third-, fourth-, and fifth-nearest-neighbor distances in units of the cubic lattice constant are √3/4, √2/2, √11/4, 1, and √19/4, respectively. Style of diamond's cubic structure is in the Fd3m space group This study uses the theoretical method to calculate the parameter of interatomic Morse potential. Using the obtained interatomic Morse potential parameters to determine state equations, calculate some thermodynamic parameters that depend on temperature and pressure for some pure and doped crystals with a cubic structure. Compare the theoretical results with experimental data. The ε(rij) potential of atoms i and j separated by a distance rij is given in by the Morse function: $$ \varepsilon \left({\mathrm{r}}_{\mathrm{ik}}\right)=\mathrm{D}\left\{{\mathrm{e}}^{-2\alpha \left({\mathrm{r}}_{\mathrm{ij}}-{\mathrm{r}}_{\mathrm{o}}\right)}-2{\mathrm{e}}^{-\alpha \left({\mathrm{r}}_{\mathrm{ij}}-{\mathrm{r}}_{\mathrm{o}}\right)}\right\}, $$ where 1/α describes the width of the potential, D is the dissociation energy (ε(r0) = − D); r0 is the equilibrium distance of the two atoms. To obtain the potential energy of a large crystal whose atoms are at rest, it is necessary to sum Eq. (1) over the entire crystal. It is quickly done by selecting an atom in the lattice as origin, calculating its interaction with all others in the crystal, and then multiplying by N/2, where N is the total number of atoms in a crystal. Therefore, the potential E is given by: $$ E=\frac{1}{2} ND\sum \limits_j\left\{{e}^{-2\alpha \left({r}_j-{r}_o\right)}-2{e}^{-\alpha \left({r}_j-{r}_o\right)}\right\}. $$ Here rj is the distance from the origin atom to the jth atom. It is beneficial to describe the following quantities: $$ {r}_j={\left[{m}_j^2+{n}_j^2+{l}_j^2\right]}^{1/2}a={M}_ja, $$ where mj, nj, lj are position coordinates of atoms in the lattice. Substitute the Eq. (3) into Eq. (2), the potential energy can be rewritten as: $$ E(a)=\frac{1}{2}{NDe}^{\alpha {r}_0}\left[{e}^{\alpha {r}_0}\sum \limits_j{e}^{-2\alpha {aM}_j}-2\sum \limits_j{e}^{-\alpha {aM}_j}\right]. $$ The first and second derivatives of the potential energy of Eq. (4) concerning a, we have: $$ \frac{dE}{da}=-\alpha {NDe}^{\alpha {r}_0}\left[{e}^{\alpha {r}_0}\sum \limits_j{M}_j{e}^{-2\alpha {aM}_j}+\sum \limits_j{M}_j{e}^{-\alpha {aM}_j}\right], $$ $$ \frac{d^2E}{da^2}={\alpha}^2{NDe}^{\alpha {r}_0}\left[2{e}^{\alpha {r}_0}\sum \limits_j{M}_j^2{e}^{-2\alpha {aM}_j}-\sum \limits_j{M}_j^2{e}^{-\alpha {aM}_j}\right]. $$ At absolute zero T = 0, a0 is the value of a for which the lattice is in equilibrium, then E(a0) gives the energy of cohesion, \( {\left[\frac{dE}{da}\right]}_{a_0}=0 \), and \( {\left[\frac{d^2E}{da^2}\right]}_{a_0} \)is related to the compressibility [15]. That is, $$ dE\left({a}_0\right)={E}_0\left({a}_0\right), $$ where E0(a0) is the energy of sublimation at zero pressure and temperature, $$ {\left(\frac{dE}{da}\right)}_{a_0}=0, $$ and the compressibility is given by [8] $$ \frac{1}{\upkappa_0}={V}_0{\left(\frac{d^2{E}_0}{dV^2}\right)}_{a_0}={V}_0{\left(\frac{d^2E}{dV^2}\right)}_{a_0}, $$ where V0 is the volume at T = 0 and κ0 is compressibility at zero temperature and pressure. The volume per atom V/N is related to the lattice constant a by $$ \frac{V}{N}={ca}^3. $$ Substituting Eq. (10) into Eq. 
(9) the compressibility is formulated by $$ \frac{1}{\upkappa_0}=\frac{1}{9{cNa}_0}{\left(\frac{d^2E}{da^2}\right)}_{a={a}_0}. $$ Using Eq. (5) to solve Eq. (8), we obtain $$ {e}^{\alpha {r}_0}=\frac{\sum \limits_j{M}_j{e}^{-\alpha {aM}_j}}{\sum \limits_j{M}_j{e}^{-2\alpha {aM}_j}}. $$ From Eqs. (4, 6, 7, 11), we derive the relation $$ \frac{e^{\alpha {r}_0}\sum \limits_j{e}^{-2\alpha {aM}_j}-2\sum \limits_j{e}^{-\alpha {aM}_j}}{4{\alpha}^2{e}^{\alpha {r}_0}\sum \limits_j{M}_j^2{e}^{-2\alpha {aM}_j}-2{\alpha}^2\sum \limits_j{M}_j^2{e}^{-\alpha {aM}_j}}=\frac{E_0{\upkappa}_0}{9{cNa}_0}. $$ Solving the system of Eq. (12, 13), we obtain α and r0. Using α and Eq. (4) to solve Eq. (7), we have D. The interatomic Morse potential parameters D, α depend on the compressibility κ0, the energy of sublimation E0, and the lattice constant a. These values of all crystals are available already [16]. Next, we apply the above expressions to claculate the equation of state and elastic constants. It is possible to calculate the state equation from the potential energy E. If we assumed that the Debye model could express the thermal section of the free energy, then the Helmholtz energy is given by [8] $$ F=E+3{Nk}_BT\ln \left(1-{e}^{-{\theta}_D/T}\right)-{Nk}_B TD\left({\uptheta}_D/T\right), $$ $$ D\left(\frac{\uptheta_D}{T}\right)=3{\left(\frac{T}{\uptheta_D}\right)}^3\underset{0}{\overset{\theta_D/T}{\int }}\frac{x^3}{e^x-1} dx, $$ where kB is Boltzmann constant, and θD is Debye temperature. Using Eqs. (14, 15), we derive the equation of state as $$ P=-{\left(\frac{\partial F}{\partial V}\right)}_T=\frac{1}{3{ca}^2}\frac{dE}{da}+\frac{3{\gamma}_G RT}{V}D\left(\frac{\uptheta_D}{T}\right), $$ where γG is the Grüneisen parameter, and V is the volume. After transformations, the Eq. (16) is resulted as $$ P=\frac{\left[{NDe}^{\alpha {r}_0}\alpha \sum \limits_j{M}_j{e}^{-\alpha {a}_0{M}_j{\left(1-x\right)}^{1/3}}\right]}{3{ca}_0^2{\left(1-x\right)}^{2/3}}-{NDe}^{2\alpha {r}_0}\alpha \sum \limits_j{M}_j{e}^{-2\alpha {a}_0{M}_j{\left(1-x\right)}^{1/3}}+\frac{3{\gamma}_G RT}{V_0\left(1-x\right)}D\left(\frac{\uptheta_D}{T}\right), $$ $$ x=\frac{V_0-V}{V_0},\kern0.48em {V}_0={ca}_0^3,\kern0.74em R={Nk}_B,\kern0.62em N=6.02\times {10}^{23}. $$ The equation of state (17) contains the obtained interatomic Morse potential parameters; c is a constant and has value according to the structure of the crystal. An elastic tensor describes the elastic properties of a crystal in the crystal's motion equation. The non-vanishing components of the elastic tensor are defined as elastic constants. 
They are given for crystals of lattice structure by [17]: $$ {c}_{11}={c}_{22}=\sqrt{2}{r}_0\left[10{\Psi}^{{\prime\prime}}\left({r}_0^2\right)+16{\Psi}^{{\prime\prime}}\left(2{r}_0^2\right)+81{\Psi}^{{\prime\prime}}\left(3{r}_0^2\right)\cdots \right]-\frac{{\left\{\sqrt{\frac{2}{3}}\left[-2{\Psi}^{{\prime\prime}}\left({r}_0^2\right)+16{\Psi}^{{\prime\prime}}\left(2{r}_0^2\right)-40{\Psi}^{{\prime\prime}}\left(3{r}_0^2\right)\cdots \right]\right\}}^2}{\sqrt{2}{r}_0^{-1}\left[4{\Psi}^{{\prime\prime}}\left({r}_0^2\right)+16{\Psi}^{{\prime\prime}}\left(2{r}_0^2\right)+12{r}_0^{-1}{\Psi}^{\prime}\left(2{r}_0^2\right)\cdots \right]}, $$ $$ {c}_{12}=\frac{\sqrt{2}{r}_0\left[10{\Psi}^{{\prime\prime}}\left({r}_0^2\right)+16{\Psi}^{{\prime\prime}}\left(2{r}_0^2\right)+81{\Psi}^{{\prime\prime}}\left(3{r}_0^2\right)\cdots \right]}{3}+\frac{{\left\{\sqrt{\frac{2}{3}}\left[-2{\Psi}^{{\prime\prime}}\left({r}_0^2\right)+16{\Psi}^{{\prime\prime}}\left(2{r}_0^2\right)-40{\Psi}^{{\prime\prime}}\left(3{r}_0^2\right)\cdots \right]\right\}}^2}{\sqrt{2}{r}_0^{-1}\left[4{\Psi}^{{\prime\prime}}\left({r}_0^2\right)+16{\Psi}^{{\prime\prime}}\left(2{r}_0^2\right)+12{r}_0^{-1}{\Psi}^{\prime}\left(2{r}_0^2\right)\cdots \right]}, $$ $$ \kern0.24em {c}_{33}=\frac{\sqrt{2}}{3}{r}_0\left[32{\Psi}^{{\prime\prime}}\left({r}_0^2\right)+32{\Psi}^{{\prime\prime}}\left(2{r}_0^2\right)+\frac{512}{3}{\Psi}^{{\prime\prime}}\left(3{r}_0^2\right)+\cdots \right], $$ $$ {c}_{13}={c}_{23}=\sqrt{2}{r}_0\left[8{\Psi}^{{\prime\prime}}\left({r}_0^2\right)+32{\Psi}^{{\prime\prime}}\left(2{r}_0^2\right)+112{\Psi}^{{\prime\prime}}\left(3{r}_0^2\right)+\cdots \right], $$ $$ {\Psi}^{\prime }(r)=-2 D\alpha \left[{e}^{-2\alpha \left(r-{r}_0\right)}-{e}^{-\alpha \left(r-{r}_0\right)}\right]\frac{1}{r}, $$ $$ {\Psi}^{{\prime\prime} }(r)=D{\alpha}^2\left[2{e}^{-2\alpha \left(r-{r}_0\right)}-\frac{1}{2}{e}^{-\alpha \left(r-{r}_0\right)}\right]\frac{1}{r^2}+ D\alpha \left[{e}^{-2\alpha \left(r-{r}_0\right)}-{e}^{-\alpha \left(r-{r}_0\right)}\right]\frac{1}{2{r}^3}. $$ Hence, the derived elastic constants contain the interatomic Morse potential parameters. Next, apply to calculate of anharmonic interatomic effective potential and local force constant in EXAFS theory. The expression for the anharmonic EXAFS function [2] is described by $$ \chi (k)=A(k)\frac{\exp \left[-2\Re /\lambda (k)\right]}{k\Re^2}\operatorname{Im}\left\{{e}^{i\phi (k)}\exp \left[2 ik\mathit{\Re}+\sum \limits_n\frac{{\left(2 ik\right)}^n}{n!}{\sigma}^{(n)}\right]\right\}, $$ where A(k) is scattering amplitude of atoms, φ(K) is the total phase shift of photoelectron, and k and λ are wave number and mean free path of the photoelectron, respectively. The σ(n) are the cumulants; they describe asymmetric of anharmonic interatomic Morse potential, due to the average of the function e−2ikr, ℜ = < r>, and r is the instantaneous bond length between absorber and backscatter atoms at T temperature. 
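As a purely illustrative sketch of how Eq. (25) is evaluated once the cumulants are known, the snippet below codes the cumulant expansion truncated at n = 3; the amplitude, phase shift, mean free path, bond length and cumulant values are toy placeholders (not quantities from this work), and A, φ and λ are held constant here even though they are k-dependent in Eq. (25):

```python
import numpy as np

def chi(k, A, phi, lam, R, c1, c2, c3):
    """Anharmonic EXAFS oscillation per the cumulant expansion of Eq. (25),
    truncated at the third cumulant (c2 is the Debye-Waller factor sigma^2)."""
    cum = (2j*k)*c1 + (2j*k)**2/2.0*c2 + (2j*k)**3/6.0*c3
    return A*np.exp(-2.0*R/lam)/(k*R**2) * np.imag(np.exp(1j*phi)*np.exp(2j*k*R + cum))

k = np.linspace(3.0, 12.0, 10)   # photoelectron wavenumber grid (1/Angstrom), toy values
print(chi(k, A=1.0, phi=0.5, lam=8.0, R=2.35, c1=0.0, c2=5e-3, c3=1e-4))
```

The second and third cumulants damp and phase-shift the oscillation, which is how the anharmonic contributions described below enter the measured spectrum.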
For describing anharmonic EXAFS, an effective anharmonic potential [9] of the system is derived, which in the current theory is expanded up to the third order and given by $$ {\mathrm{E}}_{\mathrm{eff}}\left(\mathrm{x}\right)=\frac{1}{2}{\mathrm{k}}_{\mathrm{eff}}{\mathrm{x}}^2+{\mathrm{k}}_{3\mathrm{eff}}{\mathrm{x}}^3+\dots =\mathrm{E}\left(\mathrm{x}\right)+\sum \limits_{\mathrm{j}\ne \mathrm{i}}\mathrm{E}\left(\frac{\upmu}{{\mathrm{M}}_{\mathrm{i}}}\mathrm{x}\,{\hat{\mathrm{R}}}_{12}\cdot {\hat{\mathrm{R}}}_{\mathrm{ij}}\right),\kern1em \mu =\frac{M_1{M}_2}{M_1+{M}_2};\kern1em \hat{\Re}=\frac{\Re }{\mid \Re \mid }. $$ Here, keff is the effective local force constant, k3eff is the cubic parameter characterizing the asymmetry of the pair interatomic Morse potential, and x is the deviation of the instantaneous bond length between the two atoms from equilibrium. The correlated model is defined as the oscillation of a pair of particles with masses M1 and M2. Their vibration is influenced by their neighboring atoms, as given by the sum in Eq. (26), where the sum over i runs over the absorber (i = 1) and backscatterer (i = 2), and the sum over j runs over all their near neighbors, excluding the absorber and backscatterer themselves, whose contributions are described by the term E(x). The advantage of this model is a calculation based on including the contributions of the nearest neighbors of the absorber and backscatterer atoms in EXAFS. Accordingly, the anharmonic interatomic effective potential of Eq. (26) takes the form $$ {E}_{eff}(x)={E}_x(x)+2{E}_x\left(-\frac{x}{2}\right)+8{E}_x\left(-\frac{x}{4}\right)+8{E}_x\left(\frac{x}{4}\right). $$ Applying the interatomic Morse potential given by Eq. (1), expanded up to 4th order around its minimum point, $$ {E}_{eff}(x)=D\left({e}^{-2\alpha x}-2{e}^{-\alpha x}\right)\approx D\left(-1+{\alpha}^2{x}^2-{\alpha}^3{x}^3+\frac{7}{12}{\alpha}^4{x}^4\dots \right). $$ From Eqs. (26)–(28), we obtain the anharmonic effective potential Eeff, the effective local force constant keff, and the anharmonic parameter k3eff for lattice crystals, presented in terms of our calculated interatomic Morse potential parameters D and α. In Eq. (25), the σ(n) are cumulants, of which σ2(T) is the Debye-Waller factor (DWF) or MSRD [9]. In diffraction or X-ray absorption, the DWF has a form similar to u2(T). In the EXAFS spectrum, the DWF is regarded as a correlated average over the relative displacements, σ2(T), for a pair of atoms, while neutron diffraction refers to the MSD u2(T) of a single atom [18]. From σ2(T) and u2(T), the correlated function CR(T) describing the effects of correlation in the vibration of atoms can be deduced. Using the anharmonic correlated Debye model (ACDM), the MSRD σ2(T) has the form [19]: $$ {\sigma}^2(T)=\frac{\mathrm{\hslash}a}{10\pi \mathrm{D}{\alpha}^2}\underset{0}{\overset{\pi /a}{\int }}{\omega}_A(q)\frac{1+{z}_A(q)}{1-{z}_A(q)} dq, $$ $$ {z}_A(q)={e}^{-\left(\beta \mathrm{\hslash}{\omega}_A(q)\right)},\kern2em {\omega}_A(q)=2\sqrt{\frac{10\mathrm{D}{\alpha}^2}{M}}\left|\sin \left( qa/2\right)\right|,\kern2em \left|q\right|\le \uppi /\mathrm{a}.
$$ Similarly, for the anharmonic Debye model, u2(T) has been determined as: $$ {u}^2(T)=\frac{\mathrm{\hslash}a}{16\pi \mathrm{D}{\alpha}^2}\underset{0}{\overset{\pi /a}{\int }}{\omega}_D(q)\frac{1+{z}_D(q)}{1-{z}_D(q)} dq, $$ $$ {z}_D(q)={e}^{-\left(\beta \mathrm{\hslash}{\omega}_D(q)\right)},\kern2em {\omega}_D(q)=2\sqrt{\frac{8\mathrm{D}{\alpha}^2}{M}}\left|\sin \left( qa/2\right)\right|,\kern2em \left|q\right|\le \uppi /\mathrm{a},\kern1em $$ where a is the lattice constant, ω(q) and q are the phonon frequency and wavenumber, and M is the mass of the composite atoms. To obtain the interatomic Morse potential parameters, we need to calculate the parameter c in Eq. (10). The space lattice of diamond is fcc. The primitive basis has two identical atoms associated with each point of the fcc lattice: one atom at the (0 0 0) position, with atomic Wyckoff position 4a for the predicted phases at ambient conditions, and one atom at (1/4 1/4 1/4), with atomic Wyckoff position 8c. Thus, the conventional unit cube contains eight atoms, so that we obtain the value c = 1/4 for this structure. Applying the above derived expressions, we calculate thermal parameters for DIA structure crystals (Si, Ge, and SiGe) using the lattice constants [11], the energy of sublimation [15], and the compressibility [20]. The numerical results for the interatomic Morse potential parameters are presented in Tables 1 and 3. The theoretical values of D and α fit well with the experimental values [10, 14]. The elastic constants ci, effective spring force constants keff and effective spring cubic parameters k3eff calculated from the interatomic Morse potential parameters for Si, Ge, and their alloys are presented in Tables 2 and 3 and compared to the experimental values [11, 15]. Table 1 Morse potential parameters D, α and the related parameter r0 of Si, Ge, and SiGe in comparison to some experimental results [10, 14] Table 2 Values of elastic constants (× 10−11 N/m) for Si, Ge by the present theory and experimental values [11] Table 3 Morse potential parameters, spring force constants, and cubic parameters under pressure effects up to 14 GPa The calculated results for the state equation are illustrated in Fig. 2 for the Si crystal and Fig. 3 for the Ge crystal, compared to the experimental ones (dashed line) [10] represented by an extrapolation procedure of the measured data. They show a good agreement between theoretical and experimental results, especially at low pressure. The dependence of volume ratio (V0-V)/V0 on pressure P in the equation of state for a silicon atom The dependence of volume ratio (V0-V)/V0 on pressure P in the equation of state for a germanium atom Figures 4 and 5 illustrate the good agreement between the anharmonic interatomic effective potentials for Si, Ge, and the SiGe semiconductor calculated using the present theory (solid line) and the experimental values obtained from the interatomic Morse potential parameters of J. C. Slater (solid line and symbol □) [10]; they simultaneously show the strong asymmetry of these potentials due to the anharmonic contributions in the atomic vibrations of these DIA structure crystals, illustrated by their anharmonic shift from the harmonic terms (dashed line). Anharmonic effective potential for Si and SiGe semiconductor and comparison with harmonic effects Anharmonic effective potential for Ge and SiGe semiconductor and comparison with harmonic effects Figures 6 and 7 show the dependence on pressure and temperature of the MSRD σ2(T) and MSD u2(T) for Si and Ge crystals.
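For readers who wish to reproduce curves of this type, the ACDM integral of Eqs. (29)–(30) can be evaluated numerically along the following lines; the Morse parameters, lattice constant and mass below are rough, Si-like placeholders rather than the fitted values reported in Tables 1 and 3:

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K

def sigma2_ACDM(T, D, alpha, a, M):
    """MSRD sigma^2(T) of the anharmonic correlated Debye model, Eqs. (29)-(30).
    D in J, alpha in 1/m, lattice constant a in m, mass M in kg; returns m^2."""
    w = lambda q: 2.0*np.sqrt(10.0*D*alpha**2/M)*abs(np.sin(q*a/2.0))
    def integrand(q):
        z = np.exp(-hbar*w(q)/(kB*T))
        return w(q)*(1.0 + z)/(1.0 - z)
    # start slightly above q = 0 to avoid the removable 0/0 of the integrand there
    val, _ = quad(integrand, 1e-6*np.pi/a, np.pi/a)
    return hbar*a/(10.0*np.pi*D*alpha**2)*val

# Rough, Si-like placeholder inputs (NOT the fitted parameters of this paper)
D, alpha, a, M = 0.30*1.602e-19, 1.5e10, 5.43e-10, 28.0*1.66054e-27
for T in (100.0, 300.0, 500.0):
    print(T, "K ", sigma2_ACDM(T, D, alpha, a, M)*1e20, "A^2")
```

The same routine with the prefactor ħa/(16πDα²) and the frequency of Eq. (32) gives u2(T). The output grows linearly with T at high temperatures and levels off at the zero-point contribution at low temperatures, which is the behaviour discussed for Figs. 6 and 7 in the next paragraph.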
The MSRD and MSD are linearly proportional to the temperature T at high temperatures, so the classical limit can be applied. At low temperatures, the curves of MSRD and MSD for Si and Ge contain zero-point energy contributions; this is a quantum effect. The calculated results of MSRD and MSD for the Si and Ge crystals agree well with the values of the experiment [10]. Thus, it is possible to deduce that the present procedure is reasonable for diamond-type structure crystals such as Si and Ge. Dependence on temperature of mean square displacement MSD under pressure effects up to 14 GPa Dependence on temperature of mean square relative displacement MSRD under pressure effects up to 14 GPa In this work, a calculation method of interatomic Morse potential parameters and its application to DIA and fcc structure crystals has been developed, based on the calculation of the volume and number of atoms in each basic cell together with the sublimation energy, compressibility, and lattice constant. The results have been applied to the mean square relative displacement, mean square displacement, the state equation, the elastic constants, the anharmonic interatomic effective potential, the correlated function, and the local force constant in EXAFS theory. The derived equation of state and elastic constants satisfy all standard conditions for these values; for example, all elastic constants are positive. The interatomic Morse potentials obtained satisfy all their basic properties. They are reasonable for calculating and analyzing the anharmonic interatomic effective potentials describing anharmonic effects in EXAFS theory. This procedure can be generalized to other crystal structures based on calculating their volume and the number of atoms in each elementary cell. Reasonable agreement between our calculated results and the experimental data shows the efficiency of the present procedure. The calculation of potential atomic parameters is essential for estimating and analyzing physical effects in the EXAFS technique. It can be used to address problems involving deformation and atomic interactions in diamond-structure crystals. EXAFS: Extended X-ray Absorption Fine Structure MSRD: Mean square relative displacement MSD: Mean square displacement DIA: Diamond-type DWF: Debye-Waller factor Anzellini S, Monteseguro V, Bandiello E, Dewaele A, Burakovsky L, Errandonea D (2019) In situ characterization of the high pressure – high temperature melting curve of platinum. Sci Rep 9(1):13034. https://doi.org/10.1038/s41598-019-49676-y Errandonea D, Burakovsky L, Preston DL, MacLeod SG, Santamaría-Perez D, Chen S, Cynn H, Simak SI, McMahon MI, Proctor JE, Mezouar M (2020) Experimental and theoretical confirmation of an orthorhombic phase transition in niobium at high pressure and temperature. Commun Mater 1(1):60. https://doi.org/10.1038/s43246-020-00058-2 Anzellini S, Burakovsky L, Turnbull R, Bandiello E, Errandonea D (2021) P–V–T equation of state of iridium up to 80 GPa and 3100 K. Crystals 11(4):452. https://doi.org/10.3390/cryst11040452 Marques EC, Sandstrom DR, Lytle FW, Greegor RB (1982) Determination of thermal amplitude of surface atoms in a supported Pt catalyst by EXAFS spectroscopy. J. Chem. Phys. 77:1027. https://doi.org/10.1063/1.443914 Duc NB, Binh NT (2017) Statistical Physics – Theory and Application in XAFS, pages 173-198. LAP LAMBERT Academic Publishing Hung NV, Duc NB, Frahm RR (2002) A new anharmonic factor and EXAFS including anharmonic contributions. J. Phys. Soc. Jpn. 72(5):1254–1259.
https://doi.org/10.1143/JPSJ.72.1254 Frenkel AI, Rehr JJ (1993) Thermal expansion and x-ray-absorption fine-structure cumulants. Phys. Rev. B 48:585. https://doi.org/10.1103/PhysRevB.48.585 Girifalco LA, Weizer VG (1959) Application of the Morse potential function to cubic metals. Phys. Rev. 114(3):687–690. https://doi.org/10.1103/PhysRev.114.687 Hung NV, Rehr JJ (1997) Anharmonic correlated Einstein-model Debye-Waller factors. Phys. Rev. B 56(1):43–46. https://doi.org/10.1103/PhysRevB.56.43 Pirog IV, Nedoseikina TI, Zarubin AI, Shuvaev AT (2002) Anharmonic pair potential study in face-centred-cubic structure metals. J. Phys.: Condens. Matter 14(8):1825–1832. https://doi.org/10.1088/0953-8984/14/8/311 Clark SP Jr (ed) (1996) Handbook of Physical Constants. Geological Society of America Okube M, Yoshiasa A (2001) Anharmonic effective pair potentials of group VIII and Ib fcc metals. J. Synchrotron Radiat. 8(2):937–939. https://doi.org/10.1107/s0909049500021051 Miyanaga T, Fujikawa T (1994) Quantum statistical approach to Debye-Waller factor in EXAFS, EELS and ARXPS. III. Applicability of Debye and Einstein approximation. J. Phys. Soc. Jpn. 63(10):3683–3690. https://doi.org/10.1143/JPSJ.63.3683 Greegor RB, Lytle FW (1979) Extended x-ray absorption fine structure determination of thermal disorder in Cu: comparison of theory and experiment. Phys. Rev. B 20(12):4902–4907. https://doi.org/10.1103/PhysRevB.20.4902 Slater JC (1939) Introduction to Chemical Physics. McGraw-Hill Book Company, Inc., New York Kittel C (1986) Introduction to Solid State Physics. John Wiley & Sons, Inc., New York Born M, Huang K (1956) Dynamical Theory of Crystal Lattices, 2nd edn. Clarendon Press, Oxford Errandonea D, Ferrer-Roca C, Martínez-Garcia D, Segura A, Gomis O, Muñoz A, Rodríguez-Hernández P, López-Solano J, Alconchel S, Sapiña F (2010) High-pressure x-ray diffraction and ab initio study of Ni2Mo3N, Pd2Mo3N, Pt2Mo3N, Co3Mo3N, and Fe3Mo3N: two families of ultra-incompressible bimetallic interstitial nitrides. Phys. Rev. B 82(17):174105. https://doi.org/10.1103/PhysRevB.82.174105 Duc NB (2020) Influence of temperature and pressure on cumulants and thermodynamic parameters of intermetallic alloy based on anharmonic correlated Einstein model in EXAFS. In: Application of the Debye model to study anharmonic correlation effects for the CuAgX (X = 72; 50) intermetallic alloy, Physica Scripta, 95. https://doi.org/10.1088/1402-4896/ab90bf Bridgman PW (1940) The Compression of 46 Substances to 50,000 kg/cm². Proceedings of the American Academy of Arts and Sciences, 74(3):21-51. DOI: 10.2307/20023352 The author (DBN) thanks Tan Trao University, Tuyen Quang, Vietnam, for support. No funding was obtained for this study. Tan Trao University, Tuyen Quang, Vietnam Duc Ba Nguyen & Hiep Phi Trinh Duc Ba Nguyen Hiep Phi Trinh DBN (corresponding author) analyzed the structural data and conceptualized and wrote the manuscript. HPT collected the experimental data, and read, analyzed, and corrected errors in the manuscript. All authors have read and approved the manuscript. Professor Duc Ba Nguyen is a senior lecturer and researcher at Tan Trao University, Tuyen Quang, Vietnam. He has published many studies in ISI- and Scopus-indexed journals in the field of XAFS spectroscopy and material structure. Hiep Phi Trinh is a lecturer and researcher at Tan Trao University, Tuyen Quang, Vietnam. He studies XAFS spectroscopy and material structure and has published several papers in scientific journals.
Correspondence to Duc Ba Nguyen. Nguyen, D.B., Trinh, H.P. High-pressure study of thermodynamic parameters of diamond-type structured crystals using interatomic Morse potentials. J. Eng. Appl. Sci. 68, 17 (2021). https://doi.org/10.1186/s44147-021-00015-x Morse potential parameter State equation Correlation function Elastic constant
CommonCrawl
Show that a photon cannot transmit its total energy to a free electron. Contradiction with Photoelectric effect? This is a problem in my textbook and I've shown it this way: $E_{initial}=\frac{hc}{\lambda} + mc^2$ $p_{initial}=h/\lambda$ After collision with photon having zero energy we get $p_{final}=h/\lambda$ $E_{final}=\sqrt{(\frac{hc}{\lambda})^2+(mc^2)^2}$ Which is in contradiction with the conservation of energy. Now, this result is I think contradictory to Einstein's explanation of the photoelectric effect. In the photoelectric effect the photon is absorbed by the free electron and this is what makes it have kinetic energy. What am I interpreting wrong? The problem comes from the context of the Compton Effect, by the way. experimental-physics photoelectric-effect DLVDLV $\begingroup$ Hi you say In the photoelectric effect the photon is absorbed by the free electron...just to be clear, the electron receives momentum from the moving photon, but the Compton effect shows that the photon is then deflected away, with less energy sure, but not completely absorbed, as far as I remember. Regards $\endgroup$ $\begingroup$ Electrons in metals are certainly not free. $\endgroup$ – Robin Ekman $\begingroup$ Hi I think this question, although unclear, is about the Compton effect and producing the energy needed to release an electron. In the OP, the wording is photon has zero energy (this does not make sense to me) and in the context of the C.E. which means that the electron gets extra momentum, possible above the work function. Anyway, I think the question should be edited with a diagram, which I would make the actual question clearer. Regards $\endgroup$ $\begingroup$ Well if the Photon has donated all its energy there will be no "photon term" in the final energy. Thats what I meant. Also – electrons in metals are not free in a very strict manner, but they are free in the context of Compton experiments or the PE effect aren't they ? $\endgroup$ – DLV $\begingroup$ By free I'm imagining the electron gas model inside a metal. $\endgroup$ The original problem can be seen in terms of energy and momentum conservation. Before scatter, there are two particles in the center of mass and the center of mass has an invariant mass larger than the mass of the electron. For total absorption of the photon there would be only the electron left. As the electron has a fixed mass and at the center of mass it should be at rest, the reaction cannot happen. It can only happen if a third particle is involved to conserve the overall energy and momentum , and this is what is happening with the photoelectric effect. The incoming photon interacts with an electron that is tied to the atom by a virtual photon . The whole system takes up the energy and momentum conservation. The inverse problem happens with a gamma generating an e+e- pair. The gamma has zero invariant mass, the pair will have at least two electon masses at the center of mass, so a gamma cannot turn into an electron positron pair , a third particle has to be involved. The simplest is a virtual photon from some nucleus too. The Feynman diagram is the same as the one above with a different interpretation ( incoming e- is read as outgoing e+) a) diagram copied anna vanna v $\begingroup$ So the photoelectric effect cant have a single electron as products? 
$\endgroup$ $\begingroup$ As the other answers states and the feynman diagram above shows, the photoelectric effect kicks out one electron and the energy and momentum balance is kept by the left over Z atom and the outgoing electron. The incoming e- is the outer electron tied to the Z shown here as independent for clarity. The original Z has that electron in an outer shell. $\endgroup$ – anna v $\begingroup$ Though it's a great answer I don't get the point with the fixed mass of the electron. Why should the mass be increased? $\endgroup$ $\begingroup$ The input particles obey the Lorenz transformations, the addition of all four vectors hyperphysics.phy-astr.gsu.edu/hbase/Relativ/vec4.html of the system has a fixed invariant mass . For individual particles that is the mass in the elementary particle table, and is invariant with lorentz transformation, the way the length of a three vector is invariant in galilean transformations . One cannot start with a total invariant mass in the initial system and end up with a different invariant mass at the final system, in this case the electron, which has no excited states with higher mass $\endgroup$ As the comments indicate, the answer truly is that the electrons in the solid are not really free. But wait, I hear you say -- the free electron model approximates the electrons in the solid as a free gas of electrons. It certainly isn't perfect, but it can't be that poor of a description. Yes it can, and I'll explain why. Consider what it means to say that a solid is filled with a free electron gas. For definiteness, say that your solid is a metal cube of side $a$. Surely the electrons in the solid are bound to the solid, which is to say, they're not free throughout all of space. They're free inside the solid. So we can model the solid as a 3d infinite square well of width $a$. But you can never remove a particle from an infinite square well, no matter how much energy you give, via photons or anything else. So it's utterly inadequate as a model for the photoelectric effect. You probably want then something like a very high but finite square well. If the well is high enough, the finiteness doesn't change the lowest eigenvalues much, which will still be given by $$E_{n_x,n_y,n_z} \approx \frac{\hbar^2\pi^2}{2ma^2}\left(n_x^2 + n_y^2 + n_z^2\right)$$ Since electrons are fermions, at zero temperature they will occupy the lowest energy eigenstates up to the Fermi energy $E_F$. If we assume that the energy levels are closely spaced enough that $n_x, n_y$ and $n_z$ may be treated as continuous variables, we'll have the following relation for the filled eigenstates: $$n_x^2 + n_y^2 + n_z^2 \leq \frac{2m}{\hbar^2 \pi^2 } a^2 E_f$$ In other words, the total number of filled eigenstates is approximately given by the volume of the positive octant of a sphere of radius $\frac{a}{\hbar \pi } \sqrt{2m E_f}$. $$N_F \approx \frac{1}{8}\frac{4}{3} \frac{a^3}{\hbar^3 \pi^2 } (2m E_f)^{3/2}$$ But of course, $N_F$ has to be equal to the number of electrons in the solid (well, up to a factor of two due to spin degeneracy), which is proportional to the volume of the box. $$C a^3 = 2 N_F \approx \frac{1}{3} \frac{a^3}{\hbar^3 \pi^2 } (2m E_f)^{3/2}$$ So the Fermi energy will be given by $$E_F = \left(3 \pi^2 C \right)^{2/3}\frac{\hbar^2}{2m} $$ which in accordance with intuition and good sense doesn't depend on $a$. Instead it depends only on $C$ which is a property of the material. 
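To attach a number to this, plugging a typical conduction-electron density into the last formula gives a Fermi energy of several electronvolts; the density below is the standard textbook figure for copper, chosen only as an illustration:

```python
import numpy as np

hbar, m_e, eV = 1.0545718e-34, 9.109e-31, 1.602e-19   # SI units
n = 8.5e28           # conduction-electron density of copper, m^-3 (textbook value)
E_F = (3*np.pi**2*n)**(2/3) * hbar**2 / (2*m_e)
print(E_F/eV)        # about 7 eV
```

Seven electronvolts is enormous compared with $k_B T \approx 0.025$ eV at room temperature, so the occupied states really do fill the well up to a level far above its bottom, as assumed above.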
So in order to remove one electron from the material, by whatever means, we need to pay at least the difference between the Fermi energy and the height of the well. This is the work function. Now I hope it's become clear: the electrons in the box may only be regarded as free as long as their energy is small enough that you can pretend that the well is infinite. So let's repeat your argument for a photon that has just enough energy to liberate an electron. \begin{align} E_{\text{initial}} &= \hbar \omega + mc^2 - W\\ p_{\text{initial}} &= \frac{\hbar \omega}{c} \end{align} After the photon is absorbed we have a free electron with zero kinetic energy, so \begin{align} E_{\text{final}} &= mc^2\\ p_{\text{final}} &= \frac{\hbar \omega}{c} \end{align} Since the condition that the photon has the exact minimum amount of energy to liberate one electron is precisely that $\hbar \omega = W$, the two equations are perfectly consistent and energy is conserved. Of course, since the electron is at rest, it can't have the momentum $p_{\text{final}}$ in the above. So the solid has it, and because it's so much more massive than the electron, its kinetic energy may be neglected. If you cling to the free electron model, I doubt you can go any farther than this. You might still be feeling uneasy about the whole thing because while the energy imparted to the box is extremely small, it's non zero. That is true. However, the metal is always at some finite temperature so there's always some thermal energy available to supply the minute amount of kinetic energy imparted to the metal. Realistically this would probably be implemented by looking at how the extracted electron scatters off phonons in the lattice or some such. Leandro M.Leandro M. Let's consider a free electron. If this absorbs a photon, then the Feynman diagram associated with it (the fundamental building block of all diagrams) will give probability zero. Because in the rest frame of the outgoing electron it has less energy than the incoming electron and photon. Conservation of energy is lost in this case. So an electron can't absorb a photon and move on freely. In the same way: the diagram is topological equivalent to an incoming electron and an outgoing electron and photon, and in the centre of mass frame the incoming electron is at rest and the outgoing electron and photon have an energy that's greater than the energy of the incoming electron at rest, which contradicts conservation of energy. The diagram is also topological equivalent to a real incoming electron and positron and a real outgoing photon. In the COMF of the electron and positron, the total momentum before the collision is zero, while the photon's momentum can't be zero since it always travels at the speed of light, which contradicts conservation of energy. It has to be connected with the same kind of diagram, but with an incoming real electron and outgoing real electron and photon. The resulting diagram includes a real incoming electron and photon, a virtual electron (in the "middle") and a real outgoing electron and photon. Of course, this is the first order diagram of Compton scattering. Now a virtual electron isn't on its mass scale, i.e. its momentum and energy are unrelated so it can enforce conservation of 4-momentum at each vortex, which is to say that the combined energy of the electron and photon before the collision is the same as after the collision, as is also the case for the combined momenta of the electron and photon before and after the collision. 
So if the incoming electron has the described momentum and energy and the photon has zero energy (and thus zero momentum), which is the same as no photon at all, the final four-momentum for the electron must be the same as the initial four-momentum. I think you make the error to assume that $p_{initial}=\frac h {\lambda}$. Why should it have that value? It must have a value $x$ to match $E_{initial}$ with $E_{final}$. In that case, you get the equation $\frac{hc}{\lambda}+mc^2=\sqrt{x^2c^2+(mc^2)^2}$, which you can solve for $x$: $x=\sqrt{\frac{h^2+2hmc\lambda}{\lambda^2}}$, so if $x=0$ (in the rest frame of the electron) also $\frac h \lambda=0$. If you look at the incoming and outgoing electrons in their rest frames, then the electrons have only the $mc^2$ term as energy because the "photon" which they absorb has zero energy, and if energy is conserved in one inertial frame (in this case the rest frame) energy is conserved in every inertial frame. Deschele Schilder $\begingroup$ A photon with no energy is not a photon. This makes no sense. $\endgroup$ – Floris $\begingroup$ In the question, it is said that there is a collision with a photon having zero energy, which is indeed the same as no photon at all. $\endgroup$ – Deschele Schilder
How are energy states of photons reason for frequency independence in Compton scattering
Photoelectric effect – Why does one electron absorb one photon?
Photoelectric effect absorption coefficient decreases with energy, why?
How is the photoelectric effect consistent with conservation laws
Is photoelectric effect example of inelastic collision
How are photons absorbed by electrons?
How is the particle nature theory consistent with multi-photon photoelectric effect?
Is it true, that for all atoms the second checkmark is valid?
CommonCrawl
Respiratory Research Determination of rheology and surface tension of airway surface liquid: a review of clinical relevance and measurement techniques Zhenglong Chen1, Ming Zhong2, Yuzhou Luo1, Linhong Deng3, Zhaoyan Hu4 & Yuanlin Song5 Respiratory Research volume 20, Article number: 274 (2019) Cite this article By airway surface liquid, we mean a thin fluid continuum consisting of the airway lining layer and the alveolar lining layer, which not only serves as a protective barrier against foreign particles but also contributes to maintaining normal respiratory mechanics. In recent years, measurements of the rheological properties of airway surface liquid have attracted considerable clinical attention due to new advances in microrheology instruments and methods. This article reviews the clinical relevance of measurements of airway surface liquid viscoelasticity and surface tension from four main aspects: maintaining the stability of the airways and alveoli, preventing ventilator-induced lung injury, optimizing surfactant replacement therapy for respiratory syndrome distress, and characterizing the barrier properties of airway mucus to improve drug and gene delivery. Primary measuring techniques and methods suitable for determining the viscoelasticity and surface tension of airway surface liquid are then introduced with respect to principles, advantages and limitations. Cone and plate viscometers and particle tracking microrheometers are the most commonly used instruments for measuring the bulk viscosity and microviscosity of airway surface liquid, respectively, and pendant drop methods are particularly suitable for the measurement of airway surface liquid surface tension in vitro. Currently, in vivo and in situ measurements of the viscoelasticity and surface tension of the airway surface liquid in humans still presents many challenges. Human airways, from the trachea through the bronchioles to the alveoli, are lined on the inside with a continuous film of surface liquid, which increases in thickness from ~ 0.1 μm in the alveoli to ~ 10 μm in the trachea [1,2,3,4,5]. In the airways, this film of liquid is a bilayer consisting of a periciliary layer and a mucous or 'gel' layer atop. The mucous layer is composed of 97% water and 3% solids (mucins, nonmucin proteins, salts, and cellular debris), which determine the linear and nonlinear viscoelasticity and diffusive properties of the mucus [6, 7]. The periciliary layer is of low viscosity containing water and solutes and in which the cilia reside [8]. The cilia beat 12 to 15 times per second, with cilia tips intermittently gripping the underside of the mucous layer, thus propelling it and entrapped particles towards the mouth, where it is swallowed or expectorated. In this way, the airways are kept clean. In addition to acting as a solid physical barrier to most pathogens, the airway lining fluid also contains lysozymes and a range of defensins that are capable of chemically inactivating inhaled pathogens [1]. In airway generations beyond 15 or 16, the primary secretory cells are Club cells (originally known as Clara cells), found in the respiratory bronchioles, and type II epithelial cells, found in the alveoli. These two types of cells have a relatively weaker mucus-secreting capacity compared with the Goblet cells found in the bronchi and conducting bronchioles [1, 4]. 
As a result, the airway lining fluid transitions from a two-layered fluid to a single layer of fluid that is primarily saltwater, yet with significant concentrations of surfactant [9]. Pulmonary surfactant (PS) is a complex mixture of some 90% lipids and 10% proteins. Most of the lipids are phospholipids, of which 70–80% are dipalmitoyl phosphatidyl choline (DPPC), the main surface-active material responsible for lowering surface tension, while the surfactant proteins (SP) are SP-A, SP-B, SP-C and SP-D [1, 10]. Among these surfactant-associated proteins, SP-B and SP-C are vital to the stabilization of the surfactant monolayer; SP-A and SP-D are involved in the control of surfactant release and possibly in the immunology of the lung. The main function of the pulmonary surfactant film is to reduce surface tension at the air-liquid interface, thus preventing collapse of the alveoli and small airways during end-exhalation [11]. Pulmonary surfactant is present not only in the alveoli but also in the bronchioles and small airways. We hereafter refer to the thin fluid continuum consisting of the airway lining layer and the alveolar lining layer as airway surface liquid [9]. Changes in macro- and microrheological properties of airway surface liquid have a significant impact on normal respiratory mechanics and normal barrier and clearance functions of the lung [12, 13]. For example, an intermediate viscoelasticity of the mucous gel layer, or in other words a viscosity in the range of 12–15 Pa ∙ s(1 Pa ∙ s = 1000 cP) and an elastic modulus of 1 Pa, are essential for optimal mucociliary clearance [14,15,16]. If the viscoelasticity of airway mucus becomes too low, however, the elasticity is not enough for mucus to counteract gravitational action, which likely makes mucus to slide down into the lung and flood the alveoli [13, 17,18,19]. In contrast, pulmonary disease conditions, such as cystic fibrosis (CF), chronic obstructive pulmonary disease (COPD), and asthma, are usually characterized by an increase in the viscoelasticity of mucus. As a result, ciliary action and cough are incapable of effectively clearing the sticky mucus, leading to the accumulation of mucus and even the complete blockage of the airway observed in the above disorders [8, 13, 20]. Likewise, alterations in the surface tension of the alveolar lining fluid also cause severe respiratory diseases, such as neonatal respiratory distress syndrome (NRDS) and acute lung injury (ALI) or the acute respiratory distress syndrome (ARDS) [21,22,23]. Premature infants who suffer from NRDS due to surfactant deficiency exhibit a high lining-fluid surface tension and hence a propensity for prominent atelectasis, decreased lung compliance, increased work of breathing and impaired gas exchange. The postnatal delivery of exogenous surfactant can significantly lower surface tension forces in the lung and has been established as a standard therapeutic intervention in the management of preterm infants with NRDS [24]. Similarly, in ALI/ARDS patients, the normal function of PS is inhibited by protein-rich oedematous fluid present in the airspaces, leading to a dramatic increase in surface tension and hence a decrease in lung compliance [25]. The development of ventilator-induced lung injury (VILI) in mechanically ventilated ALI/ARDS patients is largely attributed to this change in the surface tension-relevant lung micromechanics. Therefore, the study of the rheological properties of airway surface liquid has both physiological and clinical significance. 
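For orientation, a rough Laplace-law estimate illustrates the scale of these surface forces (the numbers are illustrative, not measured values): for an approximately spherical alveolus of radius r ≈ 50 μm lined with a watery film of surface tension γ ≈ 70 mN/m, the recoil pressure 2γ/r is about 2.8 kPa (≈ 28 cmH2O), whereas a surfactant-rich film compressed to a γ of only a few mN/m at end-expiration brings the same figure down to roughly 1–2 cmH2O. This order-of-magnitude difference is why surfactant deficiency in NRDS, or surfactant inhibition in ALI/ARDS, so markedly increases the work of breathing and the tendency of small airspaces to collapse.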
Unfortunately, due to a lack of suitable in vivo and in situ measurement techniques, thus far, all rheological measurements of human respiratory mucus came from in vitro studies that may not give a true picture of in vivo conditions. Moreover, reported literature values of the viscoelasticity of human respiratory mucus show large (orders of magnitude) intersubject, intrasubject, and even within the same mucus -sample variations (Table 1). It is imperative to heighten collaborations between clinicians, biomedical engineers, and applied scientists to explain these variations in perspective of both physiology and experimental techniques, to further develop tools to assess the quantitative properties of airway surface liquid, and finally to correlate the biophysical properties of airway surface liquid with healthy versus diseased states. This article reviews the importance of airway surface liquid rheology and surface tension measurements in: (1) maintaining the stability of small airways and alveoli; (2) preventing ventilator-induced lung injury; (3) optimizing surfactant replacement therapy (SRT); and (4) characterizing lung barrier and clearance functions. Subsequently, new methods and techniques for determining the viscosity and surface tension of airway surface liquid are described. Table 1 Comparison of Published Viscosities of Airway Surface Liquid Importance of rheological measurements of airway surface liquid Maintaining stability of airways and alveoli The role of the surface tension and viscosity of airway surface liquid in maintaining airway stability is primarily two-fold: retarding small airway closure and preventing alveolar collapse. As described in Section 1, the liquid lining usually forms a thin and relatively uniform layer on the inner surface of the airway, but sometimes it is possible for the airway to become occluded by the liquid, leading to airway closure. Closure of the distal airways at low lung volumes can occur through two mechanisms, "liquid bridge formation" or "compliant collapse" (Fig. 1) [30,31,32]. In the former case, liquid in a uniform film lining on the inner wall of an axisymmetric airway redistributes via a classical fluid-elastic instability known as the Plateau-Rayleigh instability [30, 33, 34]. This process leads to complete occlusion of the airway by a liquid plug or "bridge", provided that the condition of sufficient liquid for closure is satisfied. A theoretical analysis by White and Heil showed that the growth rate of the film thickness increased with surface tension and decreased with an increase in the fluid's viscosity [35]. Halpern and coworkers revealed that the growth rate for a viscoelastic layer was larger than for a Newtonian fluid with the same viscosity [36]. The overall timescale required for an occlusion to form is small compared with a single breathing cycle, provided that no surfactant is present. Halpern and Grotberg further demonstrated that the closure time for a pulmonary surfactant-rich film can be approximately five times greater than that for a film free of pulmonary surfactant [37]. Mechanisms of airway closure: (A) Liquid bridge formation (B) compliant collapse Alternatively, if the surface tension of the airway lining fluid is sufficiently large relative to the airway's bending stiffness, a fluid-elastic "compliant collapse" is more likely to occur [30, 31, 33]. As lung volume falls during expiration, the radius of the airway is decreased, thus resulting in an increase of the curvature of the air-liquid interface. 
The initially uniform and axisymmetric liquid lining can become unstable, and pressure gradients are induced in the fluid that drive flows redistributing the fluid. As a result, in the region where the liquid lining film is thickest, surface tension creates a large pressure jump over the highly curved air-liquid interface, causing negative pressure in the liquid. At the same time, parenchymal tethering forces on the external surface of the airway fall because of the gradual increase in lung volumes. This combination of reduced lining fluid pressure and parenchymal tethering subjects the airway wall to a significant compressive load and promotes the propensity of the airway to buckle inward, producing a compliant collapse. In diseased conditions such as pulmonary oedema or neonatal RDS, this compliant collapse of the airways may occur due to an increase in the volume of fluid or in the surface tension. Surface forces also have a critical effect on airspace stability, as illustrated in Fig. 2. Two connected bubbles (alveoli) with a common pressure and a constant surface tension are blown at the end of a Y-tube [1, 38]. According to the Laplace equation, the pressure generated by surface tension in the small bubble is larger than that in one with a greater diameter, resulting in an inherently unstable system: the smaller alveolus will eventually collapse and the larger one will become over-distended. Of course, this is not the case in a healthy lung. The surface tension of the alveolar lining fluid is variable in situ as a function of expansion and compression of the alveolar surface area due to the presence of pulmonary surfactant. The surface tension drops as the alveolar surface decreases, and it rises when the surface expands, allowing for equal pressure between two different sized alveoli; therefore, system stability is maintained. Connected alveoli illustrating the driving force collapsing the smaller alveolus in the case of constant surface tension Preventing ventilator-induced lung injury A number of theoretical and experimental studies have demonstrated that the increase in viscosity and surface tension of airway surface liquid likely results in VILI. Two main physical mechanisms for VILI are lung tissue overdistention caused by surface tension-induced alterations in interalveolar micromechanics and atelectrauma to the epithelial cells during repetitive airway reopening and closure [39,40,41]. The prediction from an adjoining two-alveoli model by Chen et al. [42] shows that the pattern of alveolar expansion can appear heterogeneous or homogeneous, strongly depending on differences in air-liquid interface tension on alveolar segments. More specifically, if surface tension in the liquid-filled alveolus is much greater than that in the air-filled alveolus, then alveolar expansion is heterogeneous. Consider a pair of juxtaposed alveoli: the maximum stress and strain within the septum shared by the two alveoli may occur at a low alveolar pressure; in contrast, as alveoli inflate to near total lung capacity (TLC), the stress and strain of the alveolar walls may decrease instead. On the other hand, if the surface tensions in two adjacent alveoli are identical, then alveolar expansion is homogenous; that is, the stress and strain of all alveolar septa will appear to linearly increase as alveolar volume varies from functional residual capacity (FRC) to near TLC. These calculations are in good agreement with the experimental phenomenon observed by Perlman and coworkers [43]. 
Using real-time optical section microscopy, these investigators quantified the micromechanics of an air-filled alveolus that shares a septum with a liquid-filled alveolus. Instilling liquid into the alveolus produced a meniscus that changed the septal curvature and consequently the pressure difference across the septum. As a consequence, the air-filled alveolus bulged into its liquid-filled neighbour even at FRC. Given that liquid-filled and air-filled alveoli can be focal, diffuse or patchy in pulmonary oedema, these findings may provide a novel understanding of segmental heterogeneities and alveolar overdistension during mechanical ventilation. Using thin-walled polyethylene tubes to mimic bronchial walls held in apposition by airway lining fluid, Gaver III et al. [39] investigated the effect of the tube radius (R) and the surface tension and viscosity of airway lining fluid on the airway opening velocity (U) and the applied opening pressure. They found that increasing the surface tension (γ) or viscosity (μ) resulted in an increase in airway opening pressures. Gaver III et al. further defined a nondimensional parameter capillary number (Ca ≡ μU/γ) to represent the relative importance of viscous and surface tension forces in airway opening. When Ca is small, the applied opening pressure must exceed the "yield pressure" ~8γ/R before airway opening can proceed. When Ca is larger than 0.5, the contribution of viscous forces to the overall opening pressures is non-negligible. In subsequent studies, Bilek and Kay et al. utilized a parallel-plate flow chamber lined with pulmonary epithelial cells as an idealized model airway to investigate the mechanisms of surface tension-induced epithelial cell damage [44]. The narrow channel of the chamber was filled with either phosphate-buffered saline (high surface tension) or Infasurf (ONY, Buffalo, NY), a biologically derived pulmonary surfactant with low surface tension. Airway reopening was generated by the steady progression of a semi-infinite bubble of air along the length of the channel, which displaced the occlusion fluid. Two bubble progression velocities were investigated, and the results showed that for the saline-occluded channels, both slow and fast bubble velocities resulted in significant cellular injury compared with the control and that for the Infasurf-occluded channels, cellular injury was dramatically reduced at both bubble velocities, indicating that surfactant has a protective effect. A comparison of the experimental and theoretical observations demonstrated that among four potentially injurious components of the stress cycle associated with airway reopening (shear stress, pressure, shear stress gradient or pressure gradient), the pressure gradient was the most predominant mechanism underlying the observed cellular damage [45]. Recently, Chen et al. [46] further estimated in situ the magnitudes of mechanical stresses exerted on the alveolar walls during repetitive alveolar reopening by using the tape-peeling model from McEwan and Taylor [47]. 
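Before turning to those estimates, the scaling quantities introduced by Gaver III et al. can be made concrete with a brief numerical sketch. The Python snippet below evaluates the capillary number Ca = μU/γ and the approximate yield pressure ~8γ/R for assumed, order-of-magnitude values of lining-fluid viscosity, surface tension, reopening velocity, and airway radius; the parameter values are illustrative only and are not measurements from any study cited here.

```python
# Illustrative sketch of airway-reopening scales; all parameter values are assumed.

def capillary_number(mu_pa_s, u_m_s, gamma_n_m):
    """Ca = mu * U / gamma (dimensionless ratio of viscous to surface tension forces)."""
    return mu_pa_s * u_m_s / gamma_n_m

def yield_pressure(gamma_n_m, radius_m):
    """Approximate yield pressure ~ 8 * gamma / R (Pa) that must be exceeded to open the airway."""
    return 8.0 * gamma_n_m / radius_m

gamma = 0.02       # surface tension of the lining fluid, N/m (20 mN/m, assumed)
radius = 0.25e-3   # small-airway radius, m (0.25 mm, assumed)
speed = 0.02       # reopening velocity U, m/s (assumed)

# Viscosities spanning water-like, mucus-like, and highly viscous (assumed) conditions
for mu in (1e-3, 1e-1, 1.0):
    ca = capillary_number(mu, speed, gamma)
    regime = "non-negligible" if ca > 0.5 else "small"
    print(f"mu = {mu:7.3f} Pa*s  ->  Ca = {ca:7.3f}  (viscous contribution {regime})")

p_yield = yield_pressure(gamma, radius)
print(f"Yield pressure ~ 8*gamma/R = {p_yield:.0f} Pa (~{p_yield / 98.07:.1f} cmH2O)")
```

With these assumed values, only the most viscous case crosses the Ca ≈ 0.5 threshold above which viscous forces contribute appreciably to the opening pressure, illustrating why elevated lining-fluid viscosity makes reopening more injurious.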
These calculations by Chen et al. [46] showed that 1) for a lung with normal fluid viscosity and surface tension, the predicted maximum shear stresses were less than 15 dyn/cm2 at all alveolar opening velocities that were in the physiological range, whereas for a lung with an elevated viscosity of the alveolar lining fluid, shear stresses may increase by several orders of magnitude, enough to induce epithelial cell injury; 2) similarly, in the case of elevated viscosity, pressure drops across a cell may rise to levels greater than 300 dyn/cm2 and consequently result in hydraulic epithelial cracks [48]; 3) the capillary pressure for alveolar opening ranged from 5 to 30 cmH2O, strongly depending on the initial depth of the alveolar lining fluid, which may explain the clinically high opening pressure in sticky atelectasis; and 4) assuming the alveolar lining fluid to be a Newtonian fluid, the magnitudes of shear stress were proportional to the alveolar opening velocity, and therefore, the reduction of inspiratory flow rate or respiratory frequency would lead to a decrease in shear stress and a concomitant reduction in atelectrauma on alveolar epithelial cells. As discussed above, rheological measurements of airway surface liquid during disease progression in many pulmonary disorders have an important role in the management of mechanical ventilation. Measured values of surface tension and viscoelasticity provide clinical data for establishing and validating mathematical models of VILI. Knowing the values of viscosity and surface tension of airway surface liquid enables clinicians to quickly determine individualized inflation pressures and PEEPs and to roughly estimate lung stress and strain based on computational models, thereby adjusting the ventilation settings or therapeutic strategies in time to avoid VILI as much as possible. Optimizing surfactant replacement therapy Exogenous SRT has been established as a standard therapeutic intervention for preterm and term neonates with clinically confirmed respiratory distress syndrome since the early 1990s [10, 21, 49]. During traditional SRT, natural or synthetic surfactant is administered via an endotracheal tube either as a bolus or by infusion via a thin catheter inserted into the endotracheal tube. Thereafter, the infants are maintained on mechanical ventilation. The INSURE (INtubation-SURfactant-Extubation) technique, which features early bolus instillation of surfactant with prompt extubation to nasal CPAP, has also been studied in a number of small randomized trials. The results showed that this strategy reduced the need for mechanical ventilation and improved survival rates [21, 50, 51]. A number of alternatives to the administration of surfactant include the use of aerosolized surfactant preparations, laryngeal mask airway-aided delivery of surfactant, instillation of pharyngeal surfactant, and administration of surfactant using laryngoscopy or bronchoscopy [21, 51]. SRT has also been applied to adults whose surfactant systems are compromised by ARDS, but clear indications of a distinct surfactant-mediated decrease in mortality or improvement in ventilator care of ARDS patients are still lacking [49, 52]. Additionally, recent randomized clinical trials have indicated that preventive surfactant administration to infants with suspected NRDS is no longer effective in groups of infants when CPAP is used routinely [21]. 
Most previous studies on SRT failures have focused on examining the biophysical mechanisms for surfactant inhibition due to plasma proteins or lipids [49]. However, the three-dimensional model of SRT recently proposed by Filoche and colleagues provides new insights into this issue, as it strongly suggests that inadequate delivery of surfactant may be a major cause of SRT failure [53]. Using similar surfactant mixtures and instilled dose volumes, these investigators simulated the delivery of surfactant to neonates and adults in 3D structural models of the lung airway tree. The results revealed well-mixed distributions in the neonatal lungs but very inhomogeneous distributions in the adult lungs. When liquid surfactant mixtures are instilled into the trachea via an endotracheal tube, they form liquid plugs, which are then blown distally into the branching network of the airways by forced inspirations. Filoche and colleagues simplified the above complicated flow process into two separate steps: step A, deposition of the liquid onto the airway walls into a trailing film; and step B, liquid plug splitting at an airway bifurcation. Step A determines the total amount of liquid reaching the acini, i.e., the delivery efficiency. Theoretical work by Halpern et al. [54] has shown that the thickness (h) of a trailing film in the parent tube is related to the local capillary number by the relation $$ \frac{\mathrm{h}}{{\mathrm{a}}_1}=0.36\left(1-{\mathrm{e}}^{-2{\mathrm{Ca}}_{\mathrm{p}}^{0.523}}\right) $$ where a1 is the radius of the parent tube, the capillary number Cap = μUp/γ represents the ratio of viscous forces (characterized by the viscosity μ) to surface tension forces (characterized by γ), and Up is the plug speed. Eq. (1) shows that as the viscosity of the liquid plugs or the airflow rate increases, so does Cap, and thus there is more liquid deposited into the trailing film. Step B governs the homogeneity of delivery. When the liquid plug splits at the bifurcation of the airway, a fraction of the plug's volume goes down one daughter airway, V1, and the rest goes down the other, V2. Zheng and coworkers [55, 56] defined the ratio of the volumes in the daughter airways as the split ratio, Rs = V1/V2, which is affected by a number of factors, including the physical properties of the liquid (viscosity, density and surface tension), the gravitational orientation, the airway geometry, plug propagation speed, interfacial activity, and the presence of plug blockage in nearby airways from previous instillations. A critical parent tube capillary number was found to exist below which Rs = 0, and above which Rs increased and eventually levelled out with Cap. This feature can be explained by the driving pressure at the bifurcation. When the fluid viscosity or plug velocity is too small, the driving pressure is not large enough to overcome gravity; thus, no liquid enters the upper (gravitationally opposed) daughter airway after bifurcation. In summary, the viscosity and surface tension of surfactant mixtures have a profound effect on the distribution quality of the delivered surfactant. For example, a computational model of SRT by Filoche and colleagues showed that the synthetic surfactant mixture Exosurf, with a low viscosity of ~ 3 cP, yielded a less homogeneous distribution compared with the surfactants Survanta, Curosurf and Infasurf, each with a viscosity of ~ 30 cP, under the same neonatal treatment protocol. Furthermore, in the distal regions of the lung, surface tension gradient-induced Marangoni flows drive the surfactant deeper into the lung. 
For these Marangoni flows to arise, the surface tension of the endogenous surfactant must be above that of the instilled exogenous surfactant [57]. Characterizing barrier properties of mucus The airway mucus gel layer acts as a solid physical barrier to foreign pathogens, toxins and environmental ultrafine particles while allowing rapid passage of selective small molecules, ions, capsid viruses and many proteins. These selective barrier properties of airway mucus are intimately related to its viscoelasticity, which shows order-of-magnitude variations in healthy versus diseased states. The rheological characterization of airway mucus has contributed greatly to both the understanding of mucociliary clearance and the quantitation of the severity of airway diseases such as CF, COPD and chronic bronchitis [13]. For example, Hill et al. reported that the mean solids concentration (% solids by weight including salts, denoted wt%) of sputum for normal subjects is 1.7%, whereas the sputum wt% values of COPD and CF subjects are ~ 2× higher (3.5%) and ~ 4× higher (7.0%), respectively. Below 3.0 wt%, the loss (viscous) modulus of human bronchial epithelial (HBE) cell culture mucus dominates the storage (elastic) modulus across all frequencies ranging from ~ 0.1 Hz (tidal breathing frequency) to ~ 10 Hz (ciliary beating frequency), implying that low wt% mucus is a viscoelastic solution; at 4.0 wt%, the elastic and viscous moduli were almost equal over the above frequency range, suggestive of the beginning of a transition from solution-like to gel-like behaviour; and at 5.0 wt%, the elastic modulus dominated the viscous modulus, suggesting that high solids wt% mucus is a viscoelastic gel at frequencies above 0.1 Hz [58]. These findings provide key data linking increased mucus solids concentration to the observation of Puchelle and Zahm [59] that when cilia beat against a fluid with a viscosity higher than 100 cP, the ciliary beating frequency decreases and mucus clearance slows. In addition, Button and coworkers recently found that mucus concentration was also strongly correlated with the mucus-epithelial surface adhesive and mucus cohesive strengths. The increased mucus concentration and viscous energy dissipation in CF and COPD patients therefore make the cough mechanism fail to effectively clear accumulated mucus from the lungs [60]. The gel-on-brush model of the mucus clearance system by Button et al. demonstrated that the mucus layer and the periciliary brush layer (PCL) must be in relative osmotic modulus balance for effective mucociliary clearance [20]. At a mucus solids concentration of ~ 5 wt% or above, the corresponding osmotic modulus of the mucus layer begins to exceed that of the PCL, therefore collapsing the PCL and slowing mucus clearance, as observed in disease. Kesimer et al. tested the relationships predicted by the gel-on-brush model between total mucin concentration and the increase in severity of chronic bronchitis [61]. The mean total mucin concentrations were higher in current or former smokers with severe COPD than in controls who had never smoked. The relationships between total mucin concentration and prospective annualized respiratory exacerbations showed that mucin concentrations were higher in participants who had exacerbations than in those who had none. These results suggest that airway mucin concentrations may serve as a biomarker for the diagnosis of chronic bronchitis. 
Microrheology affords a detailed characterization of the barrier properties of airway mucus at a scale relevant to pathogens, toxins, and foreign particles. When the scale approaches the mesh size of the mucus layer, the diffusion rates of particles are expected to be reduced due to steric or adhesive forces, thus leading to a higher apparent viscosity. A variety of conventional nanoparticle-based drug delivery systems for CF and other pulmonary diseases have been discouraged by the mucus barrier since nanoparticles are usually subjected to mucociliary clearance before they reach airway mucosal surfaces due to the extremely slow diffusion rates of these particles in the mucus. As such, to engineer nanoparticles capable of penetrating this highly viscoelastic and adhesive mucus barrier, it is imperative to characterize the local viscoelasticity of mucus at scales relevant to nanoparticle delivery systems. Suk and coworkers investigated the effect of nanoparticle size and surface chemistry on transport rates in fresh, undiluted CF sputum. They found that the transport rates of 200 nm particles that were densely coated with low molecular weight polyethylene glycol (PEG) were 90-fold faster than the same-sized uncoated particles. On the other hand, the movement of both coated and uncoated 500 nm particles was strongly hindered. Therefore, by tracking the displacement of 200 nm PEG-coated particles, they further showed that the local microviscosities of fresh, undiluted cystic fibrosis sputum were only 5-fold higher than the viscosity of water, while the bulk viscosity was 10,000-fold higher at low shear rates. Additionally, these authors estimated the average mesh spacing of physiological human CF sputum to be 140 ± 50 nm [7]. In light of these findings, investigators have further designed various muco-inert nanoparticles that can rapidly penetrate the mucus layer, thus enhancing the efficacy of drug and gene delivery at mucosal surfaces [62, 63]. Techniques for measuring the rheology of airway surface liquid When selecting an appropriate technique to investigate the viscosity of airway surface liquid, it is important to keep in mind that airway surface liquid has two particular features: 1) a relatively small available sample volume and 2) large variations in the range of viscosity depending on the patient, the sampling site in the lung, and healthy or diseased conditions [7, 13,14,15,16, 26, 27, 29, 59, 64]. Commonly used instruments for the measurement of viscosity include glass capillary viscometers, falling sphere viscometers, rotational viscometers, magnetic microrheometers and particle tracking microrheometers. Among these instruments, the first four have been used to determine the macroscopic bulk viscosity, while the last one has been applied to the study of microrheology. Glass capillary viscometers Glass capillary viscometers are also known as tube-type viscometers, which consist of a U-shaped glass tube with a reservoir bulb on one arm of the U and a measuring bulb with a precise narrow bore (the capillary) on the other. There are two calibrated marks along the length of the capillary. During use, liquid is suctioned into the measuring bulb and then allowed to flow downward through the capillary into the reservoir. The time it takes for the liquid to pass between two marks is a measure of the viscosity η [65]. 
The principle of the glass capillary viscometer is based on the Poiseuille law [66]: $$ \eta =\frac{\pi {r}^4\varDelta Pt}{8 LV} $$ where r is the radius of the capillary tube, ΔP is the pressure difference between the two ends of the capillary tube, t is the time it takes for a volume V of fluid to elute, and L represents the length of the capillary tube. Theoretically, the more viscous the liquid, the longer it takes to flow. Basch et al. first used a capillary viscometer to measure sputum viscosity, but their results were later demonstrated to be unreliable by Forbes and Wise [67, 68]. Despite a variety of later modifications, such as the use of a wide-bore or horizontal tube, the measurements with sputum were still widely scattered and not reproducible, as the sputum frequently either slipped through the tube as a solid plug or remained stuck somewhere in the tube. In addition, lung fluids such as mucus and sputum are non-Newtonian, with a viscosity that depends on shear rate. Thus, capillary viscometers are further limited since they can only measure viscosity for one shear rate at a time. Falling-sphere viscometers Stokes' law is the basis of the falling-sphere viscometer, in which the fluid under examination is stationary in a vertical or inclined glass tube. A small sphere is allowed to move through the test fluid. As the falling velocity of the sphere increases, the frictional force also increases, and eventually, a terminal velocity Vs is reached when the gravitational force is balanced with the buoyant force and this frictional force. The viscosity η of the test fluid can be calculated by Stokes' law: $$ \eta =\frac{d^2\left({\rho}_s-{\rho}_f\right)g}{18{V}_s} $$ where d is the sphere diameter, ρs is the sphere density, ρf is the fluid density, and g is the local gravitational acceleration [69]. Falling sphere viscometers have undergone important modifications over the years; some commercially available instruments, for example, use cylindrical needles or pistons with hemispheric ends instead of spheres [65]. Unlike the traditional falling-sphere viscometer, which only applies to viscosity measurements of Newtonian fluids, the falling needle also possesses the ability to measure non-Newtonian rheological parameters [70]. In terms of sputum viscosity, there are several drawbacks associated with the falling sphere viscometers, including the requirement of a significant sample volume, operation at low shear rates and poor measurement stability and reproducibility. For instance, Elmes and White measured sputum viscosity employing a rolling ball viscometer and found that the ball moved along the line of least resistance and rolled around aggregations of viscous material suspended in the sputum [71]. 
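For reference, the two working equations above translate directly into code. The Python sketch below computes a Newtonian viscosity from two unrelated, hypothetical runs, one on a capillary viscometer and one on a falling-sphere viscometer; all instrument dimensions and readings are invented for illustration, and, as noted above, these single-shear-rate formulas are of limited value for non-Newtonian fluids such as sputum.

```python
import math

def poiseuille_viscosity(radius_m, delta_p_pa, time_s, length_m, volume_m3):
    """Capillary (tube) viscometer: eta = pi * r^4 * dP * t / (8 * L * V)."""
    return math.pi * radius_m**4 * delta_p_pa * time_s / (8.0 * length_m * volume_m3)

def stokes_viscosity(diameter_m, rho_sphere, rho_fluid, v_terminal_m_s, g=9.81):
    """Falling-sphere viscometer: eta = d^2 * (rho_s - rho_f) * g / (18 * Vs)."""
    return diameter_m**2 * (rho_sphere - rho_fluid) * g / (18.0 * v_terminal_m_s)

# Hypothetical capillary-viscometer run (all values assumed for illustration)
eta_capillary = poiseuille_viscosity(radius_m=0.3e-3, delta_p_pa=500.0,
                                     time_s=120.0, length_m=0.10, volume_m3=1.0e-6)

# Hypothetical falling-sphere run on a more viscous fluid (all values assumed)
eta_sphere = stokes_viscosity(diameter_m=1.0e-3, rho_sphere=7800.0,
                              rho_fluid=1000.0, v_terminal_m_s=0.02)

print(f"capillary estimate:      {eta_capillary * 1e3:.2f} mPa*s")
print(f"falling-sphere estimate: {eta_sphere * 1e3:.1f} mPa*s")
```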
Rotational viscometers Rotational viscometers use the concept that viscosity is defined as the ratio of shear stress to shear rate. They measure the torque required to rotate an immersed element (the spindle) in a fluid at a known speed. The spindle is driven by a motor through a calibrated spring. By utilizing a multiple speed transmission and interchangeable spindles, a wide range of viscosities can be measured, thus enhancing the versatility of the instruments. There are two basic types of rotational viscometers, one with two coaxial cylinders and the other with a cone and plate [65, 66]. In the cylinder viscometer, the liquid to be tested is placed in a narrow space between the rotating cylinder and the fixed cylinder. The more viscous the fluid is, the greater the torque required to spin the rotating cylinder. The primary disadvantage of the cylinder viscometer is the relatively large sample volumes required. For example, despite the use of a specifically designed Small Sample Adapter, commercial Wells-Brookfield cylinder viscometers still require a sample volume of 2 to 16 mL. Baldry and Josse found that the rotating cylinder did not move at all or rotated with a very low speed when sputum viscosity was relatively high [66]. For these reasons, they are not extensively used in clinical laboratories. In the cone and plate viscometer, a nearly flat cone (with cone angle between 0.8° and 3°) in close proximity to a plate forms a narrow gap where the liquid is contained (Fig. 3) [65]. This cone and plate spindle geometry requires a sample volume of only 0.5 to 2.0 mL and generates a constant shear rate at all locations under the cone at any given rotational speed. The viscosity can easily be determined from shear stress (the torque) and shear rate (the angular velocity) by the following equation: $$ \eta =\frac{3M\sin \theta }{2\pi {r}^3\omega } $$ where M is the torque input by the instrument, θ is the cone angle, r is the cone radius, and ω is the angular velocity. Schematic representation of a cone and plate viscometer Furthermore, both the elastic and viscous characteristics of the material can be studied by using a strain-controlled cone and plate rheometer [72, 73]. In this type of device, a motor is designed to impose a sinusoidal strain γ(t) on a material, and the resultant stress σ(t) is measured with a transducer. The phase angle (δ) between stress and strain is used to decompose the measured stress into in-phase and out-of-phase components and to determine the elastic modulus G′ of a material, defined as the in-phase stress divided by the amplitude of the strain, \( {G}^{\prime }=\frac{\hat{\sigma}}{\hat{\gamma}} \cos \delta \), where \( \hat{\sigma} \) and \( \hat{\gamma} \) are the amplitudes of the stress and strain, respectively. Similarly, the viscous modulus G′′ of the material is defined as the out-of-phase stress divided by the amplitude of the strain, \( {G}^{\prime \prime }=\frac{\hat{\sigma}}{\hat{\gamma}} \sin \delta \). The cone and plate viscometer has been widely employed in the measurement of the rheological properties of airway surface liquid [26, 64, 66, 74]. As a result of measurements taken with a cone and plate viscometer, Baldry and Josse showed that comparable readings could be obtained with duplicate sputum samples at different shear rates [66]. Lieberman found that sputum viscosity could reach a relatively steady state after a limited amount of shearing in a cone and plate viscometer [74]. Similarly, King et al. investigated the bulk shear viscosities of aqueous dispersions of calf lung surfactant in a cone and plate viscometer, which showed that the lung surfactant exhibited a complex non-Newtonian behaviour, with higher viscosities at low shear rates [75]. 
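In the oscillatory (strain-controlled) mode described above, G′ and G″ follow directly from the stress amplitude, the strain amplitude, and the phase angle δ. The Python sketch below applies those definitions to invented readings from a hypothetical frequency sweep; it is not tied to any particular commercial instrument, and the numbers are chosen only to show how the loss tangent shifts across the sweep.

```python
import math

def dynamic_moduli(stress_amplitude_pa, strain_amplitude, phase_angle_rad):
    """Return (G', G'') from oscillatory stress/strain amplitudes and phase lag delta."""
    ratio = stress_amplitude_pa / strain_amplitude
    g_prime = ratio * math.cos(phase_angle_rad)         # elastic (storage) modulus
    g_double_prime = ratio * math.sin(phase_angle_rad)  # viscous (loss) modulus
    return g_prime, g_double_prime

# Invented sweep: (frequency in Hz, stress amplitude in Pa, phase angle in degrees)
sweep = [(0.1, 0.8, 65.0), (1.0, 2.5, 45.0), (10.0, 6.0, 30.0)]
strain_amplitude = 0.05   # 5 % strain, assumed to lie within the linear regime

for freq_hz, sigma0, delta_deg in sweep:
    gp, gpp = dynamic_moduli(sigma0, strain_amplitude, math.radians(delta_deg))
    print(f"{freq_hz:5.1f} Hz: G' = {gp:6.1f} Pa, G'' = {gpp:6.1f} Pa, tan(delta) = {gpp / gp:.2f}")
```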
Magnetic microrheometer The magnetic microrheometer involves a pair of permanent magnets or electromagnets for generating a rotating magnetic field [13, 76,77,78]. The test fluid sample is placed in a small test tube with a concave and clear bottom. A metal microsphere is inserted in the sample. The tube, sealed to prevent evaporation of the sample, is centred between the two magnets. The rotating magnetic field generates a magnetic driving force that rotates the metal sphere. In the case of low frequencies and small sphere diameters, the sphere inertia can be neglected. Therefore, the angular speed of the sphere is determined by the rotational speed and strength of the magnetic field as well as the viscosity of the sample around the sphere. The motion of the sphere is monitored by a high-resolution video microscope set below the sample cell. The torque exerted on the sphere is proportional to the difference between the angular velocity of the magnetic field ΩB and that of the sphere ΩS, and the shear rate of the flow is linearly proportional to ΩS. Thus, the viscosity of the test sample can be calculated depending on the shear rate by measuring ΩS at different values of ΩB [78]. In another type of displacement magnetic microrheometer, a micron-sized magnetic bead is carefully positioned in the sample and oscillated by means of an electromagnet at a variable frequency (ω) and amplitude [16, 79, 80]. Images of the bead are captured by a charge-coupled device (CCD) camera to measure bead displacement. The amplitude of the displacement of the bead and its phase shift with respect to the magnetic force are determined to calculate the elastic modulus G′ and viscous modulus G′′ of the sample using: $$ {G}^{\prime }=\frac{f_0}{6\pi a{X}_0\left(\omega \right)}\cos \varphi \left(\omega \right) $$ $$ {G}^{\prime \prime }=\frac{f_0}{6\pi a{X}_0\left(\omega \right)}\sin \varphi \left(\omega \right) $$ where f0 is the amplitude of the applied force, a is the radius of the bead, X0(ω) is the frequency-dependent amplitude of the bead displacement, and φ(ω) is the frequency-dependent phase shift between the force and the bead displacement. The two remarkable features of the magnetic microrheometer are the need for only microlitre quantities of sample volume and freedom from contamination. Consequently, it is well suited to the investigation of the rheological properties of lung fluids partially because only small lung fluid samples can be obtained in normal or disease conditions. King and coworkers pioneered the use of a magnetic rheometer to determine the viscoelastic properties of normal tracheal mucus from canines and discussed the significance of these rheological behaviours in terms of the clearance of secretions from the lung [77]. Particle tracking microrheometer Particle tracking microrheology can be used to characterize the linear viscoelasticity of complex fluids with the accuracy of bulk rheology measurements but with much smaller required sample volumes, on the order of picolitres to microlitres [13, 81, 82]. A modern experimental setup to perform particle tracking microrheology experiments primarily consists of a light source, a colloidal probe, an optical microscope, a fast CMOS camera, and specialized particle tracking software. Colloidal spheres are embedded into a soft viscoelastic fluid, and movies are made of the Brownian motion of the colloidal probes in the sample using the fast CMOS camera. The positions of the centroids of the colloidal probes are subsequently matched frame by frame using a specialized routine to identify each particle and generate its trajectory. Then, the mean squared displacements (MSD),〈Δr2(t)〉, of individual particles are calculated from the colloidal sphere trajectories. Mathematical analysis of MSD can provide a measure of the linear viscoelasticity of the test fluid as a function of time or frequency. 
The simplest method, for example, is to calculate the creep compliance J(t) in the form [81,82,83,84]: $$ J(t)=\frac{\pi a}{kT}\left\langle \varDelta {r}^2(t)\right\rangle $$ where kT is the thermal energy and a is the radius of the particle. A purely viscous liquid of shear viscosity η, such as water or glycerol, subjected to a constant stress, exhibits a creep compliance that increases linearly with time, J(t) = t/η; a highly elastic material of modulus G0 under stress exhibits a time-independent compliance J = 1/G0; the time-dependent compliance of a viscoelastic material such as mucus or cytoplasm shows an intermediate behaviour [13, 83]. In fact, researchers often prefer to work with the complex shear modulus (G∗(ω) = G′(ω) + iG′′(ω)) since its real and imaginary parts more clearly define the contributions of elasticity (G′) and dissipation (the viscosity, η = G′′/ω as ω → 0) to the viscoelastic response [81]. The G∗(ω) of the complex fluids can be obtained from measurements of the time-dependent mean square displacement, 〈∆r2(t)〉, of thermally driven colloidal spheres suspended in the fluid using a generalized Stokes-Einstein (GSE) equation [85]. The frequency-domain representation of the GSE equation takes the following form: $$ {\mathrm{G}}^{\ast}\left(\upomega \right)=\frac{\mathrm{kT}}{\uppi \mathrm{a}\mathrm{i}\upomega \mathrm{Fu}\left\{\left\langle \Delta {\mathrm{r}}^2\left(\mathrm{t}\right)\right\rangle \right\}}=\frac{\mathrm{kT}}{6\uppi \mathrm{a}{\mathrm{D}}^{\ast}\left(\upomega \right)} $$ where D∗(ω) is the frequency-dependent diffusion coefficient and Fu is the Fourier transform. Consider spheres diffusing in a purely viscous fluid or a viscoelastic material; 〈∆r2(t)〉 can be calculated by Eq. (9) and (10), respectively, $$ \left\langle \Delta {\mathrm{r}}^2\left(\mathrm{t}\right)\right\rangle =6\mathrm{Dt} $$ $$ \left\langle \Delta {\mathrm{r}}^2\left(\mathrm{t}\right)\right\rangle ={\mathrm{r}}_0^2\left[1-\exp \left(-6\mathrm{Dt}/{\mathrm{r}}_0^2\right)\right] $$ where \( {\mathrm{r}}_0^2 \) is the saturation value of the mean square displacement of the spheres as time approaches infinity. Combining Eq. (8) with (9) or (10), the frequency-independent viscosity is recovered \( \upeta =\frac{\mathrm{kT}}{6\uppi \mathrm{aD}} \). Dawson et al. used multiple particle tracking to determine the effective viscoelastic properties of human cystic fibrotic sputum at the micron scale. They found that CF sputum microviscosity was an order of magnitude lower than its macroviscosity, suggesting that the enhanced viscoelasticity of CF sputum correlates with the increased microheterogeneity in particle transport [28]. A primary problem with a particle tracking microrheology-based characterization of airway mucus is a possible overestimation of the true mucus viscoelasticity due to adhesive interactions between colloidal probes and mucus. In addition, the maximally achievable viscosity and shear rate ranges are limited due to restrictions on particle sizes and the temporal resolution of tracking, respectively. 
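The chain of relations above, from measured trajectories to MSD, creep compliance, and viscosity, can be illustrated with a short simulation. The Python sketch below generates Brownian trajectories of beads in a purely viscous fluid, computes the ensemble-averaged MSD, and recovers the input viscosity through the creep compliance relation and the Stokes-Einstein limit; the bead radius, temperature, and target viscosity are assumed values chosen only to exercise the equations.

```python
import random
import math

kT = 1.380649e-23 * 310.0   # thermal energy at 37 C, J
a = 100e-9                  # bead radius, m (assumed 200 nm diameter probe)
eta_true = 5e-3             # "true" viscosity of the test fluid, Pa*s (assumed)
D = kT / (6.0 * math.pi * eta_true * a)   # Stokes-Einstein diffusion coefficient

dt, n_steps, n_beads = 0.01, 400, 200     # frame interval (s), frames per bead, number of beads
sigma_step = math.sqrt(2.0 * D * dt)      # per-axis displacement standard deviation per frame

random.seed(0)
trajectories = []
for _ in range(n_beads):                  # simulate 3-D random walks
    x = y = z = 0.0
    traj = [(x, y, z)]
    for _ in range(n_steps):
        x += random.gauss(0.0, sigma_step)
        y += random.gauss(0.0, sigma_step)
        z += random.gauss(0.0, sigma_step)
        traj.append((x, y, z))
    trajectories.append(traj)

def msd(trajs, lag):
    """Ensemble- and time-averaged mean squared displacement at a given lag (in frames)."""
    total, count = 0.0, 0
    for traj in trajs:
        for i in range(len(traj) - lag):
            dx = traj[i + lag][0] - traj[i][0]
            dy = traj[i + lag][1] - traj[i][1]
            dz = traj[i + lag][2] - traj[i][2]
            total += dx * dx + dy * dy + dz * dz
            count += 1
    return total / count

lag = 10
t_lag = lag * dt
msd_val = msd(trajectories, lag)
J_t = math.pi * a / kT * msd_val            # creep compliance, J(t) = (pi*a/kT) <dr^2(t)>
D_est = msd_val / (6.0 * t_lag)             # <dr^2(t)> = 6*D*t for a purely viscous fluid
eta_est = kT / (6.0 * math.pi * a * D_est)  # Stokes-Einstein viscosity
print(f"J(t = {t_lag:.2f} s) = {J_t:.2e} 1/Pa, recovered eta = {eta_est * 1e3:.2f} mPa*s "
      f"(input was {eta_true * 1e3:.1f} mPa*s)")
```

Because the simulated fluid is purely viscous, the recovered compliance is close to J(t) = t/η, which is the behaviour described above for water-like liquids; applying the same pipeline to real mucus data would instead reveal the intermediate, viscoelastic behaviour.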
Methods for measuring surface tension of airway surface liquid Over the past few decades, a variety of measuring techniques have been developed for determining the surface activity of surfactant materials derived from the lung. Among these are film balances, bubble methods and drop shape analysis methods. In addition, the surface tension of pulmonary surfactant can be inferred from pressure-volume data. In the following section, we will discuss classical methods and recent techniques in terms of their basic principles, advantages and limitations to help research workers select the method(s) best suited to their needs. The Langmuir-Wilhelmy balance Clements first introduced the Langmuir-Wilhelmy surface balance to determine the surface tension-area relationship in his pioneering studies of lung extracts [86, 87]. In this method, lung extracts are dropped onto the surface of a subphase substance (usually normal saline) contained in the trough of the surface balance, while the exposed surface area is varied over a wide range by means of a movable barrier. A roughened and clean platinum plate is attached to a balance with a thin metal wire. When the plate is perpendicularly dipped into the liquid, the downward force (F) on it due to wetting is measured by a balance connected to a transducer. The surface tension (γ) can then be calculated using the following equation: $$ \upgamma =\frac{F}{L\cdot \cos \left(\theta \right)} $$ where L is the wetted perimeter of the plate and θ is the contact angle between the liquid phase and the plate. In practice, the platinum plate is assumed to be completely wetted, and therefore, the contact angle is 0°. In this way, Clements and Brown showed that the surface tension of a lung-derived surface approached a lower limit of 10~15 dyn/cm and an upper limit of 40~50 dyn/cm [86, 88]. Although the Langmuir-Wilhelmy balance is one of the most commonly used tools for measuring surface tension, there are still some drawbacks to this apparatus in terms of investigating the tension-area behaviour of lung extracts. One of the most intractable problems has been the film leakage that occurs on the surfaces of the restraining walls and barrier, causing experimental artefacts [89]. Furthermore, large sample volumes are required on the Langmuir-Wilhelmy surface balance because of its large size. Finally, the apparatus does not seem readily adaptable to rapid oscillations of surface area at rates corresponding to a normal cycle of breathing [57]. Captive bubble method The captive bubble method developed by Schürch and coworkers is useful for reproducing the in situ behaviour of lung surfactant monolayers because it eliminates the possibility of surface film leakage [90,91,92,93]. In this method, a lung surfactant suspension is placed into a glass flow-through chamber, and a bubble of atmospheric air is introduced and allowed to float against the slightly concave hydrophilic agarose ceiling of the chamber (Fig. 4) [90, 94]. The bubble volume is compressed and expanded by varying the fluid pressure in the chamber. From the beginning of adsorption measurements, the chamber pressure is rapidly reduced to an estimated level at which the bubble just doubles its diameter from 2~3 mm to 4~6 mm. Then, the captive bubble is submitted to a number of compression-expansion cycles. As the pressure is increased, the bubble volume and surface area are decreased, compressing the adsorbed surfactant monolayer at the air-water interface, during which the bubble progressively flattens, indicating a lower surface tension. Captive bubble chamber. A lung surfactant suspension (A) is placed into a glass flow-through chamber, and a captive bubble (B) is formed by a syringe within the aqueous phase and then allowed to float against the agarose gel (G) ceiling. 
After the stopcock (S) is closed, B can be compressed or expanded by withdrawing fluid through the pressure control port (P) [94] Images of the bubble are recorded by a computer digital image system. By measuring the height and diameter of the bubble in the video picture, the surface tension and area can be calculated using the approach of Malcolm and Elliott or the formulas of Rotenberg [94]. One important feature of the captive bubble approach is that low and stable surface tensions of 1~2 mN/m can be obtained upon the first quasi-static or dynamic compression following adsorption, and therefore, it is suitable for studies on the role of surfactant proteins in surfactant formation [90, 94]. Microdroplet method Schürch et al. reported the use of the microdroplet method to measure alveolar surface tension directly in an excised lung [92, 95,96,97]. The method is based on the observation that a droplet of a nonpolar test liquid resting on top of a monolayer at the air-water interface changes its shape from a sphere to a thin lens as the interfacial tension is raised beyond the value characteristic of the test fluid. In a prior calibration experiment, test fluid droplets with known surface tension are placed onto a surfactant film on a fluid substrate in the surface balance. The shape of the droplets is changed by varying the surface tension of the film and is monitored with a microscope. The relative diameter of the lens (defined as the diameter normalized to its spherical shape) can thus be plotted as a function of the film surface tension. Conversely, if the relative diameter is known, the film surface tension can be determined with this calibration curve. In the subsequent measurements of alveolar surface tension in situ, the alveolus is punctured by a micropipette with a tip diameter of 1~2 μm, and then a test fluid droplet of fluorocarbon liquid or silicone oil is deposited onto the alveolar surface. The diameter of the spherical drop from the pipette before deposition is usually 10~100 μm [95]. After deposition onto the alveolar surface, the drop spreads out into a lens shape whose diameter depends on the alveolar surface tension. By using the microdroplet method, Schürch et al. confirmed that the surface tension in the excised rat lungs at functional residual capacity was between 1 and 10 mN/m [97] and further showed that the maximum surface tension at TLC was approximately 30 mN/m [96]. Most importantly, Schürch et al. found that in excised cat lungs at a given volume between 40 and 70% of TLC, no difference in surface tension could be detected with respect to alveolar size and location [95]. Pendant drop method The pendant drop method is extensively used for surface tension measurement, even at elevated temperatures and pressures [98,99,100,101]. In the pendant drop setup, a drop is formed at the tip of a stainless-steel needle. The volume and surface area are controlled by moving the plunger of the syringe connected to a stepper motor. The pendant drop is constrained inside a glass cuvette to maintain controllable environmental conditions (i.e., temperature and pressure). The drop images are magnified by a horizontally mounted microscope and then acquired using an automatic camera. Then, these images are digitalized and stored in a computer for further extraction of the pendant drop profile to calculate the surface tension [102, 103]. 
In the traditional method, the maximum diameter DE is measured from the drop profile and the diameter DS at a horizontal plane at a distance DE from the bottom of the drop (Fig. 5), after which the surface tension is calculated by the equation $$ \upgamma =\Delta \uprho \mathrm{g}{D}_E^2/H $$ where ∆ρ is the density difference between the drop and the surrounding medium, g is the gravitational acceleration, and H is a function of S = DS/DE, which can be read from a table [99, 104]. Axisymmetric drop shape analysis (ADSA) has recently been developed as a more accurate and applicable technique for measuring surface tension based on acquired drop images. Briefly, ADSA matches the experimental drop profile through an optimization procedure to a theoretical profile calculated from the Laplace equation of capillarity. The matched theoretical drop profile is used to find the surface tension of the pendant drop [99, 100, 102, 103]. Geometry of the pendant drop method The pendant drop method offers several distinct advantages over the conventional film balance, including a high degree of automation, a small sample size, and the ease with which the sample can be isolated from the environment to protect it from contamination [89, 100]. Since the pendant drop method does not suffer a similar restriction in compression rate compared with the Langmuir-Wilhelmy surface balance, it has been used to examine the rate dependence of the surface pressure-surface area isotherm of a DPPC monolayer. The results obtained by the pendant drop method are in good agreement with those from the conventional film balance measurements [103, 105]. Pressure-volume measurements Some early studies used pressure-volume data from air- and liquid-filled excised lungs to calculate alveolar surface tension [106,107,108,109,110]. For example, assuming a maximum surface tension and a constant relationship between lung surface area and lung volume, Bachofen and coworkers [110] derived an equation that relates the surface tension (γ) to lung volume (V) and the component of recoil pressure (Ps) caused by surface tension in the form $$ \upgamma =\frac{3}{2\mathrm{k}}{\mathrm{P}}_{\mathrm{s}}{\mathrm{V}}^{1/3} $$ where k is the shape factor, which can be found from the maximum surface tension. Thus, for each pair of air and saline P-V curves, the corresponding surface tension-surface area relationship can be obtained by the above equation. Smith and Stamenovic [109] provided another approach to deduce values of alveolar surface tension from pressure-volume data. Their approach rests on the assumption that at a given volume, Ps is uniquely determined by γ. First, the pressure-volume curves of control air-filled lungs and test liquid-filled lungs with a limited set of fixed interfacial tensions are measured. As the intersections between the curves with the normal and test liquid interfaces define states of equal surface tension, by pooling data from all intersections, the surface tension-lung volume relationships are obtained. In addition, Wilson used an energy analysis method to calculate the values of surface tension from pressure-volume loops. The reader is referred to reference [108] for a detailed description. 
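For completeness, the two simplest working equations above, the Wilhelmy plate relation γ = F/(L·cosθ) and the traditional pendant drop relation γ = Δρ·g·DE²/H, can be evaluated with invented instrument readings, as in the Python sketch below. The plate dimensions, measured force, drop diameter, and the shape-dependent factor H are all hypothetical values chosen for illustration; in practice, H must be read from the published DS/DE tables cited above.

```python
import math

def wilhelmy_surface_tension(force_n, plate_width_m, plate_thickness_m, contact_angle_deg=0.0):
    """gamma = F / (L * cos(theta)); L is the wetted perimeter of the plate."""
    perimeter = 2.0 * (plate_width_m + plate_thickness_m)
    return force_n / (perimeter * math.cos(math.radians(contact_angle_deg)))

def pendant_drop_surface_tension(delta_rho_kg_m3, d_e_m, shape_factor_h, g=9.81):
    """gamma = delta_rho * g * DE^2 / H; H is a function of S = DS/DE taken from tables."""
    return delta_rho_kg_m3 * g * d_e_m**2 / shape_factor_h

# Hypothetical Wilhelmy reading: 0.50 mN pull on a 10 mm x 0.1 mm plate, complete wetting assumed
gamma_plate = wilhelmy_surface_tension(force_n=0.50e-3,
                                       plate_width_m=10e-3, plate_thickness_m=0.1e-3)

# Hypothetical pendant drop: DE = 2.0 mm, aqueous drop in air, H assumed to be 2.0
# (the actual H value must come from the DS/DE tables referenced in the text)
gamma_drop = pendant_drop_surface_tension(delta_rho_kg_m3=998.0, d_e_m=2.0e-3, shape_factor_h=2.0)

print(f"Wilhelmy plate estimate: {gamma_plate * 1e3:.1f} mN/m")
print(f"Pendant drop estimate:   {gamma_drop * 1e3:.1f} mN/m")
```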
In recent years, measuring the rheological properties of airway surface liquid has attracted considerable clinical attention due to new developments in microrheology instruments and methods. The quantitative characterization of the viscoelasticity and surface tension of airway surface liquid contributes to a better understanding of physiological processes such as airway mucus trapping and clearance and ciliary action to further identify potential markers for ranking the severity of relevant muco-obstructive lung diseases and to develop muco-inert nanoparticle systems for improved drug and gene delivery to mucosal tissues; a good knowledge of lung surfactant dynamics to improve surfactant replacement therapy for respiratory distress in neonates and even adults; and a deep insight into the micromechanical mechanisms of VILI to prescribe personalized mechanical ventilation in ARDS patients. In terms of measurements of viscosity and surface tension of airway surface liquid, the cone and plate system is currently the most commonly used instrument for determining bulk viscosity in clinical practice, while multiple particle tracking is more suitable for probing the microviscosity of lung fluids. In light of the use of the axisymmetric drop shape analysis algorithm and the rapid development of data acquisition and image processing techniques, pendant drop methods hold broad prospects for application in the surface tension measurement of airway surface liquid. Abbreviations: ADSA: Axisymmetric drop shape analysis; ALI:; ARDS: Acute respiratory distress syndrome; CCD: Charge-coupled device; COPD: Chronic obstructive pulmonary disease; DPPC: Dipalmitoyl phosphatidyl choline; FRC: Functional residual capacity; GSE: Generalized Stokes-Einstein equation; HBE: Human bronchial epithelial; INSURE: INtubation-SURfactant-Extubation; MSD: Mean squared displacement; NRDS: Neonatal respiratory distress syndrome; PEG: Polyethylene glycol; PS: Pulmonary surfactant; SP: Surfactant protein; SRT: Surfactant replacement therapy; TLC: Total lung capacity; VILI: Ventilator-induced lung injury. References Lumb A, PF. Chapter 3 Elastic forces and lung volumes. In: Nunn's applied respiratory physiology. 7th Ed ed: Televise Health Sciences. Churchill Livingstone Elsevier; 2010. ISBN 9780702029967. Widdicombe JH, Widdicombe JG. Regulation of human airway surface liquid. Respir Physiol. 1995;99(1):3–12. Widdicombe JH, et al. Regulation of depth and composition of airway surface liquid. Eur Respir J. 2010;201(4):313–8. Widdicombe JH, Wine JJ. Airway gland structure and function. Physiol Rev. 2015;95(4):1241. Widdicombe J. Airway and alveolar permeability and surface liquid thickness: theory. J Appl Physiol. 1997;82(1):3–12. Cribb JA, et al. Nonlinear signatures of entangled polymer solutions in active microbead rheology. J Rheol (N Y N Y). 2013;57:1247. Suk JS, et al. The penetration of fresh undiluted sputum expectorated by cystic fibrosis patients by non-adhesive polymer nanoparticles. Biomaterials. 2009;30(13):2591–7. John VF, Burton DF. Airway mucus function and dysfunction. N Engl J Med. 2010;363(23):2233–47. Levy R, et al. Pulmonary fluid flow challenges for experimental and mathematical modeling. Integr Comp Biol. 2014;54(6):985. Nkadi PO, Merritt TA, Pillers D-AM. An overview of pulmonary surfactant in the neonate: genetics, metabolism, and the role of surfactant in health and disease. Mol Genet Metab. 2009;97(2):95–101. Veldhuizen EJA, Haagsman HP. Role of pulmonary surfactant components in surface film formation and dynamics. BBA. 2000;1467(2):255–70. Cone RA. Barrier properties of mucus. Adv Drug Deliv Rev. 2009;61(2):75–85. Lai SK, et al. Micro- and macrorheology of mucus. Adv Drug Deliv Rev. 2009;61(2):86–100. Rubin BK, et al. Collection and Analysis of respiratory mucus from subjects without lung disease. Am Rev Respir Dis. 1990;141(1):1040–3. 
Puchelle E, Zahm JM, Quemada D. Rheological properties controlling mucociliary frequency and respiratory mucus transport. Biorheology. 1987;24(6):557–63. Jeanneret-Grosjean, A., ., et al., Sampling technique and rheology of human tracheobronchial mucus. Am Rev Respir Dis, 1988. 137(3): p. 707–710. Woodring, et al. Types and mechanisms of pulmonary atelectasis : Atelectasis, Part 1. J Thorac Imaging. 1996;11(2):92–108. Li Bassi G, et al. Following tracheal intubation, mucus flow is reversed in the semirecumbent position: possible role in the pathogenesis of ventilator-associated pneumonia. Crit Care Med. 2008;36(2):518–25. Wong JW, et al. Effects of gravity on tracheal mucus transport rates in normal subjects and in patients with cystic fibrosis. Pediatrics. 1977;60(2):146–52. Button B, et al. A Periciliary brush promotes the lung health by separating the mucus layer from airway epithelia. Science. 2012;337(6097):937. Polin RA, Carlo WA. Surfactant replacement therapy for preterm and term neonates with respiratory distress. Pediatrics. 2014;133(1):156. Thompson BT, Chambers RC, Liu KD. Acute respiratory distress syndrome. N Engl J Med. 2017;377(6):562–72. Sweeney RM, Mcauley DF. Acute respiratory distress syndrome. Lancet. 2016;388(10058):2416–30. Sweet DG, et al. European consensus guidelines on the Management of Respiratory Distress Syndrome - 2019 update. Neonatology. 2019;115(4):432–50. Papazian L, et al. Formal guidelines: management of acute respiratory distress syndrome. Ann Intensive Care. 2019;9(1):69. Puchelle E, Zahm JM, Aug F. Viscoelasticity, protein content and ciliary transport rate of sputum in patients with recurrent and chronic bronchitis. Biorheology. 1981;18(3–6):659–66. Baconnais, S., ., et al., Ion composition and rheology of airway liquid from cystic fibrosis fetal tracheal xenografts. Am J Respir Cell Mol Biol, 1999. 20(4): p. 605–611. Dawson M, Wirtz D, Hanes J. Enhanced viscoelasticity of human cystic fibrotic sputum correlates with increasing microheterogeneity in particle transport. J Biol Chem. 2003;278(50):50393–401. Feather EA, Russell G. Sputum viscosity in cystic fibrosis of the pancreas and other pulmonary diseases. Br J Dis Chest. 1970;64(4):192–200. Heil M, Hazel AL, Smith JA. The mechanics of airway closure. Respir Physiol Neurobiol. 2008;163(1):214–21. Grotberg JB, Jensen OE. Biofluid mechanics in flexible tubes. Annu Rev Fluid Mech. 2004;36(1):121–47. Kamm RD, Schroter RC. Is airway closure caused by a liquid film instability? Respir Physiol. 1989;75(2):141–56. Otis DR, et al. Role of pulmonary surfactant in airway closure: a computational study. J Appl Physiol. 1993;75(3):1323–33. Halpern D, Grotberg JB. Nonlinear saturation of the Rayleigh instability due to oscillatory flow in a liquid-lined tube. J Fluid Mech. 2003;492(492):251–70. White JP, Heil M. Three-dimensional instabilities of liquid-lined elastic tubes: a thin-film fluid-structure interaction model. Phys Fluids. 2005;17(3):529. Halpern D, Fujioka H, Grotberg JB. The effect of viscoelasticity on the stability of a pulmonary airway liquid layer. Phys Fluids. 2010;22(1):2333. Halpern, D., . and J.B. Grotberg, Surfactant effects on fluid-elastic instabilities of liquid-lined flexible tubes: a model of airway closure. J Biomech Eng, 1993. 115(3): p. 271–277. Iii, D.P.G., et al., The significance of air-liquid interfacial stresses on low-volume ventilator-induced lung injury. 2006. Gaver DP, Samsel RW, Solway J. Effects of surface tension and viscosity on airway reopening. J Appl Physiol. 
1990;69(1):74–85. Slutsky AS, Ranieri VM. Ventilator-induced lung injury. N Engl J Med. 2013;369(22):2126–36. Bates JHT, Smith BJ, Allen GB. Computational models of ventilator induced lung injury and surfactant dysfunction. Drug Discov Today Dis Models. 2015;15:17–22. Zheng-long C, Ya-zhu C, Zhao-yan H. A micromechanical model for estimating alveolar wall strain in mechanically ventilated edematous lungs. J Appl Physiol. 2014;117(6):586–92. Perlman CE, Lederer DJ, Jahar B. Micromechanics of alveolar edema. Am J Respir Cell Mol Biol. 2011;44(1):34. Bilek AM, Dee KC, Gaver DP. Mechanisms of surface-tension-induced epithelial cell damage in a model of pulmonary airway reopening. J Appl Physiol. 2003;94(2):770–83. Kay SS, et al. Pressure gradient, not exposure duration, determines the extent of epithelial cell damage in a model of pulmonary airway reopening. J Appl Physiol. 2004;97(1):269–76. Zheng-Long C, et al. An estimation of mechanical stress on alveolar walls during repetitive alveolar reopening and closure. J Appl Physiol. 2015;119(3):190. McEwan AD, Taylor GI. The peeling of a flexible strip attached by a viscous adhesive. J Fluid Mech. 2006;26(1):1–15. Laura C, et al. Hydraulic fracture during epithelial stretching. Nat Mater. 2015;14(3):343. Zuo YY, et al. Current perspectives in pulmonary surfactant — inhibition, enhancement and evaluation. Biochim Biophys Acta Biomembr. 2008;1778(10):1947–77. Speer CP, Sweet DG, Halliday HL. Surfactant therapy: past, present and future. Early Hum Dev. 2013;89:S22–4. Walsh BK, et al. AARC Clinical Practice Guideline. Surfactant Replacement Therapy: 2013. Respir Care. 2013;58(2):367. Gregory TJ, et al. Bovine surfactant therapy for patients with acute respiratory distress syndrome. Am J Respir Crit Care Med. 1997;155(4):1309–15. Filoche M, Tai C-F, Grotberg JB. Three-dimensional model of surfactant replacement therapy. Proc Natl Acad Sci. 2015;112(30):9287. Halpern D, Jensen OE, Grotberg JB. A theoretical study of surfactant and liquid delivery into the lung. J Appl Physiol. 1998;85(1):333–52. Zheng Y, et al. Effects of inertia and gravity on liquid plug splitting at a bifurcation. J Biomech Eng. 2006;128(5):707–16. Zheng Y, et al. Effect of gravity on liquid plug transport through an airway bifurcation model. J Biomech Eng. 2005;127(5):798–806. Uhlig S, Taylor AE. Methods in pulmonary research; 1998. Hill DB, et al. A biophysical basis for mucus solids concentration as a candidate biomarker for airways disease. PLoS One. 2014;9(2):e87681. Puchelle E, Zahm JM. Influence of rheological properties of human bronchial secretions on the ciliary beat frequency. Biorheology. 1984;21(1–2):265. Button B, et al. Roles of mucus adhesion and cohesion in cough clearance. Proc Natl Acad Sci U S A. 2018;115(49):12501–6. Kesimer M, et al. Airway Mucin concentration as a marker of chronic bronchitis. N Engl J Med. 2017;377(10):911–22. Huckaby JT, Lai SK. PEGylation for enhancing nanoparticle diffusion in mucus. Adv Drug Deliv Rev. 2017;124:S0169409X17301813. Samuel KL, et al. Rapid transport of large polymeric nanoparticles in fresh undiluted human mucus. Proc Natl Acad Sci U S A. 2007;104(5):1482–7. Feather EA, Russell G. Effect of tolazoline hydrochloride on sputum viscosity in cystic fibrosis. Thorax. 1970;25(6):732. Rosencranz R, Bogen SA. Clinical laboratory measurement of serum, plasma, and blood viscosity. Am J Clin Pathol. 2006;125 Suppl(Suppl 1):S78–86. Baldry PE, Josse SE. The measurement of sputum viscosity. Am Rev Respir Dis. 1968;98(3):392. 
Basch FP, Holinger P, Poncher HG. Physical and chemical properties of sputum: I. factors determining variations in portions from different parts of the tracheobronchial tree. Am J Dis Child. 1941;62(5):981–90. John Forbes MDL, FRCP, Wise BSL, Manc MB. Expectorants and sputum viscosity. Lancet. 1957;270(6999):767–70. Sutterby JL. Falling sphere viscometer. J Phys E Sci Instrum. 1973;6(10):1001. Davis AMJ, Brenner H. The falling-needle viscometer. Phys Fluids. 2001;13(10):3086–8. White JC, Elmes PC. The rheological problem in chronic bronchitis. Rheol Acta. 1958;1(2–3):96–102. Ma L, et al. Keratin filament suspensions show unique micromechanical properties. J Biol Chem. 1999;274(27):19145–51. Tseng Y, et al. Micromechanics and ultrastructure of actin filament networks crosslinked by human fascin: a comparison with alpha-actinin. J Mol Biol. 2001;310(2):351–66. Lieberman J. Measurement of sputum viscosity in a cone-plate viscometer. I. Characteristics of sputum viscosity. Am Rev Respir Dis. 1968;97(4):654. King DM, et al. Concentration-dependent, temperature-dependent non-Newtonian viscosity of lung surfactant dispersions. Chem Phys Lipids. 2001;112(1):11–9. Chang-Young P, Omar AS. Image-based synchronization of force and bead motion in active electromagnetic microrheometry. Meas Sci Technol. 2014;25(12):125010. King M, Macklem PT. Rheological properties of microliter quantities of normal mucus. J Appl Physiol. 1977;42(6):797–802. Keiji S, Taichi H, Maiko H. Electromagnetically spinning sphere viscometer. Appl Phys Express. 2010;3(1):016602. Ziemann F, Rädler J, Sackmann E. Local measurements of viscoelastic moduli of entangled actin networks using an oscillating magnetic bead micro-rheometer. Biophys J. 1994;66(6):2210–6. Tassieri M, et al. Analysis of the linear viscoelasticity of polyelectrolytes by magnetic microrheometry—pulsed creep experiments and the one particle response. J Rheol. 2010;54(1):117–31. Thomas Andrew W. Advances in the microrheology of complex fluids. Rep Prog Phys. 2016;79(7):074601. Mason TG, et al. Particle tracking microrheology of complex fluids. Phys Rev Lett. 1997;79(17):3282–5. Tseng Y, An KM, Wirtz D. Microheterogeneity controls the rate of gelation of actin filament networks. J Biol Chem. 2002;277(20):18143–50. Apgar J, et al. Multiple-particle tracking measurements of heterogeneities in solutions of actin filaments and actin bundles. Biophys J. 2000;79(2):1095–106. Mason TG. Estimating the viscoelastic moduli of complex fluids using the generalized Stokes-Einstein equation. Rheol Acta. 2000;39:371–8. Clements JA. Surface tension of lung extracts. Proc Soc Exp Biol Med. 1957;95(1):170–2. Clements JA, et al. Pulmonary surface tension and alveolar stability. J Appl Physiol. 1961;16(3):444–50. Brown ES, Johnson RP, Clements JA. Pulmonary surface tension. J Appl Physiol. 1959;14(14):717. Prokop RM, Neumann AW. Measurement of the interfacial properties of lung surfactant. Curr Opin Colloid Interface Sci. 1996;1(5):677–81. Schürch S, et al. Surface properties of rat pulmonary surfactant studied with the captive bubble method: adsorption, hysteresis, stability. Biochim Biophys Acta Biomembr. 1992;1103(1):127–36. Schoel WM, Schürch S, Goerke J. The captive bubble method for the evaluation of pulmonary surfactant: surface tension, area, and volume calculations. Biochim Biophys Acta. 1994;1200(3):281. Schürch S, Bachofen H, Possmayer F. Surface activity in situ, in vivo, and in the captive bubble surfactometer. Comp Biochem Physiol A Mol Integr Physiol. 
2001;129(1):195–207. Putz, G., ., et al., Evaluation of pressure-driven captive bubble surfactometer. J Appl Physiol, 1994. 76(4): p. 1417–1424. Schurch S, et al. A captive bubble method reproduces the in situ behavior of lung surfactant monolayers. J Appl Physiol. 1989;67(6):2389–96. Schürch S. Surface tension at low lung volumes: dependence on time and alveolar size. Respir Physiol. 1982;48(3):339–55. Schürch S, Goerke J, Clements JA. Direct determination of volume- and time-dependence of alveolar surface tension in excised lungs. Proc Natl Acad Sci U S A. 1978;75(7):3417–21. Schürch S, Goerke J, Clements JA. Direct determination of surface tension in the lung. Proc Natl Acad Sci U S A. 1976;73(12):4698–702. Berry JD, et al. Measurement of surface and interfacial tension using pendant drop tensiometry. J Colloid Interface Sci. 2015;454:226–37. Hansen FK, Rødsrud G. Surface tension by pendant drop : I. a fast standard instrument using computer image analysis. J Colloid Interface Sci. 1991;141(1):1–9. Saad SMI, Policova Z, Neumann AW. Design and accuracy of pendant drop methods for surface tension measurement. Colloids Surf A Physicochem Eng Aspects. 2011;384(1):442–52. Franses EI, Basaran OA, Chang CH. Techniques to measure dynamic surface tension. Curr Opin Colloid Interface Sci. 1996;1(1):296–303. Chen P, et al. Axisymmetric drop shape Analysis (ADSA) and its applications. Adv Colloid Interface Sci. 2016;238(98):62. Jyoti A, Prokop RM, Neumann AW. Manifestation of the liquid-expanded/liquid-condensed phase transition of a dipalmitoylphosphatidylcholine monolayer at the air-water interface. Colloids Surf B Biointerfaces. 1997;8(3):115–24. Fordham S. On the calculation of surface tension from measurements of pendant dram. Proc R Soc A. 1948;194(1036):1–16. Yu K, Yang J, Zuo YY. Automated Droplet Manipulation Using Closed-Loop Axisymmetric Drop Shape Analysis. Langmuir. 2016;32(19):4820–6. Bachofen H, et al. Relations among alveolar surface tension, surface area, volume, and recoil pressure. J Appl Physiol. 1987;62(5):1878–87. Fisher MJ, Wilson MF, Weber KC. Determination of alveolar surface area and tension from in situ pressure-volume data. Respir Physiol. 1970;10(2):159–71. Wilson TA. Surface tension-surface area curves calculated from pressure-volume loops. J Appl Physiol Respir Environ Exerc Physiol. 1982;53(6):1512. Smith JC, Stamenovic D. Surface forces in lungs. I. Alveolar surface tension-lung volume relationships. J Appl Physiol. 1986;60(4):1341–50. Bachofen, H., ., J. Hildebrandt, ., and M. Bachofen, . Pressure-volume curves of air- and liquid-filled excised lungs-surface tension in situ. J Appl Physiol, 1970. 29(4): p. 422–431. The authors would like to thank Mr. Chunyuan Zhang for preparing the figures in the manuscript. This study was supported by the National Natural Science Foundation of China. (81770075). 
School of Medical Instrumentation, Shanghai University of Medicine & Health Sciences, 257 Tianxiong Road, Shanghai, 201318, China: Zhenglong Chen & Yuzhou Luo
Department of Intensive Care Medicine, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China: Ming Zhong
Institute of Biomedical Engineering and Health Sciences, Changzhou University, Changzhou, 213164, Jiangsu, China: Linhong Deng & Zhaoyan Hu
Department of Pulmonary Medicine, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Xuhui District, Shanghai, 200032, China: Yuanlin Song
All authors substantially contributed to the drafting and review of this article. All authors read and approved the final manuscript.
Correspondence to Zhaoyan Hu or Yuanlin Song.
Chen, Z., Zhong, M., Luo, Y. et al. Determination of rheology and surface tension of airway surface liquid: a review of clinical relevance and measurement techniques. Respir Res 20, 274 (2019). doi:10.1186/s12931-019-1229-1
Keywords: Airway surface liquid; Lung surfactant
Computation and analysis of temporal betweenness in a knowledge mobilization network
Amir Afrasiabi Rad1, Paola Flocchini1 & Joanne Gaudet2
Computational Social Networks volume 4, Article number: 5 (2017)
Highly dynamic social networks, where connectivity continuously changes in time, are becoming more and more pervasive. Knowledge mobilization, which refers to the use of knowledge toward the achievement of goals, gives rise to one of the many examples of dynamic social networks. Despite the wide use and extensive study of dynamic networks, their temporal component is often neglected in social network analysis, and statistical measures are usually performed on static network representations. As a result, measures of importance (like betweenness centrality) typically do not reveal the temporal role of the entities involved. Our goal is to contribute to filling this gap by proposing a form of temporal betweenness measure (foremost betweenness). Our method is analytical as well as experimental: we design an algorithm to compute foremost betweenness, and we apply it to a case study to analyze a knowledge mobilization network. We propose a form of temporal betweenness measure (foremost betweenness) to analyze a knowledge mobilization network and we introduce, for the first time, an algorithm to compute exact foremost betweenness. We then show that this measure, which explicitly takes time into account, allows us to detect centrality roles that were completely hidden in the classical statistical analysis. In particular, we uncover nodes whose static centrality was negligible, but whose temporal role might instead be important to accelerate mobilization flow in the network. We also observe the reverse behavior by detecting nodes with high static centrality whose role as temporal bridges is instead very low. In this paper, we focus on a form of temporal betweenness designed to detect accelerators in dynamic networks. By revealing potentially important temporal roles, this study is a first step toward a better understanding of the impact of time in social networks and opens the road to further investigation. Highly dynamic networks are networks where connectivity changes in time and connection patterns display possibly complex dynamics. Such networks are more and more pervasive in everyday life, and the study of their properties is the object of extensive investigation in a wide range of very different contexts. Some of these contexts are typically studied in computer science, such as wireless and ad hoc networks, transportation and vehicular networks, and satellite, military, and robotic networks (e.g., see [1,2,3,4,5,6]), while others belong to totally different disciplines. This is the case, for example, of the nervous system, livestock trade, epidemiological networks, and multiple forms of social networks (e.g., see [7,8,9,10,11,12]). Clearly, while being different in many ways, these domains display common features; a time-varying graph (TVG) is a model that formalizes highly dynamic networks, encompassing the above contexts in a unique framework and emphasizing their temporal nature [13]. Knowledge mobilization (KM) refers to the use of knowledge toward the achievement of goals [14]. Scientists, for example, use published papers to produce new knowledge in further publications to reach professional goals. In contrast, patient groups can use scientific knowledge to help foster change in patient practices, and corporations can use scientific knowledge to reach financial goals.
Recently, researchers have started to analyze knowledge mobilization networks (KMN) using a social network analysis (SNA) approach (e.g., see [15,16,17,18,19,20]). In particular, [19] proposed a novel approach where a heterogeneous network composed of a main class of actors subdivided into three subtypes (individual human and non-human actors, organizational actors, and non-human mobilization actors) associated according to one relation, knowledge mobilization (a mobilization-network approach). Data covered a 7-year period with static networks for each year. The mobilization network was analyzed using classical SNA measures (e.g., node centrality measures, path length, density) to produce understanding for KM using insights from network structure and actor roles [19]. The KM SNA studies mentioned above, however, lack a fundamental component: in fact, their analysis is based on a static representation of KM networks, incapable of sufficiently accounting for the time of appearance and disappearance of relations between actors beyond static longitudinal analysis. Indeed, incorporating the temporal component into analysis is a challenging task, but it is undoubtedly a critical one, because time is an essential feature of these networks. Temporal analysis of dynamic graphs is in fact an important and extensively studied area of research (e.g., see [21,22,23,24,25,26,27]), but there is still much to be discovered. In particular, most temporal studies simply consider network dynamics in successive static snapshots, thus capturing only a partial temporal component by observing how static parameters evolve in time while the network changes. Moreover, very little work has been dedicated to empirically evaluating the usefulness of metrics in time (e.g., see [28, 29]). In this paper, we represent KMN by TVGs and we propose to analyze them in a truly temporal setting. We design a deterministic algorithm to compute a form of temporal betweenness in time-varying graphs (foremost betweenness) that measures centrality of nodes in terms of how often they lie within temporal paths with the earliest arrival. We then provide, for the first time on a real data set, an empirical indication for the effectiveness of foremost betweenness. In particular, we focus on data extracted from [19], here referred to as Knowledge-Net. We first consider static snapshots of Knowledge-Net corresponding to the 7 years of its existence, and by studying the classical centrality measures in those time intervals, we provide rudimentary indications of the networks' temporal behavior. To gain a finer temporal understanding, we then concentrate on temporal betweenness following a totally different approach. Instead of simply observing the static network over consecutive time intervals, we focus on the TVG that represent Knowledge-Net and we compute foremost betweenness, explicitly and globally taking time into account. We compare the temporal results that we obtain with classical static betweenness measures to gain insights into the impact that time has on the network structure and actor roles. We notice that, while many actors maintain the same role in static and dynamic analysis, some display striking differences. In particular, we observe the emergence of important actors that remained invisible in static analysis, and we advance explanations for these. 
Results show that the form of temporal betweenness we apply is effective at highlighting the role of nodes whose importance has a temporal nature (e.g., nodes that contribute to mobilization acceleration). A limitation of our algorithm is its applicability to small networks. In fact, any deterministic solution to the computation of foremost betweenness is inevitably very costly and, when faced with large networks, it is feasible to apply it only on small components. This research opens the road to the design of approximate variations of the algorithm so as to make it applicable to larger scenarios, as well as to the study of other temporal measures designed for TVGs.
Time-varying graphs
Time-varying graphs are graphs whose structure varies over time. Following [13], a time-varying graph (TVG) is defined as a quintuple \(\mathcal{G} = (V,E,\mathcal{T},\rho ,\zeta )\), where V is a finite set of nodes and \(E \subseteq V \times V\) is a finite set of edges. The graph is considered within a finite time span \(\mathcal {T}\subseteq \mathbb {T}\), called the lifetime of the system. \(\rho {:}\, E \times \mathcal{T} \rightarrow \{0,1\}\) is the edge presence function, which indicates whether a given edge is available at a given time; \(\zeta {:}\, E \times \mathcal{T} \rightarrow {\mathbb T}\) is the latency function, which indicates the time it takes to cross a given edge if starting at a given date. The model may, of course, be extended by defining a vertex presence function \(\psi {:}\,V\times \mathcal {T}\rightarrow \{0,1\}\) and a vertex latency function \(\phi {:}\,V\times \mathcal {T}\rightarrow \mathbb {T}\). The footprint of \(\mathcal{G}\) is the static graph composed of the union of all nodes and edges ever appearing during the lifetime \(\mathcal {T}\). A journey route R in a TVG \(\mathcal {G}\) is a walk in the footprint of \(\mathcal {G}\), defined as a sequence of edges \(\{e_1,e_2,\ldots ,e_k\}\). A journey \(\mathcal {J}\) is then a temporal walk in \(\mathcal {G}\), that is, a sequence of ordered pairs \(\{(e_1,t_1),(e_2,t_2),\ldots ,(e_k,t_k)\}\) such that \(\rho (e_i,t_i)=1\) for all \(i\le k\) and \(t_{i+1}\ge t_i+\zeta (e_i,t_i)\) for all \(i<k\). Every journey \(\mathcal {J}\) has a departure, departure\((\mathcal {J})\), and an arrival, arrival\((\mathcal {J})\), which refer to the journey's starting time \(t_1\) and its finish time \(t_k+\zeta (e_k,t_k)\), respectively. Journeys are divided into three classes based on temporal and topological distance [30]. Journeys with the earliest arrival time are called foremost journeys, journeys with the smallest topological distance are referred to as the shortest journeys, and journeys that take the smallest amount of time are called the fastest. Moreover, we call foremost increasing journeys those whose route \(\{e_1,e_2, \ldots , e_k\}\) is such that birth-date\((e_i) \le \) birth-date\((e_{i+1})\).
Temporal betweenness
Betweenness is a classic measure of centrality extensively investigated in the context of social network analysis. The betweenness of a node \(v\in V\) in a static graph \(G=(V,E)\) is defined as follows: $$\begin{aligned} B(v) =\sum _{u\ne w\ne v\in V}\frac{|P(u,w,v)|}{|P(u,w)|}, \end{aligned}$$ where |P(u, w)| is the number of shortest paths from u to w in G, and |P(u, w, v)| is the number of those passing through v. Even if static betweenness is "atemporal," we denote here by \(B(v)^ \mathcal {T}\) the static betweenness of a node v in a system whose lifetime is \(\mathcal {T}\).
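Before turning to the temporal measures, the following minimal sketch (ours, not the authors' code) illustrates one possible representation of the TVG model and of the journey condition defined above; the class and function names are purely illustrative.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Edge = Tuple[str, str]

@dataclass
class TVG:
    """Toy time-varying graph: for each edge, the intervals during which it is present
    and a (constant) latency. Illustrative only; not the representation used in the paper."""
    presence: Dict[Edge, List[Tuple[float, float]]]  # edge -> list of [start, end) intervals
    latency: Dict[Edge, float]                       # edge -> time needed to cross it

    def rho(self, e: Edge, t: float) -> bool:
        """Edge presence function: is edge e available at time t?"""
        return any(start <= t < end for (start, end) in self.presence.get(e, []))

    def zeta(self, e: Edge, t: float) -> float:
        """Latency function (kept independent of t for simplicity)."""
        return self.latency[e]

def is_journey(g: TVG, steps: List[Tuple[Edge, float]]) -> bool:
    """Check the journey condition: every edge is present when crossed, and each
    crossing starts no earlier than the arrival of the previous one."""
    arrival = None
    for e, t in steps:
        if not g.rho(e, t):
            return False
        if arrival is not None and t < arrival:
            return False
        arrival = t + g.zeta(e, t)   # arrival of the journey after its last step
    return True
```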
Typically, vertices with high betweenness centrality direct a greater flow and, thus, have a high load placed on them, which is considered an indicator of their importance as potential gatekeepers in the network. While betweenness in static graphs is based on the notion of the shortest path, its temporal counterpart can be defined in three different ways, considering the shortest, foremost, and fastest journeys for a given lifetime \(\mathcal {T}\) [25]. In this paper, we consider foremost betweenness. Nodes with high foremost betweenness values do not simply act as gatekeepers of flow, like their static counterparts. In fact, they direct the flow that conveys a message in an earliest-arrival fashion. In other words, if a message is routed through nodes with high foremost betweenness, it can be transmitted to all other nodes in the graph in a more timely manner than through nodes with lower foremost centrality. Thus, intuitively, they provide some form of "acceleration" in the flow of information. Foremost betweenness \( TB^ \mathcal {T}_\mathcal{F} (v) \) for node v with lifetime \(\mathcal {T}\) is here defined as follows: $$\begin{aligned} TB^ \mathcal {T}_\mathcal{F} (v) = \frac{n(v)}{n} \sum _{u\ne w\ne v\in V}\frac{|\mathcal {F}^\mathcal {T} (u,w,v)|}{|\mathcal {F}^ \mathcal {T}(u,w)|}, \end{aligned}$$ where \(|\mathcal {F}^ \mathcal {T}(u,w)|\) is the number of foremost journey routes between u and w during time frame \(\mathcal {T}\) and \(|\mathcal {F}^ \mathcal {T}(u,w,v)|\) is the number of those passing through v in the same time frame; n is the total number of nodes, and n(v) is the number of nodes in the connected component to which v belongs. The factor \(\frac{n(v)}{n}\) is an adjustment coefficient that takes into account possible network disconnections. In fact, it makes the betweenness of a node depend on the actual size of the connected component to which the node belongs, thus avoiding anomalous situations where a node in a very small component could otherwise be perceived as globally central. This would be the case, for example, of the center v of a small component in the shape of a star, where v would have maximum global betweenness while its central role applies only to a very small portion of the overall network.
Computing foremost betweenness
The computation of betweenness centrality in static graphs can be done quite efficiently. Several approaches exist in the literature (e.g., see [31,32,33,34,35]) proposing either polynomial deterministic solutions or approximate ones for a variety of different graphs. Computing shortest-path betweenness in a TVG can also be done in polynomial time, for example by adapting the algorithms described in [26, 30]. The situation is rather different in the case of foremost betweenness, for which no algorithm has been proposed so far. In fact, it is easy to see that there exist TVGs where counting all foremost journeys or journey routes between two vertices is #P-complete, which means that no polynomial-time algorithm is known. Consider, for example, TVGs where edges always exist (note that a static graph is a particular TVG) and latency is zero. In such a case, any journey between any pair of nodes is a foremost journey. Counting all of them is then equivalent to counting all paths between them, which is a #P-complete problem (see [36]). In general, it is then unavoidable to have worst-case exponential algorithms to compute foremost betweenness in an arbitrary TVG.
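To see concretely how quickly the count can explode, consider a simple family of examples (ours, not taken from the paper): a TVG whose footprint is a chain of \(k\) "diamonds," where node \(a_{i-1}\) is joined to \(a_i\) through two internal nodes \(b_i\) and \(c_i\), every edge is present during the whole lifetime, and \(\zeta = 0\). Every route from \(a_0\) to \(a_k\) can be traversed with the earliest possible arrival, so all of them are foremost journey routes, and there are \(2^k\) of them, even though the footprint has only \(3k+1\) nodes and \(4k\) edges. Any algorithm that enumerates foremost journey routes therefore requires exponential time on this family.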
In this section, we first focus on foremost betweenness based on journey routes in the general setting (Algorithm 1). We then focus on foremost betweenness for special TVGs with zero latency and instant edges (Algorithm 2), which correspond to the characteristics of the knowledge mobilization network that we analyze in "Knowledge-Net". Note that each solution has the same worst-case time complexity, linear in the number of nodes in all the journey routes in the TVG, which can clearly be exponential. The advantages of the algorithm designed for the special temporal condition of instant edges and zero latency are mainly practical. In fact, the worst-case complexities are the same, but the execution time is better for our particular dataset. A general algorithm In this section, we describe an algorithm for counting all journey routes from a given node to all the other nodes in the TVG, passing through any possible intermediate node. This module is at the basis of the computation of foremost betweenness. We start by introducing some notations and functions used in the algorithm. Given an edge (x, y), let function arriv(x, y, t) return the arrival time to y, leaving x at time t. Given a time-stamped journey \(\pi \), with an abuse of notation, let us indicate by arriv\((\pi )\) the arrival time at the last node of \(\pi \). The foremost arrival time in G to any node v from a given source s can be computed using the Algorithm from [30]. Let foremost(s, v) denote such a time. We are now ready to describe the algorithm. The input of Algorithm CountFormemostJRoutes is a pair (G, s), where \(G=(V,E) \) is a TVG and s is a starting node. The algorithm returns a matrix Count\(_s[x,y]\), for all \(x,y \in V\) containing the number of foremost journeys from s to y passing through x (note that Count\(_s[x,x]\) denotes the number of foremost journeys from s to x). The counting algorithm is simple and it is based on multiple Depth-First Search (DFS) traversals. It consists of visiting every journey route of G starting from s, incrementing the appropriate counters every time a newly encountered journey is foremost. We remind that a node can reappear more than once in a journey route, with various occurrences corresponding to different times. This means that we need to store the time when a node is visited in the journey route so that, if it is visited again, we can determine whether the subsequent visit corresponds to a later time and thus the node has to be considered again. Note that this is the main difference with respect to a DFS in a static graph, where instead every node is visited exactly once. To perform the traversal managing multiple visits (corresponding to different traversal times), we use two stacks: Path and S, where Path contains the nodes corresponding to the journey currently under visit and S contains the edges to be visited. In both Path and S, we store also time-stamps, to register the time of the first visit of nodes in Path and the time for the future visits of edges in S. If a node happens to be revisited at a later time, in fact, it is treated as a new node. The traversal starts as a typical DFS, pushing the incident edges of the source s onto stack S with their arrival times in these journeys (lines 4–6). The nodes corresponding to the current journey under visit are kept in the second stack Path (these nodes are implicitly marked visited), initially containing only the source. 
When considering the next candidate edge (x, y) to visit (line 8), we may be continuing the current journey (if the top of stack Path contains x) or we may have backtracked to some previous nodes (if the top is different from x). In this last case, the content of Path is updated to reflect the backtracking (lines 9–11). After visiting a node y (line 16), the DFS continues pushing on S the edges incident to y that are feasible with the current journey under visit (i.e., the edges whose target is not already in Path, and whose latest traversal time is greater than or equal to the earliest arrival time at y) (lines 17–19). The if clause at line 20 checks whether the discovered journey is foremost and updates the corresponding counters. In other words, as soon as a journey \(\pi = [(x_0,x_1),(x_1,x_2), \ldots ,(x_{k-1},x_k)]\) is encountered in the traversal, Count\([x_i,x_k]\), \(i\le k\) is updated only if \(\pi \) is a foremost journey, and, regardless of it being foremost, the traversal continues pushing on the stack the edges incident to \(x_k\) that are temporally feasible with \(\pi \). Whenever backtracking is performed, however, the already visited nodes on the backtracking path are popped from Path (thus implicitly remarked unvisited) in such a way that they can be revisited as part of different journey routes, not explored yet. Observations on complexity The running time of Algorithm CountFormemostJRoutes is linear in the number of nodes belonging to different foremost journeys, because it traverses each one of them. However, depending on the structure of the TVG, such a number could be exponential, thus an overall exponential worst-case complexity. More precisely, let \(\mu _s\) be the number of foremost journeys from a source node s to all the other nodes in \(\mathcal{G}\), \(n(\mu _s)\) be the number of nodes belonging to those journeys, and n the number of nodes of \(\mathcal{G}\). Moreover, let \(\mu \) and \(n(\mu )\) be, respectively, the overall number of foremost journeys in \(\mathcal{G}\) and the overall number of nodes in those journeys. The algorithm to count all foremost journeys from s to all the other nodes traverses every foremost journey from the source to any other node, and it performs an update for every visited node in each foremost journey that it encounters. Thus, its time complexity is \(O(n(\mu _s))\). To compute foremost betweenness, the algorithm has to be repeated for every possible source, thus traversing every possible foremost journey in \(\mathcal{G}\) for a total time complexity of \(O(n(\mu ))\). Since \(n(\mu )\) could be exponential in n, we have a worst-case exponential complexity in the size of the network. Note that the high cost is inevitable for any deterministic algorithm to compute foremost betweenness. Algorithm for KnowledgeNet Algorithm 1 is applicable to a general TVG. We now consider a very special type of TVG with specific temporal restrictions that correspond to the type of network that we analyze in this paper. One such peculiarity is given by instant edges (edges that appear only during a unique time interval). Another characteristic is zero latency (i.e., edges that can be traversed instantaneously). Finally, in this setting, we base betweenness computation on increasing journey routes. 
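Before turning to that specialized variant, the following sketch makes the general route-counting traversal described above more concrete. It is our illustration, not the authors' Algorithm 1: it uses a recursive rendering of the same depth-first exploration, assumes a simple contact-based encoding of the TVG (each edge listed with the departure times at which it can be crossed and the corresponding latencies), and, as in the description above, never revisits a node while it is on the current route. All names are illustrative.

```python
import heapq
from collections import defaultdict

# Illustrative contact-based encoding (an assumption of this sketch): contacts[(u, v)]
# is a list of (departure, latency) pairs, one for each time at which the directed
# edge (u, v) can start being crossed.

def earliest_arrival(contacts, nodes, s):
    """Foremost (earliest) arrival time from s to every node, via a Dijkstra-like
    relaxation over contacts; the linear scan over all contacts is kept for brevity."""
    best = {v: float("inf") for v in nodes}
    best[s] = 0                              # the lifetime is assumed to start at time 0
    heap = [(0, s)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > best[u]:
            continue
        for (a, b), meetings in contacts.items():
            if a != u:
                continue
            for dep, lat in meetings:
                if dep >= t and dep + lat < best[b]:
                    best[b] = dep + lat
                    heapq.heappush(heap, (best[b], b))
    return best

def count_foremost_routes(contacts, nodes, s):
    """count[x][y] = number of foremost journey routes from s to y passing through x
    (count[y][y] counts the foremost routes reaching y). Exhaustive traversal of the
    journey routes starting at s, hence exponential in the worst case."""
    foremost = earliest_arrival(contacts, nodes, s)
    out = defaultdict(list)                  # adjacency of the footprint
    for (u, v) in contacts:
        out[u].append(v)
    count = defaultdict(lambda: defaultdict(int))

    def dfs(u, arrival, path):
        for v in out[u]:
            if v in path:                    # do not revisit a node on the current route
                continue
            # earliest feasible crossing of (u, v) not before the current arrival time
            nxt = min((d + l for d, l in contacts[(u, v)] if d >= arrival), default=None)
            if nxt is None:
                continue
            if nxt == foremost[v]:           # the route just found is foremost for v
                for x in path + [v]:
                    count[x][v] += 1
            dfs(v, nxt, path + [v])

    dfs(s, 0, [s])
    return count
```

Foremost betweenness would then be obtained by running such a count from every source and combining the resulting ratios as in the definition given earlier.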
We then describe a variation of the general algorithm specifically designed for those conditions (instant edges with zero latency), and we compute foremost betweenness applying the foremost betweenness formula restricted to foremost increasing journeys. Given a TVG \(G=(V,E)\), since we assume the presence of instant edges, we can divide time in consecutive intervals \(I_1, I_2, \ldots , I_k\) corresponding to k snapshots \(G_1, G_2, \ldots G_k\) (\(G_i=(V_i,E_i)\)), in such a way that \((x,y)\in E_i\) implies that \((x,y)\not \in E_j\) for \(j\ne i\). Furthermore, we know by \(\zeta =0\) that an edge can be traversed in zero time. The key idea that can be applied to this very special structure is based on the observation that, given a foremost route \(\pi _{x,y}\) from x to y with edges in time intervals \(I_j\), provided that \(j>i\) and j appears immediately after i, and given any journey route \(\pi '_{s,x}\) from s to x with edges only in \(I_i\), the concatenation of \(\pi '_{s,x}\) and \(\pi _{x,y}\) is a foremost route from s to y, passing through x. This observation leads to the design of an algorithm that starts by counting the foremost routes belonging to the last snapshot \(G_k\) only, and proceeds backwards using the information already computed. More precisely, when considering snapshot \(G_i\) from a source s, the goal is to count all foremost routes involving only edges in \(\cup _ {j\ge i} E_j\) (i.e., with time intervals in \(\cup _ {j\ge i} I_j\)), and when doing so, all the foremost routes involving only edges strictly in the "future" (i.e., time intervals \(\cup _ {j>i} I_j\)) have been already calculated for any pair of nodes. The already computed information is used when processing snapshot \(G_i\) in a dynamic programming fashion. As for Algorithm 1, the input of Algorithm 2 is a pair: a snapshot \(G_i\) and a starting node s. The algorithm returns an array, Count\(_{s}[u,v]\), where Count\(_{s}[u,v]\) for all \(u,v \in V\) contains the number of foremost journeys from s to u passing through v counted so far (i.e., considering only edges in \(\cup _{j\ge i} E_j\)). The actual counting algorithm on snapshot \(G_i\) is a modified version of Algorithm 1, still based on Depth-First Search (DFS) traversal. Lines 2–11 are exactly the same as in Algorithm 1, except that here we do not need to keep track of the arrival time for each edge, as we run Algorithm 2 in a single snapshot and the latency for edges is zero. In line 13, we examine whether the target of the current edge y has already been visited or not. If it has not been visited already, it either falls in the current snapshot, or it flows into the next snapshot. In the case where y stays in the current snapshot (lines 15–22), we push its adjacent nodes into the stack S and determine whether the route ending at y is foremost. If a foremost route is discovered at y, we update Count\(_{s}[z,v]\) by incrementing its value for all \(z \in Path\) (z being the node that falls on the journey route from s to y). If instead it is not a foremost route in the current interval (lines 23–25), meaning that y is a node that existed in the "future," a special update is performed using the data already calculated for the "future snapshots." More precisely, when a journey route (in this case a foremost journey route) from s to x (\(s\leadsto x\)) is a prefix of a journey route \(x\leadsto y\) at a later time snapshot, we perform a procedure called SpecialCount (Algorithm 3). 
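As a small numeric illustration (our numbers, not the paper's): if, within snapshot \(G_i\), there are 2 foremost journey routes from s to x, and 3 foremost journey routes from x to y that use only edges of later snapshots, then the concatenation property above yields \(2 \times 3 = 6\) foremost routes from s to y passing through x. The role of SpecialCount, described next, is to perform this aggregation for the per-vertex counters as well, correcting for vertices that occur both on the portion from s to x and on the portion from x to y, which would otherwise be counted more than once.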
The special count procedure involves aggregating the values of Count\(_s[v,x]\) with Count\(_x[v',y]\), for all nodes (resp. v, \(v'\)) occurring in the journey routes between s and x and between x and y (see Algorithm 3). Algorithm 3 simply calculates the product of the number of foremost journeys between two routes \(s\leadsto x\), and \(x\leadsto y\), if they do not share any vertex (lines 4–9). If instead they share some vertex v, the calculation is slightly more complicated: let a be the number of foremost journeys from s to y where v is visited at least once on the route between x and y; let b be the number of foremost journeys from s to y where v is visited at least once on the route between s and x; and let c be the sum of a and b. c represents the number of all foremost journeys from s to y that pass through v. However, c counts the journey route passing through v multiple times if v happened to exist in both Count\(_s[v,x]\) and Count\(_x[v,y]\), and we need to remove such multiple counting of journeys, which is done along with the update to Count\(_s[v,y]\) in line 13. The worst-case time complexity of Algorithm 2, CountAllZeroLatency, is the same as the one of the general algorithm, CountFormemostJRoutes. In our network, however, it performed better than Algorithm 1. We try to explain below the reasons for this. Algorithm 2 has to be executed in anti-chronological order of the different snapshots, starting from the last one, since it uses the previously calculated results in the computation of the new results. This approach is amenable to concurrent computations. In fact, since the graph is divided into independent snapshots, the number of all journeys can be computed separately for each snapshot, and the result of the calculation can be aggregated at the end. This has the advantage of eliminating all the special updates from the first part of the algorithm (while detecting all the journey routes), and deferring them to the second part (when aggregating all the information for the final update). Thus, instead of performing the special count at each level, we can postpone it to the last step of the algorithm, and loop once through all the collected counts with hard-coded intervals in the loop. While not being advantageous in worst-case scenarios, this strategy results in a more efficient solution from a practical point of view. Still, the algorithm is very costly, even in such a small network (KnowledgeNet has 366 vertices and 750 edges) and it did run in almost a month when implemented in C++ with a machine with 40 cores and 1TB RAM. Knowledge-Net Knowledge-Net is a heterogeneous network where nodes represent human and non-human actors (researchers, projects, conference venues, papers, presentations, laboratories) and edges represent knowledge mobilization between two actors. The network was collected for a period of 7 years [19]. Once an entity or a connection is created, it remains in the system for the entire period of the analysis. Table 1 provides a description of the Knowledge-Net dataset. The dataset consists of 366 vertices and 750 edges in 2011. The numbers of entities and connections vary over time starting from only 10 vertices and 14 edges in 2005 and accumulating to the final network year in 2011. Knowledge-Net is mainly composed of non-human actors, 272 in total (non-human mobilization actors, NHMA, non-human individual actors, NHIA, and organizational actors, OA), in relation with 94 human actors (HA). 
Human actors include principal investigators (PI), highly qualified personnel (HQP), and collaborators (CO). It is through mobilization actors (NHMA) that individual, organizational, and mobilization actors associate and mobilize knowledge to reach goals. For example, scientists mobilize knowledge through articles where not all contributing authors might be in relation with all other authors, yet all relate with the publication [19]. These non-human mobilization actors make up the bulk of the network, including conference venues, presentations (invited oral, non-invited oral, and poster), articles, journals, laboratories, research projects, websites, and theses. According to an interpretation of the Actor-Network Theory [37], the nature, type, and characteristics of the mobilizer nodes have no bearing on their role as mobilizers. Following this interpretation, we consider that knowledge mobilization is beyond the role and nature of the nodes, and we treat Knowledge-Net as a homogeneous network of knowledge mobilizers. All nodes of this network have the same function as knowledge mobilizers despite the fact that they might be quite different from each other from the viewpoint of nature, type, and/or characteristics.
Table 1 Knowledge-Net data set with characteristics of actors and their roles at different times
Classical statistical parameters have been calculated for Knowledge-Net, representing it as a static graph where the time of appearance of nodes and edges did not hold any particular meaning. In doing so, several interesting observations were made regarding the centrality of certain nodes as knowledge mobilizers and the presence of communities [19]. In particular, all actor types increased in number over the 7 years, indicating a rise in new mobilization relations over time. Although non-human individual actor absolute numbers remained small (ranging from 3 in 2006 to 15 in 2011), these actors were critical to making visible tacit (non-codified) knowledge mobilization from around the world (mostly laboratory material sharing, including from organizations and universities in the USA, from Norway, and from Canadian universities). Finally, embedded in human individual actor counts were individuals that the laboratory acknowledged in peer-reviewed papers, thus making further tacit and explicit knowledge mobilization visible.
Fig. 1 A small portion of Knowledge-Net represented as a TVG
When representing Knowledge-Net as a TVG, we notice that the latency \(\zeta \) is always zero, as an edge represents a relationship and its creation does not involve any delay; moreover, edges and nodes exist from their creation (their birth-date) to the end of the system lifetime. Let birth-date(e) denote the year when edge e is created. An example of a small portion of Knowledge-Net represented as a TVG is shown in Fig. 1. We also notice that, due to zero latency, to edges spanning only one interval, and to the fact that edges never disappear once created, any shortest journey route in \(\mathcal{G}\) is equivalent to a shortest path on the static graph corresponding to its footprint; moreover, the notion of fastest journey does not have much meaning in this context because, since edges persist once created and latency is zero, any route can be traversed in zero elapsed time by departing after all of its edges have appeared. On the other hand, the notion of foremost journey, and in particular of foremost increasing journey, is extremely relevant as it describes timely mobilization flow, i.e., flow that arrives at a node as early as possible.
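As an aside, in this particular setting (zero latency, relations that persist once created), the earliest time at which flow originating at an actor can reach another actor reduces to a minimax computation over edge birth-dates. The sketch below is ours, not the paper's; it ignores the additional "increasing" restriction on routes and is meant only to illustrate the idea.

```python
import heapq
from collections import defaultdict

def earliest_reach_year(edges_by_birth, s):
    """edges_by_birth: {(u, v): birth year of the relation}. With zero latency and
    relations that persist once created, the earliest year at which flow starting at s
    can reach v is the minimum, over routes, of the latest birth year on the route
    (a minimax-path computation, solved Dijkstra-style)."""
    adj = defaultdict(list)
    for (u, v), year in edges_by_birth.items():
        adj[u].append((v, year))
        adj[v].append((u, year))            # mobilization relations treated as symmetric here
    reach = {s: 0}                          # 0 stands for "from the start of the lifetime"
    heap = [(0, s)]
    while heap:
        year_u, u = heapq.heappop(heap)
        if year_u > reach.get(u, float("inf")):
            continue
        for v, birth in adj[u]:
            candidate = max(year_u, birth)  # must wait until the relation exists
            if candidate < reach.get(v, float("inf")):
                reach[v] = candidate
                heapq.heappush(heap, (candidate, v))
    return reach

# Toy usage with hypothetical actors:
# earliest_reach_year({("L1(05)", "H1(05)"): 2005, ("H1(05)", "A1(06)"): 2006}, "L1(05)")
# -> {"L1(05)": 0, "H1(05)": 2005, "A1(06)": 2006}
```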
Note that in this setting the computation of foremost betweenness can be performed using Algorithm 2, introduced in the previous section.
Study of Knowledge-Net
Analysis on consecutive snapshots
To provide clearer statistics on the Knowledge-Net dataset and a baseline for interpreting the temporal metrics, we first calculated classical statistical measures (e.g., node centrality measures, path length, density) on the seven static graphs corresponding to the 7 years of study. The average of each measure across the graphs is calculated as a benchmark against which the rank of each node can be compared.
Table 2 Some static statistical parameters calculated for successive snapshots
The statistical data presented in Table 2 provide valuable information about the graph. The steady decrease in the centrality values (normalized in the [0,1] range) confirms that the network growth is not symmetric, so the centrality values have long tails. According to Hanneman and Riddle [38], we should expect high betweenness values in dense graphs, because it is highly likely that a path crosses every node; meanwhile, normalized betweenness values become low if all of the betweenness values are close to each other. Thus, the high betweenness values (in the range of hundreds) and the low values of their normalized counterparts (close to zero) in Knowledge-Net indicate that the graph is either dense or connected in such a way that there is a large number of shortest paths between any two arbitrary vertices. The graph is not dense, as confirmed by the highest density metric of six. Therefore, it is the large number of shortest paths that causes the betweenness of most vertices to be similar and quite low compared to that of the nodes with the highest betweenness. The low average path length (at most 3.50) is a sign that the network presents small-world characteristics, and knowledge mobilization across the whole network is expected to take only a few hops. Meanwhile, the decreasing graph density (from 0.3 to 0.1) along with the increasing average degree (from 1.4 to 2.04) reflects the slow growth in the number of edges compared to the number of nodes. The increase in the number of communities (by 8) together with the increase in modularity (from 0.17 to 0.54) shows that the knowledge mobilization actors tend to form communities as time progresses. As the normalized average betweenness decreases steadily, it might be concluded that a few vertices in each community play the role of mediators and create the links between communities. Apart from these general observations, a static analysis of consecutive snapshots does not provide temporal understanding. For example, it does not reflect which entities engage in knowledge mobilization in a timely fashion, e.g., by facilitating fast mobilization or by slowing mobilization flow. To tackle some of these questions, we represent Knowledge-Net as a TVG and we propose to study it by employing a form of temporal betweenness that makes use of time in an explicit manner.
Foremost betweenness of Knowledge-Net
In this section, we focus on Knowledge-Net and we study \(TB^ \mathcal {T}_\mathcal{F} (v)\) for all v. Nodes are ranked according to their betweenness values, and their ranks are compared with the ones obtained by calculating their static betweenness \(B^ \mathcal {T}(v)\) in the same time frame.
Given the different meaning of those two measures, we expect to see the emergence of different behaviors, and, in particular, we hope to be able to detect nodes with important temporal roles that were left undetected in the static analysis. Foremost Betweenness during the lifetime of the system Table 3 shows the temporally ranked actors accompanied by their static ranks, and the high-ranked static actors with their temporal ranks, both with lifetime \(\mathcal {T}=\) [2005–2011]. In our naming convention, an actor named Xi(yy) is of type X, birth-date yy, and it is indexed by i; types are abbreviated as follows: H (human), L (Lab), A (article), C (conference), J (journal), P (project), C (paper citing a publication), I (invited oral presentation), and O (oral presentation). Note that only the nodes whose betweenness has a significant value are considered, in fact betweenness values tend to lose their importance, especially when the differences in the values of two consecutive ranks are very small [34]. Table 3 List of the highest ranked actors according to temporal (resp. static) betweenness, accompanied by the corresponding static (resp. temporal) rank in lifetime [2005–2011] Interestingly, the four highest ranked nodes are the same under both measures; in particular, the highest ranked node (L1(05)) corresponds to the main laboratory where the data are collected and it is clearly the most important actor in the network whether considered in a temporal or in a static way. On the other hand, the table reveals several differences worth exploring. From a first look, we see that, while the vertices highest ranked statically appear also among the highest ranked temporal ones, there are some nodes with insignificant static betweenness, whose temporal betweenness is extremely high. This is the case, for example, of nodes S1(10) and J1(06). The case of node S1(10) To provide some interpretation for this behavior, we observe vertex S1(10) in more detail. This vertex corresponds to a poster presentation at a conference in 2010. We explore two insights. First, although S1(10) has a relatively low degree, it has a great variety of temporal connections. Only three out of ten incident edges of S1(10) are connected to actors that are born on and after 2010, and the rest of the neighbors appear in different times, accounting for at least one neighbor appearing each year for which the data are collected. This helps the node to operate as a temporal bridge between different time instances and to perhaps act as a knowledge mobilization accelerator. Second, S1(10) is close to the center of the only static community present in [2010–2011] and it is connected to the two most important vertices in the network. The existence of a single dense community, and the proximity to two most productive vertices can explain its negligible static centrality value: while still connecting various vertices S1(10) is not the shortest connector, and its betweenness value is thus low. However, a closer temporal look reveals that it plays an important role as an interaction bridge between all the actors that appear in 2010 and later, and the ones that appear earlier than 2010. This role remained invisible in static analysis and only emerges when we pay attention to the time of appearance of vertices and edges. On the basis of these observations, we can interpret S1(10)'s high temporal betweenness value as providing a fast bridge from vertices created earlier and those appearing later in time. 
This might indicate reasons for further study of the importance of poster presentations that can blend tacit and explicit knowledge mobilization in human–poster presentation–human relations during conferences, and continue into future mobilization with new non-human actors as was the case for S1(10). The case of node J1(06) J1(06), the Journal of Neurochemistry, behaves similarly to S1(10) with its high temporal and low static rank. As opposed to S1(10), this node is introduced very early in the network (2006); however, it is only active (i.e., has new incident edges) in 2006 and 2007. It has only three neighbors, A1(06), A3(07), and C1(07), all highly ranked vertices statically (A1(06), A3(07)), or temporally (C1(07)). Since its neighboring vertices are directly connected to each other or in close proximity of two hops, J1(06) fails to act as a static short bridge among graph entities. However, its early introduction and proximity to the most prominent knowledge mobilizers helps it become an important temporal player in the network. This is because temporal journeys overlook geodesic distances and are instead concerned with temporal distances for vertices. These observations might explain the high temporal rank of J1(06) in the knowledge mobilization network. A finer look at foremost betweenness A key question is whether the birth-date of a node is an important factor influencing its temporal betweenness. To gain insights, we conducted a finer temporal analysis by considering \(TB^ \mathcal {T}_\mathcal{F}\) for all possible birth-dates, i.e., for \( \mathcal {T}=[x,2011],\forall x\in \{2005,2006,2007,2008,2009,2010,2011\}\). This allowed us to observe how temporal betweenness varies depending on the considered birth-date. Before concentrating on selected vertices (statically or temporally important with at least one interval), and analyzing them in more detail, we briefly describe a temporal community detection mechanism that we employ in analysis. Detection of temporal communities According to Tantipathananandh et al. [27], accurately detecting communities in TVGs is an NP-hard and APX-hard task. Tantipathananandh et al. [27] used a heuristic to approximate the community detection for a more efficient algorithm. However, when the number of nodes in a dense graph exceeds double digits, the algorithm becomes computationally unfeasible to run. To the best of our knowledge, the only other work that attacked the community detection problem in TVGs is [39], where the problem is tackled by transforming the TVG into a series of static snapshot graphs with no repeated nodes in snapshots, and by incrementally detecting and adding to communities. While the complexity of the algorithm is not provided, it immediately proves inapplicable to our problem as it (a) works only on series of snapshots with no repetition and (b) includes aging factor in calculations. Thus, we take an approach similar to the one proposed in [27], by only focusing on approximating the communities for the purpose of this research. To do that, we first transform our TVG into a static weighted directed graph (the journey graph), which gives a rough indication of the foremost journeys of the actual TVG. We then use the journey graph as input to an existing community detection algorithm, designed for weighted graphs [40]. 
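For illustration only, the transformation just mentioned, and made precise in the next paragraph, could be sketched as follows. The sketch is not the paper's implementation: it assumes a precomputed table fj_count of foremost-journey counts (for instance produced by a counting procedure like the one sketched earlier), and it uses a standard modularity-based routine from networkx in place of the weighted community detection algorithm of [40].

```python
import networkx as nx
from networkx.algorithms import community

def journey_graph(fj_count):
    """Journey graph J(G): a directed edge x -> y whenever at least one foremost journey
    from x to y exists, weighted by the number of such journeys. fj_count is assumed to
    be a precomputed dict {(x, y): number of foremost journeys}."""
    J = nx.DiGraph()
    for (x, y), c in fj_count.items():
        if c > 0:
            J.add_edge(x, y, weight=c)
    return J

def approximate_communities(J):
    """Weighted community detection on the journey graph. The paper relies on the
    algorithm of [40]; here a modularity-based routine from networkx stands in for it,
    applied to an undirected copy of J (weights of reciprocal edges are summed, and
    nodes with no foremost journeys are left out of this sketch)."""
    U = nx.Graph()
    for x, y, data in J.edges(data=True):
        w = data["weight"] + (U[x][y]["weight"] if U.has_edge(x, y) else 0)
        U.add_edge(x, y, weight=w)
    return list(community.greedy_modularity_communities(U, weight="weight"))
```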
More precisely, given a TVG \(G= (V,E)\), we construct the journey graph of G, \(J(G) = (V,E')\), as follows: the nodes of J(G) correspond to the original nodes of G, and \((x,y) \in E'\) if there exists at least one foremost journey between x and y in G. The weight associated with edge \((x,y) \in E'\) is equal to the number of foremost journeys between x and y in G. An example of this construction is shown in Fig. 2. Note that Knowledge-Net, over time, creates only one connected component, but the community analysis of the Knowledge-Net graph results in 14 communities. The largest community consists of almost 39% of the nodes and is centered around L1(05). Given the large number of nodes belonging to communities and the low number of detected communities, it is clear that some of the central nodes share communities with each other.
Fig. 2 Transformation of a TVG into a journey graph
The case of node P1(06)
This is a research project led by the principal investigator at L1(05). The project was launched in 2006 and its official institutional and funded elements wrapped up in 2011. Data in Table 3 support that P1(06) has similar temporal and static ranks with regard to its betweenness in lifetime [2005–2011]. One could conclude that the temporal element does not provide additional information on its importance and that the edges incident to P1(06) convey the same temporal and static flow. However, there is still an unanswered question as to whether or not edges act similarly if we start observing the system at different times. Will a vertex keep its importance throughout the system's lifetime? The result of such an analysis is provided in Fig. 3, where \(TB^ \mathcal {T}_\mathcal{F} (P1(06))\) is calculated for each birth-date (indicated on the horizontal axis), with all intervals ending in 2011.
Fig. 3 Comparison between different values for vertex P1(06). Ranks of the vertex in the last interval are not provided as both betweenness values are zero
While equally important under both measures during the entire lifetime [2005–2011] of the study, this project seems to assume a rather more relevant temporal role when observing the system in a lifetime starting in year 2007 (i.e., \(\mathcal {T} =\) [2007–2011]), when its static betweenness is instead negligible. This seems to indicate that the temporal flow of the edges incident to P1(06) appearing from 2007 on is more significant than the flow of the edges that appeared previously. With further analysis of P1(06)'s neighborhood in [2007–2011], we can formulate technical explanations for this behavior. First, its direct neighbors also have better temporal betweenness than static betweenness. Moreover, its neighbors belong to various communities, both temporally and statically. However, looking at the graph statically, we see several additional shortest paths that do not pass through P1(06) (thus making it less important in connecting those communities). In contrast, looking at the graph temporally, P1(06) acts as a mediator and accelerator between communities. More specifically, we observe that the connections P1(06) creates in 2006 contribute to the merging of different communities that appear only in 2007 and later. When observing within interval [2006–2011], we then see that P1(06) is quite central from a static point of view, because the appearance time of edges does not matter; but, when observing it in lifetime [2007–2011], node P1(06) loses this role and becomes statically peripheral because the newer connections relay information in an efficient temporal manner.
In other words, it seems that P1(06) plays an important role in knowledge acceleration in the period [2007–2011], a role that was hidden in the static analysis and that does not emerge even from an analysis of consecutive static snapshots. For research funders, revealing a research project's potentially invisible mobilization capacity is relevant. Research projects can thus be understood beyond mobilization outputs and more in terms of networked temporal bridges to broader impact.
The case of node A3(07)
A comparison between the different values for vertex A3(07) is shown in Fig. 4, where ranks of the vertex in the last interval are not provided as both betweenness values are zero. The conditions for this node, a paper published in 2007, illustrate a different temporal phenomenon. Node A3(07) has several incident edges in 2007 (similarly to node P1(06)), when both betweenness measures are high. Peering deeper into the temporal communities formed around A3(07) is revealing: up to 2007, this vertex is two steps from vertices that connect two different communities in the static graph. The situation radically changes, however, with the arrival of edges in 2008 that modify the structure of those communities and push A3(07) to the periphery. The shift is dramatic from a temporal perspective because A3(07) loses its accelerator role as its temporal betweenness becomes negligible, while statically there is only a slight decrease in betweenness. The reason for the dampened decrease in static betweenness is that this vertex is close to the center of the static community, connecting peripheral vertices to the most central nodes of the network (such as L1(05) and H1(05)). It is mainly proximity to these important vertices that sustains A3(07)'s static centrality. Such temporal insights lend further support to understanding mobilization through a network lens coupled with sensitivity to time. A temporal shift to the periphery for an actor translates into decreased potential for sustained mobilization.
Fig. 4 Comparison between different values for vertex A3(07). Ranks of the vertex in the last interval are not provided as both betweenness values are zero
Invisible rapids and brooks
On the basis of our observations, we define two concepts to differentiate the static and temporal flow of vertices in knowledge mobilization networks. We call rapids the nodes with high foremost betweenness, meaning that they can potentially mobilize knowledge in a timelier manner, and brooks the ones with insignificant foremost betweenness. Moreover, we call invisible rapids those vertices whose temporal betweenness rank is considerably more significant than their static rank (i.e., the ones whose centrality was undetected by static betweenness), and invisible brooks the ones whose static betweenness is considerably higher than their temporal betweenness, meaning that these vertices can potentially be effective knowledge mobilizers, yet they are not acting as effectively as others due to slow or non-timely relations. Invisible rapids and brooks can be present in different lifetimes, as their temporal role might be restricted to some time intervals only; for example, as we have seen in the previous section, S1(10) and J1(06) are invisible rapids in \(\mathcal{T} = \) [2005–2011], P1(06) is an invisible rapid in \(\mathcal{T} = \) [2007–2011], and A3(07) is an invisible brook in \(\mathcal{T} = \) [2008–2011]. Tables 4 and 5 indicate the major invisible rapids and brooks observed in Knowledge-Net.
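For illustration only, the classification just defined could be operationalized as in the following sketch; the gap threshold is our assumption, since the paper identifies rapids and brooks by inspecting the rank tables rather than by a fixed cutoff.

```python
def classify_flows(temporal_rank, static_rank, gap=20):
    """Split actors into 'invisible rapids' (temporal rank much better than static rank)
    and 'invisible brooks' (static rank much better than temporal rank). Ranks are
    1-based (1 = most central); the gap threshold is an illustrative choice only."""
    rapids, brooks = [], []
    for v in temporal_rank:
        diff = static_rank[v] - temporal_rank[v]   # positive: temporally more central
        if diff >= gap:
            rapids.append(v)
        elif diff <= -gap:
            brooks.append(v)
    return rapids, brooks
```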
Table 4 Major invisible rapids The presence of a poster presentation, a research project, two journals, and a conference publication among the invisible rapids supports that different types of mobilization actors can impact timely mobilization while not being as effective at creating short paths among entities for knowledge mobilization. In other words, they can play a role of accelerating knowledge mobilization, but to a concentrated group of actors. Table 5 Major invisible brooks As for invisible brooks, we observe a journal (the Biochemica et Biophysica Acta-Molecular Cell Research (J3(08)), three papers (C3(11), C4(07), and C5(07)) that cite publications by the main laboratory in the study (L1(05)), a publication (A3(07)) mobilizing knowledge from members of L1(05), and a research assistant who worked on several research projects as an HQP. In comparison with invisible rapids, there is a wider variety in the type of mobilization actors that act as brooks which does not readily lend itself to generalization. Interestingly, we see the presence of journals among invisible rapids and brooks. From our analysis, it seems that journals can hold strikingly opposite roles: on the one hand, they can contribute considerably to more timely mobilization of knowledge while not being very strong bridges between communities, while on the other hand they can play critical roles in bridging network communities, but at a slow pace. A brook, the journal Biochemica et Biophysica Acta-Molecular Cell Research (J3(08)), for example, helped mobilize knowledge in two papers for L1(05) (in 2008 and 2009), and is a journal in which a paper (in 2011) citing a L1(05) publication was also published. Given expected variability in potential mobilization for a journal, further research is needed to establish their roles in mobilization, whether these mobilization actors exist at both ends of the spectrum, or they have a neutral role in mobilization of knowledge. In contrast, the presence of a research project as an invisible rapid might indicate meaningful observations that should be studied further. First, because when public funders invest in research projects as a mobilization actor, an implicit if not explicit measure of success is timely mobilization with potential impact inside and outside of academia [19]. Ranking as a rapid (for a mobilization actor) is one measure that could therefore help funding agencies monitor and detect temporal change in mobilization networks. Second, a research project as rapid might be meaningful because by its very nature a research project can help accelerate mobilization for the full range of mobilization actors, including other research projects. As such, it is not surprising that they can become temporal conduits to knowledge mobilization in all of its forms. In this paper, we proposed the use of a temporal betweenness measure (foremost betweenness) to analyze a knowledge mobilization network that had been already studied using classical "static" parameters. Our goal was to see the impact on the perceived static central nodes when employing a measure that explicitly takes time into account. We observed interesting differences. In particular, we witnessed the emergence of invisible rapids: nodes whose static centrality was considered negligible, but whose temporal centrality appears relevant. 
Our interpretation is that nodes with high temporal betweenness contribute to accelerating mobilization flow in the network and, as such, they can remain undetected when the analysis is performed statically. We conclude that foremost betweenness is a crucial tool to understand the temporal role of the actors in a dynamic network, and that the combination of static and temporal betweenness provides complementary insights into their importance and centrality. The algorithm proposed in this paper to compute foremost betweenness constitutes a deterministic solution and its running time can be exponential in the worst case, which makes it applicable only to very small-scale networks. Since counting all foremost journeys in a graph is a #P-complete problem, such a high cost is inevitable for any deterministic solution. An interesting open direction is the design of approximate solutions, feasible for large networks. Temporal network analysis as performed here is especially pertinent for KM research that must take time into account to understand academic research impact beyond the narrow short-term context of academia. Measures of temporal betweenness, as studied in this paper, can provide researchers and funders with critical tools to more confidently investigate the role of specific mobilization actors for short- and long-term impact within and beyond academia. The same type of analysis could clearly be beneficial when applied to any other temporal context. In conclusion, we focused here on a form of temporal betweenness designed to detect accelerators. This is only a first step toward understanding temporal dimensions of social networks; other measures are already under investigation.
Casteigts A, Flocchini P, Mans B, Santoro N. Deterministic computations in time-varying graphs: broadcasting under unstructured mobility. Proceedings of 6th IFIP conference on theoretical computer science. 2010; 111–124.
Casteigts A, Flocchini P, Mans B, Santoro N. Measuring temporal lags in delay-tolerant networks. IEEE Trans Comput. 2014;63(2):397–410.
Jones EPC, Li L, Schmidtke JK, Ward PAS. Practical routing in delay-tolerant networks. IEEE Trans Mob Comput. 2007;6(8):943–59.
Kuhn F, Lynch N, Oshman R. Distributed computation in dynamic networks. Proceedings of 42nd ACM Symposium on theory of computing (STOC). 2010; 513–522.
Liu C, Wu J. Scalable routing in cyclic mobile networks. IEEE Trans Parallel Distrib Syst. 2009;20(9):1325–38.
Michail O, Chatzigiannakis I, Spirakis P. Distributed computation in dynamic networks. J Parallel Distrib Comput. 2014;74(1):2016–26.
Konschake M, Lentz HH, Conraths FJ, Hövel PH, Selhorst T. On the robustness of in- and out-components in a temporal network. PloS ONE. 2013;8(2):e55223.
Lentz HHK, Selhorst T, Sokolov IM. Unfolding accessibility provides a macroscopic approach to temporal networks. Phys Rev Lett. 2013;110:118701–6.
Mutlu AY, Bernat E, Aviyente S. A signal-processing-based approach to time-varying graph analysis for dynamic brain network identification. Comput Math Methods Med. 2012;2012:451516. doi:10.1155/2012/451516
Quattrociocchi W, Conte R, Lodi E. Opinions manipulation: media, power and gossip. Adv Complex Syst. 2011;14(4):567–86.
Saba H, Vale VC, Moret MA, Miranda J-G. Spatio-temporal correlation networks of dengue in the state of Bahia. BMC Public Health. 2014;14(1):1085.
Saramaki J, Holme P. Temporal networks. Phys Rep. 2012;519(3):97–125.
Casteigts A, Flocchini P, Quattrociocchi W, Santoro N.
Time-varying graphs and dynamic networks. Int J Parallel Emerg Distrib Syst. 2012;27(5):387–408. Gaudet J. It takes two to tango: knowledge mobilization and ignorance mobilization in science research and innovation. Prometheus. 2013;13(3):169–87. Binz C, Truffer B, Coenen L. Why space matters in technological innovation systems mapping global knowledge dynamics of membrane bioreactor technology. Res Policy. 2014;43(1):138–55. Boland WP, Phillips PWB, Ryan CD, McPhee-Knowles S. Collaboration and the generation of new knowledge in networked innovation systems: a bibliometric analysis. Procedia Soc Behav Sci. 2012;52:15–24. Chan K, Liebowitz J. The synergy of social network analysis and knowledge mapping: a case study. Int J Manag Decis Mak. 2006;7(1):19–35. Eppler MJ. Making knowledge visible through intranet knowledge maps: concepts, elements, cases. Proceedings of 34th Annual Hawaii international conference on system sciences. 2001; 9–19. J. Gaudet. The mobilization-network approach for the social network analysis of knowledge mobilization in science research and innovation. uO Research, (PrePrint). 2014; 1–28. Klenk NL, Dabros A, Hickey GM. Quantifying the research impact of the sustainable forest management network in the social sciences: a bibliometric study. Can J For Res. 2010;40(11):2248–55. Galati A, Vukadinovic V, Olivares M, Mangold S. Analyzing temporal metrics of public transportation for designing scalable delay-tolerant networks. proceedings of 8th ACM Workshop on performance monitoring and measurement of heterogeneous wireless and wired networks. 2013; 37–44. Kossinets G, Kleinberg J, Watts D. The structure of information pathways in a social communication network, Proceedings of 14th international conference on knowledge discovery and data mining (KDD).2008; 435–443. Kostakos V. Temporal graphs. Phys A. 2009;388(6):1007–23. Kim H, Anderson R. Temporal node centrality in complex networks. Phys Rev E. 2012;85(2):026107. Santoro N, Quattrociocchi W, Flocchini P, Casteigts A, Amblard F. Time-varying graphs and social network analysis: temporal indicators and metrics. Proceedings of 3rd social networks and multiagent systems symposium (SNAMAS) Tang J, Musolesi M, Mascolo C, Latora V. Temporal distance metrics for social network analysis. Proceeding of 2nd ACM Workshop on online social networks (WOSN). 2009; 31–36. Tantipathananandh C, Berger-Wolf T, Kempe D. A framework for community identification in dynamic social networks, Proceedings of 13th ACM SIGKDD international Conference on knowledge discovery and data mining. 2007; 717–726. Amblard F, Casteigts A, Flocchini P, Quattrociocchi W, Santoro N. On the temporal analysis of scientific network evolution. International conference on computational aspects of social networks (CASoN). 2011; 169–174. Xuan B, Ferreira A, Jarry A. Computing shortest, fastest, and foremost journeys in dynamic networks. Int J Found Comput Sci. 2003;14(02):267–85. Barthelemy M. Betweenness centrality in large complex networks. Eur Phys J B-Condens Matter Complex Syst. 2004;38(2):163–8. Brandes U. A faster algorithm for betweenness centrality. J Math Sociol. 2001;25:163–77. Brandes U. On variants of shortest-path betweenness centrality and their generic computation. Soc Netw. 2008;30(2):136–45. Freeman LC. A set of measures of centrality based on betweenness. Sociometry. 1977;1:35–41. Newman MEJ. A measure of betweenness centrality based on random walks. Soc Netw. 2005;27(1):39–54. Valiant LG. The complexity of enumeration and reliability problems. 
PF has proposed the problem. AAR and PF have discussed and designed together the two algorithms for the computation of foremost betweenness. AAR has implemented the algorithms. JG has provided the knowledge mobilization network data, which she had previously collected for a different study. AAR has conducted the analysis of foremost betweenness for these data. All three co-authors have discussed the results; in particular, JG has provided interpretation in the context of knowledge mobilization. All authors read and approved the final manuscript. Paola Flocchini is Professor at the School of Electrical Engineering and Computer Science. Her work and background are in distributed computing and algorithms. Amir Afrasiabi Rad has recently completed his Ph.D. on temporal analysis of social networks under Prof. Flocchini's supervision. Joanne Gaudet is co-president of an Ottawa-based company. The data collection she performed is from the time when she was a Ph.D. student at the University of Ottawa. A preliminary version of this paper appeared in the Proc. of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Workshop on Dynamics in Networks, 2015. This work was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) under Discovery Grant, and by Dr. Flocchini's University Research Chair.
School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Ontario, Canada Amir Afrasiabi Rad & Paola Flocchini Alpen Path Solutions Inc., Ottawa, Ontario, Canada Joanne Gaudet Correspondence to Paola Flocchini.
Afrasiabi Rad, A., Flocchini, P. & Gaudet, J. Computation and analysis of temporal betweenness in a knowledge mobilization network. Comput Soc Netw 4, 5 (2017). https://doi.org/10.1186/s40649-017-0041-7 Dynamic networks Temporal analysis
Journal of the International Society of Sports Nutrition Effect of sodium bicarbonate ingestion during 6 weeks of HIIT on anaerobic performance of college students Jieting Wang1, Junqiang Qiu1, Longyan Yi1, Zhaoran Hou1, Dan Benardot2,3 & Wei Cao1 Journal of the International Society of Sports Nutrition volume 16, Article number: 18 (2019) Cite this article Past studies have found that sodium bicarbonate ingestion prior to exercise has a performance-enhancing effect on high-intensity exercise. The aim of this study was to investigate the effects of continuous sodium bicarbonate (NaHCO3) supplementation on anaerobic performance during six weeks of high-intensity interval training (HIIT). Twenty healthy college-age male participants were randomly assigned to either the HCO3− group (SB) or the placebo group (PL), with 10 subjects in each group. Both groups completed 6 weeks (3 days/week) of HIIT with the SB ingesting an orange-flavored solution containing 15 g xylitol and 0.2 g HCO3−/kg body mass during each training day, and PL ingesting a similar beverage that was HCO3−-free. This study separated 6 weeks of training into two stages with different training intensities, with the first 3 weeks at a lower intensity than the second 3 weeks. Blood samples to measure serum HCO3− were obtained 5 min before and 30 min after the following HIIT training sessions: Week 1, training session 1; week 3, training session 3; week 6, training session 3. Three 30s Wingate tests (WAnT) were conducted before, in the middle, and after the training and the supplementation interventions, with peak power, mean power, and fatigue index obtained during WAnT, and blood lactate and heart rate obtained after WAnT. Our findings indicate the following: 1) Serum HCO3− level of SB was significantly higher than PL (p < 0.05) both before and after each HIIT; 2) Relative peak power in WAnT was significantly higher in the SB group after 6 weeks (p < 0.01); 3) Lactate clearance rate and the lactate clearance velocity after 10 min of WAnT were both significantly higher in SB in the post-test (p < 0.01); 4) Heart rate recovery rate at 10 min after WAnT in both SB and PL after 6 weeks were significantly improved (p < 0.01 and p < 0.05, respectively), resulting in no difference between groups on these measures. These data suggest that supplementation of HCO3− at the level of 0.2 g/kg body mass before HIIT training enhances the effect of HIIT on anaerobic performance, and improves the blood lactate clearance rate and the blood lactate clearance velocity following anaerobic exercise. High-intensity interval training (HIIT) refers to a training protocol involving multiple bouts of high-intensity exercise or all-out sprints that are interspersed with recovery periods [1]. Results of previous studies support the idea that HIIT significantly promotes both aerobic and anaerobic exercise capacity [2,3,4,5]. Studies have found that aerobic capacity could be positively influenced by HIIT through an increase in oxidative enzymes [6], higher oxidative enzymes activity [7,8,9], better oxygen transfer to cells [10], more mitochondria per cell, and better mitochondrial function [10,11,12]. Tabata et al. (1996) compared the effects of endurance training and HIIT training on anaerobic abilities [2]. In this study, eight sets of 20 s high-intensity exercise, with 10 s intervals between each set, for 5 days/week were completed each training day by the HIIT group [2]. 
After 6-weeks of training, the anaerobic capacity of the HIIT group increased by 28%, while the endurance training experienced no significant change [2]. Although other studies assessing HIIT have used different training methods, training protocols (training equipment, training intensity and time, etc.), and subject groups, most results suggest that HIIT training effectively improves anaerobic capacity [13,14,15,16]. Studies have shown that, because of the high intensity and short duration, HIIT is characterized by an energy supply derived primarily from anaerobic metabolism, although it is known that all three energy systems support the exercise in different proportions during different exercise time periods [17,18,19]. The ability of maintaining the required power output is related to the capacity to continuously supply ATP by anaerobic glycolysis. The benefit of pursuing higher power output in high-intensity exercise alters the kinetics of oxygen uptake (VO2), which can also support anaerobic performance by diminishing the demand on relatively limited anaerobic fuel sources. The study of Tomlin et al. (2001) showed a positive relationship between aerobic fitness and power recovery from high intensity intermittent exercise [20]. It appears, therefore, that the improvement in anaerobic capacity during HIIT training is likely from the combined results of enhanced phosphocreatine energy supply capacity [21], improved glycolytic enzyme activity [22, 23], and enhanced aerobic metabolism [20]. While in a state of rest, human blood is slightly alkaline (pH ~ 7.4), while muscle is neutral (pH ~ 7.0). Constant mediation of the blood and muscle acid-base balance is one of the important requirements to assure normal cellular metabolism. By neutralizing excess acidity and/or alkalinity, buffering systems in the body attempt to maintain the pH in a desired/healthy range. A primary buffering system is the carbonic acid (H2CO3)-bicarbonate ion (HCO3−) system, which functions through the reaction below: $$ {\mathrm{H}}^{+}+{{\mathrm{H}\mathrm{CO}}_3}^{-}\leftrightharpoons {\mathrm{H}}_2{\mathrm{CO}}_3\leftrightharpoons {\mathrm{H}}_2\mathrm{O}+{\mathrm{CO}}_2 $$ Studies have confirmed that acidosis negatively impacts the release of calcium ions during muscle contraction, the activation of electrical signal receptors, the binding of calcium ions to troponin C, and metabolic enzyme activity [24,25,26,27]. These changes have the effect of hindering ATP re-synthesis and slowing glycolysis. Acidosis can result from a drop in intracellular pH during short-duration intense anaerobic exercise. Studies have found that HCO3− can buffer the accelerated release of hydrogen ions associated with this intense anaerobic activity, thereby lowering acidosis. In addition, the sodium ion (Na+) in sodium bicarbonate (HCO3−) can be beneficial by neutralizing the acid impact of the hydrogen ion (H+). It has also been established that ingestion of Na+ can increase the plasma volume [28], which could be an additional benefit to anaerobic activity by creating an enlarged buffering potential through dilution of the H+ concentration. Other studies also have found that acute or chronic exogenous HCO3− may improve performance in 400 and 800 m races, 2 min sprints, the Wingate test, and other anaerobic activities [29,30,31,32,33,34,35]. The timing of acute HCO3− supplementation in past studies typically ranges from 1 to 3 h prior to exercise, which significantly raises the blood HCO3− level and pH in the blood [36]. 
Studies have found that the activity of H+ transmembrane protein increases in proportion to the rise of intracellular H+ concentration [37]. As a result, the H+ and lactate ions, which overflow from the cells due to exercise, can be buffered by HCO3−. Thus, the acid-buffering capacity of the muscles is improved while the increase of H+ in muscles is reduced to delay muscular fatigue. A meta-analysis summarizing 29 studies found that supplementation of HCO3− can improve anaerobic exercise capacity, significantly extending the time of exercise to exhaustion [38]. Studies also suggest that the greater the pH drop during exercise, the more beneficial the supplementation of HCO3– [34, 39]. Some studies have compared the effects of acute and chronic HCO3− supplementation on anaerobic ability, with results suggesting that chronic supplementation is more effective at increasing anaerobic exercise capacity than acute supplementation. A potential problem with HCO3− supplementation is that an inappropriate dosage may result in acute gastrointestinal reactions, with symptoms that include nausea, stomach pain, diarrhea, and vomiting, all of which may negatively impact exercise performance [40]. Dose-response studies assessing commonly used HCO3− doses, typically ranging from 0.1–0.5 g/kg body mass (BM), found that the most commonly used dose was 0.3 g/kg BM [38, 40,41,42,43]. Studies have also shown that chronic supplementation at doses lower than 0.3 g/kg BM results in better gastrointestinal tolerance than one-time acute and larger HCO3− supplementation dose before exercise [44,45,46]. Current studies have assessed the effects of HCO3− supplementation on HIIT exercise performance, but few studies have assessed the independent effects of HIIT and HCO3− on anaerobic performance. There are several studies that have also assessed the impact of HIIT with HCO3− supplementation on aerobic performance [47,48,49]. The studies of Jourkesh et al. (2011) [47] and Edge et al. (2004) [49] found that HCO3− supplementation with HIIT positively affects aerobic capacity. In contrast, a study by Driller et al. (2013) found that this combination had the opposite effect, although the researchers hypothesized that the finding may have been due to the unique subject characteristics (highly trained members on national teams), whose aerobic performance was sufficiently well-developed that it would not likely be impacted through the addition of supplemental HCO3− [48, 50, 51]. Because of this finding, we chose healthy college students as our subjects instead of highly trained athletes, to enable a better understanding of the possible impact of HCO3− ingestion when coupled with HIIT. The results of past studies led us to develop a research project that would assess the combination of chronic HCO3− supplementation with HIIT training, to explore whether this intervention can effectively improve anaerobic capacity in healthy young men. We hypothesized that the combination of chronic HCO3− supplementation during HIIT will result in an improvement of anaerobic exercise performance in this population. Healthy college-age male participants (N = 20; \( \overline{x} \) age = 20.45 ± 0.94 yr.; \( \overline{x} \) height = 1.76 ± 0.05 m; \( \overline{x} \) body mass = 70.55 ± 5.65 kg; \( \overline{x} \) BMI = 22.70 ± 1.60 kg/m [2], \( \overline{x} \) lean mass% = 54.6 ± 2.05, \( \overline{x} \) body fat% = 15.96 ± 4.03) constituted the subject pool (See Table 1). 
The inclusion criteria were as follows: 1) 18–24 year-old males; 2) BMI was between 18.5–23 kg/m2; 3) Healthy, with no hypertension, diabetes or cardiovascular risk factors, and no other diagnosed diseases; 4) Subjects had no professional training during the 6-month period prior to the onset of this experimental protocol; 5) Subjects voluntarily agreed to participate in the experiment. Approval for the study was obtained from the Institutional Review Board of Beijing Sport University (BSU IRB). Informed consent was obtained from all participants. In addition, participants were required to avoid other training protocols and supplementation during the experiment, and were asked to maintain their typical diet and activities. Table 1 Subjects characteristics for two groups* The research proposal was approved by the Institutional Review Board of Beijing Sport University. All subjects were fully informed about the purposes, procedures, and potential risks of this study. Subjects signed an informed consent and completed a health history questionnaire and physical activity questionnaire at the first visit, followed by an assessment of body composition. After these steps, they performed a graded exercise testing (GXT) on a cycle ergometer to evaluate maximal aerobic power output (MAP). This MAP value was considered the subject's basic exercise capacity for development of the individualized training plan. Subjects were randomly assigned to HCO3− (SB) and Placebo (PL) groups (see Table 1) to enhance the possibility of group equivalence. We found no significant differences in BMI, lean mass%, body fat% and MAP between the two groups. All subjects completed 6 weeks of HIIT (3 days per week) on a cycle ergometer. The intensity of the training was set to a higher level in the second three week assessment/training period. Throughout this 6-week training period, the SB group ingested a fluid containing HCO3− on every training day, while the placebo group ingested a similar beverage containing no HCO3−. Blood samples were taken before and after the first training, the third training in the third week, and the final training to measure serum HCO3−. A 30 s Wingate Anaerobic 30 s Cycling Test (WAnT) was conducted pre-, mid-, and post-intervention (i.e., before both HIIT and HCO3− interventions, after three weeks of interventions, and after six weeks of interventions respectively). During the WAnT, peak power, mean power, fatigue index, blood lactate, and heart rate were recorded. Subjects were asked to keep their normal schedule on the day before all interventions, and to avoid consuming, within 2 h before the test, carbonated drinks, alcohol, caffeine or other substances that could affect the GXT, WAnT and blood test results. Subjects were also asked to avoid any strenuous exercise before GXT, WAnT, and blood test. All sports training, GXT, and WAnT were performed using a cycle ergometer. During the interventions, subjects were also asked to avoid any other professional training regimen and nutrition supplementation. Daily food consumption was recorded by subjects as a means of monitoring nutritional intake. Graded exercise testing The first segment of the exercise capacity test was GXT on a cycle ergometer (starting at 40 W + 20 W·2 min− 1), which was the basis for all the following steps. The maximal aerobic power output was identified as the highest workload that subjects can maintain during the entire test. Below are the equations for MAP. If subjects completed 2 min of their last stage: MAP = Wfinal. 
Where, MAP is the maximal aerobic power output, Wfinal is the power output (in watts) of the last complete stage that subjects performed. If subjects did not complete 2 min of their last stage: MAP = Wfinal + (t/ 120 × 20). Where, MAP is the maximal aerobic power output, Wfinal is the power output (in watts) of the last complete stage that subjects performed, and 't' is the duration (i.e., time) that subjects performed in the uncompleted stage [52]. Polar monitors were worn by subjects to monitor heart rate. The standards for exhaustion included: 1) Heart rate reached 180 bpm/min; 2) Subjects subjectively felt tired; 3) Subjects couldn't pedal at a constant load for more than 15 s. Subjects were considered exhausted when they simultaneously met any two of the above criteria. HIIT intervention Both groups performed 6 weeks of the same HIIT training, the intensity of which was set based on MAP. This study set the first 3-week period and the second 3-week period as two distinct training stages with different training intensities, following the standard of prior research [53]. During the first 3-week period the goal was for subjects to finish 4 sets of training during training days, with a 1-min interval between each set. Every set consisted of 1 min cycling, which included 30 s cycling at an intensity of 100% MAP and 30 s at 70% MAP. The second set intensity was changed to 40 s cycling at 100% MAP, plus 20 s cycling at 70% MAP. Training was performed by subjects three times per week, with the training sessions separated by at least one rest day. Prior to the training, subjects were required to do a warm-up at 60 W for 5 min. HCO3 − supplementation The HCO3− supplementation dose for SB was 0.2 g/kg BM. HCO3− was dissolved into 1 L drinking water with 15 g xylitol and 2 g orange flavoring to enhance the acceptability of the fluid. With the goal of lowering the potential for gastrointestinal distress, which could occur by drinking a high fluid volume too quickly, subjects were asked to consume the beverage over several opportunities during the 60–90 min before training. The beverage for PL was similar to that consumed by SB, except for containing no HCO3−. Wingate Anaerobic 30 s Cycling Test Subjects performed a Wingate Anaerobic 30 s Cycling Test with a Monark 894E (Model: Ergomedic 894E, Manufacturer: Healthcare International) at pre-, mid- and post-intervention. A 1-day (24 h) interval was required between the test and the training. Subjects were instructed to take these three tests at the same time of the day. Subjects were given a 5-min standard warm-up prior to WAnT [54], including two 3–5 min all-out sprints and a constant warm-up at 1 W/kg BM. Following a 5-min warm-up, WAnT was conducted according to the standard method of Bar-Or [55]. Peak power (PP), mean power (MP) and fatigue index (PD%) of the test were calculated via Monark Anaerobic Testing software (Version: 3.3.0.0, Developed in Co-operation with HUR Labs). Heat rate was monitored by the Polar Rs800cx chest-worn heart rate monitor (Manufacturer: Polar Electro Oy) from the resting state to the recovery period for analyzing the heart rate recovery rate (HRR%). 
The equation for HRR%: $$ {\mathrm{HR}\mathrm{R}}_{10}\%=\left({\mathrm{HR}}_{\mathrm{max}}-{\mathrm{HR}}_{10}\right)/\left({\mathrm{HR}}_{\mathrm{max}}-{\mathrm{HR}}_{\mathrm{rest}}\right)\times 100\% $$ Where HRR10% is the rate of heart rate recovery, HRmax is the maximum heart rate immediately after exercise, HR10 is the heart rate after 10 min of exercise, and HRrest is the resting heart rate.
Lactate test
Finger blood samples (10 μl) were obtained while in a non-exercise state as follows: immediately, 3 min, 5 min, 7 min, and 10 min after exercise following all three WAnT. The samples were tested via a glucose and lactate analyzer (Biosen C-Line, EKF Diagnostics), and the lactate clearance rate was predicted as follows: $$ {\mathrm{LA}}_{10}\%=\left({\mathrm{LA}}_{\mathrm{max}}-{\mathrm{LA}}_{10}\right)/\left({\mathrm{LA}}_{\mathrm{max}}-{\mathrm{LA}}_{\mathrm{rest}}\right)\times 100\% $$ Where LA10% is the lactic acid clearance rate at 10 min after exercise, LAmax is the peak lactate value after exercise, LA10 is the lactic acid value at 10 min after exercise, and LArest is the value of lactic acid before exercise. Lactate clearance velocity was predicted as follows: $$ {\mathrm{V}}_{10}=\left({\mathrm{LA}}_{\mathrm{max}}-{\mathrm{LA}}_{10}\right)/\left(10-\mathrm{t}\right) $$ Where V10 is the lactate clearance velocity at 10 min after exercise, LAmax is the peak lactate value after exercise, LA10 is the lactic acid value at 10 min after exercise, and t is the time when peak lactate value appears.
Serum HCO3 − test
Venous blood samples of 2 ml were collected before and 30 min after the first training, the third training in the third week, and the last training to measure the serum HCO3− by a Mindary BS-420 automatic biochemical analyzer.
Data were entered into Microsoft Excel 2010 and analyzed using SPSS version 22.0. Two-way repeated measures ANOVA was performed on the three tests to assess the interaction between time (before intervention, during intervention, after intervention) and the cohort (SB and PL). Simple effect analysis was performed for the horizontal comparison of between-group-factors when there was an interaction, and longitudinal comparison of within-group-factors was performed when there was no interaction. Otherwise, one-way repeated measures ANOVA and independent samples t-tests were used to respectively compare both intra- and inter-group factors, or only one of these two factors based on the main influencing effects. Statistical significance was set at p < 0.05.
All subjects completed 6 weeks of intervention, with no subject dropouts.
Subjects characteristics
There were no significant differences of measured factors between SB and PL (p > 0.05) (see Table 1).
HCO3 − level analysis
The alkaline reserve in SB was improved following supplementation (see Table 2). The results of all three tests showed that the serum HCO3− of SB was significantly higher (p < 0.05) than that of PL both before and after training. Moreover, compared to the pre-intervention, serum HCO3− both before and after HIIT were significantly increased (p < 0.01) in mid-intervention. Compared to the mid-intervention, serum HCO3− both before and after HIIT were significantly increased (p < 0.05) at post-intervention in SB, while the PL showed no significant changes.
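To make the recovery indices defined in the Methods concrete, the short Python sketch below computes HRR10%, LA10% and V10 from raw heart-rate and lactate readings. The function names and all numerical values are hypothetical illustrations, not data from this study.

def hrr10(hr_max, hr_10, hr_rest):
    # Heart rate recovery rate at 10 min: (HRmax - HR10) / (HRmax - HRrest) x 100%
    return (hr_max - hr_10) / (hr_max - hr_rest) * 100.0

def la10(la_max, la_10, la_rest):
    # Lactate clearance rate at 10 min: (LAmax - LA10) / (LAmax - LArest) x 100%
    return (la_max - la_10) / (la_max - la_rest) * 100.0

def v10(la_max, la_10, t_peak):
    # Lactate clearance velocity: (LAmax - LA10) / (10 - t), with t the time of peak lactate in minutes
    return (la_max - la_10) / (10.0 - t_peak)

# Hypothetical post-WAnT readings for one subject:
print(hrr10(185, 110, 65))     # 62.5 %
print(la10(12.0, 9.0, 1.5))    # about 28.6 %
print(v10(12.0, 9.0, 5))       # 0.6 mmol/L per minute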
Table 2 Serum HCO3− level before and after HIIT in the pre-, mid- and post-intervention Wingate Anaerobic 30 s Cycling Test The relative PP of SB was significantly higher in both the first three week period and the second three week period (p < 0.01). However, in PL a significant increase was only found in the first three week period (p < 0.05) (see Fig. 1). There was also a significant difference (p < 0.01) in PP between SB and PL after six weeks of training, with the SB having higher PP. Although the relative MP in both groups improved significantly (p < 0.01) after only three weeks, no significant difference was found between groups. PD% was also not different between groups. However, SB experienced a significant decrease in PD% from pre-intervention to mid-intervention (p < 0.05), and from mid-intervention to the post-intervention (p < 0.01). PL experienced a significant decrease of PD% after six weeks of intervention (P < 0.05) (see Table 3). Relative peak power in the pre-, mid- and post-intervention. SB=HCO3− group, PL = placebo group. △△ indicates a significant difference from mid- to post-intervention (p < 0.01). ** indicates a significant difference from pre- to mid-interventin or from pre- to post-intervention (p < 0.01). †† indicates a significant difference (p < 0.01) vs. PL-group Table 3 WAnT related indexes data in the pre-, mid- and post-intervention Blood lactate In the three WAnT tests, there was no difference in the resting LA, either between or within groups. LAmax in SB increased significantly after three weeks of intervention (p < 0.01), while PL experienced no significant difference after six weeks. V10 and LA10% were significantly different between groups after six weeks of intervention (p < 0.01) (see Fig. 2). a, b LA clearance velocity and LA clearance rate (%) in the pre-, mid- and post-intervention. SB=HCO3− group, PL= placebo group Though no significant difference of HRR10% was found between SB and PL during six weeks, only the SB group improved significantly from the mid-intervention to post-intervention (p < 0.05) (see Table 3). The purpose of this study was to investigate whether HCO3− supplementation during 6 weeks of HIIT results in a greater improvement in anaerobic capacity in healthy young men than HIIT alone. We hypothesized that chronic supplementation of HCO3− would be in synergy with HIIT training to improve the subjects' alkaline reserve and increase the acid-buffering capacity during exercise. We believed that this combination could postpone muscular fatigue and improve the body's capacity to provide ATP and enhance HIIT's effect on anaerobic performance. Our results support this hypothesis. The findings demonstrated that serum HCO3− was significantly different between groups in all tests (p < 0.05). In addition, serum HCO3− of the SB increased significantly after each stage of intervention (from pre-intervention to mid-intervention, p < 0.01; from mid-intervention to post-intervention, p < 0.05), suggesting that the alkaline reserve of subjects in SB increased after HCO3− supplementation. These results correspond to the results of several other studies [44, 48]. According to the test results of WAnT, the significant positive changes were more evident in SB rather than in PL, which is likely due to the chronic HCO3− supplementation intervention. During our 6-week experiment, only one subject experienced problematic gastrointestinal effects after taking HCO3− supplements. 
Through our inquiry and observation, and the fact that this was limited to a single subject, we believe that this may have been the result of a preexisting GI condition. Another possibility is that the drinking speed of the supplement was excessively fast, creating an increased gastrointestinal burden. As mentioned earlier, studies assessing HCO3− supplementation have most commonly used HCO3− at a concentration is 0.3 g/kg BM [38, 40,41,42,43]. It has been found that longer-term supplementation at this and lower dosage levels result in improved gastrointestinal tolerance [44,45,46]. The dosage applied in this study was 0.2 g/kg BM, which, according to the study by Bishop et al. (2005), could effectively elevate the blood HCO3− level without causing negative gastrointestinal reactions [56]. According to the analyzed data, after 6 weeks of intervention relative mean power (p < 0.05), relative peak power (p < 0.05), and fatigue index (p < 0.01) of PL all significantly improved, implying that the effect of HIIT training alone significantly improves anaerobic capacity. This finding is consistent with earlier studies [13, 14, 57]. In the study of Naimo et al. (2015) [14], 4 weeks of HIIT training significantly increased the peak power and mean power in WAnT of college hockey players. Astorino et al.(2012) [13] conducted a short-term HIIT training for 20 males and females who often engaged in physical activity and found that both relative PP and relative MP improved significantly. However, neither of these two studies found a significant improvement in the fatigue index [13]. It is possible that this may be due to the longer training period in these two studies than in the study we designed. Therefore, by imposing more intensive physiological stimuli through the training, the transport and clearance ability of lactic acid could be more obvious, the muscle acid resistance could be stronger and, ultimately, the onset of fatigue could be slower. Although significant differences in relative MP and PD% between groups were not observed in our study, there was a significant (p < 0.01) difference in relative PP over the six week intervention period. Hence, our data suggest that HCO3− supplementation aided in the improvement of anaerobic performance. In addition, some additional changes occurred in SB, including the significant increase of relative PP in the second three week intervention period, and a significant improvement of PD% (p < 0.01) after six weeks. Neither of these changes was observed in PL. As a result of the mechanisms described previously, taking a sufficient dose of HCO3− before high intensity exercise can reduce the accumulation of acid in muscle cells, delay the generation of fatigue, and enhance skeletal muscle contraction [46]. It has been demonstrated that changes in the metabolism of skeletal muscles resulting from HCO3− supplementation before exercise improve the function of anaerobic metabolism of skeletal muscles. This alteration is beneficial for high-intensity exercise [58]. Therefore, our data suggest that, in the population studied, supplementation of HCO3− during HIIT may have a positive impact of HIIT training on anaerobic capacity through multiple mechanisms. The 30 s Wingate anaerobic test is heavily reliant on the ATP/CP and anaerobic glycolytic system. The peak lactate level after exercise represents the body's maximum tolerance to lactic acid. It also reflects the capacity of the glycolytic system to produce ATP. 
As can be seen in Table 3, there is a significant increase in peak lactate level only in SB over six weeks of intervention (p < 0.01). This result suggests that the ability of glycolytic system to produce ATP is enhanced with HCO3− supplementation. In addition, although lactic acid clearance velocity and clearance rate at 10 min after WAnT in PL improved, our findings indicate that the increase in SB is more evident on these values, resulting in a significant difference between SB and PL (p < 0.01). Similar results have been demonstrated in earlier studies [34]. This suggests that the supplementation of HCO3− positively effects the clearance of lactate after exercise. The improvement in lactic acid clearance suggests that exogenous HCO3− supplementation can increase the intracellular alkali reserve, slow the pH reduction in muscles, and delay the onset of fatigue. We initially speculated that HCO3− supplementation may promote lactic acid clearance after anaerobic exercise, reduce lactic acid accumulation and increase blood pH, which could increase the partial pressure of oxygen (PO2) in blood and accelerate heart rate recovery. The possible mechanism for this is that blood pH and partial pressure of carbon dioxide (PCO2) are pertinent to the affinity of Hb with O2 and the PO2 in the blood [59]. Both elevated pH and decreased PCO2 can increase the affinity of Hb with O2 and increase blood PO2 [59]. It is known that heart rate recovery and PO2 are positively correlated, while hypoxia and cardiac autonomic nervous dysfunction are closely related [60]. We found, however, that the rate of heart rate recovery in both groups increased after six weeks of intervention, and there was no significant difference between groups in heart rate recovery (p > 0.05). This is consistent with the fact that we have not found any report demonstrating an improvement of the heart rate recovery after exercise following ingestion of HCO3−. Our data suggest that oral supplementation of HCO3− in a low dosage may positively affect heart rate recovery after exercise. While this may be the result of improved neurological regulation, there is no obvious cause of this finding. The present study has several limitations, including no data on how to enhance the taste of supplements without affecting its efficacy. In addition, our study used a relatively small sample of only healthy young men, which limits the generalizability to women and other populations. Therefore, we would encourage a duplication of this study's protocol with other populations in future research. Our data suggest that, in healthy young men, the combination of HCO3− supplementation and HIIT can enhance the effect of HIIT on anaerobic performance, including improving power output, delaying fatigue onset, and improving the blood lactate clearance rate and velocity after the anaerobic exercise. BM: GXT: H+ : Hydrogen ion H2CO3 : Carbonic acid HCO3 − : Bicarbonate ion NaHCO3 : HIIT: HRR%: Heart rate recovery rate Maximal aerobic power Mean power Na+ : Sodium ion PCO2 : Partial pressure of carbon dioxide PD%: Fatigue index PO2 : Partial pressure of oxygen Peak power Weston M, Taylor KL, Batterham AM, Hopkins WG. Effects of low-volume high-intensity interval training (HIT) on fitness in adults: A meta-analysis of controlled and non-controlled trials. Sports Med. 2014;44(7):1005–17. Tabata I, Nishimura K, Kouzaki M, Hirai Y, Ogita F, Miyachi M, Yamamoto K. 
Effects of moderate-intensity endurance and high-intensity intermittent training on anaerobic capacity and VO2max. Med Sci Sports Exerc. 1996;28(10):1327–30. Mcrae G, Payne A, Zelt JGE, Scribbans TD, Jung ME, Little JP, Gurd BJ. Extremely low volume, whole-body aerobic-resistance training improves aerobic fitness and muscular endurance in females. Appl Physiol Nutr Metab. 2012;37(6):1124. Metcalfe RS, Babraj JA, Fawkner SG, Vollaard NB. Towards the minimal amount of exercise for improving metabolic health: beneficial effects of reduced-exertion high-intensity interval training. Eur J Appl Physiol. 2012;112(7):2767–75. Adamson S, Lorimer R, Cobley JN, Lloyd R, Babraj J. High intensity training improves health and physical function in middle aged adults. Biology. 2014;3(2):333–44. Perry CG, Heigenhauser GJ, Bonen A, Spriet LL. High-intensity aerobic interval training increases fat and carbohydrate metabolic capacities in human skeletal muscle. Appl Physiol Nutr Metab. 2008;33(6):1112–23. Burgomaster KA, Hughes SC, Heigenhauser GJ, Bradwell SN, Gibala MJ. Six sessions of sprint interval training increases muscle oxidative potential and cycle endurance capacity in humans. J Appl Physiol. 2005;98(6):1985–90. Burgomaster KA, Heigenhauser GJ, Gibala MJ. Effect of short-term sprint interval training on human skeletal muscle carbohydrate metabolism during exercise and time-trial performance. J Appl Physiol. 2006;100(6):2041–7. Gibala MJ, Little JP, Essen MV, Wilkin GP, Burgomaster KA, Safdar A, Raha S, Tarnopolsky MA. Short-term sprint interval versus traditional endurance training: similar initial adaptations in human skeletal muscle and exercise performance. J Physiol. 2010;575(3):901–11. Jacobs RA, Flück D, Bonne TC, Bürgi S, Christensen PM, Toigo M, Lundby C. Improvements in exercise performance with high-intensity interval training coincide with an increase in skeletal muscle mitochondrial content and function. J Appl Physiol. 2013;115(6):785–93. Hoshino D, Yoshida Y, Kitaoka Y, Hatta H, Bonen A. High-intensity interval training increases intrinsic rates of mitochondrial fatty acid oxidation in rat red and white skeletal muscle. Appl Physiol Nutr Metab. 2013;38(3):326–33. Little JP, Safdar A, Bishop D, Tarnopolsky MA, Gibala MJ. An acute bout of high-intensity interval training increases the nuclear abundance of PGC-1α and activates mitochondrial biogenesis in human skeletal muscle. Am J Physiol Regul Integr Comp Physiol. 2011;300(6):R1303. Astorino TA, Allen RP, Roberson DW, Jurancich M. Effect of high-intensity interval training on cardiovascular function, VO2max, and muscular force. J Strength Cond Res. 2012;26(1):138. Naimo MA, de Souza EO, Wilson JM, Carpenter AL, Gilchrist P, Lowery RP, Averbuch B, White TM, Joy J. High-intensity interval training has positive effects on performance in ice hockey players. Int J Sports Med. 2015;36(01):61–6. Hoshino D, Kitaoka Y, Hatta H. High-intensity interval training enhances oxidative capacity and substrate availability in skeletal muscle. J Phys Fitness Sports Med. 2016;5(1):13–23. Buckley S, Knapp K, Lackie A, Lewry C, Horvey K, Benko C, Trinh J, Butcher S. Multimodal high-intensity interval training increases muscle function and metabolic performance in females. Appl Physiol Nutr Metab. 2015;40(11):1157–62. Gastin PB, Costill DL, Lawson DL, Krzeminski K, Mcconell GK. Accumulated oxygen deficit during supramaximal all-out and constant intensity exercise. Med Sci Sports Exerc. 1995;27(2):255. Gastin PB, Lawson DL. 
Influence of training status on maximal accumulated oxygen deficit during all-out cycle exercise. Eur J Appl Physiol Occup Physiol. 1994;69(4):321–30. Gastin PB. Energy system interaction and relative contribution during maximal exercise. Sports Med. 2001;31(10):725. CAS PubMed Article PubMed Central Google Scholar Tomlin DL, Wenger H. The relationship between aerobic fitness and recovery from high intensity intermittent exercise. Sports Med. 2001;31(1):1–11. Thorstensson A, Sjödin B, Karlsson J. Enzyme activities and muscle strength after "sprint training" in man. Acta Physiol. 2010;94(3):313–8. Macdougall JD, Hicks AL, Macdonald JR, Mckelvie RS, Green HJ, Smith KM. Muscle performance and enzymatic adaptations to sprint interval training. J Appl Physiol. 1998;84(6):2138–42. Simoneau JA, Lortie G, Boulay MR, Marcotte M, Thibault MC, Bouchard C. Effects of two high-intensity intermittent training programs interspaced by detraining on human skeletal muscle and performance. Eur J Appl Physiol Occup Physiol. 1987;56(5):516–21. Péronnet F, Meyer T, Aguilaniu B, Juneau CE, Faude O, Kindermann W. Bicarbonate infusion and pH clamp moderately reduce hyperventilation during ramp exercise in humans. J Appl Physiol. 2007;102(1):426–8. Clamann HP, Broecker KT. Relation between force and fatigability of red and pale skeletal muscles in man. Am J Phys Med. 1979;58(2):70–85. Kemp G. Muscle cell volume and pH changes due to glycolytic ATP synthesis. J Physiol. 2007;582(1):461–5. Jackson DC. Ion regulation in exercise: lessons from comparative physiology. Biochem Exerc. 1990;7:375–86. Greenleaf JE, Brock PJ. Na+ and Ca2+ ingestion: plasma volume-electrolyte distribution at rest and exercise. J Appl Physiol Respir Environ Exerc Physiol. 1980;48(48):838–47. Lindh AM, Peyrebrune MC, Ingham SA, Bailey DM, Folland JP. Sodium bicarbonate improves swimming performance. Int J Sports Med. 2007;29(06):519–23. Mcnaughton LR, Siegler J, Midgley A. Ergogenic effects of sodium bicarbonate. Curr Sports Med Rep. 2008;7(4):230–6. Wilkes D, Gledhill N, Smyth R. Effect of acute induced metabolic alkalosis on 800-m racing time. Med Sci Sports Exerc. 1983;15(4):277–80. Goldfinch J, Naughton LM, Davies P. Induced metabolic alkalosis and its effects on 400-m racing time. Eur J Appl Physiol Occup Physiol. 1988;57(1):45–8. Horswill CA, Costill DL, Fink WJ, Flynn MG, Kirwan JP, Mitchell JB, Houmard JA. Influence of sodium bicarbonate on sprint performance: relationship to dosage. Med Sci Sports Exerc. 1988;20(6):566-569. Mc Naughton L, Thompson D. Acute versus chronic sodium bicarbonate ingestion and anaerobic work and power output. J Sports Med Phys Fitness. 2001;41(4):456–62. Mueller SM, Gehrig SM, Frese S, Wagner CA, Boutellier U, Toigo M. Multiday acute sodium bicarbonate intake improves endurance capacity and reduces acidosis in men. J Int Soc Sports Nutr. 2013;10(1):16. Price MJ, Singh M. Time course of blood bicarbonate and pH three hours after sodium bicarbonate ingestion. Int J Sports Physiol Perform. 2008;3(2):240–2. Roth DA. The sarcolemmal lactate transporter: transmembrane determinants of lactate flux. Med Sci Sports Exerc. 1991;23(8):925. Matson LG, Tran ZV. Effects of sodium bicarbonate ingestion on anaerobic performance: a meta-analytic review. Int J Sport Nutr. 1993;3(1):2–28. Driller MW, Gregory JR, Williams AD, Fell JW. The effects of serial and acute NaHCO3 loading in well-trained cyclists. J Strength Cond Res. 2012;26(10):2791. Burke LM, Pyne DB. 
Bicarbonate loading to enhance training and competitive performance. Int J Sports Physiol Perform. 2007;2(1):93. Applegate E. Effective nutritional ergogenic aids. Int J Sport Nutr. 1999;9(2):229–39. McNaughton LR, Dalton B, Tarr J, Buck D. Neutralize acid to enhance performance. Sportscience Training & Technology. http://www.sportsci.org/traintech/buffer/lrm.htm. Higgins MF, James RS, Price MJ. The effects of sodium bicarbonate (NaHCO3) ingestion on high intensity cycling capacity. J Sports Sci. 2013;31(9):972–81. Edge J, Bishop D, Goodman C. Effects of chronic NaHCO3 ingestion during interval training on changes to muscle buffer capacity, metabolism, and short-term endurance performance. J Appl Physiol. 2006;101(3):918–25. Douroudos II, Fatouros IG, Gourgoulis V, et al. Dose-related effects of prolonged NaHCO3 ingestion during high-intensity exercise. Med Sci Sports Exerc. 2006;38(10):1746–53. Shelton J, Kumar GVP. Sodium bicarbonate - a potent ergogenic aid? Food Nutr Sci. 2010;1(1):1–4. Jourkesh M, Ahmaidi S, Keikha BM, Sadri I, Ojagi A. Effects of six weeks sodium bicarbonate supplementation and high-intensity interval training on endurance performance and body composition. Ann Biol Res. 2011;2:403–13. Driller MW, Gregory JR, Williams AD, Fell JW. The effects of chronic sodium bicarbonate ingestion and interval training in highly trained rowers. Int J Sport Nutr Exerc Metab. 2013;23(1):40–7. Edge J, Bishop D, Goodman C. Chronic sodium bicarbonate ingestion affects training adaptations during severe exercise training. Med Sci Sports Exerc. 2004;36(36):S201. Cameron SL, Gray AR, Fairbairn KA. Increased blood pH but not performance with sodium bicarbonate supplementation in elite Rugby union players. Int J Sport Nutr Exerc Metab. 2010;20(4):307–21. Zabala M, Peinado AB, Calderón FJ, Sampedro J, Castillo MJ, Benito PJ. Bicarbonate ingestion has no ergogenic effect on consecutive all out sprint tests in BMX elite cyclists. Eur J Appl Physiol. 2011;111(12):3127–34. Dantas JL, Pereira G, Nakamura FY. Five-kilometers time trial: preliminary validation of a short test for cycling performance evaluation. Asian J Sports Med. 2015;6(3):e23802. Baker D. Recent trends in high-intensity aerobic training for field sports. Professional Strength & Conditioning; 2011. Ziemann E, Grzywacz T, Łuszczyk M, Laskowski R, Olek RA, Gibson AL. Aerobic and anaerobic changes with high-intensity interval training in active college-aged men. J Strength Cond Res. 2011;25(4):1104. Bar-Or O. The Wingate anaerobic test an update on methodology, reliability and validity. Sports Med. 1987;4(6):381–94. D B BC. Effects of induced metabolic alkalosis on prolonged intermittent-sprint performance. Med Sci sports Exerc. 2005;37(5):759–67. Foster C, Farland CV, Guidotti F, et al. The effects of high intensity interval training vs steady state training on aerobic and anaerobic capacity. J Sports Sci Med. 2015;14(4):747. Requena B, Zabala M, Padial P, Feriche B. Sodium bicarbonate and sodium citrate: ergogenic aids? J Strength Cond Res. 2005;19(1):213. Collins J, A R, J G, L H, O'Driscoll R. Relating oxygen partial pressure, saturation and content: the haemoglobin–oxygen dissociation curve. Breathe. 2015;11(3):194–201. Crisafulli E, Vigna M, Ielpo A, et al. Heart rate recovery is associated with ventilatory constraints and excess ventilation during exercise in patients with chronic obstructive pulmonary disease. Eur J Prev Cardiol. 2018;25(15):1667–74. Thank you to all the participants for participating in this study. 
Funding was provided by the Fundamental Research Funds for the Central Universities. The datasets generated and/or analyzed as part of the current study are not publicly available due to confidentiality agreements with subjects. However, they can be made available solely for the purpose of review and not for the purpose of publication from the corresponding author upon reasonable request. College of Kinesiology, Beijing Sport University, Beijing, BJ, China Jieting Wang, Junqiang Qiu, Longyan Yi, Zhaoran Hou & Wei Cao Department of Nutrition, Georgia State University, Atlanta, GA, USA Dan Benardot Center for the Study of Human Health, Emory University, Atlanta, GA, USA Jieting Wang Junqiang Qiu Longyan Yi Zhaoran Hou Wei Cao JW was responsible for data collection, data interpretation, writing and revision of the manuscript, under the direction and assistance of JQ and LY who assisted with each step and completion of the manuscript. ZH assisted in the data collection and data interpretation. DB assisted in the revision of the manuscript. WC assisted in the completion of the manuscript. The authors declare no conflict of interests with the current publication, and all authors approved the final version of the manuscript. Correspondence to Junqiang Qiu. The research proposal was approved by the Institutional Review Board of Beijing Sport University (BSU IRB) and all participants gave written informed consent prior to study participation. Not applicable, no individual person's data was presented. Wang, J., Qiu, J., Yi, L. et al. Effect of sodium bicarbonate ingestion during 6 weeks of HIIT on anaerobic performance of college students. J Int Soc Sports Nutr 16, 18 (2019). https://doi.org/10.1186/s12970-019-0285-8 Accepted: 28 March 2019 Anaerobic performance
A detailed account of the measurements of cold collisions in a molecular synchrotron Aernout P. P. van der Poel1 & Hendrick L. Bethlem ORCID: orcid.org/0000-0003-4575-85121 We have recently demonstrated a general and sensitive method to study low energy collisions that exploits the unique properties of a molecular synchrotron (Van der Poel et al., Phys Rev Lett 120:033402, 2018). In that work, the total cross section for ND3 + Ar collisions was determined from the rate at which ammonia molecules were lost from the synchrotron due to collisions with argon atoms in supersonic beams. This paper provides further details on the experiment. In particular, we derive the model that was used to extract the relative cross section from the loss rate, and present measurements to characterize the spatial and velocity distributions of the stored ammonia molecules and the supersonic argon beams. Collision studies at low temperatures are of interest from both a practical and theoretical viewpoint. Interstellar clouds, which make out a large fraction of our universe, typically have temperatures well below 100 K. Collision data of simple molecules at low temperatures is crucial for understanding the chemistry that goes on in these clouds, which is of special interest because it is from these clouds that solar systems form [1, 2]. Furthermore, at low temperatures the de Broglie wavelength, associated with the relative velocity of the colliding molecules, becomes comparable to, or larger than, the intermolecular distances and quantum effects become important. Particularly interesting are resonances of the collision cross section as a function of collision energy. The position and shape of these resonances are very sensitive to the exact shape of the potential energy surface (PES) and thus serve as precise tests of our understanding of intermolecular forces [3–6]. Precise knowledge of the PES is fundamental to fields such as combustion physics, atmospheric physics, or in fact any field involving chemical reactions. Although several techniques have been developed to create samples of cold molecules [7–10], the obtained densities are low (typically 108 molecules/cm3). As the cross sections of collisions involving neutral molecules or atoms are small (typically below 500 Å2), the main challenge to studying cold collisions is to reach a sufficiently high sensitivity. In recent years, several experiments have managed to measure low energy collisions by leveraging the unique properties of the systems they study. For instance, by exploiting the extreme state-purity of Stark-decelerated beams combined with sensitive ion-detection techniques, van de Meerakker and co-workers have measured quantum-state changing collisions of OH and NO molecules with rare gas atoms to temperatures as low as 5 K [11–14]. Costes and co-workers have studied inelastic collisions of O2 and CO with H2 molecules and helium at energies between 5 and 30 K using cryogenically cooled beams under a small (and variable) crossing angle [15–17]. Even lower temperatures have been obtained by using magnetic or electric guides to merge two molecular beams. Narevicius and co-workers and Osterwalder and co-workers have exploited the advantages of metastable helium to study Penning ionization reactions with various atoms and molecules [18–26]. In a similar fashion, Merkt and co-workers have measured collisions between ground-state hydrogen molecules and hydrogen in highly excited Rydberg states that were merged on a chip [27]. 
Finally, cold collision have been studied by sending slow beams of atoms and molecules through trapped samples of calcium ions [28, 29], lithium atoms [30] and OH radicals [31], exploiting the fact that collision signal can be accumulated over long time-intervals. We have developed a method that enables the study of collisions at low energy by exploiting the unique properties of a molecular synchrotron. Our approach combines the low collision energies obtained in experiments that use merged molecular beams with the high sensitivity of experiments that monitor trap loss. In Van der Poel et al. [32], the total cross section for ND3 + Ar collisions was determined from the rate at which ammonia molecules were lost from the synchrotron due to collisions with argon atoms in supersonic beams. This paper provides further details on this experiment. Before going into detail, we will first outline the main principles and virtues of our technique. Main principles In its simplest form, a storage ring is a trap that confines molecules along a circle rather than around a point. As such, a storage ring for molecules in low-field seeking states can be made by bending an electrostatic hexapole focuser into a circle, which was demonstrated in 2001 by Crompvoets et al. [33]. Since in such a storage ring no longitudinal forces exist to keep the faster and slower molecules together, injected packets of molecules will disperse until the ring is filled homogeneously. As demonstrated in 2007 by Heiner et al. [34], this can be prevented by breaking the ring into two half-circles and switch the voltages in such a way that molecules are bunched together as they fly through the gap between the two half rings. In 2010, an improved synchrotron, consisting of 40 straight hexapoles placed in a circle was demonstrated by Zieger et al. [35]. Using many segments rather than two has a number of advantages: Due to the high symmetry ring of the, instabilities resulting from the variation of the trapping force are less important and the transverse depth of the ring is increased. The longitudinal well is also increased as bunching happens many times per round-trip. Zieger et al. illustrated the stability of their design by demonstrating that they could observe trapped ammonia molecules (ND3) in well-defined, mm-sized packets, after storing them for more than 10 s in the ring [35]. The fact that elements can be switched individually implies that different packets can be injected and detected independently, allowing the synchrotron to hold up to 19 packets simultaneously. As the stored packets of ammonia molecules have both a small velocity spread (corresponding to a temperature of ∼10 mK) and widely (100–150 m/s) tunable velocities, they are well suited for collision studies, as will be demonstrated in this paper. The main idea of our experiment is illustrated in Fig. 1. Beams of argon atoms are sent through the synchrotron and made to collide with the stored packets of ammonia. The argon beams moves in the same direction as the stored ammonia molecules such that low collision energies can be achieved [15–17, 36, 37]. The experiment is triggered in such a way that some of the ammonia packets—the probe packets—encounter a fresh argon beam every round-trip, while other packets—the reference packets—do not. When after a certain number of round-trips the packets are detected, the probe and reference signals are compared to find the rate at which ammonia molecules are lost from the synchrotron due to collisions with the argon beam. 
The longer the packets are stored before detection, the more molecules are lost from the probe beam. In this way, collision signal is accumulated and the sensitivity to measure collisions is strongly enhanced. Schematic sketch of the synchrotron and argon beamline. The red circles denote probe packets of ammonia molecules which are targeted by the argon beam (shown in green). The blue circles denote reference packets of ammonia, which do not encounter argon atoms and provide a simultaneous background measurement. The grey circles denote reference packets of ammonia that encounter the leading or trailing end of the argon beams, and are therefore discarded The expected enhancement in sensitivity is illustrated in Fig. 2. The signals of the probe (red, Sprobe) and reference packets (blue, Sref), using numbers from the experiment that was discussed in Van der Poel et al. [32]., are shown as a function of storage time in the synchrotron. Both signals are modelled by exponential decays. While the probe and reference packets share the same background loss rate (in this calculation kbg=1.46% per round-trip), the probe packets are modelled to experience additional loss due to collisions with particles from the collision partner beamline (at a rate of kcol=1.26% per round-trip). After a given number of round-trips rt, the loss rate due to collisions can be found using $$ k_{\text{col}}=\frac{1}{\text{rt}}\ln\left(\frac{S_{\text{ref}}}{S_{\text{probe}}}\right). $$ Model of the probe (red) and reference (blue) signals (using numbers from the experiment discussed in Van der Poel et al. [32]), and the signal-to-noise ratio (orange, right y-axis) that results when these signals are used to calculate the loss rate due to collisions kcol of the probe packets The uncertainty in the loss rate is found from the statistical uncertainties in the probe and reference signals. Since the number of detected ions follows Poisson statistics, the uncertainty is given by the square-root of this number: $$ \sigma_{\mathrm{k_{col}}}=\frac{1}{\text{rt}}\sqrt{\left(\frac{1}{S_{\text{ref}}}\right)+\left(\frac{1}{S_{\text{probe}}}\right).} $$ The orange curve shows the ratio of the calculated loss rate kcol and its uncertainty \(\sigma _{k_{\text {col}}}\), after a single measurement only (i.e., a single measurement of a probe and a reference packet, requiring two shots). This signal-to-noise ratio increases dramatically the first tens of round-trips, due to the factor of 1/rt in \(\sigma _{k_{\text {col}}}\). After about 90 round-trips, roughly 2 times the 1/e lifetime of the probe packet, the statistical uncertainty in Sprobe becomes the limiting factor and the signal-to-noise ratio decreases again. For the numbers used in this calculation, the expected signal-to-noise ratio at the optimal number of round-trips is ∼1 after a single measurement of the probe packet and reference packet. The uncertainty can thus be reduced to below 1% by measuring 20,000 shots, or a little over half an hour when measuring at a rate of 10 Hz. Note that the signal-to-noise ratio at the optimal number of round-trips is 34 times larger than after a single round-trip. Consequently, the sensitivity of the synchrotron reduces the measuring time by over a factor of 1000 with respect to a hypothetical crossed beam experiment with the same densities. This enhancement in sensitivity is what motivated us to do this experiment. 
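The gain in sensitivity described above can be reproduced with a short numerical sketch in Python. It uses the loss rates quoted in the text (kbg = 1.46% and kcol = 1.26% per round-trip); the number of ions detected per packet in a single shot, S0, is an assumed illustrative value and only sets the overall scale, not the position of the optimum or the gain relative to a single round-trip.

import numpy as np

k_bg, k_col, S0 = 0.0146, 0.0126, 10.0        # loss rates per round-trip from the text; S0 is assumed
rt = np.arange(1, 301)                        # number of completed round-trips
S_ref = S0 * np.exp(-k_bg * rt)               # reference packets: background loss only
S_probe = S0 * np.exp(-(k_bg + k_col) * rt)   # probe packets: background + collision loss

k_est = np.log(S_ref / S_probe) / rt              # Eq. (1): recovered collision loss rate
sigma = np.sqrt(1.0 / S_ref + 1.0 / S_probe) / rt  # Eq. (2): Poisson-limited uncertainty
snr = k_est / sigma                               # single-shot signal-to-noise ratio

print(rt[np.argmax(snr)])        # optimum near 85-90 round-trips for these rates
print(snr.max() / snr[0])        # roughly a 30-35 fold gain over a single round-trip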
Since the collision partners move in the same direction as the stored molecules, the collision energy is determined by the difference in velocity. Thus, by choosing beams that move with the same speed as the stored molecules, one can, in principle, measure collisions at zero collision energy. Unfortunately, our current synchrotron only allows us to store molecules with a speed up to 150 m/s, which is much lower than the speed of the argon beams used in our experiment (420-570 m/s). As a result of this, the lowest collision energy obtained in our work is 40 cm −1 (corresponding to 56 K). The rest of this paper is organized as follows: The first two sections provide a detailed overview and characterization of the experimental setup. We present measurements of the argon beams at two positions along the beamline to determine the position and velocity distributions of the argon beam at any position and time, and discuss the position and velocity distributions of the ammonia molecules inside the ring. In the succeeding section, we present a model that combines trajectory simulations of ammonia molecules in the synchrotron with the argon beam. This critical piece of the puzzle also provides the collision energy distributions probed in the experiments. In the succeeding section, all pieces are combined, and measurements are presented of the ND3 + Ar cross section as a function of collision energy. The paper ends with conclusions and a short discussion on possible future experiments. Molecular synchrotron Our molecular synchrotron, schematically depicted in Fig. 3, consists of 40 electric hexapole elements arranged in a half-meter-diameter circle. Voltages of up to ±5 kV are applied to the electrodes in order to transversely trap ammonia molecules in the low-field seeking sublevel of the J=1, K=1 rovibrational ground state within the hexapoles. The molecules are bunched longitudinally by switching the voltages temporarily to higher voltages as the molecules fly through the gap between one hexapole and the next. In principle 20 packets can be stored simultaneously, with velocities of 100–150 m/s. In the experiments presented here, we store 14 packets with a velocity of either 121.1 m/s or 138.8 m/s. Top view of the experimental set-up (to scale). ND3 molecules are decelerated to velocities of 100–150 m/s using a Stark decelerator and injected into a molecular synchrotron consisting of 40 electric hexapole elements arranged in a half-meter-diameter circle. Ammonia molecules can be detected at the injection point or at a quarter ring downstream (detection zone II). The argon beamline consists of a cooled solenoid valve that is skimmed by a 5 mm and a 1.5 mm skimmer. Argon atoms are detected 770 mm in front of the synchrotron (detection zone I) or inside the synchrotron (detection zone II). The argon beamline makes an angle of 94,5 degrees with respect to the injection beamline such that the argon beam moves parallel to the first hexapole in the synchrotron behind detection zone II New packets are injected into the synchrotron by an injection beamline consisting of (1) a Gentry type pulsed valve (R.M. Jordan company), that releases packets of 5% ammonia seeded into xenon with velocities around 350 m/s, (2) a Stark decelerator, that decelerates ammonia molecules in the J=1, K=1 rovibrational ground state to velocities down to 100 m/s, (3) a buncher that focuses the molecules longitudinally into the synchrotron, and (4) hexapole elements that focus the molecules transversely. 
To allow the molecules to enter the synchrotron, 4 hexapole elements of the synchrotron are temporarily switched off. The beamline is synchronized with the cyclotron frequency of the stored molecules in such a way that new packets are injected two hexapole segments in front of the packets that are already stored, at a rate around 10 Hz. For molecules that are stored at a velocity of 121.1 m/s, this implies that they make 6+38/40 round-trips before a new packet is injected, while molecules stored at a velocity of 138.8 m/s make 7+38/40 round-trips before a new packet is injected. The velocity of the injected ammonia molecules is determined by the trigger sequence applied to the Stark decelerator, and can be changed almost instantly; it takes 1.4 s to load the synchrotron with new packets after changing the cyclotron frequency and the trigger sequence. In this way, the collision energy can be varied on relatively short timescales to counteract possible drifts of the intensity and timing of the argon beam during collision measurements. At the same rate that new packets are injected, the oldest packet in the synchrotron is detected at one of two detection zones: A focused laser pulse (typically 5 ns long, 10 mJ/pulse, 317 nm) ionizes ammonia molecules by 2+1 Resonance-Enhanced Multi-Photon Ionization (REMPI) via the electronic B-state. The UV-beams are focused in-between the hexapole elements using lenses with focal lengths of 50 mm, which are mounted on three-dimensional translation stages to allow precise scanning of the position of the laser focus. The two adjacent hexapole elements are switched to a voltage configuration that accelerates the ions upwards to a drift tube, for time-of-flight mass-spectrometry. The ions are detected on a Multi-Channel-Plate (MCP) detector. The two detection zones in the synchrotron are marked by a laser beam in Fig. 3. During the collision measurements, the ammonia molecules are detected in detection zone II. Figure 4 shows the ammonia signal measured in the synchrotron as a function of time. It demonstrates that a packet of ammonia molecules is still clearly visible even after completing more than 1000 round-trips, corresponding to a trapping time of over 13 s. In this time the molecules have traversed a distance of over a mile [35, 38, 39].

Time-of-flight measurements of ammonia molecules traversing the synchrotron. The numbers above the peaks denote the number of round-trips the molecules have completed at the time of measurement. The inset shows that the 1/e trapping time is 3.2 s. Reprinted with permission from Ref. [35] Ⓒ2010 APS.

When the system is kept under vacuum for many weeks, the pressure reaches 5 ×10−9 mbar. Under these conditions the 1/e-lifetime of the stored packets is 3.2 s, determined equally by collisions with the background gas and black-body-radiation-induced transitions to non-trappable states [35]. In the experiments described in the "Results" section, the pressure is typically 2 ×10−8 mbar, resulting in a lifetime of 1.0 s. Figure 5 shows laser height scans of ammonia packets with velocities of 121.1 and 138.8 m/s, both after 90 round-trips. From these measurements, in combination with trajectory simulations, it is found that the emittance of the stored ammonia – describing the position and velocity spread of the molecules with respect to the so-called synchronous molecule – is [1 mm · 5 m/s]² · [4 mm · 1 m/s].
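The storage figures quoted above (more than 1000 round-trips, a flight path of over a mile, over 13 s) can be checked with a short back-of-the-envelope sketch, taking the nominal half-meter diameter and the 121.1 m/s storage velocity used here as the assumed inputs:

import math

circumference = math.pi * 0.5        # m, nominal ring circumference per round-trip
round_trips = 1000
distance = round_trips * circumference
print("distance after 1000 round-trips: %.0f m (%.2f miles)" % (distance, distance / 1609.34))
print("corresponding storage time at 121.1 m/s: %.1f s" % (distance / 121.1))

This gives roughly 1.6 km and about 13 s for 1000 round-trips, consistent with the trapping times and the "over a mile" figure quoted above for more than 1000 round-trips.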
More details on the synchrotron, the beamline, and the trajectory-simulations of molecules through the synchrotron can be found in Zieger et al. [35, 38, 39].

Laser height scans of ammonia packets traveling at 121.1 and 138.8 m/s at detection zone II.

Collision partner beamline

Longitudinal distribution

A schematic overview of the argon beamline is shown in Fig. 3. A supersonic argon beam is formed by releasing a high pressure (∼4 bar) gas into vacuum by a pulsed solenoid valve (Even Lavie type E.L.-5-2005 RT, HRR [40]) running at two times the cyclotron frequency of the synchrotron (153–175 Hz). To help keep the pressure in the source chamber below 1×10−4 mbar, an additional turbo-molecular pump provides a pre-vacuum of < 5×10−4 mbar to the turbo pumps of the source and detection chambers. The 5 mm diameter skimmer between the source and detection chambers and the 1.5 mm diameter skimmer between the detection and the synchrotron chambers select only the coldest part of the argon beam and ensure that the pressure in the synchrotron chamber remains below 2.2×10−8 mbar during operation. In order to tune the velocity of the argon beam, the temperature of the valve housing can be varied between −150 °C and +30 °C. The temperature of the valve is regulated as follows: A flow of nitrogen gas is cooled down by passing it through a spiral tube immersed in liquid nitrogen, then passes a heater, and finally flows through a heatsink mounted onto the valve. The current flow through the heater is controlled by a Eurotherm 2408 PID-controller, which uses a thermocouple attached to the heatsink to read out its temperature. The time it takes for the valve temperature to reach its target temperature and stabilize can be between 45 min for a set temperature of 30 °C and 2 h for −150 °C. To avoid losing time, we typically operate the valve at a constant temperature throughout the day. The argon atoms can be detected at two locations: in detection zone I, approximately halfway along the argon beamline, and in detection zone II, inside the synchrotron, where the ammonia molecules are also detected. UV laser pulses (5 ns long pulses with typically an energy of 10 mJ/pulse at λ=314 nm) are focused into the detection zones to ionize the argon atoms by 3+1 REMPI via the \(3s^{2}3p^{5}\left({}^{2}P_{1/2}\right)4s\)-state [41]. In detection zone I, the resulting ions are extracted upwards by a stack of electrostatic ion lenses and mass-selectively detected on an MCP detector. Inside the synchrotron (detection zone II) the ions are extracted by switching the two adjacent hexapole elements to the appropriate voltages, as discussed in the preceding section. Figure 6 shows spectra measured at three different laser powers illustrating that the transition is significantly broadened and shifted by the intensity of the laser pulses.

3+1 REMPI spectra at frequencies around the \(3s^{2}3p^{6}\left({}^{1}S\right)\) to \(3s^{2}3p^{5}\left({}^{2}P_{1/2}\right)4s\) transition in argon.

In our experiment, described in the next sections, we operate the valve that releases the argon beam at four different temperatures (−150, −90, −30 and 30 °C) and use two ammonia velocities (121.1 and 138.8 m/s), which imply that the valve will operate at two repetition frequencies (153 and 175 Hz). To characterize the beams under these conditions, we have recorded many TOF profiles at both detection zones (I and II).
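The link between the stored ammonia velocities and the quoted valve repetition rates can be sketched with a one-line calculation; the orbit length is taken here as π times the nominal half-meter diameter, which is only approximate:

import math

circumference = math.pi * 0.5                   # m, approximate orbit length
for v in (121.1, 138.8):                        # stored ammonia velocities, m/s
    f_cyc = v / circumference                   # revolution (cyclotron) frequency
    print("v = %.1f m/s: cyclotron %.0f Hz, valve at 2x = %.0f Hz" % (v, f_cyc, 2 * f_cyc))

This gives roughly 154 and 177 Hz, close to the 153 and 175 Hz quoted above; the small offset simply reflects that the true orbit length differs slightly from π times the nominal diameter.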
The duration of the current pulse applied to the valve is limited to prevent multiple pulses due to bouncing of the plunger [40] (at high temperature) or too high a pressure in the source chamber (at low temperature). Typical measurements taken with the valve running at 153 Hz are shown in Fig. 7. Note that the MCP used in the first detection zone is rather old and its quantum efficiency has degraded over time. Whereas the density of the beam at the first detector is at least a factor of 10 higher than the density at the second detector, the count rate at both detectors is similar.

Argon time-of-flights as measured in detection zones I (a) and II (b), at different temperatures of the pulsed valve. For each temperature, the two time-of-flight measurements are fitted simultaneously using the model discussed in the text. The parameters resulting from the fits are listed in Table 1.

Table 1 Overview of the properties of the argon beam at different temperatures of the valve housing Tvalve, repetition rates frep (which are determined by the velocity of the stored ammonia molecules), ammonia velocities \(v_{\mathrm{ND_{3}}}\), and durations of the current pulse applied to the valve Δtpulse. The second block lists properties of the beams measured at detection zone II: the relative beam intensity, mean arrival time, 〈τ〉, the width of the arrival time distribution Δτ, and the full width at half maximum (FWHM) length of the argon packet \(L=\left\langle v_{Ar} \right\rangle \times 2 \sqrt{2\ln 2}~\Delta \tau\). Finally, the last block lists the parameters found from a simultaneous fit to the time-of-flight distributions measured at both detectors: the mean 〈vAr〉 and standard deviation \(\sigma_{v_{Ar}}\) of the velocity distribution, duration of valve opening time, and the effective distance between the valve and detection zone II.

The longitudinal properties of the argon beam are modelled as follows. In general, the spatial density distribution nAr(z|v,t) of argon atoms with a velocity between v and v+dv at a time t can be described by $$ n_{\text{Ar}}(z|v,t) = N ~ f^{v}(v) ~ \mathrm{d}v ~ f^{z}(z|v,t), $$ where N is the number of atoms in the packet, fv(v) is the time-independent, normalized, velocity distribution, and fz(z|v,t) is the normalized position distribution of argon atoms with a velocity between v and v+dv at time t. We assume the velocity distribution fv(v) to be given by the normal distribution gv(v|〈v〉,σv) with average 〈v〉 and standard deviation σv. The position distribution fz(z|v,t) at a certain time t can be extrapolated from the position distribution at the source fz(z|v,t=0). This initial distribution is assumed to be a normal distribution with 〈z〉=zvalve. From the opening time of the valve and the velocity of the atoms in question, the standard deviation is given by v times pw. Here pw stands for pulse width, i.e. the time that the valve is open. In time, the center of the initial position distribution moves with velocity v while the shape remains constant, as all the atoms have the same velocity v. Thus, the position distribution at a certain time t is given by the normal distribution gz(z|zvalve+vt,v pw). The measured signal S(z,t) at position z and at time t is proportional to the argon density, which is found by integrating nAr(z|v,t) over all velocities: $$ S(z,t) \propto \int\limits_{-\infty}^{+\infty} g^{v}(v|\left\langle v \right\rangle, \sigma_{\mathrm{v}}) ~ g^{z}(z|z_{\text{valve}}+vt, v~pw) ~ \mathrm{d}v. $$

Equation 4 is simultaneously fitted to the TOF profiles measured at detection zone I and detection zone II. The fit parameters, 〈v〉, σv, pw, and zvalve, are determined by minimizing the weighted sum of the squares of the residuals, and are presented in Table 1. In order to infer the collision cross sections with high precision, it is crucial to determine the argon beam intensities accurately. Therefore, time-of-flight measurements were made at detection zone II (close to where the collisions will happen), at each of the four valve temperatures and two repetition rates, in a single day. Care was taken to keep the laser power at 8.00±0.05 mJ/pulse. These measurements are shown in Fig. 8. They are fitted with the arrival time distribution for a beam with a normal velocity distribution originating from a point source, given by $$ S(t)=S_{0}~\exp\left\lbrace -\left(\frac{1}{t}-\frac{1}{\left\langle\tau\right\rangle}\right)^{2}/\left(2\frac{\Delta\tau^{2}}{\left\langle\tau\right\rangle^{4}}\right)\right\rbrace, $$ where S0 is the peak signal, 〈τ〉 the mean arrival time, and Δτ a measure of the width of the distribution similar to a standard deviation.

Argon time-of-flight measurements (with fits) in detection zone II at the different valve temperatures for the purpose of argon beam intensity calibration.

The distribution resembles a normal distribution that is unevenly stretched along the time axis: the slower part of the beam needs a longer time to arrive at the detection zone, and therefore has more time to disperse. The relative beam intensities are given by the fit maxima (S0), normalized to the argon beam at Tvalve=−150 °C. The resulting calibrations are also presented in Table 1.

Transverse distribution

The geometry of the beamline, i.e., the skimmers and the gap between the hexapole rods through which the argon beam enters the synchrotron, determines partially, but not completely, the transverse spatial and velocity distributions of the argon beam. Following the recommendations given in Ref. [40], we have installed a skimmer with a large aperture (Beams dynamics, type II, 5 mm) at a distance of 200 mm in front of the valve. The effective size of the source, i.e., the transverse size of the argon beam when the density has dropped to such an extent that collisions cease to be important (the freezing point), is smaller than the aperture of the first skimmer. Hence, this effective size, together with the second skimmer and the gap between the electrodes, determines the transverse spatial and velocity distribution of the argon beam in the synchrotron. To determine the effective size of the source, we have performed two types of measurements: (1) we have recorded the argon signal inside the synchrotron while scanning the horizontal and vertical positions of the valve, and (2) we have scanned the height of the laser focus for two positions of the valve. The results of both measurements are shown in Fig. 9. The blue squares in Fig. 9a show the signal when the valve is displaced in the horizontal direction, while the red triangles show the signal when the valve is displaced in the vertical direction. Note that, since the skimmer is located at ∼80% along the path between the valve and the detection zone, a displacement of 4 mm of the valve results in a beam displacement of ∼1 mm in the detection zone. At two vertical positions of the valve, the vertical distribution is measured by scanning the laser focus as shown in Fig. 9b.
The orange curve is measured while the valve is close to the center position (indicated by the arrow "A" in Fig. 9a), while the purple curve is measured when the valve is 4 mm down from the center (indicated by the arrow "B" in Fig. 9a).

a Argon signal in detection zone II as a function of the horizontal (blue squares) or vertical (red triangles) position of the valve. Error bars represent standard errors of the means. b Scan of the vertical position of the laser focus when the valve is positioned close to the center (corresponding to arrow "A" in panel a) or 4 mm below the center (corresponding to arrow "B" in panel a). The grey lines in (a) and (b) show simulations with different values of the effective size of the beam source.

These measurements are compared with simulations that calculate the trajectories of argon atoms through the machine. The simulation starts by initializing argon atoms with random positions and velocities, given by Gaussian distributions. The atoms then fly in a straight line from the valve to the detection zone. They are counted if they fly through both skimmers and arrive at the laser focus. The results of simulations performed with effective source sizes of 1.5, 3, 4.5, and 6 mm are depicted by the gray lines in Fig. 9. The widths of the velocity distributions are unimportant, since the divergence of the beam at the source is much larger than what is accepted by the skimmers; they are simply set to values >10 m/s. From the measurements, we conclude that the effective size of the source is ∼4.5 mm. Figure 10 shows a simulation of the transverse distribution of the argon beam at the longitudinal position where the atoms (first) collide with the stored ammonia molecules. This simulation uses the parameters found from the experiment. In order to simplify the analysis of our collision experiment, we will approximate the beam by a cylinder with a diameter of 1.66 mm, indicated by the red circle in Fig. 10; the density of the gas within this cylinder is assumed to be independent of the radial direction and is taken to be equal to the peak density. In this way, the number of atoms in the model equals the number of atoms in the experiment. To account for the effect of the gap, the distribution is limited to 1.52 mm in the vertical direction. The size of the beam increases by 5% over the course of 40 mm. The intensity decreases concomitantly so that the number of molecules remains constant.

Transverse position distribution of simulated argon atoms at the longitudinal position where the atoms first collide with the stored ammonia molecules. The horizontal lines reflect the cut-off of the electrodes that make up the hexapole element through which the argon beam first enters the synchrotron. The circle approximates the circular cut-off due to the second skimmer. The resulting flattened circle shows the transverse shape of the argon beam that is used in trajectory simulations of ammonia molecules.

In the vertical plane, the overlap of the argon beam with respect to the synchrotron is optimized by using the height scans discussed above. In the horizontal plane, the optimal alignment is more difficult to find. In the collision experiments presented in the next sections, we deliberately aligned the argon beam inwards with respect to the equilibrium orbit of the stored ammonia molecules, such that it intersects the path of the stored molecules twice: once close to the detection zone in the synchrotron, and once further downstream.
As a result of this alignment a measurement of the loss rate as a function of delay between the trigger of the valve that releases the argon beam and the arrival time of the ammonia molecules in the detection zone displays two maxima rather than one. From the delay between the two maxima we can infer the displacement.

Collisions at a specific collision energy

We now arrive at the heart of this paper: the collision measurements. As explained in the "Molecular synchrotron" section, we store multiple packets of ammonia molecules in a synchrotron while sending in beams of argon that are made to collide with some of the packets–the probe packets. Packets that do not encounter the co-propagating beam–the reference packets–provide a simultaneous measurement of background loss. In the experiments discussed here, the valve that releases the argon beams runs at twice the cyclotron frequency such that every tenth ammonia packet encounters a fresh argon beam every round trip. Figure 11 shows a typical measurement of an experiment where ammonia molecules, revolving in the synchrotron with a mean velocity of 121.1 m/s, are detected after 90 (and a quarter) round-trips. These molecules are made to collide with beams of argon atoms with mean velocities of 474.5 m/s. Typically, we take data in blocks of 4 min corresponding to 2400 individual measurements, taken at a rate of 10 Hz. Measurements that are taken at the same delay with respect to the partner beam are grouped together, resulting in ten sets of 240 measurements each. The measurements in each set are averaged and further analysed. The top panel of Fig. 11 shows two traces corresponding to the probe packet (red) and one of the reference packets (blue). The noise on the measured traces has a standard deviation of 5.3 ions per shot, which is only slightly larger than the \(\sqrt{23.4}=4.8\) ions per shot expected for a Poissonian distribution. The difference is attributed to noise added by the MCP detector. The lower panel of Fig. 11 shows the averages for each of the 10 sets, along with the standard errors of their means (\(\text{SE}=\sigma/\sqrt{n}\)). The signal of the probe packet (#1) is about 11% smaller than the average of packets #3–10, corresponding to a loss rate of kcol=(1.28±0.18)×10−3 per round-trip. As observed, the argon beam also has some overlap with packet #2 in this particular experiment; this packet is therefore discarded. Depending on the timing of the partner beam, which determines whether the trapped ammonia molecules encounter the rising, middle or trailing part of the argon beam, packets #2, #3, #9, and/or #10 are discarded.

a Single-shot measurements of ammonia packets after 90 round-trips. The red trace corresponds to the probe packet, the blue trace to one of the reference packets. b 240-shot averages of the probe packet (red), a discarded reference packet (grey), and 8 reference packets (blue). The error bars denote standard errors of the means. The blue line depicts the average of the 8 reference packets.

Model to extract the relative cross section from the measured loss rates

The measurements presented in the previous paragraph demonstrate the unique properties of a molecular synchrotron for studying collisions, in particular the high sensitivity owing to the fact that collision signal can be accumulated over the long time that the molecules are stored. Our next goal is to determine the relative (integral, total) cross sections from the measured loss rates.
In order to determine these, a detailed understanding of how the ammonia molecules move through the argon packets is required. Which parts of the argon packet do the ammonia molecules encounter? What are the velocities of the argon atoms in these parts? What is the density of the argon beam in these parts? To answer these questions, trajectory simulations will be performed. The goal of the simulations is to find (1) an expectation value for the number of encountered argon atoms, and (2) the corresponding distribution of collision energy. The number of argon atoms encountered can be found by integrating the argon density over the volume probed by the ammonia molecules as they revolve around the synchrotron. As a warm-up exercise, we will first consider the simple case of an ammonia molecule flying along a path C through a homogeneous gas of argon atoms, with 〈vAr〉=0 and number density n. Throughout this section, the position and velocity of the ammonia molecule will be denoted by \(\vec{r} = (x, y, z)\) and \(\vec{v} = (v_{x}, v_{y}, v_{z})\), respectively. We will first assume that the collision cross section σtot is independent of relative velocity. In this case, the average number of argon atoms encountered by the ammonia molecule is $$ \left\langle N_{\text{coll}} \right\rangle = \int\limits_{C} \sigma_{\text{tot}} \, n \, \mathrm{d}s = \int\limits_{t_{i}}^{t_{f}} \sigma_{\text{tot}} \, n \, |v(t)| \, \mathrm{d}t = L \, \sigma_{\text{tot}} \, n, $$ where the path C is parametrized by $$ C: z(t), v(t), \quad t_{i}\leq t \leq t_{f}, $$ and $$ L=\int\limits_{t_{i}}^{t_{f}} |v(t)| \, \mathrm{d}t $$ is simply the path length. In the actual experiment, of course, the argon gas is not homogeneously distributed. On the contrary: since the argon beam results from a supersonic atomic beam, it will have a distribution that depends strongly on position and time. Furthermore, the velocity distribution of the argon atoms is important: both the effective path length \(\mathrm{d}l=|\vec{v}_{\text{rel}}(t)| \, \mathrm{d}t\) and the collision cross section σtot(Ecol) depend on the velocity of the argon atoms. Thus, we need the argon atom number density \(n(\vec{r}|\vec{v}_{\text{Ar}},t)\) as a function of position and time for any argon velocity, so that we can find the number of argon atoms that are encountered by an ammonia molecule using $$ \left\langle N_{\text{coll}} \right\rangle = \left\langle \sigma_{\text{tot}} \right\rangle \int\limits_{t_{i}}^{t_{f}} \int\limits_{-\infty}^{+\infty} n(\vec{r}|\vec{v}_{\text{Ar}},t) \, \mathrm{d}t \, \mathrm{d}v_{\text{rel}}, $$ where 〈σtot〉 is the result of averaging the collision cross section σtot(vrel) over the velocities of all argon atoms that are encountered. In our experiments, this average is the only thing we can ever measure. We will now make this general expression more specific in order to obtain an expression that reflects the conditions of our experiment. As discussed earlier, the longitudinal distribution of the argon beam can be described by multiplying a Gaussian velocity distribution with a Gaussian spatial distribution, as expressed in Eq. 4. Transversely, the argon beam is assumed to be homogeneously distributed over the area that makes up the transverse cross section of the argon beam. As the beam is divergent, the size of this area depends on the longitudinal position and will be denoted by S(z). Note that Fig. 9b reveals that the profiles are actually not entirely flat, but the approximation is sufficiently accurate for our purpose.
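Before specializing further, the longitudinal beam model of Eq. 4 that is reused here can be evaluated numerically with a few lines of code. The sketch below uses illustrative beam parameters (the mean velocity, velocity spread and valve opening time are placeholders, not the fitted values of Table 1) and simply performs the velocity integral on a grid:

import numpy as np

def argon_density_rel(z, t, v_mean=475.0, sigma_v=20.0, pw=25e-6, z_valve=0.0):
    """Relative longitudinal argon density at position z (m) and time t (s) after the
    valve trigger, following Eq. 4: each velocity class carries a Gaussian spatial
    packet centred at z_valve + v*t with width v*pw, weighted by a Gaussian
    velocity distribution."""
    v = np.linspace(v_mean - 6 * sigma_v, v_mean + 6 * sigma_v, 2001)
    dv = v[1] - v[0]
    g_v = np.exp(-0.5 * ((v - v_mean) / sigma_v) ** 2) / (sigma_v * np.sqrt(2 * np.pi))
    w = v * pw
    g_z = np.exp(-0.5 * ((z - (z_valve + v * t)) / w) ** 2) / (w * np.sqrt(2 * np.pi))
    return float(np.sum(g_v * g_z) * dv)

# Sketch of a time-of-flight profile at a detector about 1.2 m downstream of the valve.
times = np.linspace(2.0e-3, 3.2e-3, 400)
profile = [argon_density_rel(1.2, t) for t in times]
print("peak arrival time: %.2f ms" % (times[int(np.argmax(profile))] * 1e3))

For these parameters the peak arrives after roughly 2.5 ms, i.e. at z/〈v〉, and the profile broadens with distance as described above; fitting such a model to the measured profiles at the two detection zones is what yields the parameters of Table 1.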
Since contributions of the transverse velocity components of the argon atoms to the collision energy are negligible, we assume vAr,x=vAr,y=0 so that \(v_{\text{Ar}} \equiv \left| \vec{v}_{\text{Ar}} \right| = v_{\mathrm{Ar},z}\). A general expression of the argon beam is then given by $$ n_{\text{Ar}}(\vec{r}|v_{\text{Ar}},t) = N \, f^{v}_{\text{Ar}}(v) \, \mathrm{d}v \, f^{z}_{\text{Ar}}(z|v,t) / S(z), $$ for (x,y)∈S(z), where N is the number of atoms in the packet. The transverse cross section S(z) can be parameterized by $$ S(z) = S(z_{0}) \cdot (1+(d(z-z_{0}))^{2}), $$ where S(z0) is the cross section at the position where the argon beam crosses the equilibrium orbit (the first time), and where d describes the divergence of the beam. The cross section of the beam at z0 is shown in Fig. 10. Both S(z0) and d are determined from the trajectory calculations described earlier. As the total number of atoms in the beam is not of interest, we will write the argon beam density as: $$ \begin{aligned} n_{\text{Ar}}(\vec{r}|v_{\text{Ar}},t) & \equiv n_{0} \, L \cdot f^{v}_{\text{Ar}}(v_{\text{Ar}}) \, \mathrm{d}v_{\text{Ar}} \cdot f^{z}_{\text{Ar}}(z|v_{\text{Ar}},t) \cdot q(\vec{r}), \end{aligned} $$ where n0 is the peak number density in the packet at z0 and L is a measure of the length of the (Gaussian-shaped) packet, such that n0 L=N/S(z0) is the column density of the argon packet at z0. The newly introduced function $$q(\vec{r}) = \left\{ \begin{array}{ll} \frac{1}{1+(d(z-z_{0}))^{2}} & \text{if} \enspace (x,y) \in S(z) \\[0.5ex] 0 & \text{otherwise}, \\ \end{array} \right. $$ is a shorthand to reduce the length of future equations. This function describes whether the transverse position of the ammonia molecule is located within the argon beam (or not), and also accounts for the decreasing density of the argon beam due to its divergence. Finally, since we are interested in the number of argon atoms encountered as a function of the collision energy rather than as a function of argon atom velocity, we perform a change of integration variable from vAr to vrel using the relation $$ v_{\text{rel}} = | v_{x} \hat{x} + v_{y} \hat{y} + (v_{z}-v_{\text{Ar}}) \hat{z} |. $$ Putting all of this together, we find for the expectation value for the number of encountered argon atoms $$ \begin{aligned} \left\langle N_{\text{coll}} \right\rangle (t_{\text{valve}}) & = \left\langle \sigma_{\text{tot}} \right\rangle n_{0} \, L \, \int\limits_{t_{i}}^{t_{f}} \mathrm{d}t \int\limits_{-\infty}^{+\infty} \mathrm{d}v_{\text{rel}} \dots \\ & \left\lbrace f^{z}_{\text{Ar}}(z|v_{\text{Ar}},t) \cdot f^{v}_{\text{Ar}}(v_{\text{Ar}}) \cdot v_{\text{rel}} \cdot \frac{\mathrm{d}v_{\text{Ar}}}{\mathrm{d}v_{\text{rel}}} \cdot q(\vec{r}(t)) \right\rbrace \\ & \equiv \left\langle \sigma_{\text{tot}} \right\rangle n_{0} \, L \, \int\limits_{-\infty}^{+\infty} g^{N_{\text{coll}}}(v_{\text{rel}}) \, \mathrm{d}v_{\text{rel}} \\ & \equiv \left\langle \sigma_{\text{tot}} \right\rangle n_{0} \, \beta \, L \int\limits_{-\infty}^{+\infty} f^{N_{\text{coll}}}(v_{\text{rel}}) \, \mathrm{d}v_{\text{rel}}\\ & = \left\langle \sigma_{\text{tot}} \right\rangle n_{0} \, \beta \, L, \end{aligned} $$ where a few new quantities have been introduced. Firstly, \(g^{N_{\text{coll}}}(v_{\text{rel}}) \, \mathrm{d}v_{\text{rel}}\) is the result of the path integral over the argon density distribution.
In order to aid the interpretation of this result, however, it is separated out into a parameter β and distribution \(f^{N_{\text{coll}}}(v_{\text{rel}}) \, \mathrm{d}v_{\text{rel}}\). The former, called the beam-overlap, is a parameter between 0 and 1 that describes the effective length of the part of the argon packet that is probed by the ammonia molecules, relative to the total length of the packet. The latter is a normalized distribution function that describes the distribution of the relative velocities of the argon atoms that are encountered. This function determines the energy resolution of the measured cross section. The model is implemented as follows. Firstly, the trajectory simulations calculate the beam-overlap, in the form of \(g^{N_{\text{coll}}}(v_{\text{rel}}) \, \mathrm{d}v_{\text{rel}}\). The cross section and beam densities are not included. The beam-overlap is calculated by numerically integrating the path of the synchronous molecule from ti to tf. The simulation calculates, on every (variably-sized) time-step Δti, for every velocity vrel,j on a grid with spacing Δvrel: $$ \begin{aligned} & g^{z}_{\text{Ar}} \left\lbrace z(t)|z_{\text{valve}}+v_{\text{Ar}}(v_{\text{rel}},\vec{v}(t)) \cdot (t-t_{\text{valve}}), v~pw \right\rbrace \cdot \dots \\ & g^{v}_{\text{Ar}} \left\lbrace v_{\text{Ar}}(v_{\text{rel}},\vec{v}(t)) | \left\langle v_{\text{Ar}} \right\rangle, \sigma_{\mathrm{v_{Ar}}} \right\rbrace \cdot \dots \\ & \frac{\mathrm{d}v_{\text{Ar}}}{\mathrm{d}v_{rel}} \, q \left\lbrace \vec{r}(t) \right\rbrace \, v_{\mathrm{rel,j}} \, \Delta t_{i} \, \Delta v_{rel}, \end{aligned} $$ where \(g^{z}_{\text{Ar}}\) and \(g^{v}_{\text{Ar}}\) are the normal distributions from Eq. 4, and adds it to a histogram over vrel. This provides us with the relative velocity distribution of the beam-overlap, \(g^{N_{\text{coll}}}(v_{\text{rel}})\), which is then integrated over vrel to find the beam-overlap, β. The simulations were performed using ammonia distributions with different temperatures and by using either a complete model that derives the force on the ammonia molecules from the electric field in the synchrotron, which was calculated by SIMION [42], or a toy model that assumes a linear restoring force towards the synchronous molecule, using the trapping frequencies from Zieger et al. [39]. As the results of these simulations did not differ significantly from each other, we have decided to perform all further calculations using the toy model and assuming a zero temperature for the ammonia molecules. Since in this case each round-trip is identical, only a single round-trip is simulated. Note that 〈Ncoll〉 is then simply the number of collisions per round-trip, kcol.

Measuring the collision cross section as a function of collision energy

We now have all necessary tools to reach our final goal: measuring the relative, total, integrated collision cross sections as a function of energy. The collision energy can be tuned in three different ways: by varying the velocity of the stored ammonia packets, by varying the temperature of the pulsed valve that releases the argon atoms, and by varying the timing of the supersonic argon beam with respect to the stored ammonia packets. Unfortunately, when we vary the collision energy, other parameters that influence the loss rates of the stored molecules will change as well.
For instance, by changing the temperature of the valve that releases the argon beam, the average velocity of the beam will change, but so will the intensity, the velocity distribution and the beam overlap. When we change the velocity of the stored ammonia molecules, the argon beam will remain the same but the beam overlap will change. Luckily, the model that was derived in the previous section tells us how to take all these effects into account. We will first look at collision measurements as a function of the delay between the trigger of the valve that releases the argon atoms and the arrival time of the ND3 probe packet in the detection zone. This delay determines whether the ammonia molecules collide with atoms located more in the leading or trailing end of the argon packet, or, in fact, whether they collide at all. As the flight time from the valve to the synchrotron is much larger than the opening time of the valve, there is a strong correlation between the position of the argon atoms and their velocity. Hence, the delay determines the velocity of the argon atoms that are encountered by the ammonia beam. Note that in our experiments, the relative velocities are such that ammonia molecules only see part (20–30%) of the argon packet during the time they spend in the collision zone. Figures 12 and 13 show the ammonia loss rate measured as a function of the delay, for ammonia with velocities of 121.1 m/s and 138.8 m/s, respectively. The valve that releases the argon atoms is kept at temperatures between −120 and 30 °C, as indicated in the figures. Each data point is the result of a 4 min collision measurement, such as the one depicted in Fig. 11. To be robust against possible drifts of the argon density, the data were taken while toggling between the two ammonia speeds after every data point and picking the timings from a list in a random order. As observed, the scans feature two peaks rather than one, resulting from the fact that the argon beam intersects the synchrotron at two distinct locations. These peaks become less well resolved as the argon packets become slower and concomitantly longer.

Measured loss rate as a function of the delay between the trigger of the valve and the arrival time of the ammonia packets at the detection zone, for \(v_{\text{ND}_{3}}=121.1\) m/s. Panels a–d depict measurements with different temperatures of the valve housing that result in different average velocities as indicated in the top left corners. The black squares represent measurements containing 2400 shots each, while the coloured points represent measurements containing 21,600 shots each. The error bars, for the coloured points obscured by the symbols, denote the standard errors. The black lines show the results of simulations described in the "Model to extract the relative cross section from the measured loss rates" section, scaled to fit the data in each panel. The reduced χ² values of the fits are 0.93, 1.01, 1.09, and 1.31 for panels a–d, respectively.

Same as Fig. 12, but for \(v_{\text{ND}_{3}}=138.8\) m/s. The reduced χ² values of the fits are 0.98, 0.89, 0.79, and 1.39 for panels a–d, respectively.

Additional data were taken at three specific timings of the argon beams, shown in Figs. 12 and 13 as the blue, red and grey symbols. The blue data points are measured using a short time delay between the valve and the arrival time of the ammonia packet. Hence, in this case, the ammonia molecules probe the fast part of the argon beam.
The red points are measurements that probe the central part of the argon beam, while the grey points are measurements that probe the slow part. Each of these measurements is the result of 21,600 shots, corresponding to a measurement time of 36 min per point. To detect and correct for possible drifts of the Ar beam density during the measurements, we cycled nine times through the six different configurations. No significant drifts were detected. The solid curves in Figs. 12 and 13 show results of simulations of our experiment using the model discussed in the previous section. The simulation uses as input the velocity and equilibrium radius of the synchronous molecule taken from simulations of the synchrotron [39], and the longitudinal and transverse position and velocity distributions of the argon beams. The horizontal displacement of the argon beam relative to the equilibrium orbit of the ammonia molecules was varied in order to optimize the fit (simultaneously for all 8 curves). From this we find (1) that the argon beam is displaced by 1.6 mm from the ammonia molecules' equilibrium orbit, which corresponds to a 58 mm distance between the two points around which the particles interact, and (2) the position of the crossing point with respect to detection zone II. The former determines the time difference between the peaks in Figs. 12 and 13, while the latter determines an offset for the time axis. Furthermore, the simulations are scaled vertically to match the experimental data for each of the 8 curves individually. As seen from the figure, the simulations describe the measurements very well. For instance, the width of the distributions is well reproduced and so is the difference in signal at the two peaks. This agreement confirms that we have an excellent understanding of the experiment. From the simulations we retrieve the distributions of the relative velocities of the encountered argon atoms, which are shown in Figs. 14 and 15. Each panel in Figs. 14 and 15 corresponds to a panel in Fig. 12 or Fig. 13. The black curves in Figs. 14 and 15 show the distribution of collision energies integrated over the entire argon beam – this distribution is relevant when the cross section is found by scaling the simulation to the entire delay scan shown in Figs. 12 and 13. The blue, red and grey curves in Figs. 14 and 15 show the collision energy distributions for the data points taken in the front, middle and back part of the argon gas pulse, respectively. As in this case only part of the argon beam is probed, the distributions are more sharply peaked. As expected, the blue curve is centered at higher collision energies than the average, while the grey curve is centered at lower collision energies. The red curve is bi-modal, which is an obvious disadvantage of the chosen alignment of the argon beam.

a-d Collision energy distributions determined from simulations for collision experiments with \(v_{\text{ND}_{3}}=121.1\) m/s and vAr as indicated. The black curves represent the distribution for collision measurements that combine the measurements at each valve timing, i.e. all the black squares in Figs. 12 and 13. Since in this way the entire argon packet is probed, the distribution corresponds simply to the longitudinal velocity distribution of the argon beam at a particular temperature. The grey, red, and blue curves represent the collision energy distributions for the collision measurements indicated by the grey, red, and blue data points in Figs. 12 and 13.
The grey, red, blue, and black bars in the bottom of each graph represent the widths of the distributions, as they are displayed by the horizontal error bars in Fig. 16.

a-d Same as Fig. 14a-d, but for \(v_{\text{ND}_{3}}=138.8\) m/s.

Total, integrated, ND3+Ar collision cross section versus collision energy. The meaning of the black, grey, red, and blue points is described in the main text. The vertical error bars denote standard errors, the horizontal error bars denote the collision energy distributions as depicted in Figs. 14 and 15. The grey line shows the result of scattering calculations [43], convoluted with a normal distribution with a standard deviation of 5 cm−1. The measurements are collectively fit to this calculation with a single global scaling factor that represents the density of the argon beam at T=−150 °C, which is found to be 7.8 ×109 cm−3. \(\chi_{\text{red},\nu=31}^{2}=1.3\). Reprinted with permission from Ref. [32] Ⓒ2018 APS.

The bars below Figs. 14 and 15 depict the standard deviation of the collision energy distributions, which are a measure of the energy spread of the collisions probed in our measurement. Clearly, the interpretation of the standard deviation is not obvious in the case of bi-modal distributions, but we will use these for lack of a better measure. We are now ready to retrieve the relative cross section as a function of energy. Rewriting Eq. (14), we obtain an equation that relates the cross section to the measured loss rates: $$ \left\langle \sigma_{\text{tot}} \right\rangle = \frac{k_{\text{col}}}{n_{0,\text{c}} \, n_{0,\text{rel}} \, \beta \, L}, $$ where the argon beam densities (at position z0) are written as the product of n0,c, the absolute density of the argon beam at Tvalve=−150 °C and frep=153 Hz, and n0,rel, the relative argon beam densities as presented in Table 1, together with the length of the argon pulses L. The beam overlap, β, is taken from the simulations. Before presenting our final result, there is one more thing to consider. In our experiment we determine the loss rate of stored ammonia molecules due to collisions with argon atoms. Although most collisions lead to loss, a small (∼10%) but significant fraction of elastic collisions takes place at such large distances that the ammonia molecules remain trapped in the ring. To correct for this effect we multiply the measured loss rates by an energy-dependent correction factor. This correction factor is calculated by combining our knowledge of the trapping potential with the differential collision cross section (the cross section as a function of scattering angle) from quantum close-coupling calculations of ND3 + Ar collisions, performed by Loreau and van der Avoird [43]. A detailed explanation is given in the supplementary material to Van der Poel et al. [32]. Figure 16 presents the resulting cross sections together with their uncertainties and collision energy distributions as determined by the simulations. The blue, red, and gray points in Fig. 16 are obtained from measurements at the front, center, and back of the argon beams at four different temperatures of the valve. The black data points result from averaging over all timings; these are the scaling factors obtained by fitting the simulated delay scans to the measurements shown in Figs. 12 and 13. The open and closed symbols are measured with ammonia molecules that have a velocity of 138.8 and 121.1 m/s, respectively.
The uncertainties in the cross sections, in the range of 7–14%, are a combination of the uncertainties of the relative argon intensities (presented in Table 1, typically between 6–9%) and the uncertainties in the measured loss rates (as shown in Figs. 12 and 13, typically between 2–11%). The fact that we find consistent results when the collision energy is changed in different ways gives us great confidence in the measured cross sections. Note that the loss rates from which the cross sections are derived vary from 0.5 ×10−3 to 3 ×10−3 per round-trip. We attribute the small differences observed between the different data sets to the approximations made in the model – particularly, the fact that the argon beam is assumed to have a Gaussian velocity distribution. The solid line also shown in Fig. 16 is the result of theoretical calculations performed by Loreau and van der Avoird [43], convoluted with a normal distribution with a standard deviation of 5 cm−1. The measurements are collectively fit to this calculation with a single global scaling factor that represents the density of the argon beam at T=−150 °C, which is found to be 7.8 ×109 cm−3 at about 1.2 m downstream from the valve. This density is in agreement with a crude estimate of the density from the REMPI-measurements. In future, we plan to measure the density more accurately using a femtosecond laser, in a similar fashion to Meng et al. [44]. Although the ND3+Ar collision cross section in this energy range does not show spectacular features, the shallow minimum around 70 cm−1 predicted by theory is reproduced in the experiment.

We have performed the first scattering experiment using a molecular synchrotron. Our measurements demonstrate that, by accumulating collision signal over the long time that the ammonia molecules are stored, the sensitivity is spectacularly increased. This high sensitivity has allowed us to measure the relative, total, integral cross section for ND3 + Ar collisions over an energy range of 40–140 cm−1 with a precision of a few percent. The collision energy was tuned in three different ways: (1) by changing the temperature of the valve that releases the argon atoms, (2) by changing the velocity of the stored packets of ammonia, and (3) by choosing which part of the argon packet, dispersed during its 1.2 m traversal from the valve to the synchrotron, is probed by the ammonia molecules. These measurements give consistent results and agree with theoretical scattering calculations. Besides the enhanced sensitivity and the relatively low energy that is obtained by using co-propagating beams, our method has a number of additional features that make it attractive: (1) By comparing packets that are simultaneously stored in the synchrotron, the measurements are independent of the ammonia intensity and immune to variations of the background pressure in the synchrotron. (2) As the probe packets interact with many argon packets, shot-to-shot fluctuations of the argon beam are averaged out. By rapidly toggling between different ammonia velocities and timings, slow drifts of the argon beam intensity are eliminated. A detailed characterization of the argon beams was crucial for obtaining a high precision. To retrieve the collision cross section from the measured loss rates, trajectory simulations were used to evaluate the overlap of the ammonia packet with the argon beam. These simulations were also used to assess the energy resolution of the measurements.
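For orientation, the mapping from beam velocities to collision energy that underlies these numbers can be sketched in a few lines. The isotopic masses are standard values, and the velocity pairs below are illustrative settings taken from the text, not a complete list:

amu = 1.66054e-27          # kg
h_c = 1.98645e-23          # J*cm, converts energy in joules to cm-1
m_nd3 = 20.05 * amu        # ND3
m_ar = 39.948 * amu        # Ar
mu = m_nd3 * m_ar / (m_nd3 + m_ar)   # reduced mass

def collision_energy_cm1(v_nd3, v_ar):
    """Collision energy in cm-1 for co-propagating beams (transverse velocities neglected)."""
    v_rel = abs(v_ar - v_nd3)
    return 0.5 * mu * v_rel ** 2 / h_c

print("%.0f cm-1" % collision_energy_cm1(138.8, 420.0))   # faster ammonia, slowest argon beam
print("%.0f cm-1" % collision_energy_cm1(121.1, 474.5))   # the setting of Fig. 11

These two combinations give roughly 44 and 70 cm−1, consistent with the lower end of the quoted 40–140 cm−1 range and with the region of the shallow minimum; the upper end of the range comes from probing the fast leading part of the warmest argon beams.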
Measurements have shown that the (in-plane) alignment of the argon beam with respect to the synchrotron needs to be carefully considered. In the experiments presented here, the alignment was chosen in such a way that the argon beam crossed the path of the ammonia molecules at two distinct positions. This made it easy to check the validity of the simulations. A better energy resolution would be obtained, however, if the beamline were either moved further inwards, such that the collision zones are sufficiently far away from each other, or moved outwards until only a single collision zone is left. The highest resolution would be obtained (at the cost of increasing the collision energy) by crossing the beams at right angles. This would make it possible to resolve the fine-structure on the elastic cross section due to scattering resonances predicted by Loreau and van der Avoird [43]. The collision energy is currently limited by the large difference between the velocity of the stored molecules and the velocities in the supersonic beam. Lower collision energies can be reached by using molecules from cryogenically cooled beams as collision partners [45]. As these beams typically have a much longer temporal profile, they would overlap with multiple packets. However, even in this case there will be packets that have no overlap and can serve as reference. Another strategy would be to use a larger synchrotron, which would be able to store ammonia molecules at a higher velocity. Ideally, a synchrotron would be used that can store molecules directly from a supersonic beam without deceleration, resulting in higher densities. As the radius of the ring scales with the square of the forward velocity, such a ring would have to be ∼10 times larger (if the same voltages are used). Such a ring could be used to store beams both clockwise and anticlockwise, which makes it possible to perform calibration measurements at high energy. In this way it will be possible to measure collision energies from 0 to 2000 cm−1 in the same apparatus. Note that if the velocities of the beams are more similar, the energy resolution will also be improved [36], ultimately limited by the temperature of the stored ammonia packets. Finally, collision studies with paramagnetic atoms and molecules, such as hydrogen – the most abundant atom in the universe – can be performed in a magnetic synchrotron, as described in Van der Poel et al. [46].

Herbst E, Yates JT (2013) Introduction: Astrochemistry. Chem Rev 113:8707.
Roueff E, Lique F (2013) Molecular excitation in the interstellar medium: Recent advances in collisional, radiative, and chemical processes. Chem Rev 113:8906.
Toennies JP, Welz W, Wolf G (1979) Molecular beam scattering studies of orbiting resonances and the determination of van der Waals potentials for H–Ne, Ar, Kr, and Xe and for H\(_2\)–Ar, Kr, and Xe. J Chem Phys 71:614.
Chandler DW (2010) Cold and ultracold molecules: Spotlight on orbiting resonances. J Chem Phys 132:110901.
Balakrishnan N, Dalgarno A, Forrey RC (2000) Vibrational relaxation of CO by collisions with \(^4\)He at ultracold temperatures. J Chem Phys 113:621.
Naulin C, Costes M (2014) Experimental search for scattering resonances in near cold molecular collisions. Int Rev Phys Chem 33:427.
van de Meerakker SYT, Bethlem HL, Meijer G (2008) Taming molecular beams. Nature Phys 4:595.
van de Meerakker SYT, Bethlem HL, Vanhaecke N, Meijer G (2012) Manipulation and control of molecular beams. Chem Rev 112:4828.
Carr LD, DeMille D, Krems RV, Ye J (2009) Cold and ultracold molecules: Science, technology and applications. New J Phys 11:055049.
Bell MT, Softley TP (2009) Ultracold molecules and ultracold chemistry. Mol Phys 107:99.
Gilijamse JJ, Hoekstra S, van de Meerakker SYT, Groenenboom GC, Meijer G (2006) Near-threshold inelastic collisions using molecular beams with a tunable velocity. Science 313:1617.
Kirste M, Wang X, Schewe HC, Meijer G, Liu K, van der Avoird A, et al (2012) Quantum-state resolved bimolecular collisions of velocity-controlled OH with NO radicals. Science 338:1060.
Vogels SN, Onvlee J, Chefdeville S, van der Avoird A, Groenenboom GC, van de Meerakker SYT (2015) Imaging resonances in low-energy NO-He inelastic collisions. Science 350:787.
Onvlee J, Vogels SN, van de Meerakker SYT (2016) Unraveling Cold Molecular Collisions: Stark Decelerators in Crossed-Beam Experiments. Chem Phys Chem 17:3583.
Chefdeville S, Stoecklin T, Bergeat A, Hickson KM, Naulin C, Costes M (2012) Appearance of Low Energy Resonances in CO–para-H\(_2\) Inelastic Collisions. Phys Rev Lett 109:023201.
Chefdeville S, Kalugina Y, van de Meerakker SYT, Naulin C, Lique F, Costes M (2013) Observation of Partial Wave Resonances in Low-Energy O\(_2\)–H\(_2\) Inelastic Collisions. Science 341:1094.
Costes M, Naulin C (2016) Observation of quantum dynamical resonances in near cold inelastic collisions of astrophysical molecules. Chem Sci 7:2462.
Henson AB, Gersten S, Shagam Y, Narevicius J, Narevicius E (2012) Observation of resonances in Penning ionization reactions at sub-kelvin temperatures in merged beams. Science 338:234.
Lavert-Ofir E, et al (2014) Observation of the isotope effect in sub-kelvin reactions. Nature Chem 6:332.
Klein A, et al (2017) Directly probing anisotropy in atom–molecule collisions through quantum scattering resonances. Nature Phys 13:35.
Jankunas J, Bertsche B, Osterwalder A (2014) Study of the Ne(\(^3P_2\)) + CH\(_3\)F Electron Transfer Reaction Below 1 Kelvin. J Phys Chem A 118:3875.
Jankunas J, Bertsche B, Jachymski K, Hapka M, Osterwalder A (2014) Dynamics of gas phase Ne\(^*\) + NH\(_3\) and Ne\(^*\) + ND\(_3\) Penning ionisation at low temperatures. J Chem Phys 140:244302.
Jankunas J, Jachymski K, Hapka M, Osterwalder A (2015) Observation of orbiting resonances in He(\(^3S_1\)) + NH\(_3\) Penning ionization. J Chem Phys 142:164305.
Jankunas J, Jachymski K, Hapka M, Osterwalder A (2016) Importance of rotationally inelastic processes in low-energy Penning ionization of CHF\(_3\). J Chem Phys 144:1.
Jachymski K, Hapka M, Jankunas J, Osterwalder A (2016) Experimental and Theoretical Studies of Low-Energy Penning Ionization of NH\(_3\), CH\(_3\)F, and CHF\(_3\). Chem Phys Chem 17:3776.
Gordon SD, Zou J, Tanteri S, Jankunas J, Osterwalder A (2017) Energy Dependent Stereodynamics of the Ne(\(^3P_2\)) + Ar Reaction. Phys Rev Lett 119:053001.
Allmendinger P, et al (2016) New Method to Study Ion–Molecule Reactions at Low Temperatures and Application to the H\(_2^+\) + H\(_2\) → H\(_3^+\) + H Reaction. Chem Phys Chem 17:3596.
Willitsch S, Bell MT, Gingell AD, Softley TP (2008) Chemical applications of laser- and sympathetically-cooled ions in ion traps. Phys Chem Chem Phys 10:7200.
Chang YP, Długołȩcki K, Küpper J, Rösch D, Wild D, Willitsch S (2013) Specific Chemical Reactivities of Spatially Separated 3-Aminophenol Conformers with Cold Ca\(^+\) Ions. Science 342:98.
Strebel M, Müller TO, Ruff B, Stienkemeier F, Mudrich M (2012) Quantum rainbow scattering at tunable velocities. Phys Rev A 86:062711.
Sawyer BC, et al (2011) Cold heteromolecular dipolar collisions. Phys Chem Chem Phys 13:19059.
van der Poel APP, Zieger PC, van de Meerakker SYT, Loreau J, van der Avoird A, Bethlem HL (2018) Cold Collisions in a Molecular Synchrotron. Phys Rev Lett 120:033402.
Crompvoets FM, Bethlem HL, Jongma RT, Meijer G (2001) A prototype storage ring for neutral molecules. Nature 411:174.
Heiner CE, Carty D, Meijer G, Bethlem HL (2007) A molecular synchrotron. Nature Phys 3:115.
Zieger PC, van de Meerakker SYT, Heiner CE, Bethlem HL, van Roij AJA, Meijer G (2010) Multiple Packets of Neutral Molecules Revolving for over a Mile. Phys Rev Lett 105:173001.
Shagam Y, Narevicius E (2013) Sub-Kelvin Collision Temperatures in Merged Neutral Beams by Correlation in Phase-Space. J Phys Chem C 117:22454.
Osterwalder A (2015) Merged neutral beams. EPJ Tech Instrum 2:10.
Zieger PC, Eyles CJ, Meijer G, Bethlem HL (2013) Resonant excitation of trapped molecules in a molecular synchrotron. Phys Rev A 87:043425.
Zieger PC, Eyles CJ, van de Meerakker SYT, van Roij AJA, Bethlem HL, Meijer G (2013) A Forty-Segment Molecular Synchrotron. Z Phys Chem 227:1605.
Even U (2015) The Even-Lavie valve as a source for high intensity supersonic beam. EPJ Tech Instrum 2:17.
Minnhagen L (1973) Spectrum and the energy levels of neutral argon, Ar I. J Opt Soc Am 63:1185.
Dahl D (2000) SIMION for the personal computer in reflection. Int J Mass Spec 200:3.
Loreau J, van der Avoird A (2015) Scattering of NH\(_3\) and ND\(_3\) with rare gas atoms at low collision energy. J Chem Phys 143:184303.
Meng C, van der Poel APP, Cheng C, Bethlem HL (2015) Femtosecond laser detection of Stark-decelerated and trapped methylfluoride molecules. Phys Rev A 92:023404.
Hutzler NR, Lu HI, Doyle JM (2012) The buffer gas beam: An intense, cold, and slow source for atoms and molecules. Chem Rev 112:4803.
van der Poel APP, Dulitz K, Softley TP, Bethlem HL (2015) A compact design for a magnetic synchrotron to store beams of hydrogen atoms. New J Phys 17:055012.

We thank Peter Zieger, Bas van de Meerakker, Gerard Meijer and Wim Ubachs for help and support. We are indebted to Jérôme Loreau and Ad van der Avoird for providing us theoretical calculations of the differential and integral cross section for ND3 + Ar collisions and for many helpful discussions. This research has been supported by the Netherlands Foundation for Fundamental Research of Matter (FOM) via the program "Broken mirrors and drifting constants".

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

LaserLaB, Department of Physics and Astronomy, Vrije Universiteit, De Boelelaan 1081, Amsterdam, The Netherlands
Aernout P. P. van der Poel & Hendrick L. Bethlem

APPvdP performed the experiments and simulations. HLB supervised the project. APPvdP and HLB interpreted the data and wrote the manuscript. Both authors read and approved the final manuscript.

Correspondence to Hendrick L. Bethlem.

van der Poel, A.P.P., Bethlem, H.L. A detailed account of the measurements of cold collisions in a molecular synchrotron. EPJ Techn Instrum 5, 6 (2018). https://doi.org/10.1140/epjti/s40485-018-0048-y

Cold molecules Stark deceleration Molecular beams
CommonCrawl
Classification of irregular free boundary points for non-divergence type equations with discontinuous coefficients

Serena Dipierro 1, Aram Karakhanyan 2, and Enrico Valdinoci 1,3,4

1. School of Mathematics and Statistics, University of Western Australia, 35 Stirling Highway, Crawley WA 6009, Australia
2. Maxwell Institute for Mathematical Sciences and School of Mathematics, University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh EH9 3FD, United Kingdom
3. Istituto di Matematica Applicata e Tecnologie Informatiche, Consiglio Nazionale delle Ricerche, Via Ferrata 1, 27100 Pavia, Italy
4. Dipartimento di Matematica, Università degli studi di Milano, Via Saldini 50, 20133 Milan, Italy

* Corresponding author: Enrico Valdinoci
Received July 2017. Revised February 2018. Published September 2018.
Fund Project: Supported by INdAM Istituto Nazionale di Alta Matematica and Australian Research Council Discovery Project DP170104880 NEW Nonlocal Equations at Work.

We provide an integral estimate for a non-divergence (non-variational) form second order elliptic equation $a_{ij}u_{ij} = u^p$, $u≥ 0$, $p∈[0, 1)$, with bounded discontinuous coefficients $a_{ij}$ having small BMO norm. We consider the simplest discontinuity of the form $x\otimes x|x|^{-2}$ at the origin. As an application we show that the free boundary corresponding to the obstacle problem (i.e. when $p = 0$) cannot be smooth at the points of discontinuity of $a_{ij}(x)$. To implement our construction, an integral estimate and a scale invariance will provide the homogeneity of the blow-up sequences, which then can be classified using ODE arguments.

Keywords: Free boundary, blow-up sequences, non-divergence operators, monotonicity formulae.
Mathematics Subject Classification: 35R35, 35B65.
Citation: Serena Dipierro, Aram Karakhanyan, Enrico Valdinoci. Classification of irregular free boundary points for non-divergence type equations with discontinuous coefficients. Discrete & Continuous Dynamical Systems - A, 2018, 38 (12): 6073-6090. doi: 10.3934/dcds.2018262
Gelli and E. Spadaro, Monotonicity formulas for obstacle problems with Lipschitz coefficients, Calc. Var. Partial Differential Equations, 54 (2015), 1547-1573. doi: 10.1007/s00526-015-0835-0. Google Scholar D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, New York: Springer-Verlag, 1998. Google Scholar A. Karakhanyan, Minimal Surfaces Arising in Singular Perturbation Problems, Preprint, 2016.Google Scholar M. Kassmann, Harnack inequalities: An introduction, Bound. Value Probl., 2007 (2007), Art. ID 81415, 21 pp. Google Scholar A. Petrosyan, H. Shahgholian and N. Uraltseva, Regularity of Free Boundaries in Obstacle-Type Problems, Providence: American Mathematical Society, 2012. doi: 10.1090/gsm/136. Google Scholar D. dos Prazeres and E. V. Teixeira, Cavity problems in discontinuous media, Calc. Var. Partial Differential Equations, 55 (2016), Art. 10, 15 pp. doi: 10.1007/s00526-016-0955-1. Google Scholar J. Spruck, Uniqueness in a diffusion model of population biology, Comm. Partial Differential Equations, 8 (1983), 1605-1620. doi: 10.1080/03605308308820317. Google Scholar E. V. Teixeira, Hessian continuity at degenerate points in nonvariational elliptic problems, Int. Math. Res. Not. IMRN, (2015), 6893-6906. doi: 10.1093/imrn/rnu150. Google Scholar E. V. Teixeira, Regularity for the fully nonlinear dead-core problem, Math. Ann., 364 (2016), 1121-1134. doi: 10.1007/s00208-015-1247-3. Google Scholar N. Trudinger, Elliptic equations in non-divergence form, Miniconference on Partial Differential Equations(Canberra, 1981), Proc. Centre Math. Anal. Austral. Nat. Univ., 1, Austral. Nat. Univ., Canberra, 1982, 1-16. Google Scholar Figure 1. Examples of homogeneous solutions of the obstacle problem with obtuse/acute singular free boundary Andrea Bonfiglioli, Ermanno Lanconelli and Francesco Uguzzoni. Levi's parametrix for some sub-elliptic non-divergence form operators. Electronic Research Announcements, 2003, 9: 10-18. Petri Juutinen. Convexity of solutions to boundary blow-up problems. Communications on Pure & Applied Analysis, 2013, 12 (5) : 2267-2275. doi: 10.3934/cpaa.2013.12.2267 Teresa Alberico, Costantino Capozzoli, Luigi D'Onofrio, Roberta Schiattarella. $G$-convergence for non-divergence elliptic operators with VMO coefficients in $\mathbb R^3$. Discrete & Continuous Dynamical Systems - S, 2019, 12 (2) : 129-137. doi: 10.3934/dcdss.2019009 C. Brändle, F. Quirós, Julio D. Rossi. Non-simultaneous blow-up for a quasilinear parabolic system with reaction at the boundary. Communications on Pure & Applied Analysis, 2005, 4 (3) : 523-536. doi: 10.3934/cpaa.2005.4.523 Shouming Zhou, Chunlai Mu, Yongsheng Mi, Fuchen Zhang. Blow-up for a non-local diffusion equation with exponential reaction term and Neumann boundary condition. Communications on Pure & Applied Analysis, 2013, 12 (6) : 2935-2946. doi: 10.3934/cpaa.2013.12.2935 Lan Qiao, Sining Zheng. Non-simultaneous blow-up for heat equations with positive-negative sources and coupled boundary flux. Communications on Pure & Applied Analysis, 2007, 6 (4) : 1113-1129. doi: 10.3934/cpaa.2007.6.1113 Cristophe Besse, Rémi Carles, Norbert J. Mauser, Hans Peter Stimming. Monotonicity properties of the blow-up time for nonlinear Schrödinger equations: Numerical evidence. Discrete & Continuous Dynamical Systems - B, 2008, 9 (1) : 11-36. doi: 10.3934/dcdsb.2008.9.11 Huyuan Chen, Hichem Hajaiej, Ying Wang. Boundary blow-up solutions to fractional elliptic equations in a measure framework. 
Discrete & Continuous Dynamical Systems - A, 2016, 36 (4) : 1881-1903. doi: 10.3934/dcds.2016.36.1881 Yihong Du, Zongming Guo, Feng Zhou. Boundary blow-up solutions with interior layers and spikes in a bistable problem. Discrete & Continuous Dynamical Systems - A, 2007, 19 (2) : 271-298. doi: 10.3934/dcds.2007.19.271 Jong-Shenq Guo. Blow-up behavior for a quasilinear parabolic equation with nonlinear boundary condition. Discrete & Continuous Dynamical Systems - A, 2007, 18 (1) : 71-84. doi: 10.3934/dcds.2007.18.71 Pavol Quittner, Philippe Souplet. Blow-up rate of solutions of parabolic poblems with nonlinear boundary conditions. Discrete & Continuous Dynamical Systems - S, 2012, 5 (3) : 671-681. doi: 10.3934/dcdss.2012.5.671 Keng Deng, Zhihua Dong. Blow-up for the heat equation with a general memory boundary condition. Communications on Pure & Applied Analysis, 2012, 11 (5) : 2147-2156. doi: 10.3934/cpaa.2012.11.2147 Yihong Du, Zongming Guo. The degenerate logistic model and a singularly mixed boundary blow-up problem. Discrete & Continuous Dynamical Systems - A, 2006, 14 (1) : 1-29. doi: 10.3934/dcds.2006.14.1 Claudia Anedda, Giovanni Porru. Second order estimates for boundary blow-up solutions of elliptic equations. Conference Publications, 2007, 2007 (Special) : 54-63. doi: 10.3934/proc.2007.2007.54 Pablo Álvarez-Caudevilla, V. A. Galaktionov. Blow-up scaling and global behaviour of solutions of the bi-Laplace equation via pencil operators. Communications on Pure & Applied Analysis, 2016, 15 (1) : 261-286. doi: 10.3934/cpaa.2016.15.261 Doyoon Kim, Hongjie Dong, Hong Zhang. Neumann problem for non-divergence elliptic and parabolic equations with BMO$_x$ coefficients in weighted Sobolev spaces. Discrete & Continuous Dynamical Systems - A, 2016, 36 (9) : 4895-4914. doi: 10.3934/dcds.2016011 Jingxue Yin, Chunhua Jin. Critical exponents and traveling wavefronts of a degenerate-singular parabolic equation in non-divergence form. Discrete & Continuous Dynamical Systems - B, 2010, 13 (1) : 213-227. doi: 10.3934/dcdsb.2010.13.213 Hiroyuki Takamura, Hiroshi Uesaka, Kyouhei Wakasa. Sharp blow-up for semilinear wave equations with non-compactly supported data. Conference Publications, 2011, 2011 (Special) : 1351-1357. doi: 10.3934/proc.2011.2011.1351 Kyouhei Wakasa. Blow-up of solutions to semilinear wave equations with non-zero initial data. Conference Publications, 2015, 2015 (special) : 1105-1114. doi: 10.3934/proc.2015.1105 Asma Azaiez. Refined regularity for the blow-up set at non characteristic points for the vector-valued semilinear wave equation. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2397-2408. doi: 10.3934/cpaa.2019108 Serena Dipierro Aram Karakhanyan Enrico Valdinoci
2007-08 Academic Year Colloquium Schedule
An Introduction to Parallel Computing
David A. Reimann
Parallel computing employs the use of multiple processors and specialized algorithms to solve problems. Using multiple processors has the potential to improve performance by dividing a task among several processors, thus reducing the amount of work each processor must do and, in turn, reducing the time required to solve a problem. An overview of historical and modern parallel computer architectures will be given. Parallel computers are classified by their connection topology and control mechanisms. The recent development of multi-core machines has the potential to deliver inexpensive parallel computing. However, special algorithms must be developed that break a task into independent components. Because the number and speed of communication channels between processors influences performance, understanding how an algorithm affects communication of information among processors is critical in overall performance. Examples of sequential and parallel algorithms to solve several tasks will be presented to help illustrate these concepts.
Palenske 227
Planning for Graduate Study in Mathematics and Computer Science
A degree in mathematics or computer science is excellent preparation for graduate school in areas such as mathematics, statistics, computer science, engineering, finance, and law. Come learn about graduate school and options you will have to further your education after graduation.
Decoding Nazi Secrets - Part 1
NOVA Video
Most historians agree that by enabling Allied commanders to eavesdrop on German plans, Station X shortened the war by 2 or 3 years. Its decoded messages played a vital role in defeating the U-boat menace, cutting off Rommel's supplies in North Africa, and launching the D-Day landings. Now, for the first time on television, a 2-hour NOVA Special tells the full story of Station X, drawing on vivid interviews with many of the colorful geniuses and eccentrics who attacked the Enigma. Wartime survivors recall such vivid episodes as the British capture of the German submarine U-110; one of its officers describes how he saved a book of love poems inscribed to his sweetheart but failed to destroy vital Enigma documents on board. Decoding Nazi Secrets also features meticulous period reenactments shot inside the original buildings at Station X, including recreations of the world's first computing devices that aided codebreakers with their breakthroughs. Station X not only helped reverse the onslaught of the Third Reich, but also laid the groundwork for the invention of the digital computer that continues to transform all our lives. See http://www.pbs.org/wgbh/nova/decoding/ for the companion website. One important note: Cayley Rice has an ongoing research project related to codebreaking at Bletchley Park, the subject of this video. She is looking for students who might be interested in working with her. Please come to the video and talk to her afterwards for more information.
An introduction to Harmonic Analysis and Dispersive Estimates for Schrodinger Operators
William Green, Class of '05
Urbana-Champaign, IL
Harmonic Analysis can be defined as roughly the branch of analysis that arose from the study of Fourier Series and the Fourier Transform.
An overview of necessary concepts in analysis, harmonic analysis and spectral theory will be given with an eye towards discussing estimates of certain Schrodinger operators. We will discuss certain estimates of the solution operator to the Schrodinger equation, generally concentrating on results in dimensions 3 and higher. We will end with a discussion of some open questions in the field.
Exploring the Best Joke of the 19th Century: The History of Mathematics in Action
Deborah A. Kent
Hillsdale, Michigan
The story surrounding Neptune's discovery provides an exciting illustration of what historians of mathematics do. Neptune was first sighted as a new planet on 23 September 1846 at the Berlin observatory. The sensational news reached London a week later and the ensuing dispute created one of the great (and ongoing) priority debates in the history of science. About a month after the initial observation, word of the new planet also arrived in America where the controversy captured both popular interest and scientific attention. A handful of nineteenth-century scientists who shared a vision for professionalizing science in America viewed the Neptune affair as an opportunity to establish the legitimacy of American science in response to perceived European scientific superiority. While European administrators of science quibbled over the priority question, the Harvard mathematician Benjamin Peirce — considered an upstart American scientist — dared to question the mathematical particulars of the discovery. Recent twentieth-century events and manuscript discoveries further illuminate the story of planetary controversy.
Image reconstruction in multi-channel model under Gaussian noise
Veera Holdai
We will consider the problem of image reconstruction. Starting from some classical problems, we will gradually add some features to them. The main problem of image boundary reconstruction is doubly nonparametric, due to the multi-channel model and due to the object of estimation. The large sample asymptotics of the minimax risk will be discussed and an asymptotically optimal estimator will be suggested.
Abel's Impossibility Theorem
Susan J. Sierra
You know the quadratic formula, but what about the cubic formula? There's also a quartic formula for fourth degree equations. You may have heard, however, that there is no formula to solve a quintic polynomial by adding, subtracting, multiplying, dividing, and taking roots of the coefficients. This was proved by the great Norwegian mathematician Niels Henrik Abel in 1824. We'll talk about the elegant algebraic structures that encode information about solving polynomials, do a bit of basic group theory and Galois theory, and prove Abel's "impossibility theorem." Time permitting, we'll end with some intriguing mathematical puzzles.
How the DFA (deterministic finite automaton) is not
Thomas F. Piatkowski
Professor of Computer Science and Electrical and Computer Engineering
Automata theory is one of the most mathematical areas of computer science. Two of the important uses of automata are: to assist in the study and categorization of formal (computer) languages, and to specify system behavior standards for implementable discrete systems. One of the simplest types of automaton is the deterministic finite automaton (DFA) — the type used to recognize "regular" languages. Interestingly enough, the classical DFA is not deterministic, not finite, and not an automaton. The details of this paradoxical contention will be explored using concepts of state-system specification.
The Fractal Calculus Project
Mark M. Meerschaert
Professor and Chairperson, Department of Statistics and Probability
Fractional derivatives are almost as old as their integer-order cousins. Recently, fractional derivatives have found new applications in engineering, physics, finance, and hydrology. In physics, fractional derivatives are used to model anomalous diffusion, where a cloud of particles spreads differently than the classical Brownian motion model predicts. A probability model for anomalous diffusion is based on particle jumps with power law tails. The probability of a jump length larger than $r$ falls off like $r^{-\alpha}$ as $r\to\infty$. For $0<\alpha<2$ these particle jumps have infinite variance, indicating a faster than usual spreading rate. Particle traces are random fractals whose dimension $\alpha$ equals the power law tail exponent. A fractional diffusion equation for the concentration of particles $c(x,t)$ at time $t$ and location $x$ takes a form that can be solved via Fourier transforms. Fractional time derivatives model particle sticking or trapping in a porous medium. In finance, price jumps replace particle jumps, and the same models apply. In this talk, we give an introduction to this new area, starting from the beginning and ending with a look at ongoing research.
Summer and Off-Campus Programs
Department of Mathematics and Computer Science
Have you ever wondered if you can study mathematics and/or computer science off-campus, either during the summer or during the academic year? Each year a number of high-quality academic opportunities are available to Albion College students. Options include research/study internships at academic institutions both within the United States and abroad, numerous federal government agencies, and a number of government scientific laboratories. In this presentation we will tour a new portion of the Albion College Math/CS website that illustrates these various opportunities as well as provide advice on how to apply, deadlines, and any other pertinent information.
Non-Inferiority: the Basics, a Saga, and Maybe an Opportunity
Tom C. Venable, Ph.D.
In clinical trials, using placebo is unethical in many therapeutic areas. In turn, non-inferiority studies are common. An experimental drug is compared against an active control, that is, a gold standard. Simply said, we move from hypothesis testing to demonstrate that a new drug is superior to placebo to the use of confidence intervals to demonstrate that a new drug is non-inferior to the standard. However, these studies are challenging, especially in the necessary sample size, the choice of the standard and the margin itself, plus doing so in an extremely regulated environment. This non-technical presentation includes the basics of superiority and non-inferiority study designs, their pros and cons, their methodologies, a non-inferiority story, plus an opportunity for continued research.
Computer Generated Effects for Film
Joseph Cavanaugh
Visual Effects Technical Director, Sony Imageworks, Los Angeles, CA
The speaker will give an overview of how a blend of art and technology is used to create computer generated effects for film. These effects range from elemental effects such as fire and water to dynamic effects such as crumbling and breaking objects. He will touch on the basic application of the mathematics used to create computer generated effects. He will show examples from recent movies such as Beowulf and Spiderman 3.
A Brief Introduction to Monte Carlo Simulations
Fatih Celiker
Detroit, Michigan, USA
In this talk I will give a brief introduction to Monte Carlo simulations. A short overview of some theoretical considerations will be followed by numerous examples coming from various areas of math, science, engineering, and finance. In their basic form, these simulations are extremely easy to generate, and for that matter they have found numerous applications in areas where randomness plays a crucial role. Some examples are simulation of Bernoulli's experiments (coin flip), the birthday problem, traffic flow, random walk, Brownian motion, neutron shielding, financial option pricing, and insurance pricing. Moreover, applications generalize to deterministic problems such as computation of areas and volumes, approximating solutions of partial differential equations, approximating the value of mathematical constants such as pi and e, and optimization problems, to name only some.
What is an Actuary?
Dustin Turner, '06
Actuarial Department, North Pointe Holdings Corporation
In this talk I will give a broad overview of the actuarial profession. Many math majors have heard of the actuarial profession, but few are quite sure of what it entails. I will discuss the process of becoming a credentialed Actuary, while focusing on the responsibilities and perks of entry-level Actuarial Analyst positions. I will also be presenting examples of some fundamental actuarial exercises, including insurance pricing and loss reserving.
Careers in Mathematics and Computer Science
A degree in mathematics or computer science is excellent preparation for employment in areas such as teaching, actuarial science, software development, engineering, and finance. Come learn about career opportunities awaiting you after graduation. Slides from the talk are available at http://zeta.albion.edu/~dreimann/talks/careers/careers.html.
Conjecture and Proof
Dennis Ross, '08
Senior Mathematics Major, Albion, MI
Problem solving is a fundamental skill in mathematics. However, not all problems are created equally. In this interactive colloquium we will explore several seemingly innocuous problems and discover the underlying combinatorial or number theoretic structures. We will also explore the concept of Grundy Numbers (Nimbers) and their relation to bizarre combinatorial games. This is an accessible talk for mathematicians and computer scientists of all levels, and remember to bring a pencil.
Location: Palenske 227
Markov Processes: Markov Chains, Poisson Processes, Brownian Motion
Nadiya Fink
Visiting Assistant Professor
The Markov property indicates that, with knowledge of the current state, previous trajectories are irrelevant for predicting the probability of the future of a process. A Markov chain is a discrete-time stochastic (i.e. random) process possessing the Markov property. Probabilities and expected values on a Markov chain can be evaluated by a technique called First Step Analysis. An analogous technique can be applied to continuous-time processes. We will give an elementary introduction to Markov chains and First Step Analysis, followed by a broader description and discussion of the long-term behavior of Markov chains. Further, we will get acquainted with Poisson Processes, which are continuous-time processes with a finite number of states, and, finally, will overview continuous processes and their applications.
Arctangent Identities for Pi
Jack Calcut
Interactic Holdings
Is there a better identity for pi than pi = 4 arctan(1)?
Are the degree angle measures ever rational in a triangle whose side lengths form a Pythagorean triple? Which regular polygons may be built on a geoboard? The answers to these questions are intimately related to arctangent identities for pi, which we will explore using the number theory of the Gaussian integers. We will present some of the historical context as well as some directions for further research.
Life isn't Fair: A Mathematical Argument in Favor of Benevolent Dictatorships
Cayley Rice
Albion, Michigan, USA
Arrow's theorem, proved in the '50s, suggests that under very reasonable restrictions, the only sensible method of societal decision making is dictatorial. In this talk we'll explore a few different models of voting, how theoretical math can be applied to models of voting, and just how un-sensible voting models can get. A part of the talk will develop notation to discuss voting scenarios in mathematical notation. We'll see how the language of abstract mathematics can be deftly applied to problems like this and, while the notation may be quite complicated, the subsequent mathematics is often already understood. This talk is in recognition of Math Awareness Month (as determined by the AMS, MAA, and SIAM to be April), whose theme this year is math and voting.
How the 2007 LMMC Albion Math Team captured the infamous Klein Cup
Jeremy Troisi, '08
Student, Mathematics Major and Economics and Management Major
The test administered for the Lower Michigan Mathematics Competition (LMMC) in the Spring of 2007 contained a very strong focus on Proof by Mathematical Induction, often dubbed 'Induction', compared to past LMMC examinations. Being able to solve such problems quickly, as well as a simple combinatorics problem, a simple bounding problem, and a college geometry problem, within three hours assured victory. I will go over a complete solution process through a few of these problems and describe a few other problems time permitting.
Cost-effectiveness analysis of three algorithms for diagnosing primary ciliary dyskinesia: a simulation study Panayiotis Kouis ORCID: orcid.org/0000-0003-0511-53521,2,9, Stefania I. Papatheodorou2,3, Nicos Middleton4, George Giallouros1, Kyriacos Kyriacou5,6, Joshua T. Cohen7, John S. Evans8 & Panayiotis K. Yiallouros1 Primary Ciliary Dyskinesia (PCD) diagnosis relies on a combination of tests which may include (a) nasal Nitric Oxide (nNO), (b) High Speed Video Microscopy (HSVM) and (c) Transmission Electron Microscopy (TEM). There is variability in the availability of these tests and lack of universal agreement whether diagnostic tests should be performed in sequence or in parallel. We assessed three combinations of tests for PCD diagnosis and estimated net sensitivity and specificity as well as cost-effectiveness (CE) and incremental cost-effectiveness (ICE) ratios. A hypothetical initial population of 1000 referrals (expected 320 PCD patients) was followed through a probabilistic decision analysis model which was created to assess the CE of three diagnostic algorithms (a) nNO + TEM in sequence, (b) nNO + HSVM in sequence and (c) nNO/HSVM in parallel followed, in cases with conflicting results, by confirmatory TEM (nNO/HSVM+TEM). Number of PCD patients identified, CE and ICE ratios were calculated using Monte Carlo simulations. Out of 320 expected PCD patients, 313 were identified by nNO/HSVM+TEM, 274 with nNO + HSVM and 198 with nNO + TEM. The nNO/HSVM+TEM had the highest mean annual cost (€209 K) followed by nNO + TEM (€150 K) and nNO + HSVM (€136 K). The nNO + HSVM algorithm dominated the nNO + TEM algorithm (less costly and more effective). The ICE ratio for nNO/HSVM+TEM was €2.1 K per additional PCD patient identified. The diagnostic algorithm (nNO/HSVM+TEM) with parallel testing outperforms algorithms with tests in sequence. These findings, can inform the dialogue on the development of evidence-based guidelines for PCD diagnostic testing. Future research in understudied aspects of the disease, such as PCD-related quality of life and PCD-associated costs, is needed to help the better implementation of these guidelines across various healthcare systems. Primary Ciliary Dyskinesia (PCD) is a genetically heterogeneous disorder that affects one in approximately 15,000 live births [1]. PCD is characterized by chronic sinopulmonary symptoms and development of bronchiectasis, recurrent otitis, male infertility and situs inversus [2]. Defective components of the ciliary axoneme (e.g. dynein arms) as well as dysfunctional regulatory or transport proteins have been implicated in the etiology of PCD and to date more than 40 genes have been found to be causative for PCD [3]. This genetic heterogeneity translates into a wide spectrum of ciliary structural and beating abnormalities and a diverse diagnostic and clinical phenotype. Patients with PCD usually present with chronic cough and rhinorrhea as well as recurrent infections of unknown aetiology. Some of them also present with situs abnormalities and in the case of older patients, with infertility or subfertility [2]. Bronchiectasis may develop already in childhood in some patients [4] and it is usually present in most adult PCD patients [5]. Late diagnosis is associated with a worse clinical picture and reduced lung function [6, 7], while several patients undergo surgical resection of lung segments to control lung infection, even before diagnosis is established [8]. Situs inversus is the only characteristic manifestation associated with PCD. 
With the exception of chronic cough and rhinorrhea, all other manifestations may not always be present and may be characterized by considerable variability in their severity [9,10,11]. As a result, heterogeneity in the clinical picture presents a challenge to the clinician who needs to decide when to test for PCD and with which diagnostic test(s). Diagnostic approach is further perplexed by heterogeneity in the diagnostic features of the disease as respiratory epithelial samples from PCD patients exhibit diverse ciliary ultrastructure [12] and motility pattern [13] especially in the presence of infection [14]. Up to date diagnostic testing for PCD relies on a combination of tests which primarily includes nasal Nitric Oxide (nNO) [15], High Speed Video Microscopy (HSVM) [16, 17] and Transmission Electron Microscopy (TEM) [8, 18]. Measurement of nNO is considered the simplest and fastest among the PCD diagnostics tests as it only involves air suction from the nasal passage via an olive while the subject preferably maintains velum closure through active mouth exhalation against resistance [19]. The other two tests require brushing of the inferior nasal turbinate and the collection of an adequate sample of respiratory epithelial cells in order to allow for the assessment of ciliary motility using HSVM and ciliary ultrastructure using TEM [20]. As no single test has 100% sensitivity and specificity [21], which is further complicated by the fact that many centers lack either the equipment or expertise to perform all required tests, some of which are quite laborious and time consuming, different diagnostic algorithms for diagnosis of PCD have been adopted by diagnostic centers across the world [22]. Recently, nNO has been proposed as the screening test of choice in cohorts of patients with PCD-suspect manifestations due to its high ability to discriminate between PCD and non-PCD subjects [15, 23]. Although the cost of a (validated) chemiluminescence NO analyser is quite high (approximately €40,000 per piece), the recent development of handheld and cheaper electrochemical NO analysers [24] and publication of relevant technical guidelines by the American Thoracic Society (ATS) and the European Respiratory Society (ERS) [19] may further enhance the potential of nNO measurement to be used as a screening test in the clinical setting and especially in countries with limited resources or in areas that lack, or are distant from, PCD-specialist centers [25]. However, the use of a non-perfect screening test such as nNO in isolation may allow for some PCD patients with false negative results to be missed entirely or some non-PCD patients with false-positive results to undergo further diagnostic tests. For this reason, the diagnostic algorithm described as part of Standardized Operating Procedures for PCD diagnosis developed by the EU-funded Seventh Framework Program project BESTCILIA, in 2016, proposed standardized operating procedures for PCD diagnosis and a diagnostic algorithm which recommended that nNO should be performed in parallel with HSVM and confirmatory TEM assessment should follow in case of conflicting results (Additional file 1). Similarly, the recent ERS guidelines for the diagnosis of Primary Ciliary Dyskinesia also recommend a diagnostic algorithm which includes as a first step the parallel performance of both nNO and HSVM and confirmation with TEM in a second step [26]. 
The rationale of employing a diagnostic algorithm which proposes parallel performance of nNO and HSVM, is to take advantage of the ability of the one test to identify cases that the other test may have missed. Consequently, a positive result in both tests provides evidence that PCD is "highly likely" while a negative result in both tests, especially in the absence of very strong clinical suspicion, provides evidence to consider PCD diagnosis as "extremely unlikely" [26]. Nevertheless, such algorithms require the performance of a significantly higher number of nasal brushings for HSVM and result in higher costs compared to algorithms that only require the performance of a confirmatory test (HSVM or TEM) following a positive screening test. To better illuminate the decision-making process, the overall diagnostic accuracy of each algorithm, the associated costs as well as the resulting health benefits for PCD patients, need to be addressed and compared. This study aimed to evaluate the diagnostic accuracy, the cost-effectiveness and incremental cost-effectiveness of three distinct diagnostic algorithms for patients referred for PCD diagnostic testing across the European Union through a probabilistic decision analysis framework. Decision tree model Using a probabilistic decision tree model, three diagnostic algorithms were evaluated versus each other and against a baseline of not performing any diagnostic testing for PCD. The three diagnostic algorithms evaluated were a) Sequential testing with nNO screening followed by HSVM only when nNO was positive (nNO + HSVM), b) Sequential testing with nNO screening followed by TEM only when NO was positive (nNO + TEM), c) nNO performed in parallel with HSVM and followed, in cases with conflicting results, by confirmatory TEM (nNO/HSVM+TEM). The decision tree displaying the evaluated three diagnostic algorithms in this study is presented in Fig. 1. The starting population of referrals for PCD diagnosting testing that enters the model was defined as one thousand per year for the whole of the European Union (EU). To estimate the classification of patients under each diagnostic algorithm, Bayes' Theorem was used. Bayes' Theorem allows the calculation of probability of suffering from PCD given the pre-test probability of disease and given a positive or negative diagnostic test [27]. The formula for estimating the probability of disease given positive diagnostic test is: $$ P\left( PCD| Test+\right)=\frac{P\left( Test+| PCD\right)\ast P(PCD)}{P\left( Test+| PCD\right)\ast P(PCD)+P\left( Test+| nonPCD\right)\ast P(nonPCD)} $$ Decision Tree diagram for the three different diagnostic algorithms for PCD. The decision tree begins from the left side and the decision whether to perform nNO + TEM, nNO + HSVM or nNO/HSVM+TEM. Squares represent decision nodes, circles represent chance nodes and triangles represent outcome nodes Where P(Test+|PCD) is the probability of positive test given PCD is present (test sensitivity), P(PCD) is the prevalence of PCD in the tested population, P(Test+|nonPCD) is the probability of positive test given disease is not present (1-specificity of the test) and P(non-PCD) is the probability of not having PCD in the tested population. The formula can be rearranged accordingly to calculate probability of PCD given positive diagnostic test, probability of PCD given negative diagnostic test and probability of non-PCD given negative diagnostic test as well as probability of non-PCD given positive diagnostic test. 
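To make the arithmetic concrete, the following minimal Python sketch (an illustration added here, not part of the published Analytica model) implements the formula above and its rearrangement for a negative test result, using as example inputs the nNO accuracy and the referral prevalence quoted later in the Methods.

```python
def prob_pcd_given_positive(pretest, sensitivity, specificity):
    """Bayes' Theorem: P(PCD | Test+)."""
    true_pos = sensitivity * pretest
    false_pos = (1 - specificity) * (1 - pretest)
    return true_pos / (true_pos + false_pos)

def prob_pcd_given_negative(pretest, sensitivity, specificity):
    """Rearranged form: P(PCD | Test-)."""
    false_neg = (1 - sensitivity) * pretest
    true_neg = specificity * (1 - pretest)
    return false_neg / (false_neg + true_neg)

# Example inputs: prevalence of PCD among referrals (0.32) and the mean
# sensitivity/specificity of nNO during velum closure (0.95 / 0.94, Table 1).
pretest, sens, spec = 0.32, 0.95, 0.94
print(prob_pcd_given_positive(pretest, sens, spec))   # about 0.88
print(prob_pcd_given_negative(pretest, sens, spec))   # about 0.02
```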
To model the sequence of diagnostic tests in each diagnostic algorithm the resulting probability of PCD given a positive first test as calculated using Bayes' Theorem was used as the pre-test probability of PCD for the second test. The final modeled health outputs regarding the effectiveness of each diagnostic algorithm included the number of PCD patients confirmed as PCD (True Positive - TP), PCD patients missed (False Negative - FN), non-PCD patients wrongly diagnosed as PCD (False Positive - FP), and non-PCD patients that had a diagnosis of PCD excluded (True Negative - TN). In addition, the annual total cost outcome (in Euros) was calculated for each diagnostic algorithm using a micro-costing approach. This approach involves the recognition of all underlying activities that make up a specific healthcare procedure and the product of resource cost and resource use provides the total cost estimate for the procedure [28]. A detailed description of the diagnostic cost analysis is presented in the Technical Appendix (Additional file 2). The Incremental Cost-Effectiveness Ratios (ICER) were calculated as the ratio of incremental costs to incremental effectiveness, i.e. [29]: $$ ICER=\frac{Cost_A-{Cost}_B}{Effect_A-{Effect}_B} $$ Here, CostA and CostB are the total annual per-patient costs of performing test algorithms A and B, respectively, and EffectA and EffectB are the number of PCD patients correctly diagnosed with PCD for the same diagnostic algorithms. The costing perspective of this analysis is societal as it considers all relevant costs for the society (including costs borne by the patient, and/or social services) and not just the costs that are incurred by the healthcare system [30]. Ideally, the cost-effectiveness analysis should not be limited to diagnostic costs and outcomes but should include all expenditures as well as all effectiveness outcomes, preferably in terms of quality-adjusted life years (QALYs), a metric used broadly in the health economics literature [31]. For this reason, a secondary, extended analysis was performed, further described in Additional file 3. Model parameter inputs The prevalence of PCD in the general population was assumed to be 1/15,000 births and the prevalence of PCD among patients referred for diagnostic testing was allocated a probability of 0.32 (95% CI: 0.26–0.39) as reported before [32]. Data regarding the diagnostic accuracy of each test were derived from systematic reviews and meta-analyses, when possible, and from alternative data sources such as large studies and multiple sources when meta-analytic estimates were not available. The parameter inputs for sensitivity and specificity of nNO during Velum Closure (VC) were 0.95 (95% CI: 0.91–0.97) and 0.94 (0.88–0.97) respectively, based on published meta-analytic estimates [33]. For HSVM, the parameter inputs for sensitivity and specificity were 1.0 (95% CI: 0.89–1.00) and 0.92 (95% CI: 0.86–0.96) based on published evidence provided by Boon et al. 2013 and Jackson et al. 2016 [34, 35]. For assessment of ciliary ultrastructure with TEM, the parameter inputs for sensitivity and specificity were 0.74 (95% CI: 0.68–0.80) and 0.91 (95% CI: 0.86–0.96) respectively based on a recent meta-analysis of 11 studies [32]. Sensitivity and specificity values for HSVM and TEM following a positive nNO result were obtained from the study by Jackson et al. 2016 [35]. Table 1 summarizes all parameter values that were part of the basic model. 
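As a rough illustration of how a sequential algorithm turns these inputs into TP/FP/FN/TN counts, the sketch below pushes the hypothetical cohort of 1000 referrals through a screen-and-confirm sequence using the mean values of Table 1. It is a simplified stand-in for the published model: it assumes the two tests are conditionally independent given disease status, whereas the actual analysis used the conditional accuracy of HSVM and TEM after a positive nNO reported by Jackson et al. 2016, so the counts it prints will not exactly reproduce Table 2.

```python
N, PREV = 1000, 0.32                      # referrals per year, pre-test probability

TESTS = {                                  # (sensitivity, specificity) means, Table 1
    "nNO":  (0.95, 0.94),
    "HSVM": (1.00, 0.92),
    "TEM":  (0.74, 0.91),
}

def sequential(first, second, n=N, prevalence=PREV):
    """Screen with `first`; only positives go on to `second`.
    A final PCD label requires both tests positive; conditional
    independence of the tests given disease status is assumed."""
    se1, sp1 = TESTS[first]
    se2, sp2 = TESTS[second]
    tp = n * prevalence * se1 * se2
    fp = n * (1 - prevalence) * (1 - sp1) * (1 - sp2)
    fn = n * prevalence - tp
    tn = n * (1 - prevalence) - fp
    return tp, fp, fn, tn

for confirm in ("HSVM", "TEM"):
    tp, fp, fn, tn = sequential("nNO", confirm)
    print(f"nNO + {confirm}: TP={tp:.0f} FP={fp:.0f} FN={fn:.0f} TN={tn:.0f} "
          f"net sensitivity={tp/(tp+fn):.2f} net specificity={tn/(tn+fp):.2f}")
```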
Table 1 Model parameter inputs Characterization of uncertainty Reported uncertainty around pooled estimates of the meta-analyses of diagnostic effectiveness and uncertainties about the true value of costs and other parameters are reflected by the probability distributions around the parameter means which are used in this model. A Cost-Effectiveness Acceptability Curve was used to demonstrate the uncertainty in the estimation of the ICER [36] while the robustness of the estimated ICER was tested through the performance of one-way sensitivity analyses where the input parameters varied over their range. All parameters and equations constitute the final model which was developed with ANALYTICA 101 edition (Lumina decision systems, CA, United States). The model was executed with 3000 iterations per "model run" using Latin Hypercube sampling to generate samples from the underlying parameter probability distributions. The model can be assessed online (Additional file 4) and a model overview is presented in Fig. 2. Model Overview. Schematic Overview of ANALYTICA model The model output for TP, FN, TN and FP and estimates of net sensitivity, net specificity, net positive predictive value and net negative predictive value for the application of each diagnostic algorithm in a hypothetical cohort of 1000 patients suspected of PCD is presented in Table 2. Table 3 compares mean diagnostic costs with the number of PCD cases identified and reports relevant CERs and ICERs. Deterministic comparison for mean costs and effects demonstrated that the nNO/HSVM+TEM was the most effective algorithm but also the costliest (313 PCD cases identified/year, 209 thousand €/year). nNO + HSVM was the second most effective (273 PCD cases identified/year, 136 thousand €/year) while nNO + TEM was the least effective (198 PCD cases identified/year, 150 thousand €/year). The most cost-effective algorithm was nNO + HSVM with a CER of €653/PCD case identified, followed by nNO/HSVM+TEM (€678/PCD case identified) and nNO + TEM (€975/PCD case identified). The cost effectiveness frontier in presented in Fig. 3 and the resulting ICER for nNO/HSVM+TEM compared to nNO + HSVM, the second most effective algorithm, is €2097 per additional PCD case identified. The nNO + TEM algorithm is dominated (simple dominance) by nNO + HSVM as it is more expensive but less effective compared to nNO + HSVM. Figure 4 presents the cost-effectiveness acceptability curve (CEAC) for nNO/HSVM+TEM. The CEAC demonstrates the uncertainty in the estimation of ICER and provides information about the probability of nNO/HSVM+TEM being more cost effective compared to nNO + HSVM for a range of potential monetary amounts (termed willingness to pay (WTP) thresholds) that a decision maker might be willing to pay to correctly diagnose an additional PCD case. For a WTP threshold equal to €2500 the probability of nNO/HSVM+TEM being cost effective is over 70% and for a WTP threshold equal to €3500 the probability is over 97%. The results of one-way sensitivity analyses demonstrated that the modelled mean ICER for nNO/HSVM+TEM is primarily affected by changes in the input value for HSVM sensitivity, followed by the changes in input values for the prevalence of PCD among suspect patients. Changes in the input values of other modelled parameters had smaller effects on the ICER (Fig. 5). Results of secondary analysis are presented in Additional file 3. 
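The acceptability curve in Fig. 4 can be reproduced conceptually with a few lines of code: draw the uncertain quantities many times, record the incremental cost and incremental effect of nNO/HSVM+TEM versus nNO + HSVM for each draw, and report the share of draws with a positive incremental net benefit at each willingness-to-pay value. The sketch below does this with SciPy's Latin Hypercube sampler; the two normal distributions are stand-ins centred on the mean differences reported above (roughly €73 thousand and 39 cases per year), not the parameter-level distributions used in the Analytica model.

```python
import numpy as np
from scipy.stats import norm, qmc

N_DRAWS = 3000                              # the model ran 3000 iterations per run
WTP = np.arange(0, 5001, 500)               # willingness to pay, EUR per extra PCD case

# Latin Hypercube sample of two stand-in inputs: incremental cost (EUR/year) and
# incremental effect (extra PCD cases/year) of nNO/HSVM+TEM versus nNO + HSVM.
u = qmc.LatinHypercube(d=2, seed=7).random(N_DRAWS)
delta_cost = norm.ppf(u[:, 0], loc=73_000, scale=12_000)
delta_effect = norm.ppf(u[:, 1], loc=39, scale=8)

# CEAC: probability of a positive incremental net benefit at each WTP threshold
for wtp in WTP:
    p = float(np.mean(wtp * delta_effect - delta_cost > 0))
    print(f"WTP {wtp:>5} EUR: P(nNO/HSVM+TEM cost-effective) = {p:.2f}")
```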
Table 2 Diagnostic accuracy of nNO + TEM, nNO + HSVM and nNO/HSVM+TEM algorithms Table 3 Diagnostic costs per year, identified PCD cases per year (mean and 95% Confidence Interval) Cost-effectiveness frontier for the three different diagnostic algorithms for PCD. Diagnostic algorithms nNO + HSVM and nNO/HSVM+TEM are cost-effective alternatives at different WTP thresholds. Diagnostic algorithm nNO + TEM is dominated by nNO + HSVM Cost Effectiveness Acceptability Curve for nNO/HSVM+TEM. The probability that diagnostic algorithm nNO/HSVM+TEM is cost-effective for a range of WTP thresholds One-way sensitivity analyses for ICER. Tornado diagram demonstrating one-way sensitivity analyses of modelled parameters that affect the ICER. The dashed vertical black line represents the base case value (ICER = 2097 Euros/additional PCD case identified). PCD: Primary Ciliary Dyskinesia, nNO: nasal Nitric Oxide, HSVM = High Speed Video Microscopy, ICER = incremental cost-effectiveness ratio.Cost Effectiveness The high genetic heterogeneity that characterizes PCD and the resulting inability to rely on a single test to confirm or exclude diagnosis of the disease has led to increased research interest in specialized diagnostic testing for PCD in recent years. This study compares three diagnostic strategies currently in use for diagnosing PCD and reports on their effectiveness and cost-effectiveness under a societal costing perspective. Data were drawn primarily from meta-analyses of diagnostic effectiveness or published estimates from large studies and were synthesized in a probabilistic cost effectiveness model. The results presented here demonstrate that when the effectiveness outcome is defined as the number of PCD patients identified, nNO/HSVM+TEM is the most effective diagnostic algorithm followed closely by nNO + HSVM. Both nNO/HSVM+TEM and nNO + HSVM are significantly more effective compared to the third diagnostic strategy evaluated, nNO + TEM. Mean estimates of CERs demonstrate that nNO + HSVM was the most cost-effective option and a decision maker should expect to pay on average an amount equal to €2097 per additional case identified if nNO/HSVM+TEM is implemented. Whether the effectiveness outcome is defined as the number of PCD patients identified or as the number of QALYs saved nNO/HSVM+TEM was still the most effective algorithm followed by nNO + HSVM and nNO + TEM. Nevertheless, the results of the extended model, which are expressed in Euros per QALY saved, demonstrate that all three diagnostic algorithms appear to be very cost-effective. Compared to no screening, the cost per QALY gained for the three diagnostic algorithms examined here ranged from €6674 to €12,930, an estimate which is lower than WTP thresholds commonly used by regulatory authorities around the world. Such WTP thresholds range between £20,000 and £30,000 per QALY saved in the UK [37] or the more conventional WTP threshold of $50,000 per QALY saved, commonly used in the US [38] or even more recently suggested WTP thresholds in the range of $100,000 to $200,000 per QALY [39]. Diagnostic algorithms including nNO measurement during VC as an initial screening could be cost-effective. However, our results demonstrate that nNO screening is more effective when the confirmatory test is HSVM and not TEM. 
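The dominance ranking described above can be checked directly from the mean annual costs and case counts in Table 3. The short sketch below (an illustration, not part of the published analysis) orders the strategies by effectiveness, removes any strategy that is simply dominated, and computes the ICER of each remaining strategy against the next most effective one; because it works from rounded means it yields roughly €1.9 thousand per additional case rather than the €2097 obtained from the full probabilistic model.

```python
# Mean annual cost (EUR) and PCD cases identified per year, rounded from Table 3
strategies = {
    "nNO + TEM":      {"cost": 150_000, "effect": 198},
    "nNO + HSVM":     {"cost": 136_000, "effect": 274},
    "nNO/HSVM + TEM": {"cost": 209_000, "effect": 313},
}

# Build the cost-effectiveness frontier: sort by effect, drop simply dominated options
ordered = sorted(strategies.items(), key=lambda item: item[1]["effect"])
frontier = []
for name, values in ordered:
    while frontier and frontier[-1][1]["cost"] >= values["cost"]:
        frontier.pop()          # the previous option costs more yet identifies fewer cases
    frontier.append((name, values))

print("On the frontier:", [name for name, _ in frontier])
for (prev_name, prev_v), (name, v) in zip(frontier, frontier[1:]):
    icer = (v["cost"] - prev_v["cost"]) / (v["effect"] - prev_v["effect"])
    print(f"ICER of {name} vs {prev_name}: {icer:,.0f} EUR per additional PCD case")
```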
Although in the past TEM was considered the gold standard [13], it is now known to miss an important fraction of PCD patients [32], mainly those with biallelic mutations in DNAH11 gene [40] and those with specific ultrastructural abnormalities (nexin link defects) that are not easily detectable by standard TEM [41]. Furthermore, it requires access to a specialized lab with personnel experienced in staining and interpretation of TEM micrographs and consequently involves considerable resource allocation [42]. At the same time, TEM studies are usually time consuming and results are frequently obtained and communicated to patients considerably later than results of other tests thus contributing to patient distress [43]. HSVM is easier, considerably faster and cheaper than TEM as it is usually performed on the same day following nasal brushing and the equipment required consists of standard microscope, a high speed video camera and a standard computer loaded with specialized software. It has also been reported to be a highly sensitive and specific test [35] thus it significantly outperforms TEM as a confirmatory test both in terms of overall effectiveness and cost. However, extra caution is required with HSVM as it may be affected by observer subjectivity and non-PCD specific findings which may interfere with the motility interpretation [22]. Overall, the parallel performance of two highly specific and sensitive tests such as nNO and HSVM during the first step of the diagnostic algorithm, followed by confirmatory TEM in only the few cases of conflicting findings, results in the identification of most PCD patients and does not require the performance of the more expensive and time consuming TEM analysis for the largest part of the cohort of suspect patients. In this study we did not include diagnostic algorithms that included immunofluorescence (IF) and/or genetic testing for PCD. Although a recent study has reported the first diagnostic accuracy and cost estimates for immunofluorescence testing in PCD [44], the use of this test is still very limited (as it is performed only in a small number of few highly specialized centers around the world). Genetic testing, on the other hand, is available in many centers around the world. However, as yet, there is little standardization of procedures for the conduct and interpretation of results. Different centers may use different technologies and may not test for the same number of genes [45, 46]. Thus estimation of the effectiveness or the cost of genetic testing as diagnostic for PCD was not possible at this stage and it was not included in the diagnostic algorithms considered in our analysis. This approach is in line with the recent guidelines published by the ERS where genetic testing was recommended as a last step following abnormal TEM primarily for further characterization of the underlying defect or as a final diagnostic test if all other tests were inconclusive. For immunofluorescence there was no ERS recommendation towards its use as a diagnostic test given the scarcity of evidence [26]. The main strength of this study is that it makes use of evidence-based estimates and individual good quality studies on the diagnostic accuracy of nNO, TEM and HSVM and the prevalence of PCD among cohorts of referred suspect patients. 
With the use of Bayes' Theorem, it was possible to estimate the diagnostic effectiveness of sequential tests and to compare the effectiveness of diagnostic algorithms instead of simply comparing the effectiveness of isolated tests, as had been done in the past. In addition, our analysis of the costs involved in diagnostic testing followed standard approaches for economic analysis of healthcare procedures [28] and made use of the extensive literature on the effort, equipment and consumables involved in the performance of nNO [47, 48], HSVM [13, 35] and TEM [18]. Based on this evidence, we were able to calculate effectiveness and economic outcomes (number of PCD patients identified, total diagnostic costs) as well as robust CERs, ICERs and identify the cost-effectiveness frontier. Nonetheless, this study also has some limitations. In the main analysis, although our data on diagnostic accuracy are mostly based on meta-analyses of well conducted studies, these are characterized by a degree of heterogeneity [32, 33]. On the other hand, our data on diagnostic cost parameters are primarily based on realistic estimates of current market values, although these may not be uniform across all EU countries. The one-way sensitivity analyses for the diagnostic ICER for NO/HSVM+TEM demonstrates that our results are most sensitive to variations in HSVM sensitivity and PCD prevalence among suspect patients. A recent, large study on diagnostic accuracy of HSVM reported a sensitivity of 100%, which is in line with the value used in our model [49]. Nevertheless, it is possible that PCD prevalence among referred suspect patients varies considerably between countries, as different countries may utilize different diagnostic protocols and referral patterns [20, 50, 51]. Even so, these disparities between countries are expected to be reduced in the future with the increasing use of clinical scoring tools [52], the intercalation between PCD clinicians in international networking projects such as the BEAT-PCD COST project [53] and the establishment of European Reference Networks for rare diseases including PCD (ERN-LUNG) [54]. Most limitations of this work however, relate to the considerable uncertainty of the parameters used in the secondary analysis and for this reason the results of the basic and extended model are presented separately. As a result, caution is advised before generalizing the results of this study, especially those regarding the extended model. Another limitation of the extended model is that despite empirical evidence about various approaches for the treatment of PCD, at the moment there are no widely recognized PCD-specific treatment protocols. The efficacy of a few treatment approaches are now under investigation through randomized control trials, for example, those now underway on the effect of azithromycin for antibiotic prophylaxis [55]. Furthermore, there are no published estimates of the annual (or lifetime) cost of various options for treatment of PCD. Although we used credible sources to estimate patient associated cost [56] and cost of each procedure (resource cost) [57,58,59], we had to rely on our own experience with the disease to characterize the typical frequency of treatment (resource use). To address this limitation, the underlying uncertainty in each parameter was characterized and included in the model. 
Through Latin Hypercube sampling and Monte Carlo analysis, these uncertainties in individual parameters were propagated through the model and are reflected in the uncertainty in final model outputs. Evidence about treatment costs is especially weak. We found no evidence of the cost of treatment of PCD patients who remain undiagnosed; and only limited evidence about the cost of treatment of PCD patients who are properly diagnosed. A sensitivity analysis was conducted to determine whether differences in the overall costs of treatment of diagnosed and undiagnosed PCD patients affected the estimates of cost-effectiveness from the extended model. The overall order of diagnostic algorithms was not affected and nNO/HSVM+TEM was the most cost efficient algorithm in all scenarios. However, the magnitude of the difference in the cost effectiveness of the three algorithms was significantly affected, with nNO/HSVM+TEM becoming relatively more cost-effective when it was assumed that the cost of treating undiagnosed PCD patients was at least 3 times greater than the treatment cost for properly-diagnosed PCD patients. This highlights the importance of future studies which address the economic cost of treatment in PCD patients before and after diagnosis. We encountered a similar lack of data on the impact of PCD on life expectancy and the patients' valuation of health status (health utility). Currently, PCD is considered a disease characterized by normal or near-normal life span, although cases of premature mortality among PCD patients are reported in the literature [8, 60]. To date, no study has reported on patients' life expectancy and this lack of information could be attributed to the fact that PCD has been studied primarily in small cohorts in the pediatric setting. The recently established prospective international PCD registry [61], which now includes several thousands of pediatric and adult patients, is expected in the next few years to provide data on disease progression and life-expectancy. Likewise, to date no study has reported on health state utilities in PCD and thus we used in our calculations data on health utilities from mild Cystic Fibrosis patients that have been previously reported to have similar clinical severity with PCD [62]. The one-way sensitivity analyses in the extended model, which included treatment costs and outcomes, demonstrated that the most important parameter impacting the CER of nNO/HSVM+TEM was PCD health utility followed by loss of productivity, reduction in life expectancy and antibiotics cost. In order to further improve our understanding of the disease and better inform the development and improvement of guidelines for PCD diagnosis and treatment, future studies aiming to assess the real value of cost-of-illness, healthcare utilization estimates and health state utilities are urgently needed. Across the world, many PCD diagnostic centers follow a variety of algorithms for diagnosing PCD and, most likely, in some low income countries, there is a complete lack of specialized diagnostic testing. The results of this study suggest that a diagnostic algorithm which includes nNO during VC as a screening test followed by confirmatory HSVM identifies approximately 85% of PCD patients with a mean CER of €653per PCD case identified. 
The algorithm which maximizes the number of PCD patients identified involves parallel performance of nNO and HSVM as the first step, followed by TEM as a confirmatory test for the few cases where nNO and HSVM yield conflicting results, with a corresponding ICER of €2097 per additional PCD patient identified. Decision analysis methods and the evidence from this study can inform the dialogue on evidence-based guidelines for PCD diagnostic testing. Future studies in understudied aspects of PCD relating to quality of life, treatment efficiency and associated costs are urgently needed to help the better implementation of these guidelines across various healthcare systems. The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. ATS: CEAC: Cost Effectiveness Acceptability Curve CER: Cost Effectiveness Ratio ERN: ERS: HSVM: High Speed Video Microscopy ICER: Incremental Cost Effectiveness Ratio nNO: Nasal Nitric Oxide QALY: Quality Adjusted Life Years TEM: VC: Vellum Closure WTP: Willingness to pay Knowles MR, Daniels LA, Davis SD, Zariwala MA, Leigh MW. Primary ciliary dyskinesia. Recent advances in diagnostics, genetics, and characterization of clinical disease. Am J Respir Crit Care Med. 2013;188(8):913–22. Noone PG, Leigh MW, Sannuti A, Minnix SL, Carson JL, Hazucha M, Zariwala MA, Knowles MR. Primary ciliary dyskinesia: diagnostic and phenotypic features. Am J Respir Crit Care Med. 2004;169(4):459–67. Kurkowiak M, Zietkiewicz E, Witt M. Recent advances in primary ciliary dyskinesia genetics. J Med Genet. 2015;52(1):1–9. Magnin ML, Cros P, Beydon N, Mahloul M, Tamalet A, Escudier E, Clément A, Pointe L, Ducou H, Blanchon S. Longitudinal lung function and structural changes in children with primary ciliary dyskinesia. Pediatr Pulmonol. 2012;47(8):816–25. Honoré I, Burgel P. Primary ciliary dyskinesia in adults. Rev Mal Respir. 2016;33(2):165–189.3. Coren M, Meeks M, Morrison I, Buchdahl R, Bush A. Primary ciliary dyskinesia: age at diagnosis and symptom history. Acta Paediatr. 2002;91(6):667–9. Ellerman A, Bisgaard H. Longitudinal study of lung function in a cohort of primary ciliary dyskinesia. Eur Respir J. 1997;10(10):2376–2379.4-6. Yiallouros PK, Kouis P, Middleton N, Nearchou M, Adamidi T, Georgiou A, Eleftheriou A, Ioannou P, Hadjisavvas A, Kyriacou K. Clinical features of primary ciliary dyskinesia in Cyprus with emphasis on lobectomized patients. Respir Med. 2015;109(3):347–56. Vallet C, Escudier E, Roudot-Thoraval F, Blanchon S, Fauroux B, Beydon N, Boule M, Vojtek AM, Amselem S, Clément A, Tamalet A. Primary ciliary dyskinesia presentation in 60 children according to ciliary ultrastructure. Eur J Pediatr. 2013;172(8):1053–60. Davis SD, Ferkol TW, Rosenfeld M, Lee HS, Dell SD, Sagel SD, Milla C, Zariwala MA, Pittman JE, Shapiro AJ, Carson JL, Krischer JP, Hazucha MJ, Cooper ML, Knowles MR, Leigh MW. Clinical features of childhood primary ciliary dyskinesia by genotype and ultrastructural phenotype. Am J Respir Crit Care Med. 2015;191(3):316–24. Goutaki M, Meier AB, Halbeisen FS, Lucas JS, Dell SD, Maurer E, Casaulta C, Jurca M, Spycher BD, Kuehni CE. (2016) Clinical manifestations in primary ciliary dyskinesia: systematic review and meta-analysis. Eur Respir J. 2016;48(4):1081–95. Shoemark A, Dixon M, Corrin B, Dewar A. Twenty-year review of quantitative transmission electron microscopy for the diagnosis of primary ciliary dyskinesia. J Clin Pathol. 2012;65(3):267–71. 
P.K. was supported by the European Union's Seventh Framework Program EC-GA No. 305404 BESTCILIA. This study was supported by EU 7th Framework Program EC-GA No. 305404 BESTCILIA. The sponsors had no role or involvement in study design or in the collection, analysis and interpretation of data.
Respiratory Physiology Laboratory, Medical School, University of Cyprus, Nicosia, Cyprus: Panayiotis Kouis, George Giallouros & Panayiotis K. Yiallouros. Cyprus International Institute for Environmental and Public Health, Cyprus University of Technology, Limassol, Cyprus: Panayiotis Kouis & Stefania I. Papatheodorou. Department of Epidemiology, Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, USA: Stefania I. Papatheodorou. Department of Nursing, Cyprus University of Technology, Limassol, Cyprus: Nicos Middleton. Department of Electron Microscopy and Molecular Pathology, Cyprus Institute of Neurology and Genetics, Nicosia, Cyprus: Kyriacos Kyriacou. Cyprus School of Molecular Medicine, Nicosia, Cyprus. Tufts Center for the Study of Drug Development, Boston, USA: Joshua T. Cohen. Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, USA: John S. Evans. Shiakolas Educational Center of Clinical Medicine, Palaios Dromos Lefkosias-Lemesou 215/6, 2029, Aglantzia Nicosia, Cyprus.
Panayiotis Kouis, George Giallouros, Panayiotis K. Yiallouros.
PK performed the development of the decision tree model under the supervision of JSE, extracted relevant information from the published literature and prepared the first draft of the manuscript. SIP contributed to the model development, extraction of relevant information from the published literature and contributed intellectually towards the final version of the manuscript. NM and KK contributed to the interpretation of the findings and critically revised the final version of the manuscript. GY and JC assisted in the economic analysis and contributed towards the final version of the manuscript. PKY conceived the hypothesis of the manuscript, contributed to the model development and interpretation of the findings and contributed intellectually towards the final version of the manuscript. All authors read and approved the final manuscript.
Correspondence to Panayiotis Kouis.
BESTCILIA Diagnostic Algorithm for PCD. (PDF 188 kb) Technical Appendix. (DOCX 20 kb) Secondary Analysis. (DOCX 169 kb) Cost-effectiveness ANALYTICA model. (ANA 51 kb)
Kouis, P., Papatheodorou, S.I., Middleton, N. et al. Cost-effectiveness analysis of three algorithms for diagnosing primary ciliary dyskinesia: a simulation study. Orphanet J Rare Dis 14, 142 (2019). https://doi.org/10.1186/s13023-019-1116-3
Cost-effectiveness analysis Decision analysis Kartagener syndrome Rare pulmonary diseases
How can my character start a thieves' guild without being disruptive to the rest of the group? Background: D&D 5e with a steam punk flair. My protagonist is a changeling rogue (can assume the role of other people) with a small bit of magic (Arcane Trickster). My deity has requested that I seed a thieves' guild to act as a network of spies that will slowly grow as the protagonist progresses in their adventure. In this question, I would like to focus on how I could go about doing this without being disruptive to the main storyline, or others at the table. I want to do this for fun, but not detract from the fun others are having. On pages 186-187 the PHB discusses "Between Adventures": Between trips to dungeons and battles against ancient evils, adventurers need time to rest, recuperate, and prepare for their next adventure. Many adventurers also use this time to perform other tasks, such as crafting arms and armor, performing research, or spending their hard-earned gold. In some cases, the passage of time is something that occurs with little fanfare or description. When starting a new adventure, the DM might simply declare that a certain amount of time has passed and allow you to describe in general terms what your character has been doing. At other times, the DM might want to keep track of just how much time is passing as events beyond your perception stay in motion. While this doesn't explicitly talk about seeding an underworld super cult that you will one day use to take over the world, I think this would be the appropriate place for this activity so as not to be disruptive. What I don't know is how much time/money investment should go into this. From "Building a Stronghold" (DMG, p. 128) - thanks @John: A character can spend time between adventures building a stronghold. Before work can begin, the character must acquire a plot of land. If the estate lies within a kingdom or similar domain, the character will need a royal charter (a legal document granting permission to oversee the estate in the name of the crown), a land grant (a legal document bequeathing custody of the land to the character for as long as he or she remains loyal to the crown), or a deed (a legal document that serves as proof of ownership). Land can also be acquired by inheritance or other means. Royal charters and land grants are usually given by the crown as a reward for faithful service, although they can also be bought. Deeds can be bought or inherited. A small estate might sell for as little as 100 gp or as much as 1,000 gp. A large estate might cost 5,000 gp or more, if it can be bought at all. Once the estate is secured, a character needs access to building materials and laborers. The Building a Stronghold table shows the cost of building the stronghold (including materials and labor) and the amount of time it takes, provided that the character is using downtime to oversee construction. Work can continue while the character is away, but each day the character is away adds 3 days to the construction time.
\$\begin{array}{|l|r|r|} \hline \textbf{Stronghold} & \textbf{Cost} & \textbf{Time} \\ \hline \text{Abbey} & 50,000\,\text{gp} & 400\,\text{days} \\ \text{Guildhall, town or city} & 5,000\,\text{gp} & 60\,\text{days} \\ \text{Keep or small castle} & 50,000\,\text{gp} & 400\,\text{days} \\ \text{Noble estate with manor} & 25,000\,\text{gp} & 150\,\text{days} \\ \text{Outpost or fort} & 15,000\,\text{gp} & 100\,\text{days} \\ \text{Palace or large castle} & 500,000\,\text{gp} & 1,200\,\text{days} \\ \text{Temple} & 50,000\,\text{gp} & 400\,\text{days} \\ \text{Tower, fortified} & 15,000\,\text{gp} & 100\,\text{days} \\ \text{Trading post} & 5,000\,\text{gp} & 60\,\text{days} \\ \hline \end{array} \$ How can I do this within the rules of D&D 5e without being disruptive to the existing game? dnd-5e downtime CaffeineAddiction Sounds like your best bet is a combination of building a stronghold (your secret guild HQ) and running a business. Both are described, starting on page 127 of the Dungeon Masters Guide. The DM's guild has several 3rd party resources for players building organizations. They may be worth a look. Catsphrasc's Adventurers Guild Supplement has a guide to building such an adventurers guild which should be easy to modify into a thieves guild. But really this is all going to be up to your DM, so your best bet is to discuss it with them. He Who Rules As Intended
Constraint satisfaction problem (CSP) vs. satisfiability modulo theory (SMT); with a coda on constraint programming Does someone dare to attempt to clarify what's the relation of these fields of study or perhaps even give a more concrete answer at the level of problems? Like which includes which assuming some widely accepted formulations. If I got this correctly, when you go from SAT to SMT you're basically entering the field of CSP; vice-versa, if you limit CSP to booleans you're basically talking of SAT and maybe a few related problems like #SAT. I think this much is clear (e.g. cf Kolaitis and Vardi's chapter "A Logical Approach to Constraint Satisfaction" in Finite Model Theory and Its Applications by Grädel et al.), but what's less clear to me is when are the constraints "modulo a theory" and when aren't they? Does SMT always imply the theory uses only equality and inequality constraints are always in the broader field of CSP? As far as I can tell, you can often introduce slack variables, so the distinction [if it exists] is less than obvious. The relatively recent "Satisfiability handbook" (IOP Press 2009) gathers both SMT and CSP problems under its broad "satisfiability" umbrella, but given the way it is structured (chapters written by various authors) doesn't really help me with figuring out this. I would hope the terminology gets less confusing when you talk of constraint programming, which (by analogy with the term ''mathematical programming'') I hope involves minimizing/maximizing some objective function. The Wikipedia article on constraint programming is alas so vague that I can't really say if this framing happens though. What I can gather from Essentials of Constraint Programming by Frühwirth and Abdennadher (p. 56) is that a "constraint solver" usually provides more than just a satisfiability checker, with simplification etc. being important in practice. Although this is hardly an actual CS-theory research question, I don't expect good answers to this one on the undergraduate CS.SE site given what I saw at https://cs.stackexchange.com/questions/14946/distinguish-decision-procedure-vs-smt-solver-vs-theorem-prover-vs-constraint-sol (which contains a lot of words but not what I would consider a real answer, alas). soft-question sat terminology Fizz
add to this ASP. SMT/ ASP relatively recent developments. the previously separate fields are blending. see eg Hybrid Automated Reasoning Tools: from Black-box to Clear-box Integration / Balduccini, Lierler as rough recent survey. – vzn Feb 8 '15 at 17:50
SAT, CP, SMT, (much of) ASP all deal with the same set of combinatorial optimisation problems. However, they come at these problems from different angles and with different toolboxes. These differences are largely in how each approach structures information about the exploration of the search space. My working analogy is that SAT is machine code, while the others are higher level languages. Based on my thesis work on the structural theory of CSPs, I have come to believe that the notion of "clause structure" is essential in unifying all these paradigms and in understanding how they differ. Each clause of a SAT instance represents a forbidden partial assignment; a clause like $x_1 \lor \overline{x_2} \lor x_3$ forbids the partial assignment $\{(x_1,0),(x_2,1),(x_3,0)\}$ that simultaneously sets $x_1$ and $x_3$ to false and $x_2$ to true.
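To make the idea of a clause as a forbidden partial assignment concrete, here is a minimal Python sketch. It is my own illustration rather than anything from the original question or answer, and the representation chosen is arbitrary.

    # A literal is a pair (variable, sign): sign True means the positive literal.
    # The clause x1 OR (NOT x2) OR x3 forbids exactly the partial assignment
    # {x1: False, x2: True, x3: False}, i.e. the assignment falsifying every literal.
    clause = [("x1", True), ("x2", False), ("x3", True)]

    def forbidden_assignment(clause):
        # The unique minimal partial assignment that violates the clause.
        return {var: (not sign) for var, sign in clause}

    def violates(assignment, clause):
        # An assignment violates the clause iff it falsifies every literal,
        # so any extension of the forbidden partial assignment is also forbidden.
        return all(assignment.get(var) == (not sign) for var, sign in clause)

    print(forbidden_assignment(clause))   # {'x1': False, 'x2': True, 'x3': False}
    print(violates({"x1": False, "x2": True, "x3": False, "x4": True}, clause))  # True
    print(violates({"x1": True, "x2": True, "x3": False}, clause))               # False

Note that violates() returns True for any superset of the forbidden partial assignment, which is exactly the point made in the next paragraph about clauses implying many other clauses.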
The clause structure of a combinatorial optimisation problem is its representation as a SAT instance, using some suitable encoding. However, the clause structure includes all the forbidden partial assignments, not just the ones given at the start. The clause structure is therefore usually too large to manipulate directly: typically it has at least exponential size in the number of variables, and may be infinite. Hence, the clause structure has to be approximated with a limited amount of space. SAT/CP/SMT/ASP maintain and update a more-or-less implicit representation of an underlying clause structure. This is possible because if one partial assignment is known to be in the clause structure, then this implies that many other clauses are also present. For instance, the SAT clause above also forbids any partial assignment that contains it as a subset, so clauses like $x_1 \lor \overline{x_2} \lor x_3 \lor x_4$ and $x_1 \lor \overline{x_2} \lor x_3 \lor \overline{x_4} \lor x_5$ are all in the clause structure of that instance. An approximation of the clause structure is kept to narrow down the set of solutions, and to help determine whether this set is empty. During search some partial assignments may turn out not to be contained in any solution (even if they individually satisfy each of the constraints in the instance). These are known as nogoods, a term introduced by ("Mr GNU") Stallman and Sussman. A nogood clause is therefore in the clause structure and can be included in an approximation of the clause structure, as a compact representation of many clauses that can be pruned from the search for solutions. Adding nogoods to the approximate clause structure retains all the solutions, but better approximates those solutions. So the approximate clause structure usually changes as search progresses. Further, the way the problem is modelled in one of the combinatorial optimisation approaches affects the clause structure, often quite significantly. For instance, propositional variables can represent intervals such as $x \le 5$ or points such as $x = 5$. Hence there isn't a single general clause structure but one associated with each choice of representation, depending on what the singletons (literals) of the clause structure represent. Constraint programming (CP) was traditionally an AI discipline, with a focus on scheduling, timetabling, and combinatorial problems, and therefore has a central role for variables that can take more than just two values (but usually only finitely many). CP has emphasised efficient search and, motivated by the traditional applications, has given a central role to the all-different (injectivity) constraint, but has also developed efficient propagators for many other kinds of constraints. The formal definitions of CP have been around since at least Montanari's 1974 paper Networks of constraints, with precursors going back even earlier. This weight of history may have contributed to CP lagging behind other approaches in raw performance over the last decade. CP classically maintains an approximation of the complement of the clause structure, via a set of active domains for the variables. The aim is to eliminate values from the active domains, exploring the clause structure by trying to assign candidate values to variables and backtracking when necessary. Satisfiability modulo theories (SMT) came out of the verification community. Each theory in an SMT solver forms an implicit representation of potentially infinitely many SAT clauses. 
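To make the preceding notions of nogood recording and search concrete, here is a deliberately naive backtracking sketch in Python. It is my own illustration rather than anything from the answer or from a real CP/SMT solver; all names are arbitrary.

    def solve(variables, domains, ok, assignment=None, nogoods=None):
        # variables: list of names; domains: dict name -> iterable of values
        # ok(assignment): True if the partial assignment violates no constraint
        assignment = assignment or {}
        nogoods = nogoods if nogoods is not None else []
        if any(all(assignment.get(v) == val for v, val in ng) for ng in nogoods):
            return None                      # extends a recorded nogood: prune
        if not ok(assignment):
            nogoods.append(tuple(assignment.items()))   # record a new nogood
            return None
        if len(assignment) == len(variables):
            return assignment                # total assignment satisfying ok
        var = variables[len(assignment)]
        for val in domains[var]:
            result = solve(variables, domains, ok, {**assignment, var: val}, nogoods)
            if result is not None:
                return result
        return None

    # Example: three variables that must be pairwise different on domain {0, 1, 2}.
    names = ["a", "b", "c"]
    doms = {v: range(3) for v in names}
    all_diff = lambda asg: len(set(asg.values())) == len(asg)
    print(solve(names, doms, all_diff))      # e.g. {'a': 0, 'b': 1, 'c': 2}

A real solver would record much smaller, more general nogoods, propagate domains, and use far better data structures; the point here is only that recorded nogoods act as extra clauses that prune later parts of the search.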
The theories used with SMT and the constraints used in CP reflect their different historical applications. Most of the theories SMT considers have to do with integer-indexed arrays, real closed fields, linear orders, and suchlike; these arise from static analysis of programs (in computer aided verification) or when formalising mathematical proofs (in automated reasoning). In contrast, in timetabling and scheduling the injectivity constraint is central, and although the standard SMTLIB language has had an injectivity constraint since its inception in 2003 (via the distinct symbol), until 2010 SMT solvers only implemented distinct via a naive algorithm. At that stage the matching technique from the standard CP propagator for all-different was ported across, to great effect when applied to large lists of variables; see An Alldifferent constraint solver in SMT by Banković and Marić, SMT 2010. Moreover, most CP propagators are designed for problems with finite domains, whereas standard SMT theories deal with infinite domains (integers, and more recently reals) out of the box. SMT uses a SAT instance as the approximation of the clause structure, extracting nogood clauses from the theories as appropriate. A nice overview is Satisfiability Modulo Theories: Introduction and Applications by De Moura and Bjørner, doi:10.1145/1995376.1995394. Answer set programming (ASP) came out of logic programming. Due to its focus on solving the more general problem of finding a stable model, and also because it allows universal as well as existential quantification, ASP was for many years not competitive with CP or SMT. Formally, SAT is CSP on Boolean domains, but the focus in SAT on clause learning, good heuristics for conflict detection, and fast ways to backtrack are quite different to the traditional CSP focus on propagators, establishing consistency, and heuristics for variable ordering. SAT is usually extremely efficient, but for many problems huge effort is required to first express the problem as a SAT instance. Using a higher level paradigm like CP can allow a more natural expression of the problem, and then either the CP instance can be translated into SAT by hand, or a tool can take care of the translation. A nice overview of the nuts and bolts of SAT is On Modern Clause-Learning Satisfiability Solvers by Pipatsrisawat and Darwiche, doi:10.1007/s10817-009-9156-3. Now let's move on from generalities to present day specifics. Over the last decade some people in CP have started to focus on lazy clause generation (LCG). This is essentially a way to bolt CP propagators together using more flexible SMT-like techniques rather than the rather rigid active domains abstraction. This is useful because there is a long history of published CP propagators to efficiently represent and solve many kinds of problems. (Of course, a similar effect would be achieved by concerted effort to implement new theories for SMT solvers.) LCG has performance that is often competitive with SMT, and for some problems it may be superior. A quick overview is Stuckey's CPAIOR 2010 paper Lazy Clause Generation: Combining the power of SAT and CP (and MIP?) solving, doi:10.1007/978-3-642-13520-0_3. It is also worth reading the position paper of Garcia de la Banda, Stuckey, Van Hentenryck and Wallace, which paints a CP-centric vision of The Future of Optimization Technology, doi:10.1007/s10601-013-9149-z. 
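In the same spirit, one naive treatment of the distinct (all-different) constraint is to expand it into pairwise disequalities; whether or not this is exactly the naive algorithm the answer refers to, the expansion illustrates why dedicated all-different reasoning pays off on long variable lists. A minimal sketch of the expansion only (not a propagator), again my own illustration in plain Python rather than SMT-LIB:

    from itertools import combinations

    def pairwise_distinct(variables):
        # Expand distinct(x1, ..., xn) into the n(n-1)/2 binary constraints xi != xj.
        return [(a, "!=", b) for a, b in combinations(variables, 2)]

    print(pairwise_distinct(["x1", "x2", "x3"]))
    # [('x1', '!=', 'x2'), ('x1', '!=', 'x3'), ('x2', '!=', 'x3')]

A dedicated propagator instead reasons about all the variables at once, for example via the matching argument behind the standard CP all-different propagator that, as noted above, was ported into SMT solvers.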
As far as I can tell, much of the focus of recent SMT research seems to have shifted to applications in formal methods and formalised mathematics. An example is reconstructing proofs found by SMT solvers inside proof systems such as Isabelle/HOL, by building Isabelle/HOL tactics to reflect inference rules in SMT proof traces; see Fast LCF-Style Proof Reconstruction for Z3 by Böhmer and Weber at ITP 2010. The top ASP solvers have over the last few years been developed to become competitive with CP, SMT and SAT-only solvers. I'm only vaguely familiar with the implementation details that have allowed solvers such as clasp to be competitive so cannot really compare these with SMT and CP, but clasp explicitly advertises its focus on learning nogoods. Cutting across the traditional boundaries between these formalisms is translation from more abstract problem representations into lower level efficiently implementable formalisms. Several of the top ASP and CP solvers now explicitly translate their input into a SAT instance, which is then solved using an off-the-shelf SAT solver. In CP, the Savile Row constraint modelling assistant uses compiler design techniques to translate problems expressed in the medium level language Essence' into a lower level formalism, suitable for input to CP solvers such as Minion or MiniZinc. Savile Row originally worked with a CP representation as the low-level formalism but introduced SAT as a target in version 1.6.2. Moreover, the even higher-level language Essence can now be automatically translated into Essence' by the Conjure tool. At the same time, low level SAT-only solvers like Lingeling continue to be refined each year, most recently by alternating clause learning and in-processing phases; see the brief overview What's Hot in the SAT and ASP Competitions by Heule and Schaub in AAAI 2015. The analogy with the history of programming languages therefore seems appropriate. SAT is becoming a kind of "machine code", targeting a low-level model of exploration of the clauses in the clause structure. The abstract paradigms are becoming more like higher level computer languages, with their own distinct methodologies and applications they are good at addressing. Finally, the increasingly dense collection of links between these different layers is starting to resemble the compiler optimisation ecosystem. András Salamon
Tks for this very useful answer. – Xavier Labouze Jul 21 '15 at 13:12
Note: in the FOCS/STOC community a narrower definition of CSP is used. These CSPs are of the form CSP(L), "all CSP instances that can be expressed using a fixed set L of constraint relations". The all-different constraint doesn't fit into this framework, nor do problems that have tree-like structure. – András Salamon Jul 21 '15 at 16:08
Predictability for Heavy Rainfall over the Korean Peninsula during the Summer using TIGGE Model
Hwang, Yoon-Jeong; Kim, Yeon-Hee; Chung, Kwan-Young; Chang, Dong-Eon (p. 287)
The predictability of heavy precipitation over the Korean Peninsula is studied using THORPEX Interactive Grand Global Ensemble (TIGGE) data. The performance of the six ensemble models is compared through the inconsistency (or jumpiness) and Root Mean Square Error (RMSE) for MSLP, T850 and H500. Grand Ensemble (GE) of the three best ensemble models (ECMWF, UKMO and CMA) with equal weight and without bias correction is consisted. The jumpiness calculated in this study indicates that the GE is more consistent than each single ensemble model. Brier Score (BS) of precipitation also shows that the GE outperforms. The GE is used for a case study of a heavy rainfall event in Korean Peninsula on 9 July 2009. The probability forecast of precipitation using 90 members of the GE and the percentage of 90 members exceeding 90 percentile in climatological Probability Density Function (PDF) of observed precipitation are calculated. As the GE is excellent in possibility of potential detection of heavy rainfall, GE is more skillful than the single ensemble model and can lead to a heavy rainfall warning in medium-range. If the performance of each single ensemble model is also improved, GE can provide better performance.

Validation of Quality Control Algorithms for Temperature Data of the Republic of Korea
Park, Changyong; Choi, Youngeun (p. 299)
This study is aimed to validate errors for detected suspicious temperature data using various quality control procedures for 61 weather stations in the Republic of Korea. The quality control algorithms for temperature data consist of four main procedures (high-low extreme check, internal consistency check, temporal outlier check, and spatial outlier check). Errors of detected suspicious temperature data are judged by examining temperature data of nearby stations, surface weather charts, hourly temperature data, daily precipitation, and daily maximum wind direction. The number of detected errors in internal consistency check and spatial outlier check showed 4 days (3 stations) and 7 days (5 stations), respectively. Effective and objective methods for validation errors through this study will help to reduce manpower and time for conduct of quality management for temperature data.

The Error Structure of the CAPPI and the Correction of the Range Dependent Error due to the Earth Curvature
Yoo, Chulsang; Yoon, Jungsoo (p. 309)
It is important to characterize and quantify the inherent error in the radar rainfall to make full use of the radar rainfall. This study verified the error structure of the reflectivity and corrected the range dependent error in the CAPPI using a VPR (vertical profile of reflectivity) model. The error of the CAPPI to display the reflectivity data becomes bigger for the range longer than 100 km. This range dependent error, however, is significantly improved by corrected the CAPPI data using the VPR model.

A Numerical Study on Clear-Air Turbulence Events Occurred over South Korea
Min, Jae-Sik; Kim, Jung-Hoon; Chun, Hye-Yeong (p. 321)
Generation mechanisms of the three moderate-or-greater (MOG)-level clear-air turbulence (CAT) encounters over South Korea are investigated using the Weather Research and Forecasting (WRF) model. The cases are selected among the MOG-level CAT events occurred in Korea during 2002-2008 that are categorized into three different generation mechanisms (upper-level front and jet stream, anticyclonic flow, and mountain waves) in the previous study by Min et al. For the case at 0127 UTC 18 Jun 2003, strong vertical wind shear (0.025 $s^{-1}$) generates shearing instabilities below the enhanced upper-level jet core of the maximum wind speed exceeding 50 m $s^{-1}$, and it induces turbulence near the observed CAT event over mid Korea. For the case at 2330 UTC 22 Nov 2006, areas of the inertia instability represented by the negative absolute vorticity are formed in the anticyclonically sheared side of the jet stream, and turbulence is activated near the observed CAT event over southwest of Korea. For the case at 0450 UTC 16 Feb 2003, vertically propagating mountain waves locally trigger shearing instability (Ri < 0.25) near the area where the background Richardson number is sufficiently small (0.25 < Ri < 1), and it induces turbulence near the observed CAT over the Eastern mountainous region of South Korea.

The Characteristics of the Change of Hadley Circulation during the Late 20th Century in the Current AOGCMs
Shin, Sang-Hye; Chung, Il-Ung (p. 331)
The changes in the Hadley circulation during the second half of the 20th century were examined using observations and the 20C3M (Twentieth Century Climate in Coupled Models) simulations by the 21 IPCC AR4 models. Multi-model ensemble (MME) mean shows that the mean features of the Hadley circulation, such as the intensity, magnitude, and the seasonal variations, are very realistically reproduced, compared to the ERA40 reanalysis. But the long-term trends of the Hadley circulation in 20C3M MME are quite different to those of observations. The observed intensity of the Hadley cell is persistently enhanced, particularly during boreal winter. In comparison, the meridional overturning circulations reproduced in the MME mean remains invariant in time, and even weakened in boreal summer. This discrepancy between the ERA40 and 20C3M MME is consistently shown in the overall structure of the Hadley circulations, such as mass streamfunction, the velocity potential, the vertical shear of meridional wind, and the vertical velocity in the tropical region. This results indicate that the current climate models are skill-less to capture the long-term trend of Hadley circulation yet, and should be improved in simulation of the large-scale features to enhance the confidence level of future climate change projection.

The Improvement of Forecast Accuracy of the Unified Model at KMA by Using an Optimized Set of Physical Options
Lee, Juwon; Han, Sang-Ok; Chung, Kwan-Young (p. 345)
The UK Met Office Unified Model at the KMA has been operationally utilized as the next generation numerical prediction system since 2010 after it was first introduced in May, 2008. Researches need to be carried out regarding various physical processes inside the model in order to improve the predictability of the newly introduced Unified Model. We first performed a preliminary experiment for the domain ($170{\times}170$, 10 km, 38 layers) smaller than that of the operating system using the version 7.4 of the UM local model to optimize its physical processes. The result showed that about 7~8% of the improvement ratio was found at each stage by integrating four factors (u, v, th, q), and the final improvement ratio was 25%. Verification was carried out for one month of August, 2008 by applying the optimized combination to the domain identical to the operating system, and the result showed that the precipitation verification score (ETS, equitable threat score) was improved by 9%, approximately.

Evaluation of High-Resolution Hydrologic Components Based on TOPLATS Land Surface Model
Lee, Byong-Ju; Choi, Young-Jean (p. 357)
High spatio-temporal resolution hydrologic components can give important information to monitor natural disaster. The objective of this study is to create high spatial-temporal resolution gridded hydrologic components using TOPLATS distributed land surface model and evaluate their accuracy. For this, Andong dam basin is selected as study area and TOPLATS model is constructed to create hourly simulated values in every $1{\times}1km^2$ cell size. The observed inflow at Andong dam and soil moisture at Andong AWS site are collected to directly evaluate the simulated one. RMSEs of monthly simulated flow for calibration (2003~2006) and verification (2007~2009) periods show 36.87 mm and 32.41 mm, respectively. The hourly simulated soil moisture in the cell located Andong observation site for 2009 is well fitted with observed one at -50 cm. From this results, the cell based hydrologic components using TOPLATS distributed land surface model show to reasonably represent the real hydrologic condition in the field. Therefore the model driven hydrologic information can be used to analyze local water balance and monitor natural disaster caused by the severe weather.

Aerosol Size Distributions and Optical Properties during Severe Asian Dust Episodes Measured over South Korea in Spring of 2009-2010
Kang, Dong-Hun; Kim, Jiyoung; Kim, Kyung-Eak; Lim, Byung-Sook (p. 367)
Measurements of $PM_{10}$ mass concentration, aerosol light scattering and absorption coefficients as well as aerosol size distribution were made to characterize the aerosol physical and optical properties at the two Korean WMO/GAW regional stations, Anmyeondo and Gosan. Episodic cases of the severe Asian dust events occurred in spring of 2009-2010 were studied. Results in this study show that the aerosol size distributions and optical properties at both stations are closely associated with the dust source regions and the transport routes. According to the comparison of the $PM_{10}$ mass concentration at both stations, the aerosol concentrations at Anmyeondo are not always higher than those at Gosan although the distance from the dust source region to Anmyeondo is closer than that of Gosan. The result shows that the aerosol concentrations depend on the transport routes of the dust-containing airmass. The range of mass scattering efficiencies at Anmyeon and Gosan was 0.50~1.45 and $0.62{\sim}1.51m^2g^{-1}$, respectively. The mass scattering efficiencies are comparable to those of the previous studies by Clarke et al. (2004) and Lee (2009). It is noted that anthropogenic fine particles scatter more effectively the sunlight than coarse dust particles. Finally, we found that the aerosol size distribution and optical properties at Anmyeondo and Gosan show somewhat different properties although the samples for the same dust episodic events are compared.

The 17th Century Dry Period in the Time Series of the Monthly Rain and Snow Days of Seoul
Lim, Gyu-Ho; Choi, Eun-Ho; Koo, Kyosang; Won, Myoungsoo (p. 381)
The monthly number of days with rain or snow in Seoul extends backward to 1626 from the present. The number of rain and snow days are from the ancient records and combined with the modern precipitation records from 1907 to the present. There are two distinct and abrupt changes in the time series, which allow us to divide the entire period into three sub-periods of CR-I, CR-II, and MR. For each sub-period, we calculated the basic statistics and the associated distributions. The analysis proves Seoul, which may comprise East Asia when considering the lengthy period of dry condition, had dry climate for the Maunder Minimum when Europe experienced cold climate. We also note relationships between the rain days and sunspot numbers in various frequency bands.

Lee, Jeong-Soon (p. 387)
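One of the abstracts above reports forecast improvement in terms of the equitable threat score (ETS). As a quick reference, here is a minimal sketch of the textbook ETS computation from a 2x2 contingency table; it is not code from any of the listed papers, and the counts used are made up purely to show the arithmetic.

    def equitable_threat_score(hits, false_alarms, misses, correct_negatives):
        # ETS (Gilbert skill score): threat score adjusted for hits expected by chance.
        total = hits + false_alarms + misses + correct_negatives
        hits_random = (hits + misses) * (hits + false_alarms) / total
        return (hits - hits_random) / (hits + false_alarms + misses - hits_random)

    # Illustrative counts only, not taken from any of the studies above:
    print(round(equitable_threat_score(hits=50, false_alarms=20, misses=30,
                                        correct_negatives=900), 3))   # roughly 0.47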
Homotopy harnessing higher structures Timetable (HHHW04) Monday 3rd December 2018 to Friday 7th December 2018 09:00 to 09:50 Registration 09:50 to 10:00 Welcome from Christie Marr (INI Deputy Director) INI 1 10:00 to 11:00 Soren Galatius (University of Copenhagen) H_{4g-6}(M_g) The set of isomorphism classes of genus g Riemann surfaces carries a natural topology in which it may be locally parametrized by 3g-3 complex parameters. The resulting space is denoted M_g, the moduli space of Riemann surfaces, and is more precisely a complex orbifold of that dimension. The study of this space has a very long history involving many areas of mathematics, including algebraic geometry, group theory, and stable homotopy theory. The space M_g is not compact, essentially because a family of Riemann surface may degenerate into a non-smooth object, and may be compactified in several interesting ways. I will discuss a compactification due to Harvey, which looks like a compact real (6g-6)-dimensional manifold with corners, except for orbifold singularities. The combinatorics of the corner strata in this compactification may be encoded using graphs. Using this compactification, I will explain how to define a chain map from Kontsevich's graph complex to a chain complex calculating the rational homology of M_g. The construction is particularly interesting in degree 4g-6, where our methods give rise to many non-zero classes in H_{4g-6}(M_g), contradicting some predictions. This is joint work with Chan and Payne (arXiv:1805.10186). INI 1 11:00 to 11:30 Morning Coffee 11:30 to 12:30 Alexander Kupers (Harvard University) Cellular techniques in homological stability 1: general theory This is the first of two talks about joint work with S. Galatius and O Randal-Williams on applications higher-algebraic structures to homological stability. The main tool is cellular approximation of E_k-algebras, and we start with a discussion of the general theory of such cellular approximations. This culminates in a generic homological stability result. 12:30 to 13:30 Lunch at Churchill College 14:30 to 15:30 George Raptis (Universität Regensburg) The h-cobordism category and A-theory A fundamental link between Waldhausen's algebraic K-theory of spaces (A-theory) and manifold topology is given by an identification of A-theory in terms of stable homotopy and the stable smooth h-cobordism space. This important result has had many applications in the study of diffeomorphisms of manifolds. In more recent years, the theory of cobordism categories has provided a different approach to the study of diffeomorphism groups with spectacular applications. In collaboration with W. Steimle , we revisit the classical Waldhausen K-theory in light of these developments and investigate new connections and applications. In this talk, I will first discuss a cobordism-type model for A-theory, and then I will focus on the h-cobordism category, the cobordism category of h-cobordisms between smooth manifolds with boundary, and its relationship to the classical h-cobordism space of a compact smooth manifold. This is joint work with W. Steimle. 15:30 to 16:00 Afternoon Tea 16:00 to 17:00 Christopher Schommer-Pries (University of Notre Dame) The Relative Tangle Hypothesis I will describe recent progress on a non-local variant of the cobordism hypothesis for higher categories of bordisms embedded into finite dimensional Euclidean space. 
17:00 to 18:00 Welcome Wine Reception at INI 09:00 to 10:00 Wolfgang Lueck (Universität Bonn) On the stable Cannon Conjecture The Cannon Conjecture for a torsionfree hyperbolic group $G$ with boundary homeomorphic to $S^2$ says that $G$ is the fundamental group of an aspherical closed $3$-manifold $M$. It is known that then $M$ is a hyperbolic $3$-manifold. We prove the stable version that for any closed manifold $N$ of dimension greater or equal to $2$ there exists a closed manifold $M$ together with a simple homotopy equivalence $M \to N \times BG$. If $N$ is aspherical and $\pi_1(N)$ satisfies the Farrell-Jones Conjecture, then $M$ is unique up to homeomorphism. This is joint work with Ferry and Weinberger. 10:00 to 11:00 Thomas Willwacher (ETH Zürich) Configuration spaces of points and real Goodwillie-Weiss calculus The manifold calculus of Goodwillie and Weiss proposes to reduce questions about embedding spaces of manifolds to questions about mapping spaces of the (little-disks modules of) configuration spaces of points on those manifolds. We will discuss real models for these configuration spaces. Furthermore, we will see that a real version of the aforementioned mapping spaces is computable in terms of graph complexes. In particular, this yields a new tool to study diffeomorphism groups and moduli spaces. Cellular techniques in homological stability 2: mapping class groups This is the second of two talks about joint work with S. Galatius and O Randal-Williams on applications higher-algebraic structures to homological stability. In it we apply the general theory to the example of mapping class groups of surfaces. After reproving Harer's stability result, I will explain how to prove the novel phenomenon of secondary homological stability; there are maps comparing the relative homology groups of the stabilization map for different genus and there are isomorphisms in a range tending to infinity with the genus. 14:30 to 15:30 Victor Turchin (Kansas State University) Embeddings, operads, graph-complexes I will talk about the connection between the following concepts: manifold calculus, little discs operads, embedding spaces, problem of delooping, relative rational formality of the little discs, and graph-complexes. I will review main results on this connection by Boavida de Brito and Weiss, my coauthors and myself. At the end I will briefly go over the current joint work in progress of Fresse, Willwacher, and myself on the rational homotopy type of embedding spaces. Co-authors: Gregory Arone (Stockholm University), Julien Ducoulombier (ETH, Zurich), Benoit Fresse (University of Lille), Pascal Lambrechts (University of Louvain), Paul Arnaud Songhafouo Tsopméné (University of Regina), Thomas Willwacher (ETH, Zurich). 16:00 to 17:00 Nathalie Wahl (University of Copenhagen) Homotopy invariance in string topology In joint work with Nancy Hingston, we show that the Goresky-Hingston coproduct, just like the Chas-Sullivan product, is homotopy invariant. Unlike the Chas-Sullivan product, this coproduct is a "compactified operation", coming from a certain compactification of the moduli space of Riemann surfaces. I'll give an idea of the ingredients used in the proof. 09:00 to 10:00 Cary Malkiewich (Binghamton University) Periodic points and topological restriction homology I will talk about a project to import trace methods, usually reserved for algebraic K-theory computations, into the study of periodic orbits of continuous dynamical systems (and vice-versa). 
Our main result so far is that a certain fixed-point invariant built using equivariant spectra can be "unwound" into a more classical invariant that detects periodic orbits. As a simple consequence, periodic-point problems (i.e. finding a homotopy of a continuous map that removes its n-periodic orbits) can be reduced to equivariant fixed-point problems. This answers a conjecture of Klein and Williams, and allows us to interpret their invariant as a class in topological restriction homology (TR), coinciding with a class defined earlier in the thesis of Iwashita and separately by Luck. This is joint work with Kate Ponto. 10:00 to 11:00 Christine Vespa (University of Strasbourg) Higher Hochschild homology as a functor Higher Hochschild homology generalizes classical Hochschild homology for rings. Recently, Turchin and Willwacher computed higher Hochschild homology of a finite wedge of circles with coefficients in the Loday functor associated to the ring of dual numbers over the rationals. In particular, they obtained linear representations of the groups Out(F_n) which do not factorize through GL(n,Z). In this talk I will explain how viewing higher Hochschild homology of a finite wedge of circles as a functor on the category of free groups provides a conceptual framework which allows powerful tools such as exponential functors and polynomial functors to be brought to bear. In particular, this allows the generalization of the results of Turchin and Willwacher; this gives rise to new linear representations of Out(F_n) which do not factorize through GL(n,Z). (This is joint work with Geoffrey Powell.) 11:30 to 12:30 Fabian Hebestreit (University of Bonn) The homotopy type of algebraic cobordism categories Co-authors: Baptiste Calmès (Université d'Artois), Emanuele Dotto (RFWU Bonn), Yonatan Harpaz (Université Paris 13), Markus Land (Universität Regensburg), Kristian Moi (KTH Stockholm), Denis Nardin (Université Paris 13), Thomas Nikolaus (WWU Münster), Wolfgang Steimle (Universität Augsburg). Abstract: I will introduce cobordism categories of Poincaré chain complexes, or more generally of Poincaré objects in any hermitian quasi-category C. One interest in such algebraic cobordism categories arises as they receive refinements of Ranicki's symmetric signature in the form of functors from geometric cobordism categories à la Galatius-Madsen-Tillmann-Weiss. I will focus, however, on a more algebraic direction. The cobordism category of C can be delooped by an iterated Q-construction, that is compatible with Bökstedt-Madsen's delooping of the geometric cobordism category. The resulting spectrum is a derived version of Grothendieck-Witt theory and I will explain how its homotopy type can be computed in terms of the K- and L-Theory of C. 13:30 to 18:00 Free afternoon 09:00 to 10:00 Alexander Berglund (Stockholm University) Rational homotopy theory of automorphisms of manifolds I will talk about differential graded Lie algebra models for automorphism groups of simply connected manifolds M. Earlier results by Ib Madsen and myself on models for block diffeomorphisms combined with rational models for Waldhausen's algebraic K-theory of spaces suggest a model for the group of diffeomorphisms homotopic to the identity, valid in the so-called pseudo-isotopy stable range. If time admits, I will also discuss how to express the generalized Miller-Morita-Mumford classes in the cohomology of BDiff(M) in terms of these models. 
10:00 to 11:00 Johannes Ebert (Westfalische Wilhelms-Universitat Munster) Cobordism categories, elliptic operators and positive scalar curvature We prove that a certain collection of path components of the space of metrics of positive scalar curvature on a high-dimensional sphere has the homotopy type of an infinite loop space, generalizing a theorem by Walsh. The proof uses an version of the surgery method by Galatius and Randal--Williams to cobordism categories of manifolds equipped with metrics of positive scalar curvature. Moreover, we prove that the secondary index invariant of the spin Dirac operator is an infinite loop map. The proof of that fact uses a generalization of the Atiyah--Singer index theorem to spaces of manifolds. (Joint work with Randal--Williams) 11:30 to 12:30 Ben Knudsen (Harvard University) Configuration spaces and Lie algebras away from characteristic zero There is a close connection between the theory of Lie algebras and the study of additive invariants of configuration spaces of manifolds, which has been exploited in many calculations of rational homology. We begin the computational exploration of this connection away from characteristic zero, exhibiting a spectral sequence converging to the p-complete complex K-theory of configuration spaces---more generally, to their completed Morava E-(co)homology---and we identify its second page in terms of an algebraic homology theory for Lie algebras equipped with certain power operations. We construct a computationally accessible analogue of the classical Chevalley--Eilenberg complex for these Hecke Lie algebras, and we use it to perform a number of computations. This talk is based on joint work in progress with Lukas Brantner and Jeremy Hahn. 14:30 to 15:00 Manuel Krannich (University of Cambridge) Contributed talk - Mapping class groups of highly connected manifolds The group of isotopy classes of diffeomorphisms of a highly connected almost parallelisable manifold of even dimension 2n>4 has been computed by Kreck in the late 70's. His answer, however, left open two extension problems, which were later understood in some particular dimensions, but remained unsettled in general. In this talk, I will explain how to resolve these extension problems in the case of n being odd, resulting in a complete description of the mapping class group in question in terms of an arithmetic group and the cokernel of the stable J-homomorphism. 15:00 to 15:30 Rachael Boyd (Norwegian University of Science and Technology) Contributed Talk - The low dimensional homology of Coxeter groups Coxeter groups were introduced by Tits in the 1960s as abstractions of the finite reflection groups studied by Coxeter. Any Coxeter group acts by reflections on a contractible complex, called the Davis complex. This talk focuses on a computation of the first three integral homology groups of an arbitrary Coxeter group using an isotropy spectral sequence argument: the answer can be phrased purely in terms of the original Coxeter diagram. I will give an introduction to Coxeter groups and the Davis complex before outlining the proof. 16:00 to 16:30 Csaba Nagy (University of Melbourne) Contributed Talk - The Sullivan-conjecture in complex dimension 4 The Sullivan-conjecture claims that complex projective complete intersections are classified up to diffeomorphism by their total degree, Euler-characteristic and Pontryagin-classes. Kreck and Traving showed that the conjecture holds in complex dimension 4 if the total degree is divisible by 16. 
In this talk I will present the proof of the remaining cases. It is known that the conjecture holds up to connected sum with the exotic 8-sphere (this is a result of Fang and Klaus), so the essential part of our proof is understanding the effect of this operation on complete intersections. This is joint work with Diarmuid Crowley. 16:30 to 17:00 Danica Kosanović (Max-Planck-Institut für Mathematik, Bonn) Contributed talk - Extended evaluation maps from knots to the embedding tower The evaluation maps from the space of knots to the associated embedding tower are conjectured to be universal knot invariants of finite type. Currently such invariants are known to exist only over the rationals (using the existence of Drinfeld associators) and the question of torsion remains wide open. On the other hand, grope cobordisms are certain operations in ambient 3-space producing knots that share the same finite type invariants and give a geometric explanation for the appearance of Lie algebras and graph complexes. I will explain how grope cobordisms and an explicit geometric construction give paths in the various levels of the embedding tower. Taking components recovers the result of Budney-Conant-Koytcheff-Sinha, showing that these invariants are indeed of finite type. This is work in progress joint with Y. Shi and P. Teichner. 18:15 to 19:15 Reception at Cambridge University Press Bookshop 19:30 to 20:30 Formal Dinner at Christs College (Hall 110) 10:00 to 11:00 Andre Henriques (University of Oxford) The complex cobordism 2-category and its central extensions I will introduce a symmetric monoidal 2-category whose objects are 0-manifolds, whose 1-morphisms are 1-dimensional smooth cobordisms, and whose 2-morphisms are Riemann surfaces with boundary and cusps. I will introduce a certain central extension by ℝ₊ and explain its relevance in chiral conformal field theory. Finally, I will explain the state of my understanding on the question of classification of such extensions by ℝ₊. 11:30 to 12:30 Sam Nariman (Northwestern University) Topological and dynamical obstructions to extending group actions. For any 3-manifold $M$ with torus boundary, we find finitely generated subgroups of $\Diff_0(\partial M)$ whose actions do not extend to actions on $M$; in many cases, there is even no action by homeomorphisms. The obstructions are both dynamical and cohomological in nature. We also show that, if $\partial M = S^2$, there is no section of the map $\Diff_0(M) \to \Diff_0(\partial M)$. This answers a question of Ghys for particular manifolds and gives tools for progress on the general program of bordism of group actions. This is a joint work with Kathryn Mann. 13:30 to 14:30 Ryan Budney (University of Victoria) Some prospects with the splicing operad Roughly six years ago I described an operad that acts on spaces of `long knots'. This is the space of smooth embeddings of R^j into R^n. The embeddings are required to be standard (linear) outside of a disc, and come equipped with a trivialisation of their normal bundles. This splicing operad gives a remarkably compact description of the homotopy-type of the space of classical long knots (j=1, n=3), that meshes well with the machinery of 3-manifold theory: JSJ-decompositions and geometrization. What remains to be seen is how useful this splicing operad might be when n is larger than 3. I will talk about what is known at present, and natural avenues to explore. 
14:30 to 15:30 Diarmuid Crowley (University of Melbourne) Relative kappa-classes Diff(D^n), the space of diffeomorphisms of the n-disc fixed near the boundary, has rich rational topology. For example, Weiss's discovery of ``surreal'' Pontrjagin classes leads to the existence of rationally non-trivial homotopy classes in BDiff(D^n). For any smooth n-manifold M, extension by the identity induces a map BDiff(D^n) \to BDiff(M). In this talk I will report on joint work with Wolfgang Steimle and Thomas Schick, where we consider the problem of computing the image of the ``Weiss classes'' under the maps on homotopy and homology induced by extension. This problem naturally leads one to consider relative kappa-classes. Via relative kappa-classes, we show that the maps induced by extension are rationally non-trivial for a wide class of manifolds M, including aspherical manifolds (homology, hence also homotopy) and stably parallelisable manifolds (homotopy). When M is aspherical, our arguments rely on vanishing results for kappa-classes due to Hebestreit, Land, Lueck and Randal-Williams.
Piecewise modelling and parameter estimation of repairable system failure rate Chong Peng ORCID: orcid.org/0000-0003-4219-38271, Guangpeng Liu1 & Lun Wang1 Lifetime failure data usually present a bathtub-shaped failure rate, and the random failure phase dominates the life cycle. Relatively large errors often arise when a single distribution model is used for analysis and modelling. Therefore, a bathtub-shaped reliability model based on a piecewise intensity function was proposed for repairable systems. Parameter estimation was studied by using the maximum likelihood method in conjunction with two-stage Weibull process interval estimation and the Artificial Bee Colony Algorithm. The proposed model and estimation approach fit the failure rate under the bathtub curve very well, adequately reflect the duration of the random failure phase, and guarantee the feasibility of quantitative analysis. Moreover, the critical points of the bathtub curve can be estimated. At last, the proposed model was applied in the reliability analysis of a repairable Computer Numerical Control system, and the validity of the proposed model and its parameter estimation method was verified. Reliability modelling is a process in which the distribution curve of the system failure rate is fitted based on lifetime failure data generated in system life cycle tests or system operation (Myers 2010). The failure rate distribution curve can reveal the system's failure mechanism or the specific use phase of the system, which is conducive to the prediction and control of failures and to carrying out predictive maintenance, reducing unexpected failures (Lin and Tseng 2005). Furthermore, quantitative fitting of the distribution curve is also the premise for carrying out reliability analysis, design and testing effectively. Therefore, it is important to analyze the variation trend of the failure rate in a quantitative way for the purpose of reliability modelling. At present, common distribution models for fitting the failure rate include the exponential, normal, lognormal and Weibull distributions. Among them, the exponential distribution deals with the situation where the failure rate remains unchanged, and its function is relatively simple. The Weibull distribution is the most widely applied since it is able to fit a progressive increase or decrease of the failure rate effectively by changing its parameters. Some scholars have improved the exponential distribution, proposing the exponential-geometric distribution (Adamidis and Loukas 1998), the exponentiated exponential distribution (Gupta and Kundu 1999), the exponential-Poisson distribution (Kus 2007) and the exponentiated exponential-Poisson distribution (Barreto-Souza and Cribari-Neto 2009); these improved models also describe progressively increasing or decreasing failure rates. All of these models use monotonic functions to model the failure patterns of repairable systems. However, lifetime failure data are often non-monotonic and have a bathtub-shaped failure rate, which can be divided into three phases—early failure, random failure, and wear-out failure. For this reason, many scholars have studied this phenomenon. Chen (2000) put forward a new two-parameter Weibull distribution model, in which the failure rate function followed the trend of the bathtub curve when the shape parameter satisfied a certain condition. Later, Xie et al.
(2002) and Zhang (2004) expanded the two-parameter model above by introducing a third parameter, and independently proposed three-parameter Weibull distribution models. The results indicated that those models can fit the trend of the bathtub curve more easily and more accurately than traditional models. Lai et al. (2003), based on the Weibull distribution and the Extreme Value Type I distribution, proposed an improved Weibull distribution model, which also described the trend of the bathtub curve very well; they completed the model's parameter estimation by means of the Weibull probability plot. El-Gohary et al. (2013) introduced a new distribution, the generalized Gompertz distribution, as a generalization of the exponential distribution; the generalized Gompertz and generalized exponential distributions have a bathtub-shaped failure rate depending on the shape parameter, and the maximum likelihood estimators of the parameters were derived and assessed using a simulation study. Wang et al. (2015) proposed a new four-parameter finite-interval lifetime model to describe the bathtub curve. Cordeiro et al. (2014) extended the modified Weibull distribution and established a five-parameter Weibull distribution; applying it to failure-censored data from industrial devices and using a Newton–Raphson type algorithm for parameter estimation, they found that the extended model fitted the bathtub curve very well. The above models fit the trend of a bathtub-shaped failure rate well, but the random failure phase is not described in detail, and only a single critical point, between the early failure phase and the random failure phase, is included. In fact, the random failure phase is the main part of the life cycle for most repairable systems, so a single distribution model may be inadequate for analysis and modelling, and relatively large errors may occur. Moreover, for a model with many unknown parameters it is difficult to solve the maximum likelihood equations directly. Therefore, a bathtub-shaped reliability model based on a piecewise intensity function was proposed for repairable systems. According to the characteristics of the failure rate in the three phases of the bathtub curve, a non-homogeneous Poisson process (NHPP) and a homogeneous Poisson process (HPP) were used to fit the failure rate of the different phases. In view of the complexity of parameter estimation, the parameters were estimated by combining the maximum likelihood method with the artificial bee colony algorithm. At last, the approach was applied and verified through the reliability analysis of a repairable Computer Numerical Control (CNC) system. The remainder of the paper is organized as follows. "Set up piecewise model" section builds up a piecewise model for the bathtub curve. "Parameter estimation" section studies the parameter estimation of the model. "Application" section gives an example application for verifying the proposed method. Finally, the paper is concluded. Set up piecewise model Currently, there are two main methods for studying the reliability of repairable systems: the Counting Process and the Markov Process (Rausand and Høyland 2004). The Markov Process is mainly targeted at multi-state problems of repairable systems. The Counting Process focuses on two states, normal operation and fault, and records the system's normal operating time together with the occurrence times and number of failures, which is consistent with the failure-rate trend studied in this research.
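From the counting-process point of view, the observed data are simply the successive failure epochs of the system, generated by some underlying intensity function. As an added illustration that is not part of the original paper, the sketch below shows how such failure times can be simulated for any bounded intensity, including a bathtub-shaped one, using Lewis–Shedler thinning; Python with numpy is assumed, and the intensity used in the demo is made up purely for illustration.

import numpy as np

def simulate_nhpp(intensity, t_end, lam_max, rng=None):
    """Simulate event times of an NHPP on [0, t_end] by Lewis-Shedler thinning.

    intensity : callable t -> lambda(t), required to satisfy lambda(t) <= lam_max.
    """
    rng = np.random.default_rng() if rng is None else rng
    times, t = [], 0.0
    while True:
        # candidate inter-arrival time from a homogeneous process with rate lam_max
        t += rng.exponential(1.0 / lam_max)
        if t > t_end:
            break
        # accept the candidate with probability lambda(t) / lam_max
        if rng.uniform() < intensity(t) / lam_max:
            times.append(t)
    return np.array(times)

def bathtub_intensity(t):
    """Illustrative bathtub-shaped intensity (all numbers are made up for this demo)."""
    early = 0.05 * (t + 1.0) ** -0.7   # decreasing early-failure term
    random_phase = 0.002               # constant random-failure level
    wear = 1.0e-8 * t ** 1.5           # increasing wear-out term
    return early + random_phase + wear

failure_times = simulate_nhpp(bathtub_intensity, t_end=2000.0, lam_max=0.06)
print(len(failure_times), failure_times[:5])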
Consequently, Counting Process was selected for piecewise reliability modelling of trend of bathtub curve. Process is as follows: Early failure phase At early failure phase, intensity function presents a decline trend. Due to the defects in the process of selection, manufacture and assembly of components, failure rate of system is high in the initial phase, but decreases rapidly as the run time increases and maintenance is provided. Therefore, we assume that the repair process at early failure phase is minimal repair, NHPP is used for modelling, intensity function complies with the most common Weibull process, and b 1 < 1, namely: $$\omega_{1} (t) = a_{1} b_{1} t^{{b_{1} - 1}}$$ Random failure phase At random failure phase, early defects of a system have been corrected, and failure intensity is tending towards stability. Thus, we assume that the repair process at random failure phase is perfect repair, HPP and exponential distribution are adopted for modelling, and intensity function remains unchanged (as a constant): $$\omega_{2} (t) = \omega_{1} (t_{0} ) = a_{1} b_{1} t_{0}^{{b_{1} - 1}}$$ where, t 0 is the critical point between early failure phase and random failure phase. Wear-out failure phase As run time lapses, the system slowly enters wear-out failure phase due to loss of components and mechanical fatigue, and its intensity function is increasing. Just like early failure phase, we assume that the repair process at wear-out failure phase is minimal repair, NHPP is used for modelling, and intensity function complies with Weibull process. Thus, the system's intensity function is: $$\omega_{3} (t) = a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (t - t_{1} )^{{b_{2} - 1}}$$ where, t 1 is the critical point between random failure phase and wear-out failure phase. To sum up, the system's intensity function throughout the life cycle is: $$\omega (t) = \left\{ {\begin{array}{*{20}l} {a_{1} b_{1} t^{{b_{1} - 1}} } \hfill & \quad {t \le t_{0} } \hfill \\ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} } \hfill & \quad {t_{0} < t \le t_{1} } \hfill \\ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (t - t_{1} )^{{b_{2} - 1}} } \hfill & \quad {t > t_{1} } \hfill \\ \end{array} } \right.$$ Then, the system's cumulative intensity function is: $$W_{c} (t) = \int_{0}^{t} {\omega (u)du} = \left\{ {\begin{array}{*{20}l} {a_{1} t^{{b_{1} }} } \hfill & \quad {t \le t_{0} } \hfill \\ {a_{1} t_{0}^{{b_{1} }} + a_{1} b_{1} t_{0}^{{b_{1} - 1}} (t - t_{0} )} \hfill & \quad {t_{0} < t \le t_{1} } \hfill \\ {a_{1} t_{0}^{{b_{1} }} + a_{1} b_{1} t_{0}^{{b_{1} - 1}} (t - t_{0} ) + a_{2} (t - t_{1} )^{{b_{2} }} } \hfill & \quad {t > t_{1} } \hfill \\ \end{array} } \right.$$ There are a lot of methods for parameter estimation of reliability model, such as maximum likelihood estimation, moment estimation, least square method, and Bayesian method. Among these methods, maximum likelihood estimation is the most widely used owing to its good theoretical basis and high estimation accuracy. Therefore, maximum likelihood estimation was applied for parameter estimation of the proposed reliability model. First, the likelihood function shall be determined. Based on the reliability model above, the cumulative distribution function of the ith Mean Time Between Failure (MTBF) is: $$\begin{aligned} F(S_{i} \left| {S_{i - 1} } \right.) 
& = \frac{{F(S_{i} ) - F(S_{i - 1} )}}{{1 - F(S_{i - 1} )}} \\ & = \frac{{\exp [ - W(S_{i - 1} )] - \exp [ - W(S_{i} )]}}{{\exp [ - W(S_{i - 1} )]}} \\ & = 1 - \exp [ - W(S_{i} ) + W(S_{i - 1} )] \\ \end{aligned}$$ where, S i is the moment when the ith failure occurs. Then, the conditional probability density function of S i is: $$f\left( {S_{i} \left| {S_{i - 1} } \right.} \right) = \omega (S_{i} )\exp \left[ { - W(S_{i} ) + W(S_{i - 1} )} \right]$$ Equations (4) and (5) are plugged into (7), and we obtain: When S i ≤ t 0, $$f(S_{i} \left| {S_{i - 1} } \right.) = a_{1} b_{1} S_{i}^{{b_{1} - 1}} \exp \left( { - a_{1} S_{i}^{{b_{1} }} + a_{1} S_{i - 1}^{{b_{1} }} } \right)$$ When S i−1 < t 0 ≤ S i < t 1, $$f(S_{i} \left| {S_{i - 1} } \right.) = a_{1} b_{1} t_{0}^{{b_{1} - 1}} \exp \left[ { - a_{1} (1 - b_{1} )t_{0}^{{b_{1} }} - a_{1} b_{1} t_{0}^{{b_{1} - 1}} S_{i} + a_{1} S_{i - 1}^{{b_{1} }} } \right]$$ When t 0 < S i−1 < S i ≤ t 1, $$f(S_{i} \left| {S_{i - 1} } \right.) = a_{1} b_{1} t_{0}^{{b_{1} - 1}} \exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} )} \right]$$ When S i −1 < t 1 < S i , $$f(S_{i} \left| {S_{i - 1} } \right.) = \left[ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (S_{i} - t_{1} )^{{b_{2} - 1}} } \right]\exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} ) - a_{2} (S_{i} - t_{1} )^{{b_{2} }} } \right]$$ When t 1 < S i-1 < S i , $$\begin{aligned} f(S_{i} \left| {S_{i - 1} } \right.) & = \left[ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (S_{i} - t_{1} )^{{b_{2} - 1}} } \right] \\ & \quad \times \exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} ) \, - a_{2} (S_{i} - t_{1} )^{{b_{2} }} + a_{2} (S_{i - 1} - t_{1} )^{{b_{2} }} } \right] \\ \end{aligned}$$ When the system's failure time data is Type-II censored data, the likelihood function is: $$\begin{aligned} L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) & = \prod\limits_{i = 1}^{n} {f(S_{i} \left| {S_{i - 1} } \right.)} \\ & = \prod\limits_{i = 1}^{k} {a_{1} b_{1} S_{i}^{{b_{1} - 1}} \exp \left( { - a_{1} S_{i}^{{b_{1} }} + a_{1} S_{i - 1}^{{b_{1} }} } \right)} \times a_{1} b_{1} t_{0}^{{b_{1} - 1}} \\ & \quad \times \exp \left[ { - a_{1} (1 - b_{1} )t_{0}^{{b_{1} }} - a_{1} b_{1} t_{0}^{{b_{1} - 1}} S_{k + 1} + a_{1} S_{k}^{{b_{1} }} } \right] \\ & \quad \times \prod\limits_{i = k + 2}^{l} {a_{1} b_{1} t_{0}^{{b_{1} - 1}} \exp [ - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} )]} \\ & \quad \times \left[ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (S_{l + 1} - t_{1} )^{{b_{2} - 1}} } \right] \\ & \quad \times \exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{l + 1} - S_{l} ) - a_{2} (S_{l + 1} - t_{1} )^{{b_{2} }} } \right] \\ & \quad \times \prod\limits_{i = l + 2}^{n} {\left\{ {\left[ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (S_{i} - t_{1} )^{{b_{2} - 1}} } \right]} \right.} \\ & \quad \left. { \times \exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} ) - a_{2} (S_{i} - t_{1} )^{{b_{2} }} + a_{2} (S_{i - 1} - t_{1} )^{{b_{2} }} } \right]} \right\} \\ \end{aligned}$$ where n is the total number of failures, θ = (a1, b1, t0, a2, b2, t1, ρ), S k < t 0 < S k+1 < ⋯ < S l < t 1 < S l+1. When the system's failure time data is Type-I censored data, the likelihood function is: $$L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) = \prod\limits_{i = 1}^{n} {f(S_{i} \left| {S_{i - 1} } \right.)} R(t_{s} \left| {S_{n} } \right.)$$ Where t s is censoring time, $$\begin{aligned} R(t_{s} \left| {S_{n} } \right.) 
& = \exp [ - W_{c} (t_{s} ) + W_{c} (S_{n} )] \\ & = \exp [ - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (t_{s} - S_{n} ) - a_{2} (t_{s} - t_{1} )^{{b_{2} }} + a_{2} (S_{n} - t_{1} )^{{b_{2} }} ] \\ \end{aligned}$$ To sum up, the likelihood function is: $$L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) = \prod\limits_{i = 1}^{n} {f(S_{i} \left| {S_{i - 1} } \right.)} \delta R(t_{s} \left| {S_{n} } \right.)$$ Where δ is defined as: $$\delta = \left\{ {\begin{array}{*{20}l} 0 \hfill & \quad {{\text{Type-II}}\,{\text{censored}}\,{\text{data}}} \hfill \\ 1 \hfill & \quad {{\text{Type I}}\,{\text{censored}}\,{\text{data}}} \hfill \\ \end{array} } \right.$$ Given that the specific equation of likelihood function is related to the interval of failure time data where t 0 and t 1 are located, and it is difficult to solve the complicated equation of likelihood function by calculating logarithmic derivative. Hence, artificial bee colony algorithm was used for parameter optimization to maximize likelihood in accordance with the piecewise processing of t 0 and t 1. The optimization model is as follows: $$\begin{aligned} & \mathop{\hbox{max} }\limits_{\theta } \quad L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) \\ & {\text{subject}}\,{\text{to}}\;\left\{ {\begin{array}{*{20}l} {a_{1} > 0,0 < b_{1} \le 1} \hfill \\ {a_{2} > 0,b_{2} \ge 1} \hfill \\ {0 < t_{0} < t_{1} < S_{n} } \hfill \\ \end{array} } \right. \\ \end{aligned}$$ In addition, the range of variable parameter shall be optimized to improve efficiency and accuracy of the model's parameter optimization. First, the value range of t 0 and t 1 , i.e. k and l, shall be determined based on the system's failure time data and trend estimation. Second, the first k time data at early failure phase and the last n–l time data at wear-out failure phase are adopted for interval estimation of Weibull process, and the extreme value of estimation interval is regarded as the optimization range of parameters a 1 , b 1 , a 2, and b 2. The flowchart of parameter estimation of reliability model is shown in Fig. 1. Flowchart of parameter estimation of reliability model Weibull process interval estimation Interval estimation includes two parts—point estimation and margin of error that describes estimation accuracy. Therefore, to complete the interval estimation of Weibull process, the point estimation of parameters shall be first performed, followed by the calculation of its margin of error. Point estimation Based on Literature (Ebeling 2004), the maximum likelihood point estimation of Weibull process parameters a and b is as follows: When the failure time data is Type-II censored data: $$\left\{ {\begin{array}{*{20}l} {\hat{b} = \frac{n}{{\sum\nolimits_{i = 1}^{n} {\ln \frac{{S_{n} }}{{S_{i} }}} }}} \hfill \\ {\hat{a} = \frac{n}{{S_{n}^{{\hat{b}}} }}} \hfill \\ \end{array} } \right.$$ When the failure time data is Type-I censored data: $$\left\{ {\begin{array}{*{20}l} {\hat{b} = \frac{n}{{\sum\nolimits_{i = 1}^{n} {\ln \frac{{t_{s} }}{{S_{i} }}} }}} \hfill \\ {\hat{a} = \frac{n}{{t_{s}^{{\hat{b}}} }}} \hfill \\ \end{array} } \right.$$ Generally, in the case of large sample (≥30), the maximum likelihood point estimation presents consistency and asymptotically normal distribution; In the case of small sample, the logarithm of point estimation of parameter is much closer to normal distribution. 
Specific equations are as follows: In the case of large sample: $$\frac{{a - \hat{a}}}{{\sqrt {D(\hat{a})} }} \sim N(0,1);\quad \frac{{b - \hat{b}}}{{\sqrt {D(\hat{b})} }} \sim N(0,1)$$ In the case of small sample: $$\frac{{\ln a - \ln \hat{a}}}{{\sqrt {D(\hat{a})} }} \sim N(0,1);\quad \frac{{\ln b - \ln \hat{b}}}{{\sqrt {D(\hat{b})} }} \sim N(0,1)$$ where \(D(\hat{a})\) and \(D(\hat{b})\) are the variance of parameters a and b, respectively. Their value can be obtained using Fisher Information Matrix below (Kijima 1989; Ye 2003): $$\left( {\begin{array}{*{20}l} {D(\hat{a})} \hfill & \quad {\text{cov} (\hat{a},\hat{b})} \hfill \\ {\text{cov} (\hat{b},\hat{a})} \hfill & \quad {D(\hat{b})} \hfill \\ \end{array} } \right) = \left( {\begin{array}{*{20}l} { - \frac{{\partial^{2} \ln L}}{{\partial a^{2} }}} \hfill & \quad { - \frac{{\partial^{2} \ln L}}{\partial a\partial b}} \hfill \\ { - \frac{{\partial^{2} \ln L}}{\partial b\partial a}} \hfill & \quad { - \frac{{\partial^{2} \ln L}}{{\partial b^{2} }}} \hfill \\ \end{array} } \right)^{ - 1}$$ According to normal distribution, the confidence interval under 1−α: $$ P\left( {\frac{{\left| {a - \hat{a}} \right|}}{{\sqrt {D(\hat{a})} }} \le Z_{{\alpha /2}} } \right) = 1 - \alpha ;\quad P\left( {\frac{{\left| {b - \hat{b}} \right|}}{{\sqrt {D(\hat{b})} }} \le Z_{{\alpha /2}} } \right) = 1 - \alpha $$ $$ P\left( {\frac{{\left| {\ln a - \ln \hat{a}} \right|}}{{\sqrt {D(\hat{a})} }} \le Z_{{\alpha /2}} } \right) = 1 - \alpha ;\quad P\left( {\frac{{\left| {\ln b - \ln \hat{b}} \right|}}{{\sqrt {D(\hat{b})} }} \le Z_{{\alpha /2}} } \right) = 1 - \alpha $$ Solving Eq. (25) can obtain the confidence interval for a and b: $$ \left[ {\hat{a} - Z_{{\alpha /2}} \sqrt {D(\hat{a})} , \, \hat{a} + Z_{{\alpha /2}} \sqrt {D(\hat{a})} } \right];\quad \left[ {\hat{b} - Z_{{\alpha /2}} \sqrt {D(\hat{b})} , \, \hat{b} + Z_{{\alpha /2}} \sqrt {D(\hat{b})} } \right] $$ $$ \left[ {\hat{a}/\exp \left( {Z_{{\alpha /2}} \sqrt {D(\hat{a})} } \right),\,\hat{a} \cdot \exp \left( {Z_{{\alpha /2}} \sqrt {D(\hat{a})} } \right)} \right];\quad \left[ {\hat{b}/\exp \left( {Z_{{\alpha /2}} \sqrt {D(\hat{b})} } \right),\,\hat{b} \cdot \exp \left( {Z_{{\alpha /2}} \sqrt {D(\hat{b})} } \right)} \right] $$ Parameter optimization based on artificial bee colony algorithm With respect to the proposed model for parameter optimization and the range optimized by Weibull process interval estimation, artificial bee colony algorithm was used for solving optimization problem. Artificial bee colony algorithm is characterized by strong global optimization, rapid rate of convergence, and suitability in solving different problems, compared with other swarm intelligence optimization algorithms such as evolutionary algorithm, artificial immune algorithm, particle swarm optimization, and ant colony algorithm (Chen 2015). The correspondence between bee's search for nectar source and optimization in artificial bee colony algorithm is shown in Table 1. Table 1 Correspondence between bee's behavior and optimization The algorithm process is as follows: Initialization of parameter First, the two basic parameters in artificial bee colony algorithm shall be initialized. One parameter is the quantity of nectar source (S n ), which represents the number of solution. Besides, S n is also the number of leaders and followers; the other parameter is the limit value of gathering nectar source (limit). When the gathering times of a nectar source exceed the limit, the nectar source will be abandoned. 
Initialization of solution space Random initialization produces the initial Sn solutions. Let θi = (a1i, b1i, t0i, a2i, b2i, t1i) = (θi1, θi2, θi3, θi4, θi5, θi6), which represents the location of the ith nectar source. The initialization formula for each value is as follows: $$\theta_{ij} = LB_{j} + (UB_{j} - LB_{j} ) \times r\quad j = 1,2, \ldots ,4\quad i = 1,2, \ldots ,Sn$$ where r is a random number in [0, 1], and LBj and UBj are the minimum and maximum of the value range of θj. The leaders then start to gather these nectar sources at random, and the corresponding fitness values are calculated. Stage of leaders When a leader gathers a nectar source, it searches for new nectar sources at random around that source, according to the equation below: $$\theta_{new(j)} = \theta_{ij} + (\theta_{kj} - \theta_{ij} ) \times r\quad k \in (1,2, \ldots ,Sn) \cap k \ne i\quad j = 1,2, \ldots ,4$$ The fitness value corresponding to the new nectar source θnew is then calculated and compared with that of the previous nectar source, and the solution with the higher fitness is kept. Stage of followers For the followers, the follow probability of each nectar source is calculated by normalizing its fitness value, as follows: $$P_{i} = \frac{{fit_{i} }}{{\sum\nolimits_{i = 1}^{Sn} {fit_{i} } }}$$ where fiti represents the fitness value of the nectar source at θi. A larger fiti implies a greater probability that the followers select that nectar source. When a follower selects a nectar source, it also searches for new nectar sources at random around that source based on Eq. (29), just as the leaders do, and then keeps the better of the two. Stage of scouters If a nectar source θi has still not been improved after being gathered limit times, it is abandoned. The corresponding leader becomes a scouter and searches for a new nectar source at random based on Eq. (28). Iterations The nectar source with the maximum fitness is recorded. If the number of iterations has reached the preset maximum, the algorithm ends and the optimal nectar source, i.e. the optimal solution, is output; otherwise, return to step (3) and continue the cycle.
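The estimation procedure described above can be made concrete with a short sketch. The code below is not the authors' implementation; it is a minimal illustration, assuming Python with numpy, of (i) the piecewise intensity and cumulative intensity of Eqs. (4)–(5), (ii) the log-likelihood of Eq. (17) written in the compact form log L = Σ log ω(S_i) − W_c(T), where T is S_n for Type-II censoring or t_s for Type-I censoring (this form is algebraically equivalent to Eqs. (8)–(16), since each conditional density equals ω(S_i)·exp{−[W_c(S_i) − W_c(S_{i−1})]}), and (iii) a bare-bones artificial bee colony search over box-constrained parameters θ = (a1, b1, t0, a2, b2, t1), with the split (k, l) assumed fixed so that the bounds already enforce t0 < t1 as in Eq. (31). All function names, variable names and numerical settings (number of nectar sources, limit, iteration count) are illustrative and not taken from the paper.

import numpy as np

def omega(t, a1, b1, t0, a2, b2, t1):
    """Piecewise failure intensity of Eq. (4); t may be a scalar or an array."""
    t = np.asarray(t, dtype=float)
    base = np.where(t <= t0, a1 * b1 * t ** (b1 - 1.0), a1 * b1 * t0 ** (b1 - 1.0))
    wear = np.where(t > t1, a2 * b2 * np.clip(t - t1, 0.0, None) ** (b2 - 1.0), 0.0)
    return base + wear

def cumulative_intensity(t, a1, b1, t0, a2, b2, t1):
    """Cumulative intensity W_c(t) of Eq. (5)."""
    t = np.asarray(t, dtype=float)
    return (a1 * np.minimum(t, t0) ** b1
            + a1 * b1 * t0 ** (b1 - 1.0) * np.clip(t - t0, 0.0, None)
            + a2 * np.clip(t - t1, 0.0, None) ** b2)

def log_likelihood(theta, s, t_s=None):
    """log L of Eq. (17); s = ordered failure times, t_s = Type-I censoring time (None = Type-II)."""
    a1, b1, t0, a2, b2, t1 = theta
    if not (a1 > 0 and 0 < b1 <= 1 and a2 > 0 and b2 >= 1 and 0 < t0 < t1 < s[-1]):
        return -1.0e12                      # infeasible parameters are effectively rejected
    t_end = s[-1] if t_s is None else t_s
    return float(np.sum(np.log(omega(s, *theta))) - cumulative_intensity(t_end, *theta))

def abc_maximize(objective, lower, upper, n_sources=20, limit=30, n_iter=300, seed=0):
    """Bare-bones artificial bee colony search maximizing `objective` over a box."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    sources = lower + (upper - lower) * rng.uniform(size=(n_sources, dim))   # Eq. (28)
    fitness = np.array([objective(x) for x in sources])
    trials = np.zeros(n_sources, dtype=int)

    def local_search(i):
        # neighbourhood move towards/away from a random partner (a symmetric variant of Eq. (29))
        k = rng.choice([j for j in range(n_sources) if j != i])
        cand = np.clip(sources[i] + (sources[k] - sources[i]) * rng.uniform(-1.0, 1.0, dim),
                       lower, upper)
        f = objective(cand)
        if f > fitness[i]:                  # greedy selection keeps the better source
            sources[i], fitness[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_sources):          # stage of leaders (employed bees)
            local_search(i)
        weights = fitness - fitness.min() + 1e-12
        prob = weights / weights.sum()      # follow probability, Eq. (30)
        for i in rng.choice(n_sources, size=n_sources, p=prob):   # stage of followers
            local_search(i)
        for i in np.where(trials > limit)[0]:                     # stage of scouters
            sources[i] = lower + (upper - lower) * rng.uniform(size=dim)
            fitness[i] = objective(sources[i])
            trials[i] = 0
    best = int(np.argmax(fitness))
    return sources[best], fitness[best]

# Usage sketch for one candidate split (k, l): s is the ordered total failure time data,
# t_s the censoring time, and the (a, b) bounds come from the interval estimates described
# in the Application section (all identifiers below are hypothetical placeholders).
# theta_hat, best_loglik = abc_maximize(
#     lambda th: log_likelihood(th, s, t_s),
#     lower=[a1_min, b1_min, s[k - 1], a2_min, 1.0, s[l - 1]],
#     upper=[a1_max, b1_max, s[k], a2_max, b2_max, s[l]])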
The proposed model was applied in the reliability analysis of a repairable CNC system containing a servo drive unit. In order to test the reliability of the CNC system, a long-term multi-sample test (Type-I censored) under the same environment was carried out; the environment of the test laboratory is shown in Fig. 2. The test recorded the failures of the CNC systems over the past two years. The failure time data of 18 sets of CNC systems, with 50 failures in total, were recorded and processed by the Total Time on Test method (Barlow and Campo 1975), as shown in Table 2. Fig. 2 The environment of the CNC system reliability test laboratory. Table 2 Data of total failure time. Based on the data above, the system's failure trend estimation and testing were carried out, and the TTT diagram (Bergman 1979) was drawn, as shown in Fig. 3, where the total number of failures is n = 50. Fig. 3 TTT diagram of system failure time data. As can be seen in Fig. 3, the TTT scatter diagram is concave under the diagonal of the unit square at the beginning, indicating that the failure rate declines in the early phase. Later, the diagram fluctuates slightly along a straight line; the failure rate remains unchanged, and the system gradually stabilizes. After a period of time, the diagram becomes convex under the diagonal of the unit square, indicating that the failure rate increases. Meanwhile, the Laplace test (Louit et al. 2009) and the Anderson–Darling test (Pulcini 2001) were used to test the trend of the failure data in Table 2. The significance level was set as α = 0.05. The results of the statistical tests are shown in Table 3. Table 3 Results of statistical tests at the significance level α = 0.05. The results of the trend tests are consistent with the estimate from the TTT diagram: the system failure time data follow the trend of a bathtub curve and undergo three phases—early failure, random failure and wear-out failure. Then, the system reliability model was set up using the piecewise function above, and parameter estimation was performed. First, the candidate ranges of t0 and t1 were treated segment by segment, selecting the values of k and l in turn within (0, n). To improve calculation efficiency, the range of k and l can be narrowed by a preliminary judgment based on the total failure time data and the inflection points of the TTT diagram: the value of k may be 6, 7, 8 or 9, the value of l may be 33, 34, 35 or 36, and the value ranges of t0 and t1 satisfy Sk < t0 < Sk+1 < Sl < t1 < Sl+1. Second, parameter optimization was conducted for each combination of k and l. Based on the failure time sequence S1, S2, S3, …, Sk (treated as Type-II censored data), interval estimation of the minimal repair model at the early failure phase was carried out; for a confidence level of 99.73 %, the confidence intervals were a1 ∈ [a1min, a1max] and b1 ∈ [b1min, b1max]. Similarly, based on the failure time sequence Sl+1, …, S50, ts (treated as Type-I censored data), interval estimation of the minimal repair model at the wear-out failure phase was carried out; for a confidence level of 99.73 %, the confidence intervals were a2 ∈ [a2min, a2max] and b2 ∈ [b2min, b2max]. The improved optimization model was: $$\begin{aligned} & \mathop {\text{max} }\limits_{\theta } L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) \\ & {\text{subject}}\,{\text{to}}\,\left\{ {\begin{array}{*{20}l} {a_{1{\text{min}} } \le a_{1} \le a_{1{\text{max}} } ,b_{1{\text{min}} } \le b_{1} \le b_{1{\text{max}} } } \hfill \\ {a_{2{\text{min}} } \le a_{2} \le a_{2{\text{max}} } ,b_{2{\text{min}} } \le b_{2} \le b_{2{\text{max}} } } \hfill \\ {S_{k} < t_{0} < S_{k + 1} ,S_{l} < t_{1} < S_{l + 1} } \hfill \\ \end{array} } \right. \\ \end{aligned}$$ The artificial bee colony algorithm was then used for parameter optimization of the model above, with the number of iterations, the size of the bee colony and the limit set in advance. For each pair of k and l, an optimal parameter vector θ = (a1, b1, t0, a2, b2, t1) was obtained. At last, the optimal parameters for the different k and l were compared, and the one maximizing the likelihood was selected as the optimal parameter of the overall model, i.e. k = 7, l = 33, a1 = 0.0427, b1 = 0.5066, t0 = 16,567, a2 = 5.0503 × 10−11, b2 = 2.4984, t1 = 168,242. The goodness of fit of the two-stage Weibull process above was tested using the Cramér–von Mises test (Ebeling 2004) at a significance level of α = 0.05. The test results are shown in Table 4.
Table 4 Results of test of goodness of fit at the significance level α = 0.05 In conclusion, the intensity function of the 18 sets of CNC systems, on the total test time scale, was obtained as: $$\omega (t) = \left\{ {\begin{array}{*{20}l} {0.0216 \times t^{ - 0.4934} } \hfill & \quad {t \le 16567} \hfill \\ {1.7893 \times 10^{ - 4} } \hfill & \quad {16567 < t \le 168242} \hfill \\ {1.7893 \times 10^{ - 4} + 1.2618 \times 10^{ - 10} \times (t - 168242)^{1.4984} } \hfill & \quad {t > 168242} \hfill \\ \end{array} } \right.$$ Meanwhile, the results suggest that for a single system the critical point between the early failure phase and the random failure phase is located at about t0/18 = 920 h, while the critical point between the random failure phase and the wear-out failure phase is located at about t1/18 = 9347 h. The intensity function of a single system is: $$\omega_{1} (t) = \left\{ {\begin{array}{*{20}l} {5.1882 \times 10^{ - 3} \times t^{ - 0.4934} } \hfill & \quad {t \le 920} \hfill \\ {1.7893 \times 10^{ - 4} } \hfill & \quad {920 < t \le 9347} \hfill \\ {1.7893 \times 10^{ - 4} + 9.5916 \times 10^{ - 9} \times (t - 9347)^{1.4984} } \hfill & \quad {t > 9347} \hfill \\ \end{array} } \right.$$ The curve of the intensity function is shown in Fig. 4. Fig. 4 Intensity function of a single system. Based on the instantaneous MTBF(t) = 1/ω(t), the MTBF of the system at the random failure phase is MTBF = 1/(1.7893 × 10−4) = 5589 h.
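The reported figures can be cross-checked with a few lines of arithmetic. The sketch below is an added illustration (assuming Python); all numbers are copied from the estimates reported above, and small discrepancies come only from rounding of the printed parameter values.

# quick consistency check of the reported estimates
a1, b1, t0, a2, b2, t1 = 0.0427, 0.5066, 16567.0, 5.0503e-11, 2.4984, 168242.0
print(a1 * b1)                       # ~0.0216, the early-phase coefficient of omega(t)
print(a1 * b1 * t0 ** (b1 - 1.0))    # ~1.79e-4, the random-phase intensity
print(a2 * b2)                       # ~1.26e-10, the wear-out coefficient
print(1.0 / 1.7893e-4)               # MTBF in the random phase: about 5589 h
print(t0 / 18.0, t1 / 18.0)          # per-system critical points: about 920 h and 9347 h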
Therefore, the example above demonstrates that the proposed piecewise reliability model can fit the failure rate under the bathtub curve very well and yields the two critical points of the bathtub curve. It provides an important basis for reliability analysis, design and testing. A modelling study on the phenomenon that the failure rate of lifetime failure data presents a bathtub curve was carried out for repairable systems. The following achievements were made: Considering the characteristics of the failure rate in the three phases of the bathtub curve, and combining the homogeneous and non-homogeneous Poisson processes within the Counting Process framework, a bathtub-shaped reliability model based on a piecewise intensity function was proposed. Moreover, with respect to the difficulty of common parameter estimation methods in solving the likelihood equations, maximum likelihood estimation and the artificial bee colony algorithm were combined to estimate the model parameters and guarantee the feasibility of the model in application. Given the extensive search range and excessive time consumption of parameter optimization during parameter estimation, the interval estimation of the two-stage Weibull process was studied and used to narrow the optimization range. As a result, the efficiency of parameter optimization was substantially increased. The proposed model, as well as the estimation approach, fits the failure rate under the bathtub curve very well and adequately reflects the duration of the random failure phase. Besides, the two critical points of the bathtub curve are obtained. At last, the practical use of the proposed model is demonstrated with a set of failure data of a repairable CNC system, and the validity of the proposed model and its parameter estimation method was verified. The study will be useful for the reliability analysis of repairable systems and provides an important basis for relevant reliability research. Adamidis K, Loukas S (1998) A lifetime distribution with decreasing failure rate. Stat Prob Lett 39:35–42 Barlow RE, Campo RA (1975) Total time on test processes and applications to failure data analysis. California Univ Berkeley Operations Research Center Barreto-Souza W, Cribari-Neto F (2009) A generalization of the exponential-Poisson distribution. Stat Prob Lett 79(24):2493–2500 Bergman B (1979) On age replacement and the total time on test concept. Scand J Stat 6(4):161–168 Chen Z (2000) A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Stat Prob Lett 49(2):155–161 Chen W (2015) Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem. Phys A 429:125–139 Cordeiro GM, Ortega EMM, Silva GO (2014) The Kumaraswamy modified Weibull distribution: theory and applications. J Stat Comput Simul 84(84):1387–1411 Ebeling CE (2004) An introduction to reliability and maintainability engineering. Tata McGraw-Hill Education, New York, pp 294–307 El-Gohary A, Alshamrani A, Al-Otaibi AN (2013) The generalized Gompertz distribution. Appl Math Model 37(1–2):13–24 Gupta RD, Kundu D (1999) Generalized exponential distributions. Aust N Z J Stat 41(2):173–188 Kijima M (1989) Some results for repairable systems with general repair. J Appl Prob 26(1):89–102 Kus C (2007) A new lifetime distribution. Comput Stat Data Anal 51(9):4497–4509 Lai CD, Xie M, Murthy DNP (2003) A modified Weibull distribution. IEEE Trans Reliab 52(1):33–37 Lin CC, Tseng HY (2005) A neural network application for reliability modelling and condition-based predictive maintenance. Int J Adv Manuf Technol 25:174–179 Louit DM, Pascual R, Jardine AKS (2009) A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data. Reliab Eng Syst Safety 94(10):1618–1628 Myers A (2010) Complex system reliability: multichannel systems with imperfect fault coverage. Springer, London, pp 7–9 Pulcini G (2001) Modeling the failure data of a repairable equipment with bathtub type failure intensity. Reliab Eng Syst Safety 71(2):209–218 Rausand M, Høyland A (2004) System reliability theory: models, statistical methods, and applications. Wiley, New York, pp 127–171 Wang X, Yu C, Li Y (2015) A new finite interval lifetime distribution model for fitting bathtub-shaped failure rate curve. Math Problems Eng 2015:1–6 Xie M, Tang Y, Goh TN (2002) A modified Weibull extension with bathtub-shaped failure rate function. Reliab Eng Syst Safety 76(3):279–285 Ye CN (2003) Fisher information matrix of LBVW distribution. Acta Math Appl Sinica 26(4):715–725 Zhang R (2004) A new three-parameter lifetime distribution with bathtub shape or increasing failure rate function and a flood data application. Concordia University, Canada All authors contributed equally to this work. All authors read and approved the final manuscript. The research was supported by the National Science and Technology Major Project "High-Grade CNC Machine Tools and Basic Manufacturing Equipments" (Grant No. 2016ZX04004-006). School of Mechanical Engineering and Automation, Beihang University, Beijing, 100191, China Chong Peng, Guangpeng Liu & Lun Wang Correspondence to Chong Peng. Peng, C., Liu, G. & Wang, L. Piecewise modelling and parameter estimation of repairable system failure rate. SpringerPlus 5, 1477 (2016) doi:10.1186/s40064-016-3122-4 Keywords: Piecewise intensity function; Reliability modelling; Artificial bee colony algorithm; Bathtub curve
The privatisation of remembering practices in contemporary inpatient mental healthcare: going beyond Agnes's Jacket Inaugural Collection Steven D. Brown, Paula Reavey Journal: Memory, Mind & Media / Volume 1 / 2022 Published online by Cambridge University Press: 20 December 2021, e7 In this paper, we consider changes to memorial practices for mental health service users during the asylum period of the mid-nineteenth up to the end of the twentieth century and into the twenty-first century. The closing of large asylums in the UK has been largely welcomed by professionals and service-users alike, but their closure has led to a decrease in continuous and consistent care for those with enduring mental health challenges.
Temporary and time-limited mental health services, largely dedicated to crisis management and risk reduction have failed to enable memory practices outside the therapy room. This is an unusual case of privatised memories being favoured over collective memorial activity. We argue that the collectivisation of service user memories, especially in institutions containing large numbers of long-stay patients, would benefit both staff and patients. The benefit would be in the development of awareness of how service users make sense of their past in relation to their present stay in hospital, how they might connect with others in similar positions and how they may connect with the world and others upon future release. This seems to us central to a project of recovery and yet is rarely practised in any mental health institution in the UK, despite being central to other forms of care provision, such as elderly and children's care services. We offer some suggestions on how collective models of memory in mental health might assist in this project of recovery and create greater visibility between past, present and future imaginings. Care-giver wellbeing: exploring gender, relationship-to-care-recipient and care-giving demands in the Canadian Longitudinal Study on Aging Neena L. Chappell, Margaret Penning, Helena Kadlec, Sean D. Browning Journal: Ageing & Society , First View Published online by Cambridge University Press: 17 December 2021, pp. 1-37 The three-way intersection of gender, relationship-to-care-recipient and care-giving demands has not, to our knowledge, been examined in relation to the wellbeing of family care-givers. We explore inequalities in depressive symptoms and life satisfaction, comparing wives, husbands, daughters and sons providing very-intensive care (36+ hours/week) with those providing less care and disparities between these groups in the factors related to disadvantage. Data from the Canadian Longitudinal Study on Aging (N = 5,994) support the existence of differences between the groups. Very-intensive care-giving wives report the most depressive symptoms and lowest life satisfaction; less-intensive care-giving sons report the fewest depressive symptoms, and less-intensive care-giving daughters report the highest life satisfaction. However, group differences in life satisfaction disappear among very-intensive care-givers. Drawing on Intersectionality and Stress Process theories, data from regression analyses reveal a non-significant gender–relationship–demand interaction term, but, health, socio-economic and social support resources play a strong mediating role between care demand and wellbeing. Analyses of the eight groups separately reveal diversity in the care-giving experience. Among less-intensive care-givers, the mediating role of resources remains strong even as differences are evident. Among very-intensive care-givers, the role of resources is less and differences in wellbeing between the groups are magnified. Policy implications emphasise the imperative to personalise services to meet the varied needs of care-givers. Risk for depression tripled during the COVID-19 pandemic in emerging adults followed for the last 8 years Elisabet Alzueta, Simon Podhajsky, Qingyu Zhao, Susan F. Tapert, Wesley K. Thompson, Massimiliano de Zambotti, Dilara Yuksel, Orsolya Kiss, Rena Wang, Laila Volpe, Devin Prouty, Ian M. Colrain, Duncan B. Clark, David B. Goldston, Kate B. Nooner, Michael D. De Bellis, Sandra A. Brown, Bonnie J. Nagel, Adolf Pfefferbaum, Edith V. Sullivan, Fiona C. 
Baker, Kilian M. Pohl Journal: Psychological Medicine , First View Published online by Cambridge University Press: 02 November 2021, pp. 1-8 The coronavirus disease 2019 (COVID-19) pandemic has significantly increased depression rates, particularly in emerging adults. The aim of this study was to examine longitudinal changes in depression risk before and during COVID-19 in a cohort of emerging adults in the U.S. and to determine whether prior drinking or sleep habits could predict the severity of depressive symptoms during the pandemic. Participants were 525 emerging adults from the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA), a five-site community sample including moderate-to-heavy drinkers. Poisson mixed-effect models evaluated changes in the Center for Epidemiological Studies Depression Scale (CES-D-10) from before to during COVID-19, also testing for sex and age interactions. Additional analyses examined whether alcohol use frequency or sleep duration measured in the last pre-COVID assessment predicted pandemic-related increase in depressive symptoms. The prevalence of risk for clinical depression tripled due to a substantial and sustained increase in depressive symptoms during COVID-19 relative to pre-COVID years. Effects were strongest for younger women. Frequent alcohol use and short sleep duration during the closest pre-COVID visit predicted a greater increase in COVID-19 depressive symptoms. The sharp increase in depression risk among emerging adults heralds a public health crisis with alarming implications for their social and emotional functioning as this generation matures. In addition to the heightened risk for younger women, the role of alcohol use and sleep behavior should be tracked through preventive care aiming to mitigate this looming mental health crisis. C.5 Musashi-1 is a master regulator of aberrant translation in MYC-amplified Group 3 medulloblastoma MM Kameda-Smith, H Zhu, E Luo, C Venugopal, K Brown, BA Yee, S Xing, F Tan, D Bakhshinyan, AA Adile, M Subapanditha, D Picard, J Moffat, A Fleming, K Hope, J Provias, M Remke, Y Lu, J Reimand, R Wechsler-Reya, G Yeo, SK Singh Journal: Canadian Journal of Neurological Sciences / Volume 48 / Issue s3 / November 2021 Published online by Cambridge University Press: 05 January 2022, p. S19 Background: Medulloblastoma (MB) is the most common solid malignant pediatric brain neoplasm. Group 3 (G3) MB, particularly MYC amplified G3 MB, is the most aggressive subgroup with the highest frequency of children presenting with metastatic disease, and is associated with a poor prognosis. To further our understanding of the role of MSI1 in MYC amplified G3 MB, we performed an unbiased integrative analysis of eCLIP binding sites, with changes observed at the transcriptome, the translatome, and the proteome after shMSI1 inhibition. Methods: Primary human pediatric MBs, SU_MB002 and HD-MB03 were kind gifts from Dr. Yoon-Jae Cho (Harvard, MS) and Dr. Till Milde (Heidelberg) and cultured for in vitro and in vivo experiments. eCLIP, RNA-seq, Polysome-seq, and TMT-MS were completed as previously described. Results: MSI1 is overexpressed in G3 MB. shRNA Msi1 interference resulted in a reduction in tumour burden conferring a survival advantage to mice injected with shMSI1 G3MB cells. Robust ranked multiomic analysis (RRA) identified an unconventional gene set directly perturbed by MSI1 in G3 MB. 
Conclusions: Our robust unbiased integrative analysis revealed a distinct role for MSI1 in the maintenance of the stem cell state in G3 MB through post-transcriptional modification of multiple pathways including identification of unconventional targets such as HIPK1. Ultrastructural characterization of host–parasite interactions of Plasmodium coatneyi in rhesus macaques E. D. Lombardini, B. Malleret, A. Rungojn, N. Popruk, T. Kaewamatawong, A. E. Brown, G. D. H. Turner, B. Russell, D. J. P. Ferguson Journal: Parasitology , First View Published online by Cambridge University Press: 04 October 2021, pp. 1-10 Plasmodium coatneyi has been proposed as an animal model for human Plasmodium falciparum malaria as it appears to replicate many aspects of pathogenesis and clinical symptomology. As part of the ongoing evaluation of the rhesus macaque model of severe malaria, a detailed ultrastructural analysis of the interaction between the parasite and both the host erythrocytes and the microvasculature was undertaken. Tissue (brain, heart and kidney) from splenectomized rhesus macaques and blood from spleen-intact animals infected with P. coatneyi were examined by electron microscopy. In all three tissues, similar interactions (sequestration) between infected red blood cells (iRBC) and blood vessels were observed with evidence of rosette and auto-agglutinate formation. The iRBCs possessed caveolae similar to P. vivax and knob-like structures similar to P. falciparum. However, the knobs often appeared incompletely formed in the splenectomized animals in contrast to the intact knobs exhibited by spleen intact animals. Plasmodium coatneyi infection in the monkey replicates many of the ultrastructural features particularly associated with P. falciparum in humans and as such supports its use as a suitable animal model. However, the possible effect on host–parasite interactions and the pathogenesis of disease due to the use of splenectomized animals needs to be taken into consideration. The Evolutionary Map of the Universe pilot survey Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov Published online by Cambridge University Press: 07 September 2021, e046 We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. 
Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here. Repeated antigen testing among severe acute respiratory coronavirus virus 2 (SARS-CoV-2)–positive nursing home residents Erin D. Moritz, Susannah L. McKay, Farrell A. Tobolowsky, Stephen P. LaVoie, Michelle A. Waltenburg, Kristin D. Lecy, Natalie J. Thornburg, Jennifer L. Harcourt, Azaibi Tamin, Jennifer M. Folster, Jeanne Negley, Allison C. Brown, L. Clifford McDonald, Preeta K. Kutty, for the CDC Infection Prevention and Control Group Published online by Cambridge University Press: 20 August 2021, pp. 1-4 Repeated antigen testing of 12 severe acute respiratory coronavirus virus 2 (SARS-CoV-2)–positive nursing home residents using Abbott BinaxNOW identified 9 of 9 (100%) culture-positive specimens up to 6 days after initial positive test. Antigen positivity lasted 2–24 days. Antigen positivity might last beyond the infectious period, but it was reliable in residents with evidence of early infection. UNIVERSITY OF WASHINGTON QUATERNARY ISOTOPE LABORATORY RETROSPECTIVE Paula J Reimer, Thomas F Braziunas, Thomas A Brown, Robert L Burk, Tracy Furutani, Diana Greenlee, Pieter M Grootes, Paul D Quay, Ingrid Stuiver Journal: Radiocarbon , First View The Quaternary Isotope Laboratory (QIL) at the University of Washington was launched in 1969 and directed by Minze Stuiver until his retirement in 1998. Here we review some of the scientific work undertaken in the QIL and the memories of some of Minze's former students and colleagues. The earliest water buffalo in the Caucasus: shifting animals and people in the medieval Islamic world Paul D. Wordsworth, Ashleigh F. Haruda, Alicia Ventresca Miller, Samantha Brown Journal: Antiquity / Volume 95 / Issue 383 / October 2021 The expansion of the Umayyad and Abbasid Caliphates (seventh to ninth centuries AD) brought diverse regions from the Indus Valley to the Eurasian Steppe under hegemonic control. An overlooked aspect of this political process is the subsequent translocation of species across ecological zones. This article explores species introduction in the early Islamic world, presenting the first archaeological evidence for domestic water buffalo in the Caucasus—identified using zooarchaeological and ZooMS methods on material from the historical site of Bardhaʿa in Azerbaijan. We contextualise these finds with historical accounts to demonstrate the exploitation of medieval marginal zones and the effects of centralised social reorganisation upon species dispersal. Automated detection and staging of malaria parasites from cytological smears using convolutional neural networks Mira S. Davidson, Clare Andradi-Brown, Sabrina Yahiya, Jill Chmielewski, Aidan J. O'Donnell, Pratima Gurung, Myriam D. Jeninga, Parichat Prommana, Dean W. Andrew, Michaela Petter, Chairat Uthaipibull, Michelle J. Boyle, George W. Ashdown, Jeffrey D. Dvorin, Sarah E. Reece, Danny W. Wilson, Kane A. Cunningham, D. Michael. 
Ando, Michelle Dimon, Jake Baum Journal: Biological Imaging / Volume 1 / 2021 Published online by Cambridge University Press: 02 August 2021, e2 Microscopic examination of blood smears remains the gold standard for laboratory inspection and diagnosis of malaria. Smear inspection is, however, time-consuming and dependent on trained microscopists with results varying in accuracy. We sought to develop an automated image analysis method to improve accuracy and standardization of smear inspection that retains capacity for expert confirmation and image archiving. Here, we present a machine learning method that achieves red blood cell (RBC) detection, differentiation between infected/uninfected cells, and parasite life stage categorization from unprocessed, heterogeneous smear images. Based on a pretrained Faster Region-Based Convolutional Neural Networks (R-CNN) model for RBC detection, our model performs accurately, with an average precision of 0.99 at an intersection-over-union threshold of 0.5. Application of a residual neural network-50 model to infected cells also performs accurately, with an area under the receiver operating characteristic curve of 0.98. Finally, combining our method with a regression model successfully recapitulates intraerythrocytic developmental cycle with accurate lifecycle stage categorization. Combined with a mobile-friendly web-based interface, called PlasmoCount, our method permits rapid navigation through and review of results for quality assurance. By standardizing assessment of Giemsa smears, our method markedly improves inspection reproducibility and presents a realistic route to both routine lab and future field-based automated malaria diagnosis. Direct observation of breathing phenomenon and phase transformation in Ni-rich cathode materials by in situ TEM Weiqun Li, Ioannis Siachos, Juhan Lee, Serena A. Corr, Clare P. Grey, Nigel D. Browning, B. Layla Mehdi Scattering Matrix Determination in Crystalline Materials from 4D Scanning Transmission Electron Microscopy at a Single Defocus Value Scott D. Findlay, Hamish G. Brown, Philipp M. Pelz, Colin Ophus, Jim Ciston, Leslie J. Allen Journal: Microscopy and Microanalysis / Volume 27 / Issue 4 / August 2021 Recent work has revived interest in the scattering matrix formulation of electron scattering in transmission electron microscopy as a stepping stone toward atomic-resolution structure determination in the presence of multiple scattering. We discuss ways of visualizing the scattering matrix that make its properties clear. Through a simulation-based case study incorporating shot noise, we shown how regularizing on this continuity enables the scattering matrix to be reconstructed from 4D scanning transmission electron microscopy (STEM) measurements from a single defocus value. Intriguingly, for crystalline samples, this process also yields the sample thickness to nanometer accuracy with no a priori knowledge about the sample structure. The reconstruction quality is gauged by using the reconstructed scattering matrix to simulate STEM images at defocus values different from that of the data from which it was reconstructed. UNIFORM-IN-SUBMODEL BOUNDS FOR LINEAR REGRESSION IN A MODEL-FREE FRAMEWORK Arun K. Kuchibhotla, Lawrence D. Brown, Andreas Buja, Edward I. George, Linda Zhao Journal: Econometric Theory , First View Published online by Cambridge University Press: 04 June 2021, pp. 1-47 For the last two decades, high-dimensional data and methods have proliferated throughout the literature. 
Yet, the classical technique of linear regression has not lost its usefulness in applications. In fact, many high-dimensional estimation techniques can be seen as variable selection that leads to a smaller set of variables (a "submodel") where classical linear regression applies. We analyze linear regression estimators resulting from model selection by proving estimation error and linear representation bounds uniformly over sets of submodels. Based on deterministic inequalities, our results provide "good" rates when applied to both independent and dependent data. These results are useful in meaningfully interpreting the linear regression estimator obtained after exploring and reducing the variables and also in justifying post-model-selection inference. All results are derived under no model assumptions and are nonasymptotic in nature. Practices and activities among healthcare personnel with severe acute respiratory coronavirus virus 2 (SARS-CoV-2) infection working in different healthcare settings—ten Emerging Infections Program sites, April–November 2020 Nora Chea, Taniece Eure, Austin R. Penna, Cedric J. Brown, Joelle Nadle, Deborah Godine, Linda Frank, Christopher A. Czaja, Helen Johnston, Devra Barter, Betsy Feighner Miller, Katie Angell, Kristen Marshall, James Meek, Monica Brackney, Stacy Carswell, Stepy Thomas, Lucy E. Wilson, Rebecca Perlmutter, Kaytlynn Marceaux-Galli, Ashley Fell, Sarah Lim, Ruth Lynfield, Sarah Shrum Davis, Erin C. Phipps, Marla Sievers, Ghinwa Dumyati, Cathleen Concannon, Kathryn McCullough, Amy Woods, Sandhya Seshadri, Christopher Myers, Rebecca Pierce, Valerie L. S. Ocampo, Judith A. Guzman-Cottrill, Gabriela Escutia, Monika Samper, Sandra A. Pena, Cullen Adre, Matthew Groenewold, Nicola D. Thompson, Shelley S. Magill Healthcare personnel with severe acute respiratory coronavirus virus 2 (SARS-CoV-2) infection were interviewed to describe activities and practices in and outside the workplace. Among 2,625 healthcare personnel, workplace-related factors that may increase infection risk were more common among nursing-home personnel than hospital personnel, whereas selected factors outside the workplace were more common among hospital personnel. Optimizing Experimental Conditions for Accurate Quantitative Energy-Dispersive X-ray Analysis of Interfaces at the Atomic Scale Katherine E. MacArthur, Andrew B. Yankovich, Armand Béché, Martina Luysberg, Hamish G. Brown, Scott D. Findlay, Marc Heggen, Leslie J. Allen Journal: Microscopy and Microanalysis / Volume 27 / Issue 3 / June 2021 The invention of silicon drift detectors has resulted in an unprecedented improvement in detection efficiency for energy-dispersive X-ray (EDX) spectroscopy in the scanning transmission electron microscope. The result is numerous beautiful atomic-scale maps, which provide insights into the internal structure of a variety of materials. However, the task still remains to understand exactly where the X-ray signal comes from and how accurately it can be quantified. Unfortunately, when crystals are aligned with a low-order zone axis parallel to the incident beam direction, as is necessary for atomic-resolution imaging, the electron beam channels. When the beam becomes localized in this way, the relationship between the concentration of a particular element and its spectroscopic X-ray signal is generally nonlinear. Here, we discuss the combined effect of both spatial integration and sample tilt for ameliorating the effects of channeling and improving the accuracy of EDX quantification. 
Both simulations and experimental results will be presented for a perovskite-based oxide interface. We examine how the scattering and spreading of the electron beam can lead to erroneous interpretation of interface compositions, and what approaches can be made to improve our understanding of the underlying atomic structure. Prenatal maternal transdiagnostic, RDoC-informed predictors of newborn neurobehavior: Differences by sex Mengyu (Miranda) Gao, Brendan Ostlund, Mindy A. Brown, Parisa R. Kaliush, Sarah Terrell, Robert D. Vlisides-Henry, K. Lee Raby, Sheila E. Crowell, Elisabeth Conradt Journal: Development and Psychopathology / Volume 33 / Issue 5 / December 2021 We examined whether Research Domain Criteria (RDoC)-informed measures of prenatal stress predicted newborn neurobehavior and whether these effects differed by newborn sex. Multilevel, prenatal markers of prenatal stress were obtained from 162 pregnant women. Markers of the Negative Valence System included physiological functioning (respiratory sinus arrhythmia [RSA] and electrodermal [EDA] reactivity to a speech task, hair cortisol), self-reported stress (state anxiety, pregnancy-specific anxiety, daily stress, childhood trauma, economic hardship, and family resources), and interviewer-rated stress (episodic stress, chronic stress). Markers of the Arousal/Regulatory System included physiological functioning (baseline RSA, RSA, and EDA responses to infant cries) and self-reported affect intensity, urgency, emotion regulation strategies, and dispositional mindfulness. Newborns' arousal and attention were assessed via the Neonatal Intensive Care Unit (NICU) Network Neurobehavioral Scale. Path analyses showed that high maternal episodic and daily stress, low economic hardship, few emotion regulation strategies, and high baseline RSA predicted female newborns' low attention; maternal mindfulness predicted female newborns' high arousal. As for male newborns, high episodic stress predicted low arousal, and high pregnancy-specific anxiety predicted high attention. Findings suggest that RDoC-informed markers of prenatal stress could aid detection of variance in newborn neurobehavioral outcomes within hours after birth. Implications for intergenerational transmission of risk for psychopathology are discussed. E C M Brown, C Caimino, C L Benton, D M Baguley Journal: The Journal of Laryngology & Otology / Volume 135 / Issue 4 / April 2021 Published online by Cambridge University Press: 17 March 2021, p. 375 38189 Potential effect of serum from hypertensive donors on PP2A expression and activity in endothelial cells Dulce H. Gomez, Maitha Aldokhayyil, Austin T. Robinson, Michael D. Brown Journal: Journal of Clinical and Translational Science / Volume 5 / Issue s1 / March 2021 Print publication: March 2021 ABSTRACT IMPACT: Racial differences in the prevalence of hypertension and endothelial (dys)function are well established, yet research investigating the mechanism(s) underlying this disparity is still lacking. OBJECTIVES/GOALS: Investigate the influence of race and the effect of serum collected from hypertensive donors on Protein Phosphatase 2A (PP2A) and endothelial nitric oxide synthase (eNOS) expression and activity in human umbilical vein endothelial cells (HUVECs) from Caucasian (CA) and African American (AA) donors. METHODS/STUDY POPULATION: HUVECs from 3 CA & 3 AA donors were cultured in parallel. Experiments were conducted between passages 5-7. 
At ≥90% confluency, cells were serum starved ~12 hrs prior to incubating for 24 or 48 hours in one of the following conditions: 1) Control (Fetal Bovine Serum), 2) serum from normotensives (NT; 5 CA & 5 AA donors), or 3) serum from hypertensives (HT; 5 CA & 5 AA donors). NT and HT serum was pooled from donors with the following characteristics: Male, 30-50 years, nonsmokers, no comorbidities, and non-obese (BMI < 30 kg/m2). Western blotting was used to measure protein expression of total eNOS, p-eNOSS1177, total PP2A, and p-PP2AY307. For activity, the p-eNOSS1177/total eNOS and p-PP2AY307/total PP2A ratios were used. A two-way ANOVA was used for statistical analysis. RESULTS/ANTICIPATED RESULTS: Irrespective of the donors' race, there was no influence of serum treatment or interaction effect in any of the measured proteins of interest. Moreover, compared to CA, HUVECs from AA had lower expression of eNOS irrespective of condition (race p=0.01). Compared to CA, HUVECs from AA tended to have lower expression of p-eNOSS1177 irrespective of condition (race p=0.07). However, there were no racial differences in eNOS activity (p=0.68). There was no racial difference in the expression of PP2A (p=0.35), p-PP2AY307 (p=0.30), or PP2A activity (p=0.97) in all conditions. DISCUSSION/SIGNIFICANCE OF FINDINGS: Our preliminary results suggest no influence of serum constituents from hypertensive donors or of race on PP2A or eNOS expression and activity in HUVECs. Future research should consider conducting proteomics profiling to compare NT and HT serum. The efficacy of antidepressant medication and interpersonal psychotherapy for adult acute-phase depression: study protocol of a systematic review and meta-analysis of individual participant data Ellen Driessen, Zachary D. Cohen, Myrna M. Weissman, John C. Markowitz, Erica S. Weitz, Steven D. Hollon, Dillon T. Browne, Paola Rucci, Carolina Corda, Marco Menchetti, R. Michael Bagby, Lena C. Quilty, Michael W. O'Hara, Caron Zlotnick, Teri Pearlstein, Marc B. J. Blom, Mario Altamura, Carlos Gois, Lon S. Schneider, Jos W. R. Twisk, Pim Cuijpers Journal: BJPsych Open / Volume 7 / Issue 2 / March 2021 Published online by Cambridge University Press: 19 February 2021, e56 Antidepressant medication and interpersonal psychotherapy (IPT) are both recommended interventions in depression treatment guidelines based on literature reviews and meta-analyses. However, 'conventional' meta-analyses comparing their efficacy are limited by their reliance on reported study-level information and a narrow focus on depression outcome measures assessed at treatment completion. Individual participant data (IPD) meta-analysis, considered the gold standard in evidence synthesis, can improve the quality of the analyses when compared with conventional meta-analysis. We describe the protocol for a systematic review and IPD meta-analysis comparing the efficacy of antidepressants and IPT for adult acute-phase depression across a range of outcome measures, including depressive symptom severity as well as functioning and well-being, at both post-treatment and follow-up (PROSPERO: CRD42020219891). We will conduct a systematic literature search in PubMed, PsycINFO, Embase and the Cochrane Library to identify randomised clinical trials comparing antidepressants and IPT in the acute-phase treatment of adults with depression. We will invite the authors of these studies to share the participant-level data of their trials.
One-stage IPD meta-analyses will be conducted using mixed-effects models to assess treatment effects at post-treatment and follow-up for all outcome measures that are assessed in at least two studies. This will be the first IPD meta-analysis examining antidepressants versus IPT efficacy. This study has the potential to enhance our knowledge of depression treatment by comparing the short- and long-term effects of two widely used interventions across a range of outcome measures using state-of-the-art statistical techniques. It should not require a pandemic to make community engagement in research leadership essential, not optional Kevin Grumbach, Linda B. Cottler, Jen Brown, Monique LeSarre, Ricardo F. Gonzalez-Fisher, Carla D. Williams, J. Lloyd Michener, Donald E. Nease, Darius Tandon, Deepthi S. Varma, Milton Eder Journal: Journal of Clinical and Translational Science / Volume 5 / Issue 1 / 2021 Efforts to move community engagement in research from marginalized to mainstream include the NIH requiring community engagement programs in all Clinical and Translational Science Awards (CTSAs). However, the COVID-19 pandemic has exposed how little these efforts have changed the dominant culture of clinical research. When faced with the urgent need to generate knowledge about prevention and treatment of the novel coronavirus, researchers largely neglected to involve community stakeholders early in the research process. This failure cannot be divorced from the broader context of systemic racism in the US that has contributed to Black, Indigenous, and People of Color (BIPOC) communities bearing a disproportionate toll from COVID-19, being underrepresented in COVID-19 clinical trials, and expressing greater hesitancy about COVID-19 vaccination. We call on research funders and research institutions to take decisive action to make community engagement obligatory, not optional, in all clinical and translational research and to center BIPOC communities in this process. Recommended actions include funding agencies requiring all research proposals involving human participants to include a community engagement plan, providing adequate funding to support ongoing community engagement, including community stakeholders in agency governance and proposal reviews, promoting racial and ethnic diversity in the research workforce, and making a course in community engaged research a requirement for Masters of Clinical Research curricula.
Expected Utility and Risk Preferences: more about expected utility theory in nursing.

Expected utility theory is commonly applied in nursing behaviour; it mostly applies to patient care and treatment and is used in day-to-day decision-making (McKenna). The benefits of this skill greatly impact the patient, as having a nurse critically analyse possible treatments and care options around a single patient's situation ensures a patient-centred care approach and …

Expected utility theory is the workhorse model of choice under risk. Unfortunately, it is another model which has something unobservable: the utility of every possible outcome of a lottery. So we have to figure out how to test it; we have already gone through this process for the model of 'standard' (i.e. not expected) utility maximization.

Application: expected utility theory and prospect theory. Many people enjoy watching game shows. The premise of such shows is that for very little effort you might receive a huge pay-off. One of the most popular game shows in recent years is a show titled Deal or No Deal.

Expected utility theory (prepared for the Handbook of Economic Methodology, London, Edward Elgar): Expected Utility Theory (EUT) states that the decision maker (DM) chooses between risky or uncertain prospects by comparing their expected utility …

Expected utility: the best we could hope for is representation by a utility function of the following form. Definition: a utility function $U: P \to \mathbb{R}$ has an expected utility form if there exists a function $u: C \to \mathbb{R}$ such that
$$U(p) = \sum_{c \in C} p(c)\, u(c) \quad \text{for all } p \in P.$$
In this case, the function $U$ …

Expected utility theory is a theory which estimates the likely utility of an action when there is uncertainty about the outcome. It suggests that the rational choice is to choose the action with the highest expected utility. This theory notes that the utility of a money …

The expected utility theory then says that if the axioms provided by von Neumann and Morgenstern are satisfied, then individuals behave as if they were trying to maximize expected utility. The most important insight of the theory is that the expected value of the dollar outcomes may provide a ranking of choices different from those given by … that are currently ignored in EU theory. Finally, the discussion section synthesizes the divergent strands of research touched upon, with an eye to future roles of the EU model.

Expected utility variants: expected utility models are concerned with choices among risky prospects whose outcomes may be either single or …

• Expected utility allows people to compare gambles. • Given two gambles, we assume people prefer the situation that generates the greatest expected utility — people maximize expected utility. Example: Job A offers a certain income of $50K; Job B offers a 50% chance of $10K and a 50% chance of $90K. Expected income is the same ($50K), but in one case …

From its normative perspective, Subjective Expected Utility Theory stipulates that individuals should "maximise" their expected utility by choosing the decision option that has the highest probability of leading to an outcome that most corresponds with their personal values or beliefs. In decision analysis, the decision problem is structured using a decision tree, in which the probability and …

The expected utility theory allows the expected value of utility to represent preference under risk. While expected utility is a simple application of statistical concepts to utility theory, the preference of an individual does not have to comply with the statistical operations. As a result, the application of expected utility …

Conventional expected utility theory and prospect theory: under the simplest form, conventional expected utility theory assumes that a consumer's utility, U, is a function of disposable income, Y. Assuming a health insurance context, there is a probability, x, that the consumer will become …

The utility theory (UT) utilized in this work is the expected utility theory (EUT) [79][80][81]. It was first proposed for the solution to the Saint Petersburg paradox [79] …

We define "expected utility theory" as the theory of decision-making under risk based on a set of axioms for a preference ordering that includes the independence axiom, or an alternative that implies that the (expected) utility function that represents the ordering is linear in …

• Expected utility theory adds to this preferences over uncertain combinations of bundles, where uncertainty means that these bundles will be available with known probabilities that are less than unity. • Hence, EU theory is a superstructure that sits atop consumer theory.

Shanteau J & Pingenot A, in M. W. Kattan (Ed.), Encyclopedia of Medical Decision Making, "Subjective Expected Utility Theory". Definition: Subjective Expected Utility (SEU) is an approach to …

Abstract: although the seeds of expected utility theory were sown almost two and one-half centuries ago by Daniel Bernoulli and Gabriel Cramer, the first rigorous axiomatization of the theory was developed by John von Neumann and Oskar Morgenstern.

Expected utility theory comes from a series of assumptions (axioms) on these prospects: the probability-weighted expected value of the different possible utility levels. Say one has to choose between two prospects. Based on assumptions such as ordering and …

Expected utility (EU) theory remains the dominant approach for modeling risky decision-making and has been considered the major paradigm in decision making since World War II, being used predictively in economics and finance, prescriptively in management science, and descriptively in psychology. Furthermore, EU is the common economic approach for addressing public policy …

A variety of educational approaches were reported. Study quality and content reporting was generally poor. Pedagogical theories were widely used, but use of decision theory (with the exception of subjective expected utility theory, implicit in decision analysis) was rare. The effectiveness and efficacy of interventions was mixed.

The theory of rational choice has mainly been developed as a core theory for the science of economics, but it finds wide application also in areas like operations research, risk analysis, organizational theory, and so forth. Apart from rational-choice theory, the name expected-utility theory is also much in …

Economic theory and nursing administration are a good fit when balanced with the values and goals of nursing. In the context of a good or service with high utility, the expected response is an increase in price to compensate for the increase in cost. In other words, goods and services with high utility have …

Expected utility, in decision theory, is the expected value of an action to an agent, calculated by multiplying the value to the agent of each possible outcome of the action by the probability of that outcome occurring and then summing those … The concept of expected utility is used to elucidate decisions made under conditions of risk. According to standard decision theory, when …

Utility theory: the expected utility/subjective probability model of risk preferences and beliefs has long been the preeminent model of individual choice under conditions of uncertainty. It exhibits a tremendous flexibility in representing aspects of attitudes toward risk and has a well-developed …

Expected uncertain utility theory: fAg is the unique act h such that A ⊂ {h = f} and A^c ⊂ {h = g}. Ideal events are events E such that Savage's sure-thing principle holds for E and E^c.

Our theory enjoys a weak form of the expected utility hypothesis. In the case of \(\rho = \infty\), restricting our attention to the set of measurable pure alternatives, Sect. 7 shows that our theory exhibits a form of the classical EU theory. We provide a further extension of \(\succsim_{\infty}\) to have the full form of the classical EU theory …

Descriptive and prescriptive models of decision-making: implications for the development of decision aids. IEEE Transactions on Systems, Man, and Cybernetics; Elke Weber. Weakening the assumptions of the latter has led to the development of new theories such as … One goal of the paper is thus to provide a foundation for the development of flexible decision aids that minimize the impact of psychological biases and shortcomings; the first part of the paper reviews recent work in this area with emphasis on prescriptive procedures. Experimental psychology has uncovered many instances where decision makers consistently and persistently violate the postulates of classical normative theory. Early counterexamples [2], [15] and more recent data [33] have largely questioned the empirical validity of the substitution principle of EU theory; several good reviews of this literature exist and need not be repeated here (see [40], [48], [63]). Use of the representative heuristic, for example, can under some circumstances lead to the neglect of base rates [58]; use of the availability heuristic can lead to overestimation of easily imagined events and to the perception of illusory correlations. A prescriptive approach to decision aiding goes through four principal stages: (1) problem formulation, (2) solution, (3) post-solution …
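The Job A/Job B example above can be made concrete with a short numerical sketch. This is a minimal illustration, not taken from any of the excerpted sources; the square-root utility function is an assumption chosen only to show how a concave (risk-averse) utility separates expected income from expected utility.

```python
import math

# Two prospects with equal expected income (the Job A / Job B example):
# Job A pays $50K for certain; Job B pays $10K or $90K with probability 0.5 each.
job_a = [(1.0, 50_000)]
job_b = [(0.5, 10_000), (0.5, 90_000)]

def expected_value(prospect):
    """Probability-weighted average of the monetary outcomes."""
    return sum(p * x for p, x in prospect)

def expected_utility(prospect, u=math.sqrt):
    """Probability-weighted average of u(outcome); sqrt is an assumed concave utility."""
    return sum(p * u(x) for p, x in prospect)

for name, prospect in [("Job A", job_a), ("Job B", job_b)]:
    print(name,
          "expected income:", expected_value(prospect),
          "expected utility:", round(expected_utility(prospect), 2))

# Expected income is 50,000 in both cases, but the concave utility ranks the
# certain income (~223.6) above the gamble (200), so a risk-averse chooser prefers Job A.
```

The point of the sketch is exactly the one made in the excerpt: ranking by expected utility can differ from ranking by expected dollar value once the utility of outcomes is nonlinear.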
The role of microfinance institutions on poverty reduction in Ethiopia: the case of Oromia Credit and Saving Share Company at Welmera district Dejene Adugna Chomen ORCID: orcid.org/0000-0003-4327-32981 The purpose of this study is to assess the contribution of Oromia Credit and Saving Share Company microfinance institution on poverty alleviation in Welmera district, Oromia Special Zone Surrounding Finfine, Oromia Regional State, Ethiopia. Both random and purposive sampling techniques were used for data collection. Three hundred and fifty-seven respondents were selected from twelve different villages for the data collection. The study used a binary logistic regression to identify the key determinants of the income improvement of respondents. The findings confirmed that education level, voluntary saving, and utilization of loan for the intended purposes are statistically significant and positively contributed to the income improvement of the respondents in the study area. The finding revealed that most of the respondents' income improved after they joined the program which impacted positively in improving their standards of living. Initially, microfinance was introduced to the globe by Muhammad Yunus in 1976 in Jobra's village in Bangladesh [29]. It has currently been an effective instrument for poverty reduction [4]. However, the contribution of microfinance services in poverty reduction got more attention in 2005, after the United Nations (UN) announced the year of international microcredit. Many microfinance institutions have arisen and have attracted the poorer communities and have developed new strategies to realize their vision [29]. Since then, most developing countries have been using microfinance as the best strategy to eradicate poverty [19, 45]. Several microfinance institutions (MFIs) emerged in Africa to fulfill the limitless need for financial services and provide different benefits in the past decades. For instance, some of these microfinance institutions successfully address materials shortage, the tangible materials like goods, intangible (services) to realize them. Once properly managed, microfinancing's materials benefits can range beyond the household into the community [45]. However, some scholars describe that few of these microfinance institutions focus on offering loans, whereas others offer both credit facilities and gather the deposit[12]. In Ethiopia, microfinance was introduced in 1995 to reduce poverty, and since then, Ethiopia's government has stimulated the expansion of modern financial services in the country. Presently, around 31 licensed microfinance institutions are operating throughout the country [17]. In recent times, the government of Ethiopia developed various developmental strategies such as a poverty reduction strategy paper which is aimed at enhancing and supportable growth, among other documents, considered microfinance as the best reference in achieving the intended development objectives and limiting the risky trends in poverty problem and meeting the millennium development goals [46]. Normally, almost most of Ethiopia's microfinance institutions have common goals: poverty reduction by providing loans and saving services by using the group-approach lending system [20]. The Oromia Credit and Saving Share Company (OCSSCo) microfinance institution was introduced in the Oromia region in 1996 to deliver microfinance facilities (credit and saving) to the farmers in the rural areas in the region [44]. 
The OCSSCo microfinance institution is one of the country's biggest microfinance institutions in terms of loan provision and having a high number of customers [16].Many research findings in Ethiopia link microfinance to poverty reduction. For instance, Abebe [1] on the Specialized Financial Promotion Institute (SFPI) showed that after the households took the loan from the institution, their average monthly income increased and the intervention improved the beneficiaries' living standard. Similarly, a study of the Dedebit Credit and Saving Institution (DECSI) indicated that the institution contributed to the beneficiaries' financial and physical well-being (household consumption and housing improvement) [22]. Even though various studies were carried out to identify the importance of MFIs in reducing poverty in Ethiopia, to the best of the author's knowledge, this is the first research to access the contribution of microfinance institutions in poverty alleviation in this study area. Therefore, this research's broad aim is to assess the contribution of Oromia Credit and Saving Share Company in poverty reduction at Welmera district, Oromia Special Zone Surrounding Finfine, Oromia Regional State, Ethiopia. This study has the following specific objectives: (1) to assess the contribution of microfinance on children's education enrollment and the saving attitude of the beneficiaries after they join the program and (2) to identify factors that determine the income improvement of the beneficiaries. Microfinance can be effective in poverty reduction if it is integrated with other developmental strategies that work to meet the poor's basic needs to take them from poverty [5, 32, 33]. Several studies in different disciplines used different approaches to assess the impacts of microfinance in poverty reduction. For instance, the interventions of the microfinance program influence social associations somewhat through their economic impacts. In various examples, credit systems implementers have demanded that the work leads to advanced social transformation, by authorizing women and shifting gender relations in the household and the community [2]. Likewise, Nichols [38] tried to conclude the influence of microfinance on the poor in China's rural areas. He found that microcredit had a range of positive impacts on a poor community in central China. A study done in Vietnam exhibited that after the beneficiaries joined the microfinance institutions, they saw significant progress in their beneficiaries' income and consumption [15]. Another study carried out in India revealed that the poor improve their microenterprises, smooth consumption, rise returns making ability, and appreciate a better quality of life because of microcredit's efficient provision [4]. In a panel data approach/framework, Khandker [30] investigate the relationship between microfinance and poverty for a sample size of 1,789 households drawn from 87 villages in 29 Thanas in Bangladesh and show that access to microfinance helped both poor female participants in the program and the local economy. The study confirms that there is evidence of reducing overall poverty decline at the community level. Research conducted in Pakistan also shows that microfinance programs have positively contributed to household expenditure and children's education [39]. Correspondingly, a study conducted in Zanzibar concluded that after the household became the microfinance institution customer, they could improve their income to expand their business [23]. 
Similarly, a report conducted in Nigeria confirmed that microfinance contributed to eradicating poverty among women after receiving loans by improving their economic status and social and political conditions [25]. Besides, participation in an MFI program empowers clients to change managing their enterprise ones [34]. Likewise, a study undertaken in Kenya showed that microfinance institutions could contribute to poverty reduction [40]. The study also actively exhibited the loan use helped the beneficiaries have their dairy farm and small retail shops. Correspondingly, research conducted by Chowdhury [13] and Abebe [1] found that the households' living conditions improved because of microfinance intervention. Contrary to these views, Mosley and Hulme [35] found that microfinance institutions' clients could not generate income for the poor sections of the people. Similarly, research conducted in Bangladesh suggested that no result to provision assertions that programs raise consumption level and/or the program's contribution that shows incremental education enrollment for children [33]. To sum up, extant works of literature are inconsistent and inconclusive. Furthermore, previous study works have not examined the role of Oromia Credit and Saving Share Company microfinance institution on poverty alleviation by targeting the study area. This study, therefore, intends to contribute to the literature by filling these gaps. The study was carried out in Welmera District which is found in Oromia Special Zone Surrounding Finfine, Oromia Regional State, Ethiopia. The district's capital town Holeta is at a distance of 29 km to the west along the main road to Ambo from Addis Ababa (which is the capital city of the country). The district is bounded on the south by the Sebeta Hawas district, on the west by Ejere district, on the north by Mulo district, on the northeast by the Sululta district, on the east by the city of Burayu. The Welmera district comprises 23 rural villages and 3 towns. Data sources and sample size determination The study used several primary data collection methods like survey/questionnaire, direct observation, and key informant interviews (KII). The questionnaire was a combination of closed and open-ended questions. The choice of sampling size/designs depends on the type of research and the type of conclusion the researcher wants to draw[31]. To meet the objectives of this study, we have implemented both purposive and random sampling techniques. To select the respondents from the district through random sampling techniques, the first step was to identify the total number of villages known as beneficiaries of the institution using purposive sampling technique. Out of eighteen (18) villages, twelve (12) were selected from the district using a random sampling method. To select the required number of respondents, we applied random sampling. According to Neuman [37], a population size less than 1000 must take 30% a sample ratio. However, as Saunders (2005) cited in Muiruri [36], a sample is considered adequate if the sample size is greater than 30 and over 10% of the population. In contrast to Neuman and Saunders, this study used 15% of the total population as a sample (see Table 1). Table 1 Sample villages and sample size. Model specifications Most of the time, the binary logistic regression framework depicts the connections between one or more independent variables as well as a binary outcome variable [18]. 
Furthermore, binary logistic regression is the most applicable model when the dependent variable is a dummy variable [43]. Thus, to identify the key factors behind the beneficiaries' improvement in income, this study used binary logistic regression analysis. The generalized linear model was written as follows:
$$\begin{aligned} Y & = B_{0} + B_{1} X_{1} + B_{2} X_{2} + B_{3} X_{3} + B_{4} X_{4} + B_{5} X_{5} \\ & \quad + B_{6} X_{6} + B_{7} X_{7} + B_{8} X_{8} + B_{9} X_{9} + B_{10} X_{10} + \mu \\ \end{aligned}$$
where X1 to X4 represent quantitative/continuous independent variables, X5 to X10 represent independent variables treated as dummies, and µ is the error term.
$$Y = \begin{cases} 1, & \text{if income improved} \\ 0, & \text{otherwise} \end{cases}$$
Specifically, the binary logistic regression was expressed as below:
$$Y = \ln \left( \frac{p}{1 - p} \right) = B_{0} + B_{1} X_{1} + B_{2} X_{2} + B_{3} X_{3} + \cdots + B_{N} X_{N}$$
where p is the probability of income improvement, B0 is the constant term, and B1 to BN are the parameters. Thus, the model is specifically expressed as follows:
$$\text{INIMPr} = \beta_{0} + \beta_{1}\,\text{age} + \beta_{2}\,\text{fms} + \beta_{3}\,\text{dwm} + \beta_{4}\,\text{dfm} + \beta_{5}\,\text{s} + \beta_{6}\,\text{tr} + \beta_{7}\,\text{ul} + \beta_{8}\,\text{ms} + \beta_{9}\,\text{edus} + \beta_{10}\,\text{gend}$$
where INIMPr stands for income improvement, and age, fms, dwm, dfm, s, tr, ul, ms, edus, and gend stand for age, family size, duration with microfinance, distance from the lending institution, saving, training, utilization of loan, marital status, education status, and gender, respectively (see Table 2). Table 2 Description of explanatory variables in the study. Purpose of loan and loan usage of the respondents All beneficiaries took loans from the institution to satisfy their unlimited needs. In this study, since all beneficiaries are farmers, they used the loan for agricultural and related activities. 51.26% of respondents took the loan for purchasing agricultural inputs, such as fertilizer and improved seed (see Table 3). However, 16.5% of them used loans for animal fattening, and only 1.16% of the respondents used the loan for other purposes (poultry production). Thus, this study confirms that most of the respondents in the study area used the loan to purchase agricultural inputs and for animal fattening. Besides, the study tried to figure out whether the respondents used the loans for their intended purpose or not. From the survey, a majority (94.67%) of the respondents used the loan for the proposed purpose. However, a few respondents used the loan for unintended purposes (see Table 3). Some key informants also confirmed that some beneficiaries did not use the loan for the intended purpose and sometimes could not repay the loan. This shows that if the beneficiaries do not use the loan for the intended purpose, they cannot improve their income and cannot repay the loan. Table 3 Respondents' loan purpose and usage. Impacts of microfinance at the beneficiaries' level Impacts on beneficiaries' income One of the measures of microfinance institution loan effectiveness is its ability to generate income for its beneficiaries. An increase in income can be a component of a better life. In other words, the major objectives of microfinance are to help in generating income for low-income households and to help in alleviating poverty.
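The income-improvement specification above can be written down directly in code. The following is an illustrative sketch only, not the authors' actual analysis: it assumes a pandas DataFrame read from a hypothetical file whose column names mirror the abbreviations defined for the model (INIM, age, fms, dwm, dfm, s, tr, ul, ms, edus, gend), with the dummy variables coded 0/1.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data; column names mirror the abbreviations in the model above.
# INIM = 1 if the respondent's income improved, 0 otherwise; dummies are coded 0/1.
df = pd.read_csv("welmera_survey.csv")  # assumed file name, one row per respondent

# Binary logit: log-odds of income improvement on the continuous covariates
# (age, fms, dwm, dfm) and the dummy covariates (s, tr, ul, ms, edus, gend).
formula = "INIM ~ age + fms + dwm + dfm + s + tr + ul + ms + edus + gend"
result = smf.logit(formula, data=df).fit()

print(result.summary())                # B, standard errors, z statistics, p-values
print(np.exp(result.params).round(3))  # Exp(B): odds ratios, as reported in Table 11
```

The exponentiated coefficients printed at the end correspond to the Exp(β) odds ratios that the results section interprets for gender, education, saving, distance, and loan utilization.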
The majority of respondents' incomes were increased after they took loans from a microfinance institution (see Table 4). However, due to the improper use of the loan, the income of a few beneficiaries in the study area was decreased (see Table 4). Thus, as compared to before taking the loan, after they took a loan a large number of the respondents' income increased. Table 4 Income change and children education enrollment of the respondents after taking the loan. Besides, key informant's interviews also confirmed that the beneficiaries' status of income was improved after they took loans. Thus, it was found that the poor have undoubtedly benefited from the institution in several ways. From this, the study concluded that the microfinance institution has a great contribution to improve the living standards of the households in the study area. This result is confirmed with the previous studies done by Abebe [1] and Gutu and Mulugeta [21] in different parts of the country. Impacts on education A service provided to the beneficiaries is believed in one way or another way to promote the education status of children. This study observed that the beneficiaries' numbers of children attending school were increased after the beneficiaries joined the institution. As presented in the table above, 69% of the beneficiaries responded that after they took the loan, they could buy materials for their children and educate them as compared to before (see Table 4). From this, the results conclude that the OCSSCO microfinance brought an important influence on children's education enrollment which in turn helps to lift their families out of poverty. This study promotes the findings of Bantige [9]; Bhuiya [11], and Mosley and Rock [35], who found that the loan from microfinance help the respondents to give better education to their children. Correspondingly, another study that has been done on the Amhara Credit and Saving Institute (ACSI) in other regions of the country also points out that all beneficiaries in the study area could send their children to the school after the beneficiaries became the clients of the institution [6]. However, this finding converses with the finding of Banerjee et al. [7] which suggested that there is no change in the probability that children or teenagers are enrolled in school after the beneficiaries took the loan. Impacts on nutrition improvement Access to microfinance can create improvement in the nutritional intake of the beneficiaries and their families based on their effectiveness in the program. Regarding the improvement of food, the respondents were asked to reflect their responses about food diet improvement like increase consumption of food in the amount and variety. As stated in Table 5 from the total respondents, about 204 (57.14%) gave their responses from strongly agree to agree that there is an increase in the quantity of consumption of food after they joined the program. However, about 153 (42.85%) of the respondents don't agree with the improvement. Furthermore, about 192 (53.78%) of the respondents gave their responses from strongly agree to agree for the existence of an increase in food in variety after they joined the institution. Table 5 Nutritional improvements of the respondents after the program intervention. About 165 (46.22%) gave their response regarding increase food in variety from neutral to strongly disagree (see Table 5). This shows that how microfinance contributes to improving beneficiaries' food intake. 
Thus, since the beneficiaries' diet improved, it is possible to conclude that OCSSCo microfinance has an important role in reducing poverty by lifting the people from poverty. This result confirms another study done in the other region of the country by Doocy et al. [14], which indicated that microfinance programs have a significant influence on the improvement of nutritional position as well as on the well-being of the women participants and their families. Similarly, Ajit and Anu [4] report the same result. Impacts on the beneficiaries saving attitude In most cases, knowledge of microfinance improved attitude toward microcredit and the saving behavior of its beneficiaries [26]. Table 6 confirms that 236 (66.10%) of the respondents responded that Oromia Credit and Saving Share Company (OCSSCo) has developed their culture of saving after they joined the institution. However, 121 (33.89%) responded negatively. Correspondingly, research done by Gutu and Mulugeta [21] indicated improvement of profit, saving, and diet after the beneficiaries joined the institution. Furthermore, this study observed that about 87% of the beneficiaries know the benefit of credit and saving institutions on poverty reduction (see Table 6). From this, it is possible to conclude that most of the respondents have a good awareness of credit and saving and the benefit of the institution on poverty reduction. Thus, this indicates that OCSSCo had a major role to improve the saving habits of the beneficiaries which are the source of capital and finally a key for the economic development of the country. Table 6 Respondents attitude about saving. Training is very crucial to develop private outcomes and inclusive well-being for beneficiaries and to progress institutional consequences for the microfinance institutions [27]. In this regard, in addition to financial services, Oromia Credit and Saving Share Company provides nonfinancial services for the beneficiaries to build up their capacity. A positive response was obtained from the respondents which indicated about 96% of them got training from the institution especially on saving, loan utilization, and about the market and general training (see Table 7). Nevertheless, insignificant numbers of respondents (4%) do not get training. Therefore, the study concludes that the given training helped the beneficiaries in adapting the behavior of saving and investing the loan in income-generating activities. Table 7 Respondents about training. Challenges of the beneficiaries to access the service Microfinance institutions in Ethiopia are facing many challenges in operating efficiently [10]. Similarly, the beneficiaries of microfinance institutions face various constraints and challenges. Concerning challenges and constraints, more than (29%) of the beneficiaries said their great problem for accessing funds from this microfinance institution was the rate of interest they were charged for assessing funds (see Table 8). Likewise, a large share of the respondents also sees the distance from the lending institute (26%) as another big challenge. Even though the two mentioned above are the major challenges, group lending, time of credit available, time of repayment, working ethics, and working time of the institute were also seen as challenges by the respondents in the study area (see Table 8). This study agrees with the study done by Sabit and Mohammed [42] on the same microfinance institution in the southwest part of the country. Table 8 Challenges of the respondents to access service. 
Econometric analyses
Result of multicollinearity test
Multicollinearity problems occur when two or more predictor variables in a multiple regression model are highly interrelated [3]. As measures of multicollinearity in the model, the variance inflation factor (VIF) and tolerance levels are used. Tolerance levels below 0.40 and VIF values above 2.50 are indications of multicollinearity [3]. As observed in Table 9, the model has no multicollinearity problem (see Table 9).
Table 9 Multicollinearity test among explanatory variables.
Test of goodness of fit and model summary
Goodness-of-fit tests can be used when the data are discrete or continuous, including grouped continuous data [24]. The Hosmer–Lemeshow goodness-of-fit test is usually the most appropriate and most often used test for binary logistic regression models [18]. For the model to be considered statistically fit to describe the data, the p-value must be insignificant. Accordingly, Table 10 shows χ² (df = 8, N = 357) = 8.800 with p = 0.359, which is above the 5% significance level (see Table 10). Furthermore, the omnibus test of model coefficients in the last iteration reveals that the addition of each variable to the model is statistically significant with p = 0.000, which is less than the cutoff value of 0.05. Moreover, the share of the total variation of the dependent variable explained by the model is measured using Nagelkerke R Square. Accordingly, Nagelkerke R Square indicated that the model explained around 61.7% of the variation in the data (see Table 10). Likewise, the predictive accuracy of the model was calculated using the classification table. The study found that the overall percentage of correct prediction was 89.7% (see Table 10).
Table 10 Hosmer and Lemeshow test and model summary.
Result of the binary logistic regression
Binary logistic regression is specified as a more exact and effective estimation method because of its capacity to handle the essentially binary nature of the dependent variable [41]. Additionally, it allows the estimation of group membership when independent variables are continuous, discrete, or both [8]. Table 11 reports the logistic regression results, including the coefficients, standard errors, Exp(β), Wald test statistics, and p-value for each predictor. Accordingly, the statistical significance and the result for each of the predictor variables are described in Table 11.
Table 11 The output of binary logistic regressions.
From Table 11, gender, education level, having voluntary savings, distance from the lending institute, and utilization of loans for the intended purposes are determinants of income improvement. Gender has a significant effect on beneficiaries' income improvement: in the table above, gender (1) denotes male households. The negative B (coefficient of the independent variable) indicates that male beneficiaries have lower odds of income improvement than female beneficiaries; the odds of income improvement for males are 0.400 times those for females (see Table 11). This is because females tend to save more and manage risk more carefully than males. Educated (literate) people have odds of income improvement 7.658 times higher than illiterate people. This is because educated persons can use the loan properly, as stated in their plan, better than illiterate groups.
They have a better understanding of how to invest the loan in income-generating activities than uneducated groups. Beneficiaries who have voluntary savings have odds of income improvement 2.348 times higher than those without voluntary savings (see Table 11). This is because more savings bring more opportunities to invest in income-generating activities, which helps them improve their income. This result confirms the study in this country by Kebede and Menza [28], which found that the total number of voluntary savers increases from year to year. Besides, this study examined the utilization of the loan. Beneficiaries who used the loan for the intended purposes had odds of income improvement 64.169 times higher than those who did not use the loan for the intended purposes. However, the study shows that as the distance from the lending institution increases by one unit, the odds of beneficiaries' income improvement change by a factor of 0.91, that is, they decrease (see Table 11). This is because beneficiaries far away from the lending institution lack access to markets and information, and they do not receive regular supervision from the institution compared with those living nearest to it.
In this study, we aimed to evaluate the role of microfinance institutions in poverty reduction in Ethiopia, taking Oromia Credit and Saving Share Company (OCSSCo) as an example. This study points out that the intervention of OCSSCo in the study area helped the beneficiaries improve their living conditions. The study found that after the beneficiaries joined the institution, their income increased, their nutrition intake improved, and most beneficiaries could buy necessary items and materials to educate their children. Besides these, the bulk of the beneficiaries stated that they developed the habit of voluntary saving after they joined the institution. This implies that the microfinance institution made efforts to increase the beneficiaries' income by putting proper monitoring and follow-up systems in place. However, regardless of these achievements, most of the microfinance beneficiaries in the study area strongly disagree with the institution's rate of interest on loans. From the binary logistic regression result, the study concluded that education level, gender, distance from the lending institute, utilization of the loan for the intended purposes, and voluntary savings are significant in the model in determining income improvement of the beneficiaries. In contrast, other explanatory variables such as age, family size, marital status, membership duration with the institution, and training are not strongly associated with income improvement, so they do not significantly affect it. This study concludes that Oromia Credit and Saving Share Company microfinance positively impacted beneficiaries' living conditions. This study recommends that, to change many people's lives, the institution should reduce the interest rate charged to the respondents when they access financial products.
On request, the author can provide the data used for this study.
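For readers who wish to reproduce this type of analysis on comparable survey data, the following is a minimal sketch of how the multicollinearity check, the binary logit with its odds ratios (Exp(β)), and a Hosmer–Lemeshow style grouping could be computed in Python with pandas and statsmodels. The data frame df, the outcome column income_improved, and the predictor names are hypothetical placeholders, not the study's actual variable names.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy.stats import chi2

# Hypothetical survey data: a binary outcome plus candidate predictors.
predictors = ["gender", "education", "voluntary_saving", "distance_km", "loan_used_as_planned"]
X = sm.add_constant(df[predictors].astype(float))   # df is assumed to exist
y = df["income_improved"].astype(int)

# 1) Multicollinearity check: VIF for each predictor (constant excluded from the report).
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=predictors,
)
print(vif)   # values above ~2.5 would signal a problem under the cutoff used in the text

# 2) Binary logistic regression; Exp(B) corresponds to the odds ratios reported in Table 11.
fit = sm.Logit(y, X).fit(disp=False)
print(pd.DataFrame({"B": fit.params, "Exp(B)": np.exp(fit.params), "p": fit.pvalues}))

# 3) Hosmer-Lemeshow style test: group fitted probabilities into deciles and compare
#    observed vs. expected counts with a chi-square statistic (df = number of groups - 2).
p_hat = pd.Series(fit.predict(X), index=X.index)
groups = pd.qcut(p_hat, 10, labels=False, duplicates="drop")
obs = y.groupby(groups).sum()
exp = p_hat.groupby(groups).sum()
n_g = y.groupby(groups).size()
hl = (((obs - exp) ** 2) / (exp * (1 - exp / n_g))).sum()
print("HL chi2 =", hl, "p =", chi2.sf(hl, obs.shape[0] - 2))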
ACSI: Amhara Credit and Saving Institute DECSI: Dedebit Credit and Saving Institutions MFIs: Microfinance institutions OCSSCo: Oromia Credit and Saving Share Company VIF: Variance inflation factor SFPI: Specialized Financial and Promotion Institution KII: Key informant interviews UN: Abebe T (2006) Impact of microfinance on poverty reduction in Ethiopia, the case of three branches of Specialized Finance and Promotional Institution(SFPI): Addis Ababa University, Masters Thesis, Ethiopia Ackerly BA (1995) Testing the tools of development: credit programmes, loan involvement, and women's empowerment. IDS Bull 26(3):56–68 Adeboye N, Fagoyinbo IS, Olatayo T (2014) Estimation of the effect of multicollinearity on the standard error for regression coefficients. IOSR J Math 10(4):16–20 Ajit KB, Anu B (2012) Microfinance and poverty reduction in India. Integr Rev J Manag 5(1):31–35 Alemu KT (2008) Microfinance as a strategy for poverty reduction: a comparative analysis of ACSI and wisdom microfinance institution in Ethiopia. Master Theis, The Netherlands, 1–57 Asmamaw YC (2014) Impact of microfinance on living standards, empowerment and poverty alleviation of poor people in Ethiopia. J Field Actions 5:43–67 Banerjee A, Duflo E, Glennerster R, Kinnan C (2015) The miracle of microfinance? Evidence from a randomized evaluation. Am Econ J Appl Econ 7(1):1–53 Barbara GT, Linda SF (2012) Using multivariate statistics. In: California State University, Northridge: Vol. Fifth Edit Batinge BK (2014) An assessment of the impact of microfinance on the rural women in North Ghana. Eastern Mediterranean University Beklentİlerİ ZV, Alemu S (2018) The performance of Ethiopian microfinance institutions, challenges and prospects. Karadeniz Teknik Üniversitesi Sosyal Bilimler Enstitüsü Sosyal Bilimler Dergisi 8(16):307–324 Bhuiya M (2016) Impact of microfinance on health, education, and income of rural households: evidence from Bangladesh. Master Thesis, University of Dhaka, Bangladesh Blavy R, Basu A, Yülek MÂ (2004) Microfinance in Africa: experience and lessons from selected African Countries. IMF Working Papers 04(174):1–23 Chowdhury A (2009) Microfinance as a poverty reduction tool—a critical assessment. DESA Working Paper No. 89, 1–13 Doocy S, Teferra S, Norell D, Burnham G (2005) Credit program outcomes: coping capacity and nutritional status in the food insecure context of Ethiopia. Soc Sci Med 60(10):2371–2382 Duong HA, Nghiem HS (2014) Effects of microfinance on poverty reduction in vietnam : a pseudo-panel data analysis. J Acc Finance Econ 4(2):58–67 Ebisa D, Getachew N, Fikadu M (2013) Filling the breach: microfinance. J Bus Econ Manag 1(1):10–17 Eshete A (2010) Assessment of the role of microfinance Institutions in urban poverty Alleviation. Addis Ababa University, Master Thesis Fagerland MW, Hosmer DW (2012) A generalized Hosmer-Lemeshow goodness-of-fit test for multinomial logistic regression models. Stata J 12(3):447–453 García-Pérez I, Fernández-Izquierdo MÁ, Muñoz-Torres MJ (2020) Microfinance institutions fostering sustainable development by region. Sustainability (Switzerland) 12(7):1–23 Gobezie G (2009) Sustainable rural finance: prospects, challenges, and implications. Int NGO J 4(2):012–026 Gutu F, Mulugeta W (2016) The role of microfinance on women's economic empowerment in Southwest Ethiopia. Glob J Manag Bus Res B Econ Commerce 16(6):1–10 Guush B, Cornelis G (2011) Does microfinance reduction rural poverty? Evidence-based on household panel data from Northern Ethiopia. 
Am J Agric Econ 93:43–55 Haji YH (2013) The contribution of microfinance institutions to poverty reduction at south District in Zanzibar. The Open University of Tanzania Haschenburger JK, Spinelli JJ (2005) Assessing the goodness-of-fit of statistical distributions when data are grouped. Math Geol 37(3):261–276 Irobi CN (2008) Microfinance and poverty alleviation. A case of Obazu Progressive Women Association Mbieri, Imo State -Nigeria. Master Thesis, SLU Jebarajakirthy C, Lobo A, Hewege C (2015) Enhancing youth's attitudes towards microcredit in the bottom of the pyramid markets. Int J Consum Stud 39(2):180–192 Karlan D, Valdivia M (2006) Teaching entrepreneurship: impact of business training on microfinance clients and institutions. Yale University, Center Discussion Paper No.941, 1–46 Kebede Menza S, Kebede T (2016) The impact of microfinance on household saving: the case of amhara credit and saving institution feres bet Sub-Branch, Degadamot Woreda. J Poverty J 27:64–82 Khan M, Rahaman MA (2007) Impact of microfinance on living standards, empowerment and poverty alleviation of poor people: a case study on microfinance in the Chittagong District of Bangladesh. Umea School of Business, Master Thesis, 5(13), 1–95 Khandker SR (2005) Microfinance and poverty: evidence using panel data from Bangladesh. World Bank Econ Rev 19(2):263–286 Kothari C (1990) Research methodology: methods and techniques ((Second Re). New Age International Publishers Lakwo A (2006) Microfinance, rural livelihoods, and women's empowerment in Uganda. In African Studies Centre Morduch J (2000) The microfinance schism. Princeton University, New Jersey, USA. World Development, 28(4), 617–629 Morris G, Barnes C (2005) An assessment of the impact of microfinance: a case study from Uganda. J Microfinance 7(1):40–54 Mosley P, Rock J (2004) Microfinance, labor markets and poverty in Africa: a study of six institutions. J Int Dev 16(3):467–500 Muiruri PM (2014) The role of micro-finance institutions to the growth of micro and small enterprises (MSE) in Thika, Kenya (Empirical Review of Non- Financial Factors). Int J Acad Res Acc Finance Manag Sci 4(4):249–262 Neuman WL (2014) Social research methods: qualitative and quantitative approaches (Seventh Ed) Nichols S (2004) A case study analysis of the impacts of microfinance upon the Lives of the Poor in Rural China.RMIT University,Melbourne,Australia Noreen U, Imran R, Zaheer A, Saif MI (2011) Impact of microfinance on poverty : a case of Pakistan. World Appl Sci J 12(6):877–883 Okibo BW, Makanga N (2014) Effects of microfinance institutions on poverty reduction in Kenya. Int J Curr Res Acad Rev 2(2):76–95 Owolabi OE (2015) Microfinance and poverty reduction in Nigeria: A case study of LAPO microfinance bank: The University of Leeds. Ph.D. Dissertation, 1–235 Sabit J, Mohammed A (2015) Role of credit and saving share company in poverty reduction in rural communities of Gumay District, Jimma Zone, South West Ethiopia. Int J Appl Econ Finance 9(1):15–24 Santoso DB (2016) Credit accessibility: the impact of microfinance on rural indonesian households. Master Thesis Tiruneh TB (2015) A comprehensive examination of the sustainability status of microfinance institutions in Ethiopia. 
Eur J Bus Manag 7(19):17–31 United Nations (2006) Microfinance in Africa: combining the Best Practices of Traditional and Modern Microfinance Approaches towards Poverty Eradication Wolday A (2004) Managing growth of microfinance institutions (MFI S): Balancing sustainability and reaching a large number of clients in Ethiopia. Ethiopian J Econ XIII(2):62–102 This study did not receive specific funding. Department of Business Economics, School of International Trade and Economics, University of International Business and Economics, Chaoyang District, Beijing, 100029, China Dejene Adugna Chomen DAC made all the contributions in writing of this manuscript. The author read and approved the final manuscript. Correspondence to Dejene Adugna Chomen. The author declares that there are no competing interests. Chomen, D.A. The role of microfinance institutions on poverty reduction in Ethiopia: the case of Oromia Credit and Saving Share Company at Welmera district. Futur Bus J 7, 44 (2021). https://doi.org/10.1186/s43093-021-00082-9 Binary logistic Welmera
Research article | Open | Open Peer Review | Published: 17 July 2015 An optimization framework for measuring spatial access over healthcare networks Zihao Li1, Nicoleta Serban1 & Julie L. Swann1,2 Measurement of healthcare spatial access over a network involves accounting for demand, supply, and network structure. Popular approaches are based on floating catchment areas; however the methods can overestimate demand over the network and fail to capture cascading effects across the system. Optimization is presented as a framework to measure spatial access. Questions related to when and why optimization should be used are addressed. The accuracy of the optimization models compared to the two-step floating catchment area method and its variations is analytically demonstrated, and a case study of specialty care for Cystic Fibrosis over the continental United States is used to compare these approaches. The optimization models capture a patient's experience rather than their opportunities and avoid overestimating patient demand. They can also capture system effects due to change based on congestion. Furthermore, the optimization models provide more elements of access than traditional catchment methods. Optimization models can incorporate user choice and other variations, and they can be useful towards targeting interventions to improve access. They can be easily adapted to measure access for different types of patients, over different provider types, or with capacity constraints in the network. Moreover, optimization models allow differences in access in rural and urban areas. Access to healthcare is widely recognized as essential for ensuring not only care of immediate health needs but also to enable health and wellness in the population. Access has multiple dimensions including accessibility, availability, affordability, accommodation, and acceptability [1–3] and is of great importance to decision makers in public health. In this paper, we focus on measurement models for spatial access over a health network with patients and providers, which is most closely related to the elements of accessibility (e.g., location and travel distance for care) and availability (e.g., coverage or the volume of providers). A healthcare network is defined as a transportation network with patients as demand nodes and providers as supply nodes, and an arc between patient and provider if the provider is accessible for the patient. The measurement models studied in this paper are designed to measure potential access based on the services that are available for use relative to population and distance. On the contrary, realized access reflects actual use of services, which can be affected by finances, behaviors, and other factors. Potential access is measurable although it is not observable. An optimization-based approach is described in this paper for quantifying potential access over the healthcare network and for estimating the impact of changes to the network. Optimization is a mathematical science that is widely accepted in engineering and science as providing a way to balance complex interactions across a system, and there is a history of using optimization to assist medical decision making [4–6]. In this paper, theoretical and practical optimization modeling techniques are used to assist with health care policy development by measuring access and computing the economics behind discrepancy of access. 
Specifically, questions such as how optimization models can be used to measure access, on what types of networks they offer the most accurate estimates of access, and ultimately, why they should be used for measuring and for suggesting interventions to improve access are addressed in this paper. The answers to these questions are useful for improving the health of populations and assisting with health policy development by informing areas of greatest need. The optimization models are compared to some existing methods. In particular, comparisons are made to variations of the two-step floating catchment area (2SFCA) method [7], including the Enhanced 2SFCA (E2SFCA) method [8] and the Modified 2SFCA (MSFCA) method [9], with some discussion of other catchment methods. The catchment methods, which are offsprings of a gravity model of attractions between populations and providers, estimate the size of population served at each provider using distance zones and compute accessibility of a community based on the availability of providers in the community's zones; communities can be captured in the zones of multiple providers. In contrast, optimization models match patients and providers based on both distances and the relatively crowdedness of each provider, and estimate the accessibility of a patient using the matching results to determine the travel distance and the corresponding crowdedness of each patient. Optimization models can take on the perspective of a centralized planner in making assignments, or they can be adapted to directly incorporate patient choice over the network. To compare the measurement models for spatial access, several specific network structures are examined, which are designed so that access measures can be compared analytically. Results on a large case study of specialty care of Cystic Fibrosis (CF), where the network has varying levels of accessibility are also provided. Analytically, this paper demonstrates that the total number of patient visits captured by all facilities in the 2SFCA methods is larger than the number of visits expected based on population size. The three-step floating catchment area (3SFCA) method [10] adds an assignment mechanism to address the competition by facilities, but the assignments are only based on distance. In contrast, in the optimization models, the willingness to travel is not only a function of distance but also of facility congestion including its size. As a result, the optimization models can capture cascading effects in the system, where a change in congestion for one population leads to different decisions and thus impacts individuals in another location. The optimization models also allow for simultaneous estimation of measures of access across the five dimensions outlined [1]. More generally, optimization models can be adapted to many contexts including different patient types (e.g., Medicaid or not), provider constraints, or others. They are also useful in optimizing interventions, where the intervention can target different aspects of access (e.g., distance versus congestion). 
Optimization framework
In healthcare decision making and service research areas, optimization models have been used to determine the best location for a new clinic [11–13], ensure that resource locations are sufficient to cover the need across a network [14], route nurses for home health services [15], improve health outcomes among communities [16, 17], and evaluate policies for pandemic influenza, breast cancer, and HIV over a network [18–20], among others. Wang [21] reviewed several cases where optimization models could be used to improve access or service over a network.
In our models, the cost of an individual is associated with two dimensions of access [1]: accessibility and availability. The first is measured with travel distance (or time). The second is measured with congestion, which for an individual is associated with the relative number of people (or visits) at a provider compared to the resources available. One can also think of this as capturing the waiting time until an appointment is available. Studies show that individuals are willing to drive further to receive an appointment more quickly [22]. Thus we assume that the utility (or disutility) associated with a patient's access is a weighted sum of the distance and a congestion term, where we scale the congestion term to trade off the relative importance between the two. We expect that the congestion weight (α) may be different for different types of healthcare services, such as primary care or specialty care (i.e., distance may have a relatively lower cost). The congestion weight can also represent the resources available at a facility.
Several elements are defined for our formulation. The total number of patients is n and the total number of facilities is m. Let i = 1,…, n be the indices of patients and j = 1,…, m be the indices of facilities. The distance between patient i and provider j is d_ij; v_i is the estimated number of visits that patient i = 1,…, n will make (demand); and α_j is the congestion weight at provider j. A dummy location can be introduced for the assignment of demand that cannot be met. The decision variables are x_ij, the percentage of time assigned to facility j from patient or community i, for each i = 1,…, n and j = 1,…, m. The formulation of the basic centralized model follows:
Objective function:
$$ \min {\displaystyle {\sum}_{i=1}^n{\displaystyle {\sum}_{j=1}^m{d}_{ij}{x}_{ij}{v}_i}}+{\displaystyle {\sum}_{j=1}^m{\alpha}_j{\left({\displaystyle {\sum}_{i=1}^n{x}_{ij}{v}_i}\right)}^2} $$
Constraints:
$$ {\displaystyle {\sum}_{j=1}^m{x}_{ij}{v}_i={v}_i,\forall i=1,\dots, n}\quad \left(\mathrm{assignment}\ \mathrm{constraint}\right) $$
$$ 0\le {x}_{ij}\le 1,\forall i=1,\dots, n\;\mathrm{and}\;\forall j=1,\dots, m. $$
The objective function (1) minimizes the weighted sum of total travel distance and total congestion across the network. Constraint (2) states that the total number of visits assigned should be v_i for each patient or community i, that is, all individuals are assigned, and constraint (3) bounds the decision variables between 0 and 1. Each individual's congestion at a visit is proportional to the total number of visits at that facility scaled by α_j. The congestion term in the objective sums over the congestion experienced by all patients, resulting in an overall term that is squared. The choice of quadratic function comes from the following idea: if n patients receive care from a provider location, then each patient experiences n units of congestion, so the total congestion is n × n = n² (similar to total latency in network congestion work [23]).
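Because the centralized model above is a convex quadratic program, a small instance can be written almost directly in a modeling layer. The sketch below is a minimal illustration in Python with CVXPY, not the C++/CPLEX and AMPL implementations used in the study; the distance matrix d, visit vector v, and congestion weights alpha are assumed inputs supplied by the user.

import numpy as np
import cvxpy as cp

# Assumed inputs: d is an n x m distance matrix, v holds the visits demanded by
# each of the n patients/communities, alpha holds each facility's congestion weight.
n, m = d.shape
V = np.outer(v, np.ones(m))                     # constant matrix with V[i, j] = v_i

x = cp.Variable((n, m), nonneg=True)            # x[i, j]: share of i's visits sent to j
visits = cp.multiply(x, V)                      # x_ij * v_i

travel_cost = cp.sum(cp.multiply(d, visits))                     # sum_ij d_ij x_ij v_i
facility_load = cp.sum(visits, axis=0)                           # sum_i x_ij v_i per facility
congestion_cost = cp.sum(cp.multiply(alpha, cp.square(facility_load)))

problem = cp.Problem(
    cp.Minimize(travel_cost + congestion_cost),
    [cp.sum(x, axis=1) == 1, x <= 1],           # assignment constraint and upper bound
)
problem.solve()

assignment = x.value                            # optimal visit shares
loads = assignment.T @ v                        # realized visits at each facility

From assignment and loads, the individual measures described later (distance traveled, congestion experienced, and coverage) can be read off directly.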
Note that when α = 0, this model gives equivalent results to assignment by shortest distance, and when α = ∞, this model gives equivalent results to equally distributing patient visits to each facility. See Additional file 1 section 1 for a process to select the congestion weight. For a patient, the number of visits to a close location is expected to be more than the number of visits to a far location because of the willingness to travel. Thus, the number of visits to each location using a function that decays with distance is determined. This is analogous to step 1 in the E2SFCA method where the population is multiplied by a weight. This also implies that the number of visits covered in the network may be less than 100 %. From the results of an optimization model, several measures of spatial access are calculated. The measures include i) the distance traveled for each patient or community; ii) the congestion experienced by each patient or community; iii) the coverage, which is defined as the ratio of visits assigned to visits needed for an individual or community. Variations on the optimization model With optimization models, many variations are possible, including through the addition of constraints, the use of different objective function values, or by differentiating decision variables by type. Here we describe a major variation in our model, optimization with user choice ("Decentralized"), and include many others such as capacity, unmet demand, and willingness to travel in Additional file 1 section 2. The traditional deterministic optimization model (as presented above) often assumes a centralized planner who makes decisions for every patient in a healthcare network to achieve the best overall objective. However, user choice can be incorporated by an equilibrium constraint that represents individual choices as in game theory [24]; we call the resulting optimization model decentralized. An overall equilibrium solution requires a user choice constraint to be satisfied for each patient visit in the network, where the constraint states that the individual cannot improve their distance and congestion of that visit by switching to another facility given the other decisions on the network. The decision variable and equilibrium constraint are defined below: x ijk = decision variable is 1 if patient i chooses facility j for visit k, or 0 otherwise; $$ {d}_{ij}+{\alpha}_j{\displaystyle {\sum}_{p=1}^n{\displaystyle {\sum}_{k=1}^{v_p}{x}_{pjk}\le {d}_{iq}+{\alpha}_q}\left({\displaystyle {\sum}_{p=1}^n{\displaystyle {\sum}_{k=1}^{v_p}{x}_{pqk}+1}}\right),\forall q\ne j,\forall i,\forall k} $$ The equilibrium condition includes a separate constraint for each patient's visit and each location when there is no distance decay function. The left-hand side is the distance and congestion associated with current facility choice j for a visit k, and the right-hand side is the distance and congestion at any location other than j. See Additional file 1 section 3 for more details. Review of catchment models Gravity models use the following general form to calculate an "attraction" measure for each patient i: $$ {A}_i^G={\displaystyle {\sum}_{j=1}^m\frac{S_jw\left({d}_{ij}\right)}{{\displaystyle {\sum}_{i=1}^k}{P}_iw\left({d}_{ij}\right)}}, $$ where S j is the supply at provider j, P i is the population at location i, w(d ij ) is the decay function based on distance of each patient-provider pair (i,j). 
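Returning briefly to the decentralized model defined above before the catchment measures are reviewed in detail: one simple way to search for an assignment satisfying the user-choice condition (4) is best-response iteration, in which each visit is repeatedly allowed to switch to whichever facility minimizes its distance-plus-congestion cost given everyone else's current choices, until no visit wants to move. The function below is an illustrative heuristic written under that assumption (and without a distance decay on the number of visits); it is not the paper's production code.

import numpy as np

def decentralized_assignment(d, v, alpha, max_rounds=100):
    """d: n x m distances, v: integer visits per patient, alpha: m congestion weights."""
    n, m = d.shape
    owners = np.repeat(np.arange(n), v)            # one entry per individual visit
    choice = np.argmin(d[owners], axis=1)          # start everyone at the nearest facility
    load = np.bincount(choice, minlength=m).astype(float)

    for _ in range(max_rounds):
        moved = False
        for k, i in enumerate(owners):
            j = choice[k]
            load[j] -= 1                           # evaluate all facilities with this visit added back
            cost = d[i] + alpha * (load + 1)       # d_iq + alpha_q * (load_q + 1) for every q
            best = int(np.argmin(cost))
            load[best] += 1
            if best != j:
                choice[k] = best
                moved = True
        if not moved:                              # no visit can improve: condition (4) holds
            break
    return choice, load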
The original 2SFCA method was introduced by Luo and Wang [7]; it allows the catchment of each provider and patient to float based on the distances between each pair. E2SFCA is a variation that suggests applying different weights within travel time zones to account for decaying of the willingness to travel as distance increases [8]. Under the E2SFCA model, in the first step the "physician-to-population ratio" at each provider is calculated. Although the E2SFCA aims to estimate the number of patients that may potentially use a facility, it is easy to extend the metrics to estimate the number of visits by replicating each patient using visits demanded (e.g., a patient demanding 10 visits can be viewed as 10 patients) [25, 26]. We make a minor adjustment to allow for each patient to have multiple visits to a provider, so we use physician-to-visits ratio instead. Thus we obtain: $$ {R}_j=\frac{S_j}{{\displaystyle {\sum}_r}{\displaystyle {\sum}_{i\in \left\{{d}_{ij}<{D}_r\right\}}}\;{V}_i{W}_r}, $$ where S j is the number of physicians available at provider j, W r is the weight value corresponding to the catchment zone of d ij . The value of W r is calculated using the distance decay function, which is usually nonlinear. D r is the distance threshold of catchment zone r. The parameter V i is the number of potential visits if there is no decay in willingness to travel or the maximal demand for patient or community i. The original E2SFCA method introduced the model with three catchment zones, but an extension is to allow a different number of zones or even a continuous decay ("impedance") function across a single zone. Example choices of impedance functions include Gaussian [7, 27], exponential, inverse power, and others; [27] discusses parameter setting for the impedance function. In the second step of E2SFCA, the method defines the accessibility of each patient or community i based on the ratios at each provider and the zone weights: $$ {A}_i = {\displaystyle {\sum}_r{\displaystyle {\sum}_{j\in \left\{{d}_{ij}<{D}_r\right\}}{R}_j{W}_r.}} $$ Another catchment approach is the 3SFCA method, which incorporates competitions among multiple providers within the same catchment zone of a patient and makes assignments of patients by distance. The M2SFCA method [9] modifies the patient level accessibility in [7] by multiplying the distance weight twice, while another approach [28] allows for zones to differ by transportation modes. For a simple system, the individual measures of spatial access from optimization models can be combined to directly compare with the accessibility measures of 2SFCA methods (E2SFCA and M2SFCA). The simplest supply network consists of n communities in a circular population area with a facility at the center. Let d i be the distance from community i to the facility and S the number of physicians in the facility. Calculate the facility population-to-physician ratio R and patient accessibilityA i using [6] and [7]. Define a decay function w (d i ) ∈ [0,1]. For this system, the optimization method is equivalent to assigning by shortest distance. Let F denote the congestion at the facility, then \( F=\frac{1}{R} \). The coverage of community i is calculated as w(d i ). Therefore, for this system, the patient accessibility is \( {A}_i^E=\frac{coverage}{congestion} \), for the E2SFCA method. For the M2SFCA method, a similar calculation can be made, where the composite patient accessibility measure is \( {A}_i^M=\frac{coverag{e}^2}{congestion} \). 
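As a concrete reading of the two steps in equations (6) and (7), the sketch below computes E2SFCA scores with NumPy for an arbitrary distance-weight function. The inputs (supply S, visit demand V, distance matrix d) and the example three-zone weights are assumptions for illustration only.

import numpy as np

def e2sfca(d, V, S, weight):
    """
    d: n x m distances, V: visits demanded by each of n communities,
    S: physicians at each of m facilities, weight: callable mapping distance to [0, 1].
    Returns the accessibility score A_i of equation (7) for every community.
    """
    S = np.asarray(S, dtype=float)
    W = weight(d)                                              # distance-decay weights w(d_ij)
    # Step 1: physician-to-visits ratio R_j at every facility (equation 6)
    demand_captured = (W * V[:, None]).sum(axis=0)
    R = np.divide(S, demand_captured, out=np.zeros_like(S), where=demand_captured > 0)
    # Step 2: accessibility of every community as the weighted sum of the ratios (equation 7)
    return (W * R[None, :]).sum(axis=1)

# Example step weights in the spirit of the original three-zone formulation
zone_weight = lambda d: np.select([d <= 10, d <= 20, d <= 30], [1.0, 0.68, 0.22], default=0.0)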
Human subject study approval The Institutional Review Board of the Georgia Institute of Technology approved the overall research project using data from the Cystic Fibrosis Foundation, and the Cystic Fibrosis Foundation also approved the study to use registry data previously collected from patients with their signed consent. The submitted article uses the existing locations of Cystic Fibrosis care centers, the distances traveled by patients to CF centers for care, and simulated patient locations with corresponding distances to CF centers. Simulated locations of patients are randomly generated according to the prevalence of CF and the composition of populations at the county level. Analytical comparisons In this section, analytical results on accessibility as measured by the optimization method and catchment models are provided. Most analyses in this section focus on simple systems where service areas are non-overlapping. For simple networks with overlapping service areas, the detailed analysis can be found in Additional file 1 section 4. Notations that will be used frequently in the analysis are defined below. The distance decay function w(d ij ) is between 0 and 1. If d ij is the distance between community i and facility j, and v i is the visits needed by community i, then we assume that facility j receives w(d ij )v i visits from community i as in the catchment models. In optimization models, let P ij be the proportion of the population in community i that visits facility j. Result 1 (Opportunities vs. Experiences): optimization models capture a patient's experience rather than their opportunities. As a result, 2SFCA methods tend to overestimate the total number of visits For many catchment models, the estimated accessibility measure increases when more facility choices are available to a population. However, assignments models (including optimization and the 3SFCA method), are estimating the cost of potential access, and this does not increase if a new choice is congested or inconsequential. This is illustrated with a simulated system of populations and facilities, as in Delamater (2013) [9] . Consider System 1 as described in Fig. 1. When facility A and population X are sufficiently far from B and Y, the catchment models and the optimization method will provide the same accessibility estimate. Consider a second system, where B and Y are both closer to X and A than in the first system, with the distances between A - X and B - Y retained and b closer to Y than A. The 2SFCA methods show that the accessibility of Y increases due to the possibility of service at A, while the accessibility of X decreases because of demand on facility A from population Y. However, the optimization method shows there is no change in accessibility for reasonable congestion weights. From the perspective of a person at Y, service at facility A would be associated with a higher congestion cost and a further distance, thus he would neither be assigned to facility A nor choose that facility. This is still the cost associated with potential access rather than realized access, but the cost is associated with the potential experience of a patient. In contrast, the 2SFCA methods always realize additional choices regardless of their relative competitiveness to existing choices. Therefore the total number of visits implied by the 2SFCA methods is higher compared to the optimization method, and can be higher than the total number of visits demanded. System 1, with populations 100 at location X and 1 at Y. 
Facilities (a) and (b) each have 10 beds Systems 2 through 5, with populations as specified at location X, Y, and Z. Facilities (a) and (b) each have 10 beds, and the distance weights are provided between locations Systems 6 ~ 8, with population of 100 at location X, and a single facility with either 5 or 10 beds. Distance weights are provided for each system Result 2 (System Effects): the 2SFCA methods do not capture the cascading effects based on congestion For methods focused primarily on catchment zones without assignment, there are some system effects that may not be captured over the network. In Fig. 2, we define several systems to illustrate this point. Define System 2, with population z added to system 1, and with a population of 100 for each of X, Y, and Z. In this system, the optimization method and the 3SFCA both compute the same accessibility for each population, while in the 2SFCA methods the accessibility is higher for Y since it is capturing opportunities for access rather than the patient experience. Consider System 3 with increased population at location Z. In the catchment models, as the population of Z increases, the accessibility for Y and Z decrease, while the accessibility for X remains the same no matter how large Z is. In the optimization method, as Z gets larger, more of the population from Y goes to facility A, so the accessibility at all population locations decreases. The accessibility at each location is the same because the system is constructed in a very specific and symmetric way. A similar effect can be seen when System 2 is varied by moving population Z further away from the center (System 4). In this case, more patients from Y switch to B to reduce congestion, resulting in better access for population X in the optimization method, while the 2SFCA methods show no change for X. Define System 5 the same as 1 but with an unbreakable barrier separating population Y in half, and a population of Z equal to 150. The 3SFCA quantifies the same access with and without the barrier, because the assignment is based on distance alone. On the other hand, the optimization method shows different access in System 5 compared to 3, because assignment is based on both distance and congestion. The accessibility estimates for the different systems are summarized in Table 1. Table 1 Accessibility estimates for systems 2 ~ 5 Result 3 (Composite Measures vs. Individual Measures): the composite measures of the 2SFCA methods are insufficient to distinguish multiple elements of access Consider systems 6 ~ 8 in Fig. 3. System 6 has 100 people in X and 10 beds in A, and the distance weight between X and A is 0.1. System 7 is similar to system 6 but with a distance weight 0.2 (which implies the population is closer to the facility). System 8 is similar to system 7 but has 5 beds in A. As we move from system 6 to system 7 and then to system 8, either the population is closer to the facility, the facility has fewer beds, or both, so the network is getting more congested and the accessibility of X should reflect this change. However, as Delamater [9] points out, the E2SFCA method shows the same accessibility for populations in system 6 and 7. Similarly, the M2SFCA method shows the same accessibility for populations in system 6 and 8. The individual measures in the optimization method indicate the coverage increases as you move to system 8 but that the congestion also increases (see Table 2). 
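The numbers behind Result 3 can be reproduced directly from the simple-system relations given at the end of the previous subsection, A^E = coverage/congestion and A^M = coverage²/congestion; the short script below is a sketch of that arithmetic for systems 6-8.

# (distance weight w, population at X, beds at A) for systems 6-8
systems = {"6": (0.1, 100, 10), "7": (0.2, 100, 10), "8": (0.2, 100, 5)}

for name, (w, pop, beds) in systems.items():
    coverage = w                        # share of demand that reaches the facility
    congestion = w * pop / beds         # captured visits per bed (1/R)
    A_E = coverage / congestion         # E2SFCA composite
    A_M = coverage ** 2 / congestion    # M2SFCA composite
    print(f"system {name}: A_E = {A_E:.3f}, A_M = {A_M:.3f}")

# A_E is identical for systems 6 and 7, and A_M is identical for systems 6 and 8,
# even though coverage and congestion differ -- which is the point of Result 3.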
The analytical analysis above illustrates several direct comparisons between the 2SFCA methods and the optimization method. In this section access is estimated for the specific health service network associated with Cystic Fibrosis (CF), which is a chronic condition that requires specialty care. Recent studies have shown that Medicaid status is related to survival rate and outcomes [29], but spatial access may also be a factor. The condition has prevalence in the United States of about 30,000 patients with 208 CF care centers in the continental US [30]. Though it is a rare disease, the service network displays heterogeneity, with the spatial access varying greatly over the network. Focusing on potential spatial access, locations of CF patients are simulated according to the incidence of the disease rather than using existing locations of actual patients (which may be biased by service locations). With CF, the population eligible for Medicaid is considered separately, since they may need to receive service in their home state. 30,000 virtual patients are generated with CF located in county centroids in the continental US, where the prevalence was generated proportionally to the populations in each race/ethnicity who are above or below 2 times the federal poverty level [31], using the incidence matrix for race/ethnicity in Additional file 1 section 5 (see Additional file 5 for raw population data). Patient demand is defined as 10 visits per year to a center (this captures more than 90 % of the patients with location information available in the CF Foundation Registry data) [30]. We assume the actual number of visits is decreasing with the distance to selected service facility, patients will not visit facilities more than 150 miles away (again, this captures more than 90 % of the patients in the registry with location information) [30], and low-income patients will only visit a CF center within the patient's state due to restrictions of the Medicaid program. The zip code of each CF center (see Additional file 6) is obtained using patient encounter data from the CF Foundation [30], and the road distance from each CF virtual patient to each CF center is computed using Radical Tools [32] . We assume all facilities are the same size (e.g., can serve 1500 visits a year); the exact number can be changed and the relative comparisons between methods will hold. Accessibility measures were calculated for E2FSCA, M2SFCA, and the decentralized (with user choice) optimization model. The optimization model was implemented using C++ and the CPLEX solver on a UNIX system (see Additional file 2). The decay functions are such that 10 visits will be made when distance is zero, and visits approach zero when distance is 150 miles; see specific functions in section 7 in Additional file 1: Table S4. There are many functions that can be used to model the decaying willingness of travel. We have chosen to use the exponential function for the rare disease setting of Cystic Fibrosis. Because CF is rare and access to care is relatively low compared to primary care, patients are willing to travel longer distances than for some conditions. The parameter used in the case study was calibrated to be in line with realized utilization derived from the CF registry data (see section 7 in Additional file 1: Figure S12). For the optimization model, a congestion weight of 10 is used unless otherwise specified (see Additional file 1 section 1). 
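One way to encode the willingness-to-travel assumption used in the case study is a simple exponential decay that yields 10 visits at distance zero and essentially none beyond 150 miles. The decay constant in the sketch below is purely illustrative; the calibrated parameter actually used is described in Additional file 1.

import numpy as np

def expected_visits(distance_miles, max_visits=10.0, cutoff=150.0, k=0.03):
    """Exponential distance decay: max_visits at d = 0, zero beyond the cutoff (illustrative k)."""
    visits = max_visits * np.exp(-k * distance_miles)
    return np.where(distance_miles > cutoff, 0.0, visits)

print(expected_visits(np.array([0.0, 25.0, 75.0, 150.0])))
# -> approximately [10.   4.72  1.05  0.11]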
For the 2SFCA methods, Medicaid patients were only included in catchment areas of facilities in their own states. Maps of the decentralized optimization model display the distance traveled and the congestion experienced by each person, averaged at the county level, in Fig. 4(a) and 4(b). In general, distance is small close to centers, especially in areas with multiple centers such as the coastal northeast. There are a few pockets with higher distance, especially in parts of the West. Congestion is higher in a few areas, such as around Houston and some parts of Ohio and Pennsylvania. Some counties have no simulated patients, while others have uncovered demand, such as in many counties in the Midwest or Western regions. There are also isolated areas that are uncovered, such as near southwest Georgia, southern Missouri, and some counties at the boundary of the US. A summary histogram is provided for distance, congestion and coverage for each county in Additional file 1 section 6. The distribution of coverage shows that many needed visits are not met, due to the distance patients need to travel to CF centers. Optimization results for patient cost of potential access. (a) Distance, and (b) Congestion The composite measure AE generated from the decentralized optimization model is shown in Fig. 5(a). The main areas with high accessibility are near CF centers and around urban areas. There are pockets of low accessibility in many places; however, these can occur for different reasons. In Pittsburg, Pennsylvania, and Columbus, Ohio, Fig. 5(a) shows that the congestion was high, while in Springfield, Missouri, Fig. 5(a) shows that the travel distance is high. Pockets of low accessibility in New York arise from a combination of longer distances and higher congestion. Results comparing optimization model with E2SFCA and M2SFCA for CF care in US. (a) Decentralized model composite measure AE, and (b) E2SFCA-AE Figure 5(b) shows the difference between the decentralized optimization model composite measure AE and the result from the E2SFCA method using the same scale. In comparison to the optimization approach, the E2SFCA method tends to show higher accessibility in areas with many centers (e.g., near Los Angeles and around New York). It also shows higher accessibility in many areas that lie in overlapping service areas for centers (e.g., northern South Carolina, eastern Arkansas, and New Mexico). A pairwise t-test (1-tail) shows that for counties with more than 50 CF patients (127 "large" counties) or less than 5 CF patients (1289 "small" counties), the measure from the E2SFCA method is significantly higher than measures from the optimization method (respectively, with p-values 0.20 × 10−6 and 2.00 × 10−2); for counties of other sizes ("medium" counties), the test is inconclusive. The F-test shows that for all groups of counties, the variance of the E2SFCA measure is higher (with p-value 1.88 × 10−4 for small counties, value less than 10−6 for medium counties, and 3.90 × 10−2 for large counties. The Mann–Whitney-Wilcoxon test shows that the E2SFCA measure is greater in median than the optimization composite measure with p-values less than 10−6 for small and medium counties, and 2.02 × 10−2 for large counties. The finding is consistent with the analytical results in Additional file 1 section 4 showing that with overlapping catchment areas, E2SFCA quantifies higher access when distances are relatively small. 
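The county-level comparisons reported above can be reproduced with standard SciPy routines once the two accessibility vectors are available; the sketch below assumes arrays a_opt and a_e2sfca holding paired county-level composite measures for one county size class.

import numpy as np
from scipy import stats

# a_opt, a_e2sfca: paired county-level accessibility measures (assumed inputs)

# one-tailed paired t-test: is the E2SFCA measure larger on average?
t_stat, p_t = stats.ttest_rel(a_e2sfca, a_opt, alternative="greater")

# Mann-Whitney-Wilcoxon test comparing the medians
u_stat, p_u = stats.mannwhitneyu(a_e2sfca, a_opt, alternative="greater")

# F-test for equality of variances (larger variance in the numerator)
f_stat = np.var(a_e2sfca, ddof=1) / np.var(a_opt, ddof=1)
df1 = df2 = len(a_opt) - 1
p_f = stats.f.sf(f_stat, df1, df2)

print(p_t, p_u, p_f)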
The comparison between the composite measure AM and the M2SFCA method is similar but the magnitude of differences is smaller. The number of visits captured in the E2SFCA method is shown in Fig. 6 in comparison to the visits needed by the population. It is highest around facilities, and especially with multiple facilities such as around New York. For the optimization model, the realized visits per facility are estimated to be 0 to 3000. In contrast, the range for the E2SFCA result is 0 to 10,540 per facility. This is consistent with the analytical result that the number of visits is higher in the E2SFCA approach. The F test indicates that the variance of the facility congestion is significantly higher for the E2SFCA approach, with a p-value less than 10−6. This is similar to the analytical result that the optimization model always has a lower facility congestion. Estimated patient visits in E2SFCA and M2SFCA relative to the visits needed in each county. A value greater than 1 indicates that the 2SFCA methods estimate more visits than needed The results showing access over the network indicate a number of areas that have uncovered populations, high congestion, and/or high travel distances. Figure 7 shows the results in several local areas after network interventions. One new facility was added to the network in locations with uncovered populations (Springfield, MO), and the capacity of existing facilities was doubled in two locations (Columbus, OH; and Pittsburgh, PA). For the E2SFCA method, the gain in access is centered over the interventions and decays with distance within 150 miles. The gain is positive in all areas with change, as the new facilities increase the opportunities available or have no impact. Under the optimization method, the coverage in an area increases when a new facility is added, and congestion in an area decreases when new capacity is added. Although the total access increases, some populations show a worse composite measure, which indicates that they are traveling shorter distances but experiencing higher congestion (or the reverse) based on new network dynamics. Note also that when the new location is added in Springfield, there are cascading effects under the optimization approach, and access increases for the population around Jefferson City, since their congestion is decreasing due to the new facility. We performed a pairwise T-test comparing the impact of intervention on both measures for each of the 479 counties that had a change under the intervention. The test shows that the E2SFCA measure estimates a greater improvement from the intervention compared to the optimization measure, which is consistent with our discussion above. Optimization results showing impact of intervention near locations Springfield, MO, Columbus, OH, and Pittsburgh, PA. (a) Access gain under optimization using composite measure AE, and (b) access gain under E2SFCA Discussion and conclusions The optimization methods provide several innovations useful both for understanding access and designing interventions. They can be applied across heterogeneous networks with both dense and sparse areas, and they allow user choice to balance travel and congestion within communities. The approach presented includes a way to select the specific parameters of a model. Optimization models also provide both a picture of the status quo and an approach for evaluating a potential change to a network. 
Fundamentally, the optimization models have a different framework than many catchment methods, since they estimate the access costs associated with a patient's experience (albeit the potential experience rather than actual utilization). Under optimization models, the presence of additional opportunities only provides gains in potential access when they provide better access compared to existing opportunities, while in the 2SFCA methods, additional opportunities always provides gains in potential access. This difference shows that many 2SFCA methods over count visits when there are facilities with overlapping catchment zones. This effect is stronger in areas with the greatest infrastructure of health services, so interpreting accessibility over a network with sparse and dense areas may not be reasonable. One could adapt the approach by dividing the population by the number of facilities in the zone, or use other adaptations as in the assignment mechanism of the 3SFCA approach. However, these adaptations do not address other issues, such as the cascading effects across the system. The catchment methods tend to capture effects in a defined area, but they do not capture the interactions between areas (or the cascading effects over a network if there are changes introduced) as well as assignment models do. This also means catchment methods may misestimate availability across a network for complicated networks. Using optimization models for estimating access has many other advantages, as they can be easily adapted to measure access for different types of patients, over different provider types, or with capacity constraints in the network. Moreover, decentralized optimization models allow differences in access in rural and urban areas, which arises directly from the trade-off between distance and congestion rather than solely from different distance functions. It is also easy to modify optimization models to determine the best locations for facilities given an existing demand and supply network [11, 12]. The individual measures from optimization models show not only where to intervene, but also points to what kind of intervention is needed (e.g., new location to reduce distance versus more capacity to reduce congestion). This is especially true as one moves beyond just one measure of access like spatial accessibility to consider the other dimensions of access [1]. This study focus on estimating potential health access using optimization models. There are limitations with the approach. The optimization models assume that patients are trading off travel distance and congestion rationally across a network, while in reality there might be many other factors considered by patients. In addition, the optimization models are built using deterministic known data. In the case study, the possibility of using satellite clinics or services provided through telemedicine are not considered. Results are also dependent on the specific decay function and parameter chosen [27]. Furthermore, the case study also assumes that the transportation modes used by all patients are the same. Optimization models come at a cost. They are less familiar to many working in public health or public policy. They can be complex to model or compute, although this may be a matter more of the appropriate training than extensive computing power. Sample code and instructions for building the optimization models in this paper are provided for use without a license in Additional file 4 (see Additional file 3 for the software package). 
It may be most important to use optimization models when a network has facilities with overlapping zones, when one wants to capture the nuances of access across populations, or when one needs to develop interventions to improve access. We hope that the use of optimization models will provoke more discussion in how to measure access, and ultimately how to improve access, especially in light of the increase in computing power and big data that will be coming online in the US health system. The data sets supporting the results of this article are included within the article and its additional files (see Additional file 6). 2SFCA: Two-step floating catchment area E2SFCA: Enhanced two-step floating catchment area M2SFCA: Modified two-step floating catchment area CF: Penchansky R, Thomas JW. The concept of access: definition and relationship to consumer satisfaction. Med Care. 1981;19(2):127–40. Khan AA. An integrated approach to measuring potential spatial access to healthcare services. Socio Econ Plan Sci. 1992;26(4):275–87. Khan AA, Bhardwaj SM. Access to healthcare - a conceptual framework and its relevance to healthcare planning. Eval Health Prof. 1994;17(1):60–76. Beck JR. Optimized resource-allocation in medicine using network flows. Med Decis Mak. 1984;4(4):532. da Silva MEM, Santos ER, Borenstein D. Implementing regulation policy in Brazilian Health Care Regulation Centers. Med Decis Mak. 2010;30(3):366–79. Parkan C, Hollands L. The use of efficiency linear programs for sensitivity analysis in medical decision-making. Med Decis Mak. 1990;10(2):116–25. Luo W, Wang FH. Measures of spatial accessibility to health care in a GIS environment: synthesis and a case study in the Chicago region. Environ Plann B-Plann Des. 2003;30(6):865–84. Luo W, Qi Y. An enhanced two-step floating catchment area (E2SFCA) method for measuring spatial accessibility to primary care physicians. Health Place. 2009;15(4):1100–7. Delamater PL. Spatial accessibility in suboptimally configured health care systems: A modified two-step floating catchment area (M2SFCA) metric. Health Place. 2013;24:30–43. Wan N, Zou B, Sternberg T. A three-step floating catchment area method for analyzing spatial access to health services. Int J Geogr Inf Sci. 2012;26(6):1073–89. Griffin PM, Scherrer CR, Swann JL. Optimization of community health center locations and service offerings with statistical need estimation. IIE Trans. 2008;40(9):880–92. Daskin MS, Dean LK. Location of Health Care Facilities. In: Sainfort MB F, Pierskalla W, editors. Handbook of OR/MS in Health Care: A Handbook of Methods and Applications. 2004. p. 43–76. Schweikhart SB, Smithdaniels VL. Location and service mix decisions for a managed health-care network. Socio Econ Plan Sci. 1993;27(4):289–302. Alejo JS, Martin MG, Ortega-Mier M, Garcia-Sanchez A. Mixed integer programming model for optimizing the layout of an ICU vehicle. BMC Health Serv Res. 2009;9:224. Begur SV, Miller DM, Weaver JR. An integrated spatial DSS for scheduling and routing home-health-care nurses. Interfaces. 1997;27(4):35–48. Deo S, Iravani S, Jiang TT, Smilowitz K, Samuelson S. Improving health outcomes through better capacity allocation in a community-based chronic care model. Oper Res. 2013;61(6):1277–94. Rosenhead J. Community-based operations research: decision modeling for local impact and diverse populations. Interfaces. 2013;43(6):609–10. Ekici A, Keskinocak P, Swann JL. Pandemic influenza response: food distribution logistics. Manuf Serv Oper Manage. 2014;16(1):11–27. 
Enns EA, Mounzer JJ, Brandeau ML. Optimal link removal for epidemic mitigation: a two-way partitioning approach. Math Biosci. 2012;235(2):138–47. Xiong W, Hupert N, Hollingsworth EB, O'Brien ME, Fast J, Rodriguez WR. Can modeling of HIV treatment processes improve outcomes? Capitalizing on an operations research approach to the global pandemic. BMC Health Serv Res. 2008;8:166. Wang FH. Measurement, optimization, and impact of health care accessibility: a methodological review. Ann Assoc Am Geogr. 2012;102(5):1104–12. Ahmed A, Fincham JE. Physician office vs retail clinic: patient preferences in care seeking for minor illnesses. Ann Fam Med. 2010;8(2):117–23. Kontogiannis S, Spirakis P. Atomic selfish routing in networks: A survey. In: Deng X, Ye Y, editors. Internet and Network Economics, Proceedings, vol. 3828. Berlin: Springer-Verlag Berlin; 2005. p. 989–1002. Heier Stamm J. Design and Analysis of Humanitarian and Public Health Logistics Systems, Ph.D. Dissertation. Atlanta, GA: PhD thesis available from Georgia Institute of Technology; 2010. McGrail MR, Humphreys JS. A new index of access to primary care services in rural areas. Aust N Z J Public Health. 2009;33(5):418–23. Ngui A, Apparicio P. Optimizing the two-step floating catchment area method for measuring spatial accessibility to medical clinics in Montreal. BMC Health Serv Res. 2011;11:166. Wan N, Zhan FB, Zou B, Chow E. A relative spatial access assessment approach for analyzing potential spatial access to colorectal cancer services in Texas. Appl Geogr. 2012;32(2):291–9. Mao L, Nekorchuk D. Measuring spatial accessibility to healthcare for populations with multiple transportation modes. Health Place. 2013;24:115–22. Schechter MS, Margolis PA. Relationship between socioeconomic status and disease severity in cystic fibrosis. J Pediatr. 1998;132(2):260–4. Knapp E. Cystic Fibrosis Foundation Patient Registry (1986–2010). In: Cystic Fibrosis Foundation. 2012. Ratio of Income to Poverty Level (C17002). In., 2012 edn. http://factfinder2.census.gov/: U.S. Census Bureau; 2010. Radical Tools 5. In., 5 edn. http://www.radicallogistics.com/: Radical Logistics; 2011. The authors thank Michael S. Schechter for providing knowledge of the current health network of the disease and the Cystic Fibrosis Foundation for providing the location of care centers. The study was supported by the National Science Foundation Grant CMMI-0954283 and a seed grant awarded by the Healthcare System Institute and Children's Healthcare of Atlanta. Dr. Swann was also supported by the Harold R. and Mary Anne Nash Junior Faculty Endowment Fund. H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, USA Zihao Li , Nicoleta Serban & Julie L. Swann School of Public Policy by courtesy, Georgia Institute of Technology, Atlanta, USA Julie L. Swann Search for Zihao Li in: Search for Nicoleta Serban in: Search for Julie L. Swann in: Correspondence to Julie L. Swann. ZL participated in the design of the study, implemented the computational model and data analysis, and drafted the manuscript. NS and JLS carried out the study design and coordination and participated in data analysis. All authors contributed to interpretation of findings, preparing the manuscript, read and approved the final manuscript. Zihao Li is an Operations Research Ph.D. student in the School of Industrial and System Engineering (ISyE) at Georgia Institute of Technology (GT). Dr. 
Nicoleta Serban is the Coca-Cola Associate Professor in the School of ISyE at GT. She received her B.S. in Mathematics and an M.S. in Theoretical Statistics and Stochastic Processes from the University of Bucharest. She went on to earn her Ph.D. in Statistics at Carnegie Mellon University. In 2010, she was granted the NSF CAREER award for research in service equity and access. Her research record is quite diverse, from mathematical statistics to modeling to data analysis to qualitative insights on causality and complexity. Dr. Serban's research interests on Health Analytics span various dimensions including large-scale data representation with a focus on processing patient-level health information into data features dictated by various considerations, such as data-generation process and data sparsity; machine learning and statistical modeling to acquire knowledge from a compilation of health-related datasets with a focus on geographic and temporal variations; and integration of statistical estimates into informed decision making in healthcare delivery and into managing the complexity of the healthcare system. Dr. Julie Swann is the Harold R. and Mary Anne Nash Professor in the School of ISyE at GT. She is also a co-founder and co-director of the Health and Humanitarian Systems Center at GT. She received her B.S. in Industrial Engineering from GT in 1996 and her Ph.D. in Industrial Engineering and Management Sciences from Northwestern in 2001. In 2009–2010 she was on loan to the Centers for Disease Control and Prevention (CDC) as a Senior Science Advisor for the H1N1 pandemic response to advise and evaluate the logistics of vaccine and medical countermeasures distribution to protect the American public. Her research specializes in supply chain management (especially humanitarian networks) and health systems. Her research methods have focused on integrating optimization models with economic concepts related to decentralized agents and on collaborating with other disciplines to solve problems in health policy. Dr. Swann was awarded the NSF career award in 2004 and 2014. Appendix file that provides detailed supplementary arguments and analysis of the methods and data in the article. The file also contains a histogram of distances traveled by CF patients compared to estimated distances by the model. Computer codes used to implement the optimization method in this article. The free AMPL package that researchers can use to implement optimization models to measure access described in this article. We provide example codes and instructions to use the AMPL package for researchers who want to implement optimization models to measure access. Contains example codes and instructions to implement optimization models in AMPL and to utilize the free online NEOS solver to solve the optimization models. Contains population information to calculate the incidence rate by race/ethnicity who are above or below 2 times the federal poverty level. Provides Cystic Fibrosis care center locations and simulated patient locations and demands that this article used to carry out the case study. Measurement of access
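To make the idea of an optimization-based access measure concrete, the sketch below sets up a toy assignment model in the same spirit as the one described for the AMPL files above: simulated patient demand at a few locations is assigned to care centers so as to minimize total travel distance subject to center capacities, and the optimal average distance serves as the access measure. This is only an illustration with hypothetical distances, demands, and capacities, written with Python/SciPy rather than the AMPL formulation distributed with the article.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inputs: 3 population tracts, 2 care centers (distances in miles).
dist = np.array([[ 3.0, 11.0],
                 [ 7.0,  4.0],
                 [12.0,  6.0]])
demand   = np.array([40.0, 25.0, 15.0])   # patients needing care in each tract
capacity = np.array([50.0, 40.0])         # patients each center can serve

n_tracts, n_centers = dist.shape
c = dist.flatten()                         # objective: total patient-miles traveled

# Each tract's demand must be fully assigned to some center.
A_eq = np.zeros((n_tracts, n_tracts * n_centers))
for i in range(n_tracts):
    A_eq[i, i * n_centers:(i + 1) * n_centers] = 1.0

# No center may be assigned more patients than its capacity.
A_ub = np.zeros((n_centers, n_tracts * n_centers))
for j in range(n_centers):
    A_ub[j, j::n_centers] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print("average travel distance (miles):", res.fun / demand.sum())
```

The same structure (assignment variables, demand and capacity constraints, distance objective) carries over directly to an AMPL model and the NEOS solvers mentioned above.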
Discrete & Continuous Dynamical Systems - A, April 2013, 33(4): 1545-1562. doi: 10.3934/dcds.2013.33.1545

SRB attractors with intermingled basins for non-hyperbolic diffeomorphisms
Zhicong Liu, School of Statistics, Capital University of Economics and Business, Beijing 100070, China
Received January 2011; Revised August 2012; Published October 2012

We investigate a class of non-hyperbolic diffeomorphisms defined on the product space. By using Pesin theory combined with the general theory of differentiable dynamical systems, we prove that there are exactly two SRB attractors and that their basins cover a full-measure subset of the ambient manifold. Furthermore, we prove that the basins of the SRB attractors exhibit the strange intermingled phenomenon, i.e., they are measure-theoretically dense in each other. Intermingled phenomena have been observed in many physical systems in numerical experiments and are considered important for some fundamental problems in physics, biology, computer science, etc. Finally, we describe a concrete example for application.

Keywords: Lyapunov exponents, non-hyperbolic diffeomorphisms, Pesin theory, SRB attractors, intermingled basins.
Mathematics Subject Classification: Primary: 37C40; Secondary: 37D25, 37D30, 37D4.

Citation: Zhicong Liu. SRB attractors with intermingled basins for non-hyperbolic diffeomorphisms. Discrete & Continuous Dynamical Systems - A, 2013, 33 (4): 1545-1562. doi: 10.3934/dcds.2013.33.1545
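For readers meeting the term here for the first time, one standard formalization of the intermingled-basin property described in this abstract (in the spirit of Kan's original examples) is the following; the notation is generic and not taken from the paper itself. Two attractors $A_1$ and $A_2$ with basins $B(A_1)$ and $B(A_2)$ of positive Lebesgue measure are said to have intermingled basins if

$$\mathrm{Leb}\bigl(U \cap B(A_1)\bigr) > 0 \iff \mathrm{Leb}\bigl(U \cap B(A_2)\bigr) > 0 \quad \text{for every open set } U,$$

so that each basin is measure-theoretically dense in the other.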
periodic table with charges and electron configuration This table is designed to fit on a single sheet of 8 1/2″ x 11″ sheet of paper while maintaining legibility for the small text of the electron shell configurations. The electron configurations are written in the noble gas notation. The reason this was done is that the configuration of a… (a) If each core electron (that is, the 1 s electrons) were totally effective in shielding the valence electrons (that is, the $2 s$ and $2 p$ electrons) from the nucleus and the valence electrons did not shield one another, what would be the shielding constant $(\sigma)$ and the effective nuclear charge … And a periodic table with atomic mass and charges interprets both the charges and atomic mass. To write electron configuration of an element, locate its symbol in ADOMAH Periodic Table and cross out all elements that have higher atomic numbers. Click the image for full size and save to your computer. They all have a similar electron configuration in their valence shells: a single s electron. Atomic model Section 2. However, we do find exceptions to the order of filling of orbitals that are shown in Figure 3 or Figure 4.For instance, the electron configurations (shown in Figure 6) of the transition metals chromium (Cr; atomic number 24) and copper … ionization energy. ELECTRON CONFIGURATION AND THE PERIODIC TABLE • The electrons in an atom fill from the lowest to the highest orbitals. 1s 2 2s 2 2p 6 3s 2 Phosphorus - Protons - Neutrons - Electrons - Electron Configuration. It is in the fourth column of the p block. Electron Configuration Chart for All Elements in the Periodic Table. Perimeter Worksheets Area Worksheets Geometry Worksheets Free Printable Worksheets Printables Periodic Table Poster Chemistry Periodic Table … They all have a similar electron configuration in their valence shells: a single s electron. How to use this book -The book is divided into 7 sections, each one referring to a different topic related to the periodic table. Rhenium and periodic table configuration of any empty orbitals relate to an orbital gains or newly synthesized elements. Play this game to review undefined. electron affinity. Rhenium and periodic table configuration of any empty orbitals relate to an orbital gains or newly synthesized elements. 1s 2 2s 2 2p 6 3s 2 This Picture was rated 44 by Bing.com for KEYWORD periodic table with charges, You will find this result at BING. 1. The periodic table can be a powerful tool in predicting the electron configuration of an element. The electron configuration of Periodic Table of Elements of an atom makes us understand the shape and energy of electrons of an electron. Electron configuration for ions. For the best printing options, choose Landscape and "Fit" as the size option. Draw and write the electron configuration of atoms and explain how electron configuration is linked to the group number. Similarity of valence shell electron configuration implies that we can determine the electron configuration of an atom solely by its position on the periodic table. Click the image for full size and save it to your computer. The number of each element corresponds to the number of protons in its nucleus (which is the same as the number of electrons orbiting that nucleus). The electron configuration of $\mathrm{B}$ is $1 \mathrm{s}^{2} 2 \mathrm{s}^{2} 2 p^{1}$. 
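As a quick worked answer to part (a) above for boron ($Z = 5$, configuration $1s^{2}2s^{2}2p^{1}$): under the stated idealization, each of the two $1s$ core electrons contributes one full unit of shielding and the valence electrons contribute none, so

$$\sigma = 2, \qquad Z_{\mathrm{eff}} = Z - \sigma = 5 - 2 = 3.$$

Real shielding is less complete than this idealization assumes, so empirical effective nuclear charges for the boron valence electrons come out somewhat below 3.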
Get the periodic table with electron configurations periodic table nastiik periodic table with charges and electron configuration how to write electron configurations for atoms of any element Whats people lookup in this blog: . Searching for use the table with sulfur and electron configurations are absolutely essential for the added? Use this printable periodic table with element charges to predict compounds and chemical reactions. 8. To see all my Chemistry videos, check out http://socratic.org/chemistry Where do electrons live in atoms? This periodic table contains each element's atomic number, atomic mass, symbol, name, and electron configuration. This downloadable color periodic table contains each element's atomic number, atomic mass, symbol, name, and electron configuration. Electron Configuration Chart for All Elements in the Periodic Table. Phosphorus has 15 protons and electrons in its structure. Periodic Table with Charges PDF for Printing, Free Printable Periodic Tables (PDF and PNG), List of Electron Configurations of Elements, What Is a Heterogeneous Mixture? Lesson overview: Electron Configuration and the Periodic Table View in classroom. The ability of an element to attract or hold onto electrons is called electronegativity . Look up chemical element names, symbols, atomic masses and other properties, visualize trends, or even test your elements knowledge by playing a periodic table game! The condensed electron configuration of Calcium and Iodine are given below: Calcium is preceded by Argon which has 18 electrons so the remaining two electrons are written after the symbol for Argon. A crash course on using the periodic table to find electron configurations. Valued for the periodic table which increase as a distinct rectangular areas or the more. This means that its electron configuration should end in a p 4 electron Consider Se, as shown in Figure \(\PageIndex{10}\). Isotopes Ions and atoms Worksheet 1 Answer Key as Well as Chapter 7 Electron Configuration and the Periodic Table Pp Worksheet August 31, 2018 We tried to locate some good of Isotopes Ions and atoms Worksheet 1 Answer Key as Well as Chapter 7 Electron Configuration and the Periodic Table Pp image to suit your needs. t lli ti Dr. A. Al-Saadi 2 metallic properties. Electron configuration Section 3. The periodic table can be a powerful tool in predicting the electron configuration of an element. Classification of elements in the periodic table. cool periodic table with electron configuration. Get the periodic table with electron configurations periodic table nastiik periodic table with charges and electron configuration how to write electron configurations for atoms of any element Whats people lookup in this blog: You can download this table for easy printing in PDF format here. Section 1. The periodic table, also known as the periodic table of elements, arranges the chemical elements such as hydrogen, silicon, iron, and uranium according to their recurring properties. Learning the periodic table Section 5. For example, if you need to write electron configuration of Erbium (68), cross out elements 69 through 120. This periodic table contains each element's atomic number, atomic mass, symbol, name, and electron configuration. IMAGE Details FOR Periodic Table With Charges And Electron Configuration Elements and 's Picture Explore the Bohr model and atomic orbitals. 
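A compact way to make "predict the configuration from the atomic number" concrete is the Madelung (n + l) filling rule. The snippet below is a minimal, illustrative sketch (the function name is ours): it reproduces the phosphorus (Z = 15) configuration and shows that selenium (Z = 34) indeed ends in 4p4 as stated above, but it deliberately ignores the handful of exceptions (such as Cr and Cu) mentioned earlier on this page.

```python
L_LABELS = "spdfghi"

def electron_configuration(z):
    """Fill subshells in Madelung (n + l) order, e.g. z=15 -> '1s2 2s2 2p6 3s2 3p3'."""
    subshells = sorted(((n, l) for n in range(1, 9) for l in range(n)),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    remaining, parts = z, []
    for n, l in subshells:
        if remaining <= 0:
            break
        fill = min(2 * (2 * l + 1), remaining)   # an s, p, d, f subshell holds 2, 6, 10, 14
        parts.append(f"{n}{L_LABELS[l]}{fill}")
        remaining -= fill
    return " ".join(parts)

print(electron_configuration(15))   # phosphorus: 1s2 2s2 2p6 3s2 3p3
print(electron_configuration(34))   # selenium ends in 4p4, as stated above
```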
Because of their differing nuclear charges, and as a result of shielding by inner electron shells, the different atoms of the periodic table have different affinities for nearby electrons. related with the periodic table. Valued for the periodic table which increase as a distinct rectangular areas or the more. Atom: Atom is a fundamental particle of any element and compound. Each element square contains all 118 of elements with the element number, symbol, name, atomic mass, and most common oxidation number. Indicate the number of valence electrons and the group number for the element. The filling order simply begins at hydrogen and includes each subshell as you proceed in increasing Z order. Since the arrangement of the periodic table is based on the electron configurations, Figure 4 provides an alternative method for determining the electron configuration. Each element has a unique atomic structure that is influenced by its electronic configuration, which is the distribution of electrons across different orbitals of an atom. Periodic Table with Charges PDF Printing Tips and Download. There are four distinct rectangular areas or blocks. The f-block is usually not included in the main table, but rather is floated be… C. Graphing e Periodic Property: Atomle Radius The atomic radii for elements with atomic numbers 1-25 are listed in Table 5.1. This notation uses the symbol of the previous row's noble gas in brackets to represent the part of the electron configuration that is identical to that noble gas's electron configuration. Definition and Examples, 10 Examples of Solids, Liquids, Gases, and Plasma. Pics of : Periodic Table With Charges And Electron Configuration Pdf The above image can be used as an HD wallpaper for your computer desktop. Video. Sorry, your blog cannot share posts by email. Play this game to review undefined. For best results, use the PDF file for printing. 12-Oct-11 1 Chapter 7 Electron Configuration and the Periodic Table Dr. A. Al-Saadi 1 Preview History of the periodic table. Basically the periodic table was constructed so that elements with similar electron configurations would be aligned into the same groups (columns). The periodic table is a tabular display of the chemical elements organized on the basis of their atomic numbers, electron configurations, and chemical properties. ThoughtCo uses cookies to provide you with a great user experience. Electronic … There are many general rules and norms that are taken into consideration while we assign the "location" of the electron with its energy state. Electronic structure for this In this lesson, we will explain why the charge of an atom is neutral. What atom matches this electron configuration? ELECTRON CONFIGURATION AND THE PERIODIC TABLE • The electrons in an atom fill from the lowest to the highest orbitals. The electron configurations are written in the noble gas notation. Video. Draw and write the electron configuration of atoms and explain how electron configuration is linked to the group number. Oct 22, 2018 - Ion Table Periodic Table And Ionic Charges I As Periodic Table With Charges Of Ions And Names Archives — Harshnoise.org Periodic Table showing last orbital filled for each element The periodic table shown above demonstrates how the configuration of each element was aligned so that the last orbital filled is the same except for the shell. Interactive periodic table with up-to-date element property data collected from authoritative sources. 
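Building on the previous sketch, the noble-gas (condensed) notation just described, with the symbol of the previous row's noble gas in brackets, can be produced by collapsing the leading subshells once their electron count matches a noble-gas core. Again this is only an illustration; the helper takes a full configuration string such as the one generated by the sketch above.

```python
import re

NOBLE_CORES = [("Rn", 86), ("Xe", 54), ("Kr", 36), ("Ar", 18), ("Ne", 10), ("He", 2)]

def condensed(config):
    """Rewrite a full configuration, e.g. '1s2 2s2 2p6 3s2 3p6 4s2', in noble-gas notation."""
    parts = config.split()
    counts = [int(re.fullmatch(r"\d+[spdfghi](\d+)", p).group(1)) for p in parts]
    for symbol, core in NOBLE_CORES:
        running = 0
        for i, c in enumerate(counts):
            running += c
            if running == core:                 # the first i+1 subshells form exactly this core
                rest = " ".join(parts[i + 1:])
                return f"[{symbol}] {rest}".strip()
    return config                               # H and He: nothing to condense

print(condensed("1s2 2s2 2p6 3s2 3p6 4s2"))    # calcium -> [Ar] 4s2, as described above
```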
Electron Configuration Write the electron configuration of each atom listed on the laboratory report. Atomic and ionic radius Section 4. Because much of the chemistry of an element is influenced by valence electrons, we would expect that these elements would have similar chemistry—and they do.The organization of electrons in atoms explains not only the shape of the periodic table … Atomic properties from the periodic table (periodicity), atomic radius. The main body of the table is a 18 × 7 grid. A good periodic table is a necessary part of every chemist's, or future chemist's, reference materials. Elements that appear below the main body of the periodic table; Lanthanides and Actinides; Electron configuration has the outer s sublevel full, and is now filling the f sublevel Orbitals The regions around the nucleus of an atom where there is a high probability of finding an electron; holds up to 2 electrons One of the really cool things about electron configurations is their relationship to the periodic table. Periodic table pro periodic table with electron configurations periodic table pro printable periodic tables. Dr. Helmenstine holds a Ph.D. in biomedical sciences and is a science writer, educator, and consultant. There are 118 … Article by audrey sherry. In this lesson, we will explain why the charge of an atom is neutral. This table is available for download and printing in PDF format ​here. For the best printing options, choose "Landscape" and "Fit" as the size option. What atom matches this electron configuration? • The knowledge of the location of the orbitals on the periodic table can greatly help the writing of electron configurations for large atoms. There are 118 elements in the periodic table. You can use the image as a 1920x1080 HD wallpaper for your computer desktop. Elements with 1 valence level electron (H, Li, Na) have an ionic charge of 1+ and lose 1 electron. The total number of neutrons in the … Post was not sent - check your email addresses! Elements with the same number of valence electrons are kept together in groups, such as the halogens and the noble gases. Learn how to use an element's position on the periodic table to predict its properties, electron configuration, and … After looking around for a useful printable periodic table, I found that most were pretty basic and included only a few properties. Pics of : Periodic Table With Charges And Electron Configuration. 7.3 Effective Nuclear Charge •Z (nuclear charge) = the number of protons in the nucleus of an atom •Z eff (effective nuclear charge) = the magnitude of positive charge "experienced" by an electron in the atom •Z eff increases from left to right across a period; changes very little down a column Searching for use the table with sulfur and electron configurations are absolutely essential for the added? • The knowledge of the location of the orbitals on the periodic table can greatly help the writing of electron configurations for large atoms. Color of the element symbol shows state of matter: black=solid: white=liquid Elements are presented in increasing atomic number. This periodic table with charges is a useful way to keep track of the most common oxidation numbers for each element. They are in column 1 of the periodic table. This color periodic table wallpaper contains each element's atomic number, atomic mass, symbol, name, and electron configuration. Describe the electron configuration of an atom using principal energy level, sublevels, orbitals, and the periodic table. 
7.4 Periodic Trends in Properties of Elements • Atomic radius: distance between the nucleus of an atom and its valence shell • Metallic radius: half the distance between nuclei of … Excepting helium, elements with 2 valence level electrons (Be and Mg) have an ionic charge of 2+ and lose 2 electrons. Briefly speaking, the charge of an element in its ionic form refers to the actual number of electrons that it loses or gains to achieve the nearest noble gas configuration.
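That rule of thumb (lose or gain electrons to reach the nearest noble-gas configuration) can be written down directly for the main-group columns. The snippet below encodes only the rule of thumb, using 1-18 group numbering; transition metals and group 14 are left out because they have no single common charge.

```python
def common_ion_charge(group):
    """Rule-of-thumb main-group ion charge from the 1-18 group number (None = no single value)."""
    if group in (1, 2):
        return group                  # H+/Li+/Na+ (1+), Be2+/Mg2+ (2+)
    if group == 13:
        return 3                      # Al3+
    if group in (15, 16, 17):
        return group - 18             # N3-, O2-, Cl-
    if group == 18:
        return 0                      # noble gases: effectively no simple ion
    return None                       # transition metals and group 14 vary

for g in (1, 2, 16, 17):
    print(g, common_ion_charge(g))
```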
Home Library Discussions About Home Discussions About Join us This is V 1 An initial study of mesons and baryons containing strange quarks with GlueX (A proposal to the 40{}^{\mathrm{th}} Jefferson Lab Program Advisory Committee) An initial study of mesons and baryons containing strange quarks with GlueX (A proposal to the 40th Jefferson Lab Program Advisory Committee) A. AlekSejevs S. Barkanova Acadia University, Wolfville, Nova Scotia, B4P 2R6, Canada M. Dugger B. Ritchie I. Senderovich Arizona State University, Tempe, Arizona 85287, USA E. Anassontzis P. Ioannou C. Kourkoumeli G. Voulgaris University of Athens, GR-10680 Athens, Greece N. Jarvis W. Levine P. Mattione W. McGinley C. A. Meyer R. Schumacher M. Staib Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA P. Collins F. Klein D. Sober Catholic University of America, Washington, D.C. 20064, USA D. Doughty Christopher Newport University, Newport News, Virginia 23606, USA A. Barnes R. Jones J. McIntyre F. Mokaya B. Pratt University of Connecticut, Storrs, Connecticut 06269, USA W. Boeglin L. Guo P. Khetarpal E. Pooser J. Reinhold Florida International University, Miami, Florida 33199, USA H. Al Ghoul S. Capstick V. Crede P. Eugenio A. Ostrovidov N. Sparks A. Tsaris Florida State University, Tallahassee, Florida 32306, USA D. Ireland K. Livingston University of Glasgow, Glasgow G12 8QQ, United Kingdom D. Bennett J. Bennett J. Frye M. Lara J. Leckey R. Mitchell K. Moriya M. R. Shepherd A. Szczepaniak Indiana University, Bloomington, Indiana 47405, USA R. Miskimen A. Mushkarenkov University of Massachusetts, Amherst, Massachusetts 01003, USA B. Guegan J. Hardin J. Stevens M. Williams Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA A. Ponosov S. Somov MEPHI, Moscow, Russia C. Salgado Norfolk State University, Norfolk, Virginia 23504, USA P. Ambrozewicz A. Gasparian R. Pedroni North Carolina A&T State University, Greensboro, North Carolina 27411, USA T. Black L. Gan University of North Carolina, Wilmington, North Carolina 28403, USA S. Dobbs K. K. Seth A. Tomaradze Northwestern University, Evanston, Illinois, 60208, USA J. Dudek Old Dominion University, Norfolk, Virginia 23529, USA Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606, USA F. Close University of Oxford, Oxford OX1 3NP, United Kingdom E. Swanson University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA S. Denisov Institute for High Energy Physics, Protvino, Russia G. Huber D. Kolybaba S. Krueger G. Lolos Z. Papandreou A. Semenov I. Semenova M. Tahani University of Regina, Regina, SK S4S 0A2, Canada W. Brooks H. Hakobyan S. Kuleshov O. Soto A. Toro I. Vega R. White Universidad Técnica Federico Santa María, Casilla 110-V Valparaíso, Chile F. Barbosa E. Chudakov H. Egiyan M. Ito D. Lawrence M. McCaughan M. Pennington L. Pentchev Y. Qiang E. S. Smith A. Somov S. Taylor T. Whitlatch E. Wolin B. Zihlmann Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606, USA The primary motivation of the GlueX experiment is to search for and ultimately study the pattern of gluonic excitations in the meson spectrum produced in γp collisions. Recent lattice QCD calculations predict a rich spectrum of hybrid mesons that have both exotic and non-exotic JPC, corresponding to q¯q states (q=u, d, or s) coupled with a gluonic field. 
A thorough study of the hybrid spectrum, including the identification of the isovector triplet, with charges 0 and ±1, and both isoscalar members, $|s\bar{s}\rangle$ and $|u\bar{u}\rangle+|d\bar{d}\rangle$, for each predicted hybrid combination of $J^{PC}$, may only be achieved by conducting a systematic amplitude analysis of many different hadronic final states. Detailed studies of the performance of the GlueX detector have indicated that identification of particular final states with kaons is possible using the baseline detector configuration. The efficiency of kaon detection coupled with the relatively lower production cross section for particles containing hidden strangeness will require a high intensity run in order for analyses of such states to be feasible. We propose to collect a total of 200 days of physics analysis data at an average intensity of $5\times10^{7}$ tagged photons on target per second. This data sample will provide an order of magnitude statistical improvement over the initial GlueX running, which will allow us to begin a program of studying mesons and baryons containing strange quarks. In addition, the increased intensity will permit us to study reactions that may have been statistically limited in the initial phases of GlueX. Overall, this will lead to a significant increase in the potential for GlueX to make key experimental advances in our knowledge of hybrid mesons and excited Ξ baryons.

The GlueX Collaboration

I Introduction and background

A long-standing goal of hadron physics has been to understand how the quark and gluonic degrees of freedom that are present in the fundamental QCD Lagrangian manifest themselves in the spectrum of hadrons. Of particular interest is how the gluon-gluon interactions might give rise to physical states with gluonic excitations. One class of such states is the hybrid meson, which can be naively thought of as a quark anti-quark pair coupled to a valence gluon ($q\bar{q}g$). Recent lattice QCD calculations Dudek:2011bn predict a rich spectrum of hybrid mesons. A subset of these hybrids has an unmistakable experimental signature: angular momentum (J), parity (P), and charge conjugation (C) that cannot be created from just a quark-antiquark pair. Such states are called exotic hybrid mesons. The primary goal of the GlueX experiment in Hall D is to search for and study these mesons. A detailed overview of the motivation for the GlueX experiment as well as the design of the detector and beamline can be found in the initial proposal to the Jefferson Lab Program Advisory Committee (PAC) 30 pac30, a subsequent PAC 36 update pac36, and a conditionally approved proposal to PAC 39 pac39. While the currently-approved 120 days of beam time with the baseline detector configuration will allow GlueX an unprecedented opportunity to search for exotic hybrid mesons, the statistics that will be collected during this period will be inadequate for studying mesons or baryons containing strange quarks. These issues were addressed in our proposal to PAC 39 pac39, where we proposed a complete package that would allow us to fully explore the strangeness sector using a combination of new particle-identification capability and the full implementation of our level-three (software) trigger to increase the data-rate capabilities of the experiment.
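The statement above that certain quantum numbers cannot be created from just a quark-antiquark pair follows from the $q\bar{q}$ selection rules $P=(-1)^{L+1}$ and $C=(-1)^{L+S}$. The short enumeration below is an illustrative sketch (not part of any GlueX software): it lists the reachable $J^{PC}$ values and confirms that the lightest exotics discussed in this proposal, $1^{-+}$, $0^{+-}$, and $2^{+-}$, are not among them.

```python
def qqbar_jpc(max_L=3):
    """J^PC values reachable by a quark-antiquark pair: P = (-1)^(L+1), C = (-1)^(L+S)."""
    allowed = set()
    for L in range(max_L + 1):
        for S in (0, 1):
            P = "+" if (L + 1) % 2 == 0 else "-"
            C = "+" if (L + S) % 2 == 0 else "-"
            for J in range(abs(L - S), L + S + 1):
                allowed.add(f"{J}{P}{C}")
    return allowed

allowed = qqbar_jpc()
for jpc in ("1-+", "0+-", "2+-"):          # the lightest exotics discussed in this proposal
    print(jpc, "reachable" if jpc in allowed else "exotic: not reachable by a qqbar pair")
```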
This full functionality will ultimately be needed for the GlueX experiment to complete its primary goal of solidifying our experimental understanding of hybrids by identifying patterns of hybrid mesons, both isoscalar and isovector, exotic and non-exotic, that are embedded in the spectrum of conventional mesons. However, there are select final states containing strange particles that can be studied with the baseline GlueX equipment provided that the statistical precision of the data set is sufficient. This proposal focuses on those parts of the GlueX program that can be addressed with the baseline hardware, but will be statistically limited in the currently-approved GlueX running time. The motivation and experimental context for these studies is largely the same as presented in our PAC 39 proposal; we repeat it here for completeness.

I.1 Theoretical context

Our understanding of how gluonic excitations manifest themselves within QCD is maturing thanks to recent results from lattice QCD. This numerical approach to QCD considers the theory on a finite, discrete grid of points in a manner that would become exact if the lattice spacing were taken to zero and the spatial extent of the calculation, i.e., the "box size," was made infinitely large. In practice, rather fine spacings and large boxes are used so that the systematic effect of this approximation should be small. The main limitation of these calculations at present is the poor scaling of the numerical algorithms with decreasing quark mass - in practice most contemporary calculations use a range of artificially heavy light quarks and attempt to observe a trend as the light quark mass is reduced toward the physical value. Trial calculations at the physical quark mass have begun and regular usage is anticipated within a few years.

The spectrum of eigenstates of QCD can be extracted from correlation functions of the type $\langle 0|O_f(t)\,O^{\dagger}_i(0)|0\rangle$, where the $O^{\dagger}$ are composite QCD operators capable of interpolating a meson or baryon state from the vacuum. The time-evolution of the Euclidean correlator indicates the mass spectrum ($e^{-m_n t}$) and information about quark-gluon substructure can be inferred from matrix-elements $\langle n|O^{\dagger}|0\rangle$. In a series of recent papers Dudek:2009qf; Dudek:2010wm; Dudek:2011tt; Edwards:2011jj, the Hadron Spectrum Collaboration has explored the spectrum of mesons and baryons using a large basis of composite QCD interpolating fields, extracting a spectrum of states of determined $J^{P(C)}$, including states of high internal excitation. As shown in Fig. 1, these calculations, for the first time, show a clear and detailed spectrum of exotic $J^{PC}$ mesons, with a lightest $1^{-+}$ lying a few hundred MeV below a $0^{+-}$ and two $2^{+-}$ states. Beyond this, through analysis of the matrix elements $\langle n|O^{\dagger}|0\rangle$ for a range of different quark-gluon constructions, $O$, we can infer Dudek:2011bn that although the bulk of the non-exotic $J^{PC}$ spectrum has the expected systematics of a $q\bar{q}$ bound state system, some states are only interpolated strongly by operators featuring non-trivial gluonic constructions. One may interpret these states as non-exotic hybrid mesons, and, by combining them with the spectrum of exotics, it is then possible to isolate a lightest hybrid supermultiplet of $(0,1,2)^{-+}$ and $1^{--}$ states, roughly 1.3 GeV heavier than the ρ meson. The form of the operator that has strongest overlap onto these states has an S-wave $q\bar{q}$ pair in a color octet configuration and an exotic gluonic field in a color octet with $J^{P_g C_g}_g=1^{+-}$, a chromomagnetic configuration.
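To illustrate how masses are read off from such a correlator, note that for a single operator it behaves schematically as $C(t)=\langle 0|O(t)O^{\dagger}(0)|0\rangle=\sum_n |\langle n|O^{\dagger}|0\rangle|^{2}\,e^{-m_n t}$. The toy sketch below builds a two-state correlator with made-up masses and overlaps and computes the effective mass $m_{\mathrm{eff}}(t)=\ln[C(t)/C(t+1)]$, which plateaus at the lightest mass at large $t$. The actual extractions cited here use a large basis of interpolating fields and a variational analysis rather than this single-correlator shortcut.

```python
import numpy as np

# Toy two-state Euclidean correlator C(t) = A1*exp(-m1*t) + A2*exp(-m2*t), in lattice units.
# The masses and overlaps below are hypothetical, chosen only to show the plateau behavior.
m1, m2 = 0.40, 0.90
A1, A2 = 1.0, 0.6
t = np.arange(0, 25)
C = A1 * np.exp(-m1 * t) + A2 * np.exp(-m2 * t)

# Effective mass m_eff(t) = ln[C(t)/C(t+1)]; the excited-state contamination falls off
# like exp(-(m2 - m1)*t), so m_eff approaches the ground-state mass m1 at large t.
m_eff = np.log(C[:-1] / C[1:])
print(np.round(m_eff[:4], 3))    # early times: pulled above m1 by the excited state
print(np.round(m_eff[-4:], 3))   # late times: plateau near 0.400
```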
The heavier $(0,2)^{+-}$ states, along with some positive parity non-exotic states, appear to correspond to a P-wave coupling of the $q\bar{q}$ pair to the same chromomagnetic gluonic excitation. A similar calculation for isoscalar states uses both $u\bar{u}+d\bar{d}$ and $s\bar{s}$ constructions and is able to extract both the spectrum of states and also their hidden flavor mixing. (See Fig. 1.) The basic experimental pattern of significant mixing in $0^{-+}$ and $1^{++}$ channels and small mixing elsewhere is reproduced, and, for the first time, we are able to say something about the degree of mixing for exotic-$J^{PC}$ states. In order to probe this mixing experimentally, it is essential to be able to reconstruct decays to both strange and non-strange final state hadrons.

Figure 1: A compilation of recent lattice QCD computations for both the isoscalar and isovector light mesons from Ref. Dudek:2011bn, including $\ell\bar{\ell}$ ($|\ell\bar{\ell}\rangle\equiv(|u\bar{u}\rangle+|d\bar{d}\rangle)/\sqrt{2}$) and $s\bar{s}$ mixing angles (indicated in degrees). The dynamical computation is carried out with two flavors of quarks, light ($\ell$) and strange ($s$). The $s$ quark mass parameter is tuned to match physical $s\bar{s}$ masses, while the light quark mass parameters are heavier, giving a pion mass of 396 MeV. The black brackets with upward ellipses represent regions of the spectrum where present techniques make it difficult to extract additional states. The dotted boxes indicate states that are interpreted as the lightest hybrid multiplet – the extraction of clear $0^{-+}$ states in this region is difficult in practice.

A chromomagnetic gluonic excitation can also play a role in the spectrum of baryons: constructions beyond the simple $qqq$ picture can occur when three quarks are placed in a color octet coupled to the chromomagnetic excitation. The baryon sector offers no "smoking gun" signature for hybrid states, as all $J^P$ can be accessed by three quarks alone, but lattice calculations Edwards:2011jj indicate that there are "excess" nucleons with $J^P=1/2^+, 3/2^+, 5/2^+$ and excess Δ's with $J^P=1/2^+, 3/2^+$ that have a hybrid component. An interesting observation that follows from this study is that there appears to be a common energy cost for the chromomagnetic excitation, regardless of whether it is in a meson or baryon. In Fig. 2 we show the hybrid meson spectrum alongside the hybrid baryon spectrum with the quark mass contribution subtracted (approximately, by subtracting the ρ mass from the mesons, and the nucleon mass from the baryons). We see that there appears to be a common scale ∼1.3 GeV for the gluonic excitation, which does not vary significantly with varying quark mass.

Figure 2: Spectrum of gluonic excitations in hybrid mesons (gray) and hybrid baryons (red, green, and orange) for three light quark masses. The mass scale is $m-m_\rho$ for mesons and $m-m_N$ for baryons to approximately subtract the effect of differing numbers of quarks. The left calculation is performed with perfect SU(3)-flavor symmetry, and hybrid members of the flavor octets ($8_F$), decuplet ($10_F$), and singlet ($1_F$) are shown. The middle and right calculations are performed with a physical $s\bar{s}$ mass and two different values of $m_\pi$.

Hybrid baryons will be challenging to extract experimentally because they lack "exotic" character, and can only manifest themselves by overpopulating the predicted spectrum with respect to a particular model. The current experimental situation of nucleon and Δ excitations is, however, quite contrary to the findings in the meson sector.
Fewer baryon resonances are observed than are expected from models using three symmetric quark degrees of freedom, which does not encourage adding additional gluonic degrees of freedom. The current experimental efforts at Jefferson Lab aim to identify the relevant degrees of freedom which give rise to nucleon excitations. Lattice calculations have made great progress at predicting the $N^*$ and Δ spectrum, including hybrid baryons Edwards:2011jj; Dudek:2012ag, and calculations are emerging for Ξ and Ω resonances Edwards:2012fx. Experimentally, the properties of these multi-strange states are poorly known; only the $J^P$ of the Ξ(1820) has been (directly) determined Beringer:1900zz. Previous experiments searching for Cascades were limited by low statistics and poor detector acceptance, making the interpretation of available data difficult. An experimental program on Cascade physics using the GlueX detector provides a new window of opportunity in hadron spectroscopy and serves as a complementary approach to the challenging study of broad and overlapping $N^*$ states. Furthermore, multi-strange baryons provide an important missing link between the light-flavor and the heavy-flavor baryons.

I.2 Experimental context

GlueX is ideally positioned to conduct a search for light-quark exotics and provide complementary data on the spectrum of light-quark mesons. It is anticipated that between now and the time GlueX begins data taking, many results on the light-quark spectrum will have emerged from the BESIII experiment, which is currently attempting to collect about $10^{8}$ to $10^{9}$ J/ψ and ψ′ decays. These charmonium states decay primarily through $c\bar{c}$ annihilation and subsequent hadronization into light mesons, making them an ideal place to study the spectrum of light mesons. In fact several new states have already been reported by the BESIII collaboration such as the X(1835), X(2120), and X(2370) in $J/\psi\to\gamma X$, $X\to\eta^{\prime}\pi\pi$ Ablikim:2010au. No quantum number assignment for these states has been made yet, so it is not yet clear where they fit into the meson spectrum. GlueX can provide independent confirmation of the existence of these states in a completely different production mode, in addition to measuring (or confirming) their $J^{PC}$ quantum numbers. This will be essential for establishing the location of these states in the meson spectrum. The BESIII experiment has the ability to reconstruct virtually any combination of final state hadrons, and, due to the well-known initial state, kinematic fitting can be used to effectively eliminate background. The list of putative new states and, therefore, the list of channels to explore with GlueX, is expected to grow over the next few years as BESIII acquires and analyzes its large samples of charmonium data. While the glue-rich $c\bar{c}$ decays of charmonium have long been hypothesized as the ideal place to look for glueballs, decays of charmonium have also recently been used to search for exotics. The CLEO-c collaboration studied both $\pi^+\pi^-$ and $\eta^{\prime}\pi^{\pm}$ resonances in the decays of $\chi_{c1}\to\eta^{\prime}\pi^+\pi^-$ and observed a significant signal for an exotic $1^{-+}$ amplitude in the $\eta^{\prime}\pi^{\pm}$ system Adams:2011sq. The observation is consistent with the $\pi_1(1600)$ previously reported by E852 in the $\eta^{\prime}\pi$ system Ivanov:2001rv. However, unlike E852, the CLEO-c analysis was unable to perform a model-independent extraction of the $\eta^{\prime}\pi$ scattering amplitude and phase to validate the resonant nature of the $1^{-+}$ amplitude.
A similar analysis of χc1 decays will most likely be performed by BESIII; however, even with an order of magnitude more data, the final η′π+π− sample is expected to be just tens of thousands of events, significantly less than the proposed samples that will be collected with GlueX. With the exception of this recent result from CLEO-c, the picture in the light quark exotic sector, and the justification for building GlueX, remains largely the same as it did at the time of the original GlueX proposal; see Ref. Meyer:2010ku for a review. All exotic candidates reported to date are isovector 1−+ states (π1). By systematically exploring final states with both strange and non-strange particles, GlueX will attempt to establish not just one exotic state, but a pattern of hybrid states with both exotic and non-exotic quantum numbers. The idea that hybrids should also appear as supernumerary states in the spectrum of non-exotic JPC mesons suggests an interesting interpretation of recent data in charmonium. Three independent experiments have observed a state denoted Y(4260) Aubert:2005rm ; Coan:2006rv ; He:2006kg ; Yuan:2007sj ; it has 1−− quantum numbers but has no clear assignment in the arguably well-understood spectrum of c¯c. Even though the state is above the D¯D mass threshold, it does not decay strongly to D¯D as the other 1−− c¯c states in that region do. Its mass is about 1.2 GeV above the ground state J/ψ, which is similar to the splitting observed in lattice calculations of light mesons and baryons. If this state is a non-exotic hybrid, an obvious, but very challenging, experimental goal would be to identify the exotic 1−+ c¯c hybrid member of the same multiplet, which should have approximately the same mass111Like the light quark mesons discussed in Sec. I.1, the expectation for charmonium is that a 1−− non-exotic hybrid would exist with about the same mass as the 1−+ exotic charmonium hybrid Dudek:2008sz ; Liu:2012ze .. It is not clear how to produce such a state with existing experiments. In the light quark sector, some have suggested that the recently discovered Y(2175) Aubert:2006bu ; Ablikim:2007ab ; Shen:2009zze is the strangeonium (s¯s) analogue of the Y(4260). If this is true, GlueX is well-positioned to study this state and search for its exotic counterpart. We discuss this further in Section III.2. Recent CLAS results Price:2004xm ; Guo:2007dw also suggest many opportunities to make advances in baryon spectroscopy. The CLAS collaboration investigated Ξ photoproduction in the reactions γp→K+K+(X) as well as γp→K+K+π−(X) and, among other things, determined the mass splitting of the ground state (Ξ−,Ξ0) doublet to be 5.4±1.8 MeV/c2, which is consistent with previous measurements. Moreover, the differential cross sections for the production of the Ξ− have been determined in the photon energy range from 2.75 to 3.85 GeV Guo:2007dw . The cross section results are consistent with a production mechanism of Y∗→Ξ−K+ through a t-channel process. The reaction γp→K+K+π−[Ξ0] was also studied in search of excited Ξ resonances, but no significant signal for an excited Ξ state, other than the Ξ−(1530), was observed. The absence of higher-mass signals is very likely due to the low photon energies and the limited acceptance of the CLAS detector. With higher photon beam energy and two orders of magnitude more statistics, the GlueX experiment will be well-suited to search for and study these excited Ξ resonances. 
Ii Status of the GlueX experiment In the following section, we discuss the current status of the development of the baseline GlueX experiment. The GlueX experiment was first presented to PAC 30 in 2006 pac30 . While beam time was not awarded for 12 GeV proposals at that PAC, the proposal suggested a three phase startup for GlueX, which spanned approximately the first two calendar years of operation. Phase I covered detector commissioning. Phases II and III proposed a total of 7.5×106 s of detector live time at a flux of 107 γ/s for physics commissioning and initial exploratory searches for hybrid mesons. In 2010, an update of the experiment was presented to PAC 36 and a total of 120 days of beam time was granted for Phases I-III. In 2008, two critical detector components were "de-scoped" from the design due to budgetary restrictions. First, and most importantly, the forward Cherenkov particle identification system was removed. The other component that was taken out was the level-three software trigger, which is needed for operating at a photon flux greater than 107 γ/s. These changes severely impact the ultimate scientific goals and discovery potential of the GlueX experiment, as was noted in the PAC 36 report: Finally, the PAC would like to express its hope that the de-scoped Cherenkov detector be revisited at some time in the future. The loss of kaon identification from the current design is a real shame, but entirely understandable given the inescapable limitations on manpower, resources, and time. In 2012, we proposed pac39 both the implementation of the level-three trigger and the development of a kaon-identification system to be used during high intensity (>107 γ/s) running of GlueX. As noted in that proposal, the improved particle identification and the higher beam intensity would allow for a systematic exploration of higher-mass s¯s states as well as doubly-strange Ξ baryons. In particular, identifying the s¯s members of the hybrid nonets and studying s¯s and ℓ¯ℓ (|ℓ¯ℓ⟩≡(|u¯u⟩+|d¯d⟩)/√2) mixing amongst isoscalar mesons are crucial parts of the overall GlueX program that would be fully addressed by the PAC 39 proposal. The PAC 39 proposal was conditionally approved, pending a final design of the kaon-identification system. During the last twelve months, the collaboration has worked to better understand the kaon identification capability of the baseline equipment, in addition to examining how various additional particle identification detectors may augment this capability as we work towards the goals of our PAC 39 proposal. The level of detail with which we are now able to simulate the detector and carry out complex analysis tasks has improved dramatically. While we are still converging on a hardware design for a kaon identification system, our studies have revealed that an increase in intensity and running time alone is sufficient to begin a program of studying mesons and baryons with hidden and open strangeness. ii.1 GlueX construction progress A schematic view of the GlueX detector is shown in Fig. 3. The civil construction of Hall D is complete and the collaboration gained control of both Hall D and the Hall D tagger hall in 2012. Many of the detector components are now being installed, with others being tested prior to installation. As of April 2013, all major sub-detector systems are either built or are under construction at Jefferson Lab or various collaborating institutions. 
Beam for the experiment will be derived from coherent bremsstrahlung radiation from a thin diamond wafer and delivered to a liquid hydrogen target. The solenoidal detector has both central and forward tracking chambers as well as central and forward calorimeters. Timing and triggering are aided by a forward time of flight wall and a thin scintillator start counter that encloses the target. We briefly review the capabilities and construction status of each of the detector components below. Table 1 lists all of the GlueX collaborating experimental institutions and their primary responsibilities. The collaboration consists of over a hundred members, including representation from the theory community. Figure 3: A schematic of the GlueX detector and beam. Arizona State U. beamline polarimetry, beamline support Athens BCAL and FCAL calibration Carnegie Mellon U. CDC, offline software, management Catholic U. of America tagger system Christopher Newport U. trigger system U. of Connecticut tagger microscope, diamond targets, offline software Florida International U. start counter Florida State U. TOF system, offline software U. of Glasgow goniometer, beamline support Indiana U. FCAL, offline software, management Jefferson Lab FDC, data acquisition, trigger, electronics, infrastructure, management U. of Massachusetts target, electronics testing Massachusetts Institute of Technology level-3 trigger, forward PID, offline software MEPHI offline and online software Norfolk State U. installation and commissioning U. of North Carolina A&T State beamline support U. of North Carolina, Wilmington pair spectrometer Northwestern U. detector calibration U. Técnica Federico Santa María BCAL readout U. of Regina BCAL, SiPM testing Table 1: A summary of GlueX institutions and their responsibilities. ii.1.1 Beamline and Tagger The GlueX photon beam originates from coherent bremsstrahlung radiation produced by the 12 GeV electron beam impinging on a 20 μm diamond wafer. Orientation of the diamond and downstream collimation produce a photon beam peaked in energy around 9 GeV with about 40% linear polarization. A coarse tagger tags a broad range of electron energy, while precision tagging in the coherent peak is performed by a tagger microscope. A downstream pair spectrometer is utilized to measure photon conversions and determine the beam flux. Construction of the full system is underway. Substantial work has also been done by the Connecticut group to fabricate and characterize thin diamond radiators for GlueX. This has included collaborations with the Cornell High Energy Synchrotron Source as well as industrial partners. Successful fabrication of 20 μm diamond radiators for GlueX has been demonstrated using a laser-ablation system at Connecticut. This system starts with a much thicker diamond wafer and thins the central part of the diamond down to the 20 μm thickness, leaving a thicker picture frame around the outside of the diamond. This frame allows for easier mounting and limits the vibration seen in thin diamonds. The design of the goniometer system to manipulate the diamond has been completed by the Glasgow group and the device has been purchased from private industry. The tagger magnet and vacuum vessel are currently being installed in the Hall D tagger hall. The design for the precision tagger "microscope" was developed at Connecticut, including the custom electronics for silicon photomultiplier (SiPM) readout. 
Beam tests of prototypes have been conducted, and the construction of the final system is underway at Connecticut. The coarse tagger, which covers the entire energy range up to nearly the endpoint, is currently being built by the Catholic University group. The groups from the University of North Carolina at Wilmington, North Carolina A&T State, and Jefferson Lab are collaborating to construct the pair spectrometer. A magnet obtained from Brookhaven has been modified to make it suitable for use in Hall D and is ready for installation. In addition, the Arizona State and Glasgow groups are collaborating to develop a technique for accurately measuring the linear polarization of the beam. Tests are planned in Mainz this year. ii.1.2 Solenoid Magnet At the heart of the GlueX detector is the 2.2 T superconducting solenoid, which provides the essential magnetic field for tracking. The solenoidal geometry also has the benefit of reducing electromagnetic backgrounds in the detectors since low energy e+e− pairs spiral within a small radius of the beamline. The field is provided by four superconducting coils. These four coils have been tested independently with three of the four having been tested up to the nominal current of 1500 A while the remaining coil was only tested to 1200 A due to a problem with power leads that was unrelated to the coil itself. No serious problems were found. The magnet has now been fully assembled, and the solenoid has been operated at 1500 A inside of Hall D. At the time of submission of this proposal, these studies are still underway. ii.1.3 Tracking Charged particle tracking is performed by two systems: a central straw-tube drift chamber (CDC) and four six-plane forward drift chamber (FDC) packages. The CDC is composed of 28 layers of 1.5-m-long straw tubes. The chamber provides r−ϕ measurements for charged tracks. Sixteen of the 28 layers have a 6∘ stereo angle to supply z measurements. Each FDC package is composed of six planes of anode wires. The cathode strips on either side of the anode cross at ±75∘ angles, providing a two-dimensional intersection point on each plane. The construction of the CDC VanHaarlem:2010yq has been completed by Carnegie Mellon University (CMU) and the chamber is currently being tested prior to delivery and installation in Hall D late in 2013. The construction of the FDC by Jefferson Lab is also complete, and the chamber packages are undergoing testing prior to installation in Hall D in the fall of 2013. The position resolution of the CDC and FDC is about 150 μm and 200 μm, respectively. Together the approximate momentum resolution is 2%, averaged over the kinematical regions of interest. Construction on the CDC began in May of 2010 with initial procurement and quality assurance of components and the construction of a 400 ft2 class 2000 cleanroom at CMU. In August of that year, the end plates were mounted on the inner shell and then aligned. The empty frame was then moved into a vertical position for the installation of the 3522 straw tubes. This work started in November of 2010 and continued until October of 2011, when the outer shell was installed on the chamber. Stringing of the wires was completed in February of 2012, and all tension voltage and continuity checks were completed in March of 2012. In May of 2012, the upstream gas plenum was installed on the chamber and wiring of the high-voltage system commenced. 
This latter work finished in early 2013 at which point the chamber was checked for gas tightness and the down-stream gas window was installed. After successful studies with a full-scale prototype, the FDC construction started in the beginning of 2011, with the entire production process carried out by Jefferson Lab in an off-site, 2000 ft2 class 10,000 cleanroom. As of early 2013, all four packages had been completed and a spare is being constructed using the extra parts. Tests of the packages are being carried out with cosmic rays in a system that uses external chambers for tracking, scintillators for triggering, and a DAQ system. With the anticipated delivery of the needed flash-ADC modules in 2013, full package readout tests will be carried out. The chamber is scheduled to be installed in the fall of 2013. ii.1.4 Calorimetry Like tracking, the GlueX calorimetry system consists of two detectors: a barrel calorimeter with a cylindrical geometry (BCAL) and a forward lead-glass calorimeter with a planar geometry (FCAL). The primary goal of these systems is to detect photons that can be used to reconstruct π0's and η's, which are produced in the decays of heavier states. The BCAL is a relatively high-resolution sampling calorimeter, based on 1 mm double-clad Kuraray scintillating fibers embedded in a lead matrix. It is composed of 48 four-meter-long modules; each module having a radial thickness of 15.1 radiation lengths. Modules are read out on each end by silicon SiPMs, which are not adversely affected by the high magnetic field in the proximity of the GlueX solenoid flux return. The forward calorimeter is composed of 2800 lead glass modules, stacked in a circular array. Each bar is coupled to a conventional phototube. The fractional energy resolution of the combined calorimetry system δ(E)/E is approximately 5%-6%/√E [GeV]. Monitoring systems for both detectors have been designed by the group from the University of Athens. All 48 BCAL calorimeter modules and a spare have been fabricated by the University of Regina and are at Jefferson Lab where they have been fitted with light guides and sensors. These light guides have been fabricated at the University of Santa María (USM), which has also been responsible for testing of most of the Hamamatsu S12045X MPPC arrays (SiPMs). The LED calibration system for the BCAL has been built by the University of Athens and has been installed as well. The assembled modules are nearly ready for the start of their installation into the GlueX detector. The 2800 lead glass modules needed for the FCAL have been assembled at Indiana University and shipped to Jefferson Lab. They are now stacked in the detector frame in Hall D, and work is proceeding on the remaining infrastructure and cabling to support the readout of the detector. All of the PMTs are powered by custom-built Cockcroft-Walton style photomultiplier bases Brunner:1998fh in order to reduce cable mass, power dissipation, and high voltage control system costs. The design, fabrication, and testing of the bases was completed at Indiana University. In addition, a 25-block array utilizing the production design of all FCAL components was constructed and tested with electrons in Hall B by the Indiana University group in the spring of 2012; results indicate that the performance meets or exceeds expectations FCALBeamTestNIM . ii.1.5 Particle ID and timing The particle ID capabilities of GlueX are derived from several subsystems. 
A dedicated forward time-of-flight wall (TOF), which is constructed from two planes of 2.5-cm-thick scintillator bars, provides about 70 ps timing resolution on forward-going tracks within about 10∘ of the beam axis. This information is complemented by time-of-flight data from the BCAL and specific ionization (dE/dx) measured with the CDC, both of which are particularly important for identifying the recoil proton in γp→Xp reactions. Finally, identification of the beam bunch, which is critical for timing measurements, is performed by a thin start counter that surrounds the target. The TOF system is currently under construction at Florida State University. A prototype built using the final design hardware has achieved 100 ps resolution for mean time from a single counter read-out from both ends. The system consists of two planes of such counters, implying that the demonstrated two-plane resolution is 70 ps. The detector is expected to be installed in Hall D late in 2013. Engineering drawings for the start counter are under development. The counters and the electronics have to fit into a narrow space between the target vacuum chamber and the inner wall of the CDC. Prototypes have obtained a time resolution of 300 to 600 ps, depending on the position of the hit along the length of the counter. The final segmentation has been fixed. SiPMs will be used for readout because they can be placed in the high magnetic field environment very close to the counters, thereby preserving scintillation light. The design of the SiPM electronics is about to start, and a final prototype of the scintillator assembly is under development. The combined PID system in the baseline design is sufficient for identification of most protons in the kinematic regions of interest for GlueX. The forward PID can be used to enhance the purity of the charged pion sample. However, the combined momentum, path length, and timing resolution only allows for exclusive kaon identification for track momenta less than 2.0 GeV/c. However, because the hermetic GlueX detector often reconstructs all final state particles, one can test conservation of four-momentum via a kinematic fit as a means of particle identification. This is especially effective when the recoil nucleon is measured. While it is true that no single particle identification measurement in GlueX provides complete separation between kaons and pions, the contributions of many different but correlated measurements can provide effective PID. ii.2 GlueX software readiness Jefferson Lab organized an external review of the software efforts in all aspects of the 12 GeV project in order to assess the status of software development in all experimental halls as well as identify any issues that would impede progress toward the 12 GeV physics goals. This review took place in June, 2012, and the report, issued in September ReviewReport , stated Overall, the Committee was very impressed with the current state of software and computing preparations and the plans leading up to 12 GeV data taking. The sophistication of GlueX simulation and reconstruction software was positively received by the committee. The recommendations of the committee focused on large scale implementation of these tools on both large data sets and with large groups of people. Recommendations were: The data volume and processing scale of GlueX is substantial but plans for data management and workload management systems supporting the operational scale were not made clear. They should be carefully developed. 
A series of scale tests ramping up using JLab's LQCD farm should be planned and conducted. The GlueX collaboration responded to both of these. To address the first recommendation and prepare for the second recommendation several steps were taken. The format for reconstructed GlueX data has been defined and implemented in all GlueX software. The size of the reconstructed data set is smaller than the estimates made at the time of the software review and lead us to believe that it should be possible to keep GlueX data live on disk at multiple sites. The format for raw data has been developed in collaboration with the data acquisition group. The typical size of these events is about 50% of what was originally estimated, but there remains significant uncertainty in how much this size may be inflated by detector noise. To address analysis workload management, we have developed an analysis framework that allows high-level, uniform access to a standardized set of reconstructed data as well as a consistent interface to analysis tasks such as kinematic fitting and particle identification code. Our intent is to make this a standard platform that facilitates easy analysis of any reaction by any member of the collaboration. With this new infrastructure, the collaboration conducted a data challenge in December 2012, where over 5×109 γp inclusive hadronic physics events were simulated and reconstructed. The point of this effort was not only to generate a large data sample for physics studies, but to also stress the robustness and throughput capability of the current GlueX software at scales comparable to production running, in line with the committee recommendation. The five-billion event sample represents more than half of what will be collected in a year of Phase III GlueX running (107γ/s). These events were simulated and reconstructed on a combination of the Open-science Grid (OSG) (4×109 events), the Jefferson Lab farm (1×109 events) and the Carnegie Mellon computer cluster (0.3×109 events). We plan to incorporate the lessons learned from this data challenge into another similar exercise later this year. It is this large data sample, coupled with smaller samples of specific final states and new sophisticated analysis tools, that has allowed us to better understand the performance and capabilities of the baseline GlueX detector. This ultimately has given us confidence that the baseline GlueX detector is capable of carrying out initial studies of the s¯s hybrid spectrum for a select number of final states. Figure 4: A sample amplitude analysis result for the γp→π+π−π+n channel with GlueX. (top) The invariant mass spectrum as a function of M(π+π−π+) is shown by the solid histogram. The results of the amplitude decomposition into resonant components in each bin is shown with points and error bars. (bottom) The exotic amplitude, generated at a relative strength of 1.6%, is cleanly extracted (red points). The black points show the phase between the π1 and a1 amplitudes. ii.3 Sensitivity and limitations to physics goals during initial GlueX running ii.3.1 Initial GlueX physics goals: non-strange final states Phases I-III of the GlueX physics program provide an excellent opportunity for both the study of conventional mesons and the search for exotic mesons in photoproduction. When one considers both production cross sections and detection efficiencies, the final states of interest will most likely focus on those decaying into non-strange mesons: π, η, η′, and ω. 
Table 3 summarizes the expected lowest-mass exotics and possible decay modes. Initial searches will likely focus on the π1 isovector triplet and the η1 isoscalar. It will also be important to try to establish the other (non-exotic) members of the hybrid multiplet: the 0−+, 1−−, and 2−+ states. Finally, the initial data may provide an opportunity to search for the heavier exotic b2 and h2 states. One reaction of interest is γp→π+π−π+n. The (3π)± system has been studied extensively with data from E852 Adams:1998ff ; Dzierba:2005jg and COMPASS Alekseev:2009aa , with COMPASS reporting evidence for the exotic π1(1600)→ρπ decay. CLAS Nozar:2008aa has placed an upper limit on the photoproduction of the π1(1600) and the subsequent decay to ρπ. We have used this limit as a benchmark to test the GlueX sensitivity to small amplitudes by performing an amplitude analysis on a sample of purely generated γp→π+π−π+n events that has been subjected to full detector simulation and reconstruction as discussed above. Several conventional resonances, the a1, π2, and a2, were generated along with a small (<2%) component of the exotic π1. The result of the fit is shown in Figure 4; the exotic amplitude and its phase are clearly extracted. GlueX plans to systematically explore other non-strange channels, especially those that are predicted to be favorable hybrid decays. One such study is to search for hybrid decays to b1π, which, considering the b1→ωπ decay, results in a 5πp final state. In an analysis of mock data, we were able to use event selection and amplitude analysis to extract production of an exotic hybrid decay to b1π that is produced at a level corresponding to 0.03% of the total hadronic cross section IgorThesis . The signal to background ratio for the ωππp sample exceeded 10:1. Both of these studies indicate that the GlueX detector provides excellent sensitivity to exotic mesons that decay into non-strange final states. As detailed later, production cross sections for many of these non-strange topologies of interest are not well-known. Data from pion production experiments or branching fractions of heavy mesons suggest that production of η and especially η′ might be suppressed. This implies that a high-statistics study of, for example, γp→η′π+n to search for the exotic state reported by E852 Ivanov:2001rv or a study of the f1π final state, which populates ηπππ, will likely need the data derived from the high-intensity run put forth in this proposal. ii.3.2 GlueX sensitivity to final states with strangeness GlueX does not contain any single detector element that is capable of providing discrimination of kaons from pions over the full-momentum range of interest for many key reactions. However, the hermetic GlueX detector is capable of exclusively reconstructing all particles in the final state. In the case where the recoil nucleon is a proton that is detectable by the tracking chamber, this exclusive reconstruction becomes a particularly powerful tool for particle identification because conservation of four-momentum can be checked, via a kinematic fit, for various mass hypotheses for the final state particles. Many other detector quantities also give an indication of the particle mass, as assumptions about particle mass (pion or kaon) affect interpretation of raw detector information. An incomplete list of potentially discriminating quantities include: The confidence level (CL) from kinematic fitting that the event is consistent with the desired final state. 
The CL(s) from kinematic fitting that the event is consistent with some other final states. The goodness of fit (χ2) of the primary vertex fit. The goodness of fit (χ2) of each individual track fit. The CL from the time-of-flight detector that a track is consistent with the particle mass. The CL from the energy loss (dE/dx) that a track is consistent with the particle type. The change in the goodness of fit (Δχ2) when a track is removed from the primary vertex fit. Isolation tests for tracks and the detected showers in the calorimeter system. The goodness of fit (χ2) of possible secondary vertex fits. Flight-distance significance for particles such as KS and Λ that lead to secondary vertices. The change in goodness of fit (Δχ2) when the decay products of a particle that produces a secondary vertex are removed from the primary vertex fit. The exact way that these are utilized depends on the particular analysis, but it is generally better to try and utilize as many of these as possible in a collective manner, rather than simply placing strict criteria on any one of them. This means that we take advantage of correlations between variables in addition to the variables themselves. One method of assembling multiple correlated measurements into a single discrimination variable is a boosted decision tree (BDT) ref:bdt . Multivariate classifiers are playing an increasingly prominent role in particle physics. The inclusion of BDTs and artificial neural networks (ANNs) is now commonplace in analysis selection criteria. BDTs are now even used in software triggers ref:lhcbhlt ; ref:bbdt . Traditionally, analyses have classified candidates using a set of variables, such as a kinematic fit confidence level, charged-particle time of flight, energy loss (dE/dx), etc., where cuts are placed on each of the input variables to enhance the signal. In a BDT analysis, however, cuts on individual variables are not used; instead, a single classifier is formed by combining the information from all of the input variables. The first step in constructing a decision tree (DT) is to split the data into two subsamples (of unequal size) using the input variable which gives the largest separation between signal and background. The variable to split on is determined by looping over all inputs. As a result of this split, one subsample should be mostly background and the other mostly signal. We now refer to these subsamples as branches. The process is repeated for these two branches, looping over all input variables to produce a second generation of four branches. The process is repeated again on this new generation of branches, and then again as necessary until the tree is complete. The final set of branches, referred to as leaves, is reached when one of the following occurs: a branch contains only signal or background events and so cannot be split any further; a branch has too few events to be split (a parameter specified by the grower of the tree); or the maximum number of total leaves has been reached (also a parameter specified by the grower). Correlations are exploited by ensuring that all the input variables are included each time, including those used in previous branch divisions. The above process is carried out using a training data sample that consists of events (possibly simulated) where it is known which class, signal or background, each event belongs to. 
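To make the splitting step just described concrete, the following minimal sketch scans a labeled training sample for the input variable and cut value that give the best signal/background separation, here scored with the Gini impurity. It is only an illustration: the variable names, distributions, and impurity measure are assumptions of this sketch, and the actual GlueX analyses use the boosted decision trees implemented in ROOT's TMVA package rather than this toy code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training sample: two illustrative PID-like variables per event
# (e.g. a kinematic-fit confidence level and a timing agreement measure),
# with a label recording which class each event belongs to.
n = 20000
sig = np.column_stack([rng.beta(5, 2, n), rng.normal(0.0, 1.0, n)])   # signal
bkg = np.column_stack([rng.beta(2, 5, n), rng.normal(1.5, 1.2, n)])   # background
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])      # 1 = signal, 0 = background

def gini(labels):
    """Gini impurity of a branch; 0 means the branch is pure."""
    if len(labels) == 0:
        return 0.0
    p = labels.mean()
    return 2.0 * p * (1.0 - p)

def best_split(X, y):
    """Loop over all input variables and candidate cuts; return the split
    giving the largest impurity reduction, i.e. the best separation."""
    parent = gini(y)
    best = (None, None, -np.inf)
    for ivar in range(X.shape[1]):
        for cut in np.quantile(X[:, ivar], np.linspace(0.05, 0.95, 19)):
            left, right = y[X[:, ivar] < cut], y[X[:, ivar] >= cut]
            gain = parent - (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if gain > best[2]:
                best = (ivar, cut, gain)
    return best

ivar, cut, gain = best_split(X, y)
print(f"first split: variable {ivar} at {cut:.3f} (impurity reduction {gain:.3f})")
```

Repeating the same search on each resulting branch produces the tree structure described above.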
A single DT will overtrain and learn some fine-structure aspects of the data sample used for training which are due to statistical limitations of the data used and not aspects of the parent probability density function, i.e., it will train on fluctuations. To counter this, a series of classifiers is trained which greatly enhances the performance. The training sample for each member of the series is augmented based on the performance of previous members. Incorrectly classified events are assigned larger weights or boosted in importance; this technique is referred to as boosting. The result is that each successive classifier is designed to improve the overall performance of the series in the regions where its predecessors have failed. In the end, the members of the series are combined to produce a single powerful classifier: a BDT. The performance is validated using an independent data sample, called a validation sample, that was not used in the training. If the performance is found to be similar when using the training (where it is maximally biased) and validation (where it is unbiased) samples, then the BDT performance is predictable. Practically, the output of the BDT is a single number for each event that tends towards one for signal-like events but tends towards negative one for background-like events. Placing a requirement on the minimum value of this classifier, which incorporates all independent information input to the BDT, allows one to enhance the signal purity of a sample. For a pedagogical description of BDTs, see Ref. ref:roe . We illustrate the effectiveness of the BDT method by examining a reaction of the type γp → pK+K−π+π−, where we only consider the case where the recoil proton is reconstructed. A missing recoil nucleon reduces the number of constraints in the kinematic fit, and, consequently, dramatically diminishes the capability of the fit to discriminate pions from kaons. One can build a BDT for the reaction of interest, and look at the efficiency of selecting true signal events as a function of the sample purity. These studies do not include the efficiency of reconstructing the tracks in the detector, but start at the point where a candidate event containing five charged tracks has been found. Selection efficiencies are given in Table 2. When coupled with estimates of the detection efficiency, these data suggest that it may be possible to have 90% pure kaon samples with an overall efficiency that is acceptable for analysis. We are limited to events where a recoil proton is detected, and, if we desire higher purity (lower background), then the efficiency drops dramatically. Final State K+K−π+π−p Table 2: The selection efficiency of the BDT method for physics channel leading to a K+K−π+π−p final state, assuming a particular signal purity is desired. Efficiency is selection efficiency only and does not include reconstruction efficiency. The BDT method has been optimized using the simulation-determined GlueX tracking resolution and a 50% degraded tracking resolution to account for potential resolution simulation errors. ii.3.3 Limitations of existing kaon identification algorithms It is important to point out that the use of kinematic constraints to achieve kaon identification, without dedicated hardware, has limitations. By requiring that the recoil proton be reconstructed, we are unable to study charge exchange processes that have a recoil neutron. 
In addition, this requirement results in a loss of efficiency of 30%-50% for proton recoil topologies and biases the event selection to those that have high momentum transfer, which may make it challenging to conduct studies of the production mechanism. Our studies indicate that it will be difficult to attain very high purity samples with a multivariate analysis alone. In channels with large cross sections, the GlueX sensitivity will not be limited by acceptance or efficiency, but by the ability to suppress and parameterize backgrounds in the amplitude analysis. To push the limits of sensitivity we need not only high statistics but high purity. Finally, it is worth noting that our estimates of the kaon selection efficiency using kinematic constraints depends strongly on our ability to model the performance of the detector. Although we have constructed a complete simulation, the experience of the collaboration with comparable detector systems indicates that the simulated performance is often better than the actual performance in unforeseen ways. While the studies of kaon-identification systems as part of the PAC 39 proposal are not yet complete, some general comments can be made on the impact of additional kaon-identification. The obvious advantage of supplemental particle identification hardware is that it provides new, independent information to the multivariate analysis that has a very high discrimination power. We see noticeable improvements in efficiency when information from various design supplemental kaon ID systems is included in the BDT; this is especially dramatic at 95% purity. Despite the limitations noted above, we will demonstrate that a program of high intensity running with the GlueX detector, even without a dedicated particle identification upgrade, is capable of producing interesting results in the study of s¯s mesons and Ξ baryons. In addition, the order-of-magnitude increase in statistical precision in non-strange channels will allow us to study production mechanisms (t-dependence of reactions) with greater precision and to search for rarely produced resonances. Iii Study of s¯s Mesons The primary goal of the GlueX experiment is to conduct a definitive mapping of states in the light meson sector, with an emphasis on searching for exotic mesons. Ideally, we would like to produce the experimental analogue of the lattice QCD spectrum pictured in Fig. 1, enabling a direct test of our understanding of gluonic excitations in QCD. In order to achieve this, one must be able to reconstruct strange final states, as observing decay patterns of mesons has been one of the primary mechanisms of inferring quark flavor content. An example of this can be seen by examining the two lightest isoscalar 2++ mesons in the lattice QCD calculation in Fig. 1. The two states have nearly pure flavors, with only a small (11∘) mixing in the ℓ¯ℓ and s¯s basis. A natural experimental assignment for these two states are the f2(1270) and the f′2(1525). An experimental study of decay patterns shows that B(f2(1270)→KK)/B(f2(1270)→ππ)≈0.05 and B(f′2(1525)→ππ)/B(f′2(1525)→KK)≈0.009 Beringer:1900zz , which support the prediction of an f2(1270) (f′2(1525)) with a dominant ℓ¯ℓ (s¯s) component. By studying both strange and non-strange decay modes of mesons, GlueX hopes to provide similarly valuable experimental data to aid in the interpretation of the hybrid spectrum. 
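As a small numerical aside, the snippet below shows how the quoted 11∘ mixing angle translates into nearly pure flavor states under a standard two-state mixing convention; the convention itself (cosine of the angle multiplying the ℓ¯ℓ component) is an assumption of this sketch and is not taken from Ref. Dudek:2011bn.

```python
import math

def flavor_fractions(alpha_deg):
    """Two-state mixing |f> = cos(a)|ll> + sin(a)|ss> (illustrative convention).
    Returns the probabilities of the ll and ss components."""
    a = math.radians(alpha_deg)
    return math.cos(a) ** 2, math.sin(a) ** 2

for alpha in (0.0, 11.0, 45.0):
    p_ll, p_ss = flavor_fractions(alpha)
    print(f"mixing angle {alpha:4.1f} deg : ll fraction = {p_ll:.3f}, ss fraction = {p_ss:.3f}")

# With the ~11 deg angle found for the lightest 2++ pair, one state is ~96% ll
# and the other ~96% ss, consistent with the observed dominance of pi-pi and
# K-Kbar decays for the f2(1270) and f2'(1525), respectively.
```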
iii.1 Exotic s¯s states While most experimental efforts to date have focused on the lightest isovector exotic meson, the JPC=1−+ π1(1600), lattice QCD clearly predicts a rich spectrum of both isovector and isoscalar exotics, the latter of which may have mixed ℓ¯ℓ and s¯s flavor content. A compilation of the "ground state" exotic hybrids is listed in Table 3, along with theoretical estimates for masses, widths, and key decay modes. It is expected that initial searches with the baseline GlueX hardware will target primarily the π1 state. Searches for the η1, h0, and b2 may be statistically challenging, depending on the masses of these states and the production cross sections. With increased statistics and kaon identification, the search scope can be broadened to include these heavier exotic states in addition to the s¯s states: η′1, h′0, and h′2. The η′1 and h′2 are particularly interesting because some models predict that these states are relatively narrow and that they should decay through well-established kaon resonances.

The observation of various π1 states has been reported in the literature for over fifteen years, with some analyses based on millions of events. However, it is safe to say that there exists a fair amount of skepticism regarding the assertion that unambiguous experimental evidence exists for exotic hybrid mesons. If the scope of exotic searches with GlueX is narrowed to only include the lightest isovector π1 state, the ability of GlueX to comprehensively address the question of the existence of gluonic excitations in QCD is greatly diminished. On the other hand, clear identification of all exotic members of the lightest hybrid multiplet, the three charge states of the exotic π1 together with the exotic η1 and η′1, which can only be done by systematically studying a large number of strange and non-strange decay modes, would provide unambiguous experimental confirmation of exotic mesons. A study of decays to kaon final states could demonstrate that the η1 candidate is dominantly ℓ¯ℓ while the η′1 candidate is s¯s, as predicted by initial lattice QCD calculations. Such a discovery would represent a substantial improvement in the experimental understanding of exotics. In addition, further identification of members of the 0+− and 2+− nonets, as well as measurement of their mass splittings with the 1+− states, will validate the lattice-QCD-inspired phenomenological picture of these states as P-wave couplings of a gluonic field with a color-octet q¯q system.

State | JPC | Total Width (MeV), PSS | Total Width (MeV), IKP | Relevant Decays | Final States
π1 | 1−+ | 80−170 | 120 | b1π†, ρπ†, f1π†, a1η, η′π† | ωππ†, 3π†, 5π, η3π†, η′π†
η1 | 1−+ | 60−160 | 110 | a1π, f1η†, π(1300)π | 4π, η4π, ηηππ†
η′1 | 1−+ | 100−220 | 170 | K1(1400)K†, K1(1270)K†, K∗K† | KKππ†, KKπ†, KKω†
b0 | 0+− | 250−430 | 670 | π(1300)π, h1π | 4π
h0 | 0+− | 60−260 | 90 | b1π†, h1η, K(1460)K | ωππ†, η3π, KKππ
h′0 | 0+− | 260−490 | 430 | K(1460)K, K1(1270)K†, h1η | KKππ†, η3π
b2 | 2+− | 10 | 250 | a2π†, a1π, h1π | 4π, ηππ†
h2 | 2+− | 10 | 170 | b1π†, ρπ† | ωππ†, 3π†
h′2 | 2+− | 10−20 | 80 | K1(1400)K†, K1(1270)K†, K∗2K† | KKππ†, KKπ†

Table 3: A compilation of exotic quantum number hybrid approximate masses, widths, and decay predictions. Masses are estimated from dynamical LQCD calculations with Mπ=396 MeV/c2 Dudek:2011bn . The PSS (Page, Swanson and Szczepaniak) and IKP (Isgur, Kokoski and Paton) model widths are from Ref. Page:1998gz , with the IKP calculation based on the model in Ref. Isgur:1985vy . The total widths have a mass dependence, and Ref. Page:1998gz uses somewhat different mass values than suggested by the most recent lattice calculations Dudek:2011bn .
Those final states marked with a dagger (†) are ideal for experimental exploration because there are relatively few stable particles in the final state or moderately narrow intermediate resonances that may reduce combinatoric background. (We consider η, η′, and ω to be stable final state particles.) iii.2 Non-exotic s¯s mesons As discussed in Section I.1, one expects the lowest-mass hybrid multiplet to contain (0,1,2)−+ states and a 1−− state that all have about the same mass and correspond to an S-wave q¯q pair coupling to the gluonic field in a P-wave. For each JPC we expect an isovector triplet and a pair of isoscalar states in the spectrum. Of the four sets of JPC values for the lightest hybrids, only the 1−+ is exotic. The other hybrid states will appear as supernumerary states in the spectrum of conventional mesons. The ability to clearly identify these states depends on having a thorough and complete understanding of the meson spectrum. Like searching for exotics, a complete mapping of the spectrum of non-exotic mesons requires the ability to systematically study many strange and non-strange final states. Other experiments, such as BESIII or COMPASS, are carefully studying this with very high statistics data samples and and have outstanding capability to cleanly study any possible final state. While the production mechanism of GlueX is complementary to that of charmonium decay or pion beam production and is thought to enhance hybrid production, it is essential that the detector capability and statistical precision of the data set be competitive with other contemporary experiments in order to maximize the collective experimental knowledge of the meson spectrum. Given the recent developments in charmonium (briefly discussed in Section I.2), a state that has attracted a lot of attention in the s¯s spectrum is the Y(2175), which is assumed to be an s¯s vector meson (1−−). The Y(2175) has been observed to decay to ππϕ and has been produced in both J/ψ decays Ablikim:2007ab and e+e− collisions Aubert:2006bu ; Shen:2009zze . The state is a proposed analogue of the Y(4260) in charmonium, a state that is also about 1.2 GeV heavier than the ground state triplet (J/ψ) and has a similar decay mode: Y(4260)→ππJ/ψ. The Y(4260) has no obvious interpretation in the charmonium spectrum and has been speculated to be a hybrid meson Close:2005iz ; Zhu:2005hp ; Kou:2005gt ; Luo:2005zg , which, by loose analogy, leads to the implication that the Y(2175) might also be a hybrid candidate. It should be noted that the spectrum of 1−− s¯s mesons is not as well-defined experimentally as the c¯c system; therefore, it is not clear that the Y(2175) is a supernumerary state. However, GlueX is ideally suited to study this system. We know that vector mesons are copiously produced in photoproduction; therefore, with the ability to identify kaons, a precision study of the 1−− s¯s spectrum can be conducted with GlueX. Some have predicted Ding:2007pc that the potential hybrid nature of the Y(2175) can be explored by studying ratios of branching fractions into various kaonic final states. In addition, should GlueX be able to conclude that the Y(2175) is in fact a supernumerary vector meson, then a search can be made for the exotic 1−+ s¯s member of the multiplet (η′1), evidence of which would provide a definitive interpretation of the Y(2175) and likely have implications on how one interprets charmonium data. 
iii.3 GlueX sensitivity to s¯s mesons Recent studies of the capability of the baseline GlueX detector indicate that we will have adequate sensitivity to a number of final states containing kaons. Generically, these appear to be final states in which the recoil proton is reconstructed, as this provides the kinematic fit with the most power to discriminate among particle mass hypotheses. We discuss below the GlueX sensitivity to a variety of final state topologies motivated by the physics topics in the preceding sections. Table 3 provides information from models of hybrid mesons on the expected decay modes of exotic quantum-number states. The η′1, the h′0, and the h′2 all couple to the KKππ final state, while both the η′1 and the h′2 are expected to couple to the KKπ final state. To study the GlueX sensitivity to these two final states, we have modeled two decay chains. For the KKπ state, we assume one of the kaons is a KS, which leads to a secondary vertex and the K+π−π+π− final state: η′1(2300) → K∗KS (1) → (K+π−)(π+π−) → K+π−π+π−. For the KKππ state we assume no secondary vertex: h′2(2600) → K+1K− (2) → (K∗(892)π+)K− → K+K−π−π+. In addition to the exotic hybrid channels, there is an interest in non-exotic s¯s mesons. In order to study the sensitivity to conventional s¯s states, we consider an excitation of the normal ϕ meson, the known ϕ3(1850), which decays to K¯K ϕ3(1850) → K+K−. (3) The detection efficiency of this state will be typical of ϕ-like states decaying to the same final state. Finally, as noted in Section III.2, the Y(2175) state is viewed as a potential candidate for a non-exotic hybrid and has been reported in the decay mode Y(2175) → ϕf0(980) (4) → (K+K−)(π+π−). While this is the same KKππ state noted in reaction 2 above, the intermediate resonances make the kinematics of the final state particles different from the exotic decay channel noted above. Therefore, we simulate it explicitly. The final-state kaons from the reactions 1 - 4 will populate the GlueX detectors differently, with different overlap of the region where the time-of-flight system can provide good K/π separation. Figure 5 shows the kinematics of these kaons and the overlap with the existing time-of-flight sensitivity. Figure 5: Plots of particle density as a function of momentum and polar angle for all kaons in a variety of different production channels. Shown in solid (red) is the region of phase space where the existing time-of-flight (TOF) detector in GlueX provides K/π discrimination at the four standard deviation level. A BDT analysis (see Section II.3.2) has been used to study the capabilities of the baseline GlueX detector to identify the four reactions of interest. This study used the pythia-simulated γp collisions from the large-scale data challenge as described in Section II.2. Signal samples were obtained from pythia events with the generated reaction topology, and the remainder of the inclusive photoproduction reactions were used as the background sample. A large number of discriminating variables were used in the BDT analysis, which generated a single classifier by combining the information from all of the input variables. (The BDT algorithms used are contained within ROOT in the TMVA package ref:TMVA2007 .) In all cases we set the requirement on the BDT classifier in order to obtain a fixed final sample purity. For example, a purity of 90% implies a background at the 10% level. Any exotic signal in the spectrum would likely need to be larger than this background to be robust. 
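To illustrate how a requirement on the BDT classifier output maps onto a chosen purity, the short sketch below scans a cut on toy classifier scores for signal and background samples and reports the signal efficiency at which 90% and 95% purity are reached. The score distributions and the assumed background-to-signal ratio are invented for illustration; they are not the GlueX TMVA outputs.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in BDT scores in [-1, 1]: signal peaks toward +1, background toward -1.
sig_scores = np.clip(rng.normal(+0.5, 0.35, 100_000), -1, 1)
bkg_scores = np.clip(rng.normal(-0.4, 0.40, 100_000), -1, 1)
bkg_over_sig = 10.0          # assumed background/signal ratio before any selection

def purity_and_efficiency(cut):
    eff_sig = (sig_scores > cut).mean()
    eff_bkg = (bkg_scores > cut).mean()
    n_sig, n_bkg = eff_sig, bkg_over_sig * eff_bkg   # relative yields after the cut
    return n_sig / (n_sig + n_bkg), eff_sig

def cut_for_purity(target):
    """Return the loosest cut that reaches the requested sample purity."""
    for cut in np.linspace(-1.0, 0.99, 1000):
        purity, eff = purity_and_efficiency(cut)
        if purity >= target:
            return cut, purity, eff
    raise ValueError("target purity not reachable for these toy distributions")

for target in (0.90, 0.95):
    cut, purity, eff = cut_for_purity(target)
    print(f"purity >= {target:.0%}: cut at {cut:+.2f}, signal efficiency {eff:.1%}")
```

Tightening the cut raises the purity of the surviving sample at the cost of signal efficiency, which is the trade-off quantified for the real channels below.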
Therefore, with increased purity we have increased sensitivity to smaller signals, but also lower efficiency. In Table 4, we present the signal selection efficiencies (post-reconstruction) for our four reactions of interest assuming the design resolution of the GlueX tracking system. As noted earlier, these assume that the tracks have been reconstructed and do not include that efficiency. Historical evidence suggests that simulated resolutions are always more optimistic than what is attainable with the actual detector. To check the sensitivity of our conclusions to such a systematic error, we repeat the study while degrading the tracking resolution by 50%. At the 90% purity level, this degradation reduces the efficiency noticeably but not severely.

Table 4: Efficiencies for identifying several final states in GlueX, for the channels ϕ3(1850)→K+K−, Y(2175)→ϕf0(980), η′1(2300)→K∗KS, and h′2(2600)→K+1K−. The efficiencies do not include the reconstruction of the final state tracks.

Finally, we have studied the resulting efficiency when we require a signal purity of 95%, which, for example, would be necessary to search for more rare final states. As can be seen in Table 4, increasing the desired purity noticeably reduces the efficiency: in two of the four topologies studied, the efficiency drops by more than half as the desired purity is increased from 90% to 95%. This exposes the limit of what can be done with the baseline GlueX hardware. Preliminary studies with supplemental kaon identification hardware (similar to those discussed in our PAC 39 proposal) indicate that very high-purity samples are attainable with significantly improved efficiency. It is likely that studies of final states where the background must be reduced below 10% will need additional particle identification hardware.

IV Ξ baryons The spectrum of multi-strange hyperons is poorly known, with only a few well-established resonances. Among the doubly-strange states, the two ground-state Ξ's, the octet member Ξ(1320) and the decuplet member Ξ(1530), have four-star status in the PDG Beringer:1900zz , with only four other three-star candidates. On the other hand, more than 20 N∗ and Δ∗ resonances are rated with at least three stars in the PDG. Of the six Ξ states that have at least a three-star rating in the PDG, only two have even weak experimental evidence for their spin-parity (JP) quantum numbers: Ξ(1530)3/2+ Aubert:2008ty and Ξ(1820)3/2− Biagi:1986vs . All other JP assignments, including the JP for the Ξ(1320) ground state, are based on quark-model predictions. Flavor SU(3) symmetry predicts as many Ξ resonances as N∗ and Δ∗ states combined, suggesting that many more Ξ resonances remain undiscovered. The three lightest quarks, u, d, and s, have 27 possible flavor combinations: 3⊗3⊗3=1⊕8⊕8′⊕10, and each multiplet is identified by its spin and parity, JP. Flavor SU(3) symmetry implies that the members of the multiplets differ only in their quark makeup, and that the basic properties of the baryons should be similar, although the symmetry is known to be broken by the strange-light quark mass difference. The octets consist of N∗, Λ∗, Σ∗, and Ξ∗ states. We thus expect that for every N∗ state, there should be a corresponding Ξ∗ state with similar properties. Additionally, since the decuplets consist of Δ∗, Σ∗, Ξ∗, and Ω∗ states, we also expect to find, for every Δ∗ state, a decuplet Ξ∗ with similar properties. iv.1 Ξ spectrum and decays The Ξ hyperons have the unique feature of double strangeness: |ssu⟩ and |ssd⟩.
If the confining potential is independent of quark flavor, the energy of spatial excitations of a given pair of quarks is inversely proportional to their reduced mass. This means that the lightest excitations in each partial wave are between the two strange quarks. In a spectator decay model, such states will not decay to the ground state Ξ and a pion because of orthogonality of the spatial wave functions of the two strange quarks in the excited state and the ground state. This removes the decay channel with the largest phase space for the lightest states in each partial wave, substantially reducing their widths. Typically, ΓΞ∗ is about 10−20 MeV for the known lower-mass resonances, which is 5−30 times narrower than for N∗, Δ∗, Λ∗, and Σ∗ states. These features, combined with their isospin of 1/2, render possible a wide-ranging program on the physics of the Ξ hyperon and its excited states. Until recently, the study of these hyperons has centered on their production in K−p reactions; some Ξ∗ states were found using high-energy hyperon beams. Photoproduction appears to be a very promising alternative. Results from earlier kaon beam experiments indicate that it is possible to produce the Ξ ground state through the decay of high-mass hyperon and Ξ states Tripp:1967kj ; Burgun:1969ee ; Litchfield:1971ri . It is therefore possible to produce Ξ resonances through t-channel photoproduction of hyperon resonances using the photoproduction reaction γp→KKΞ(∗) Price:2004xm ; Guo:2007dw . In briefly summarizing a physics program on Cascades, it would be interesting to see the lightest excited Ξ states in certain partial waves decoupling from the Ξπ channel, confirming the flavor independence of confinement. Measurements of the isospin splittings in spatially excited Ξ states are also possible for the first time. Currently, these splittings, like n−p or Δ0−Δ++, are only available for the octet and decuplet ground states, but are hard to measure in excited N, Δ and Σ, Σ∗ states, which are very broad. The lightest Ξ baryons are expected to be narrower, and measuring the Ξ−−Ξ0 splitting of spatially excited Ξ states remains a strong possibility. These measurements are an interesting probe of excited hadron structure and would provide important input for quark models, which describe the isospin splittings by the u- and d-quark mass difference as well as by the electromagnetic interactions between the quarks. iv.2 GlueX sensitivity to Ξ states The Cascades appear to be produced via t-channel exchanges that result in the production of a hyperon Y∗ at the lower vertex which then decays to Ξ(∗)K. Most of the momentum of the beam is transferred to a forward-going kaon that is produced at the upper vertex. The Ξ octet ground states (Ξ0,Ξ−) will be challenging to study in the GlueX experiment via exclusive t-channel (meson exchange) production. The typical final states for these studies, γp→KY∗ → K+(Ξ−K+), (5) K+(Ξ0K0), K0(Ξ0K+), have kinematics for which the baseline GlueX detector has very low acceptance due to the high-momentum forward-going kaon and relatively low-momentum pions produced in the Ξ decay. However, the production of the Ξ decuplet ground state, Ξ(1530), and other excited Ξ's decaying to Ξπ results in a lower momentum kaon at the upper vertex, and these heavier Ξ states produce higher momentum pions in their decays. GlueX will be able to search for and study excited Ξ's in the reactions γp→KY∗ → K+(Ξπ)K0, (6) K+(Ξπ)K+, K0(Ξπ)K+. 
The lightest excited Ξ states are expected to decouple from Ξπ and can also be searched for and studied in their decays to Λ¯K and Σ¯K:

γp→KY∗ → K+(¯KΛ)Ξ∗−K+,   (7)
          K+(¯KΛ)Ξ∗0K0,
          K0(¯KΛ)Ξ∗0K+,

γp→KY∗ → K+(¯KΣ)Ξ∗−K+,   (8)
          K+(¯KΣ)Ξ∗0K0,
          K0(¯KΣ)Ξ∗0K+.

We have simulated the production of the Ξ−(1320) and Ξ−(1820) resonances to better understand the kinematics of these reactions. The photoproduction of the Ξ−(1320) decaying to π−Λ and of the Ξ−(1820) decaying to ΛK− are shown in Fig. 6. These reactions result in K+K+π−π−p and K+K+K−π−p final states, respectively. Reactions involving excited Ξ states have lower-momentum forward-going kaons, making them more favorable for study without supplemental particle ID hardware in the forward direction. In addition, there is more energy available on average to the Ξ decay products, which results in a better detection efficiency for the produced pions.

Figure 6: Generated momentum versus polar angle for all tracks in the simulated reactions (top) γp→K+Ξ−(1320)K+ and (bottom) γp→K+Ξ−(1820)K+. The three high-density regions in each plot are populated with, from lowest to highest momentum, pions, kaons and protons, and kaons.

Using a BDT for signal selection, we have studied the specific reaction γp → K+Ξ−(1820)K+, with the subsequent decay of the Ξ via Ξ−(1820) → ΛK− → (pπ−)K−. The signal selection efficiencies (post-reconstruction) are shown in Table 5. As with the mesons, we find the efficiency should be adequate for conducting a study of excited Ξ states using the existing GlueX hardware. Detailed studies of the production, especially of the ground-state Ξ's, and a parity measurement will likely require enhanced kaon identification in the forward direction, as presented in our PAC 39 proposal.

Table 5: Selection efficiencies for identifying the Ξ−(1820). The efficiencies do not include the reconstruction efficiency of the final state tracks.

V GlueX Hardware and Beam Time Requirements In order to maximize the discovery capability of GlueX, an increase in statistical precision beyond that expected from initial running is needed. In this section, we detail those needs. To maximize sensitivity, we propose a gradual increase in the photon flux towards the GlueX design of 10⁸ γ/s in the peak of the coherent bremsstrahlung spectrum (8.4 GeV < Eγ < 9.0 GeV). Yield estimates, assuming an average flux of 5×10⁷ γ/s, are presented. In order to minimize the bandwidth to disk and ultimately enhance analysis efficiency, we propose the addition of a level-three software trigger to the GlueX data acquisition system. The GlueX detector is designed to handle a rate of 10⁸ γ/s; however, the optimum photon flux for taking data will depend on the beam conditions and pileup at high luminosity and needs to be studied under realistic experimental conditions. If our extraction of amplitudes is not limited by statistical uncertainties, we may optimize the flux to reduce systematic errors. v.1 Level-three trigger The energy spectrum of photons striking the target ranges from near zero to the full 12 GeV incident electron energy. For physics analyses, one is primarily interested in only those events in the coherent peak around 9 GeV, where there is a signal in the tagger that determines the photon energy. At a rate of 10⁷ γ/s, the 120 μb total hadronic cross section at 9 GeV corresponds to a tagged hadronic event rate of about 1.5 kHz.
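The tagged hadronic rate quoted above can be reproduced with a short back-of-the-envelope calculation; the 30 cm liquid-hydrogen target length used below is an assumption of this sketch (it is not stated in this section), while the flux and cross section are the values quoted in the text.

```python
# Back-of-the-envelope check of the tagged hadronic rate quoted above.
AVOGADRO = 6.022e23                              # 1/mol
rho, A, length = 0.0708, 1.008, 30.0             # g/cm^3, g/mol, cm (30 cm LH2 target assumed)
n_protons = rho * AVOGADRO / A * length          # target thickness, ~1.3e24 protons/cm^2

flux = 1e7                                       # tagged photons/s in the coherent peak
sigma = 120e-6 * 1e-24                           # 120 microbarn total hadronic cross section, in cm^2
print(f"tagged hadronic rate ~ {flux * n_protons * sigma / 1e3:.1f} kHz")   # ~1.5 kHz
```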
Based on knowledge of the inclusive photoproduction cross section as a function of energy, calculations of the photon intensity in the region outside the tagger acceptance, and estimates for the trigger efficiency, a total trigger rate of about 20 kHz is expected. At a typical raw event size of 15 kB, the expected data rate of 300 MB/s will saturate the available bandwidth to disk; rates higher than 107 γ/s cannot be accommodated with the current data acquisition configuration. For the high-intensity running, we propose the development of a level-three software trigger to loosely skim events that are consistent with a high energy γp collision. The events of interest will be characterized by high-momentum tracks and large energy deposition in the calorimeter. Matching observed energy with a tagger hit is a task best suited for software algorithms like those used in physics analysis. It is expected that a processor farm can analyze multiple events in parallel, providing a real time background rejection rate of at least a factor of ten. While the exact network topology and choice of hardware will ultimately depend on the speed of the algorithm, at 108 γ/s the system will need to accommodate 3 GB/s input from data crates, separate data blocks into complete events, and output the accepted events to disk at a rate of <300 MB/s. The software trigger has the added advantage of increasing the concentration of tagged γp collision events in the output stream, which better optimizes use of disk resources and enhances analysis efficiency. Members of the GlueX collaboration have developed and implemented the software trigger for the LHCb experiment, which is one of the most sophisticated software triggers ever developed ref:bbdt ; ref:roe . We expect to benefit greatly from this expertise in developing an optimal level-three trigger for GlueX. The present baseline data acquisition system has been carefully developed so that a level-three software trigger can be easily accommodated in the future. We expect to begin prototyping the level-three trigger using surplus computing hardware during the initial phases of GlueX running. This early testing of both algorithms and hardware will allow us to specify our resource needs with good accuracy in advance of the proposed Phase IV running. A simple estimate indicates that the implementation of a level-three trigger easily results in a net cost savings rather than a burden. Assuming no bandwidth limitations, if we write the entire unfiltered high-luminosity data stream to tape, the anticipated size is about 30 petabytes per year 222This is at the GlueX design intensity of 108 γ/s, which is higher than our Phase IV average rate of 5×107 by a factor of two; however, other factors that would increase the data volume, such as event size increases due to higher-than-estimated noise, have not been included.. Estimated media costs for storage of this data at the time of running would be $300K, assuming that no backup is made. A data volume of this size would require the acquisition of one or more additional tape silos at a cost of about $250K each. Minimum storage costs for a multi-year run will be nearing one million dollars. Conversely, if we assume a level-three trigger algorithm can run a factor of ten faster than our current full offline reconstruction, then we can process events at a rate of 100 Hz per core. The anticipated peak high luminosity event rate of 200 kHz would require 2000 cores, which at today's costs of 64-core machines would be about $160K. 
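The bandwidth and farm-size figures above are simple products, and the sketch below just reproduces that arithmetic. The 100 Hz-per-core processing rate and the 64-core machine size are taken from the text; the cost per machine is not quoted anywhere and is backed out here only to show consistency with the $160K total.

```python
# Arithmetic behind the level-three trigger estimates quoted above.
event_size = 15e3     # raw event size [bytes]

l1_rate = 20e3        # level-one trigger rate at 1e7 photons/s [Hz]
print(f"bandwidth to disk at 1e7 g/s : {l1_rate * event_size / 1e6:.0f} MB/s")       # ~300 MB/s

peak_rate = 200e3     # anticipated peak event rate at 1e8 photons/s [Hz]
print(f"front-end input at 1e8 g/s  : {peak_rate * event_size / 1e9:.1f} GB/s")      # ~3 GB/s
print(f"output after 10x rejection  : {peak_rate/10 * event_size / 1e6:.0f} MB/s")   # <300 MB/s

cores = peak_rate / 100                      # assuming ~100 events/s per core
machines = cores / 64
print(f"cores needed: {cores:.0f}  (~{machines:.0f} 64-core machines)")
print(f"implied cost per machine for a $160K farm: ~${160e3/machines:,.0f}")
```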
Even if a factor of two in computing is added to account for margin and data input/output overhead, the cost is significantly lower than the storage cost. Furthermore, it is a fixed cost that does not grow with additional integrated luminosity, and it reduces the processing cost of the final stored data set when a physics analysis is performed on the data. v.2 Desired beam time and intensity There are several considerations in determining how much data one needs in any particular final state. In order to perform an amplitude analysis of the final state particles (necessary for extracting the quantum numbers of the produced resonances), one typically separates the data into bins of momentum transfer to the nucleon t and resonance mass MX. The number of bins in t could range from one to greater than ten, depending on the statistical precision of the data; a study of the t-dependence, if statistically permitted, provides valuable information on the production dynamics of particular resonances. One desires to make the mass bins as small as possible in order to maximize sensitivity to states that have a small total decay width; however, it is not practical to use a bin size that is smaller than the resolution of MX, which is on the order of 10 MeV/c2. In each bin of t and MX, one then needs enough events to perform an amplitude analysis, which is about 104. Therefore, our general goal is to reach a level of at least 104 events per 10 MeV/c2 mass bin. With more statistics, we can divide the data into bins of t to study the production mechanism; with fewer statistics, we may merge mass bins, which ultimately degrades the resolution with which we can measure the masses and widths of the produced resonances. In order to estimate the total event yield for various reactions of interest, we assume 200 PAC days of beam for the proposed Phase IV running with 80% of the delivered beam usable for physics analysis. The average Phase IV intensity is assumed to be 5×107 γ/s in the coherent bremsstrahlung peak. This represents an integrated yield of events that is approximately one order of magnitude larger than our approved Phase II and III running, which utilizes 90 PAC days of beam for physics analysis333We plan to utilize 30 of the 120 approved PAC days for the Phase I commissioning of the detector. at an average intensity of 107 γ/s in the coherent peak. Table 6 summarizes the various running configurations. Below we present two independent estimates of event yields to justify our request for 200 PAC days of beam. Both reach similar conclusions: the proposed run would provide sufficient statistics to conduct an initial amplitude analysis of the mass spectrum for several select s¯s meson decay modes. In addition, the resulting order-of-magnitude increase in statistical precision will allow a more detailed exploration of those topologies such as η′π, b1π, or f1π that may be statistically limited in the initial GlueX running. Finally, the spectrum of Ξ baryons can also be studied with high statistical precision. Duration (PAC days) Minimum electron energy (GeV) Average photon flux (γ/s) 106 107 107 5×107 Average beam current (nA) 50 - 200444An amorphous radiator may be used for some commissioning and later replaced with a diamond. 220 220 1100 Maximum beam emittance (mm⋅μr) Level-one (hardware) trigger rate (kHz) Raw Data Volume (TB) 60 600 1200 2300555This volume assumes the implementation of the proposed level-three software trigger. 
Table 6: A table of relevant parameters for the various phases of GlueX running. v.2.1 Meson yields based on cross section estimates One can estimate the total number of observed events, Ni, in some stable final state by Ni=ϵiσinγntT, (9) where ϵi and σi are the detection efficiency and photoproduction cross section of the final state i, nγ is the rate of incident photons on target, nt is the number of scattering centers per unit area, and T is the integrated live time of the detector. For a 30 cm LH2 target, nt is 1.26 b−1. (A useful rule of thumb is that at nγ=107 γ/s a 1 μb cross section will result in the production of about 106 events per day.) It is difficult to estimate the production cross section for many final states since data in the GlueX energy regime are sparse. (For a compendium of photoproduction cross sections, see Ref. Baldini:1988ti .) Table 7 lists key final states for initial exotic hybrid searches along with assumed production cross sections666Some estimates are based on actual data from Ref. Baldini:1988ti for cross sections at a similar beam energy, while others are crudely estimated from the product of branching ratios of heavy meson decays, i.e., a proxy for light meson hadronization ratios, and known photoproduction cross sections.. Photoproduction of mesons at 9 GeV proceeds via peripheral production (sketched in the inset of Fig. 7). The production can typically be characterized as a function of t≡(pX−pγ)2, with the production cross section proportional to e−α|t|. The value of α for measured reactions ranges from 3 to 10 GeV−2. This t-dependence, which is unknown for many key GlueX reactions, results in a suppression of the rate at large values of |t|, which, in turn, suppresses the production of high mass mesons. Figure 7 shows the minimum value of |t| as a function of the produced meson mass MX for a variety of different photon energies. The impact of this kinematic suppression on a search for heavy states is illustrated in Figure 8, where events are generated according to the t distributions with both α=5 (GeV/c)−2 and 10 (GeV/c)−2 and uniform in MX. Those that are kinematically allowed (|t|>|t|min) are retained. The y-axis indicates the number of events in 10 MeV/c2 mass bins, integrated over the allowed region in t, and assuming a total of 3×107 events are collected. The region above MX=2.5 GeV/c2, where one would want to search for states such as the h2 and h′2, contains only about 5% of all events due to the suppression of large |t| that is characteristic of peripheral photoproduction. Figure 7: Dependence of |t|min on the mass of the outgoing meson system MX. The lines indicate incident photon energies of 8.0, 9.0, and 10.0 GeV. Figure 8: A figure showing the number of expected events per 10 MeV/c2 bin in the KKπ invariant mass distribution, integrating over all allowed values of t, and assuming 107 events in total are detected. No dependence on M(KKπ) is assumed, although, in reality, the mass dependence will likely be driven by resonances. Two different assumptions for the t dependence are shown. The region above 2.5 GeV/c2 represents about 8% (2%) of all events for the α=5(10) (GeV/c)−2 values. To estimate our total yield in various final states, we assume the detection efficiency for protons, pions, kaons, and photons to be 70%, 80%, 40%, and 80%, respectively. 
Of course, the true efficiencies are dependent on software algorithms, kinematics, multiplicity, and other variables; however, the dominant uncertainty in yield estimates is not efficiency but cross section. These assumed efficiencies roughly reproduce signal selection efficiencies in detailed simulations of γp→π+π−π+n, γp→ηπ0p, γp→b±1π∓p, and γp→f1π0p performed by the collaboration, as well as the BDT selection efficiencies presented earlier. Table 7 provides an estimate of the detected yields for various topologies for our proposed Phase IV run. If we take the KKπ channel as an example, Figure 8 demonstrates that, under some assumptions about the production, the proposed run yields enough statistics to just meet our goal of 104 events per mass bin in the region where s¯s exotics are expected to reside. Cross Proposed Final Section Phase IV State (μb) (×106 events) π+π−π+ 10 3000 π+π−π0 2 600 KKππ 0.5 40 KKπ 0.1 10 ω3πππ 0.2 40 ωγπππ 0.2 6 ηγγππ 0.2 30 ηγγπππ 0.2 20 η′γγπ 0.1 1 η′ηπππ 0.1 3 Table 7: A table of hybrid search channels, estimated cross sections, and approximate numbers of observed events for the proposed Phase IV running. See text for a discussion of the underlying assumptions. The subscripts on ω, η, and η′ indicate the decay modes used in the efficiency calculations. If explicit charges are not indicated, the yields represent an average over various charge combinations. v.2.2 Meson yields based on pythia simulation Meson of Mass Range [MeV/c2] Events per Interest (X) MminX MmaxX Yield [106] 10 MeV/c2 [104] h′2(2600) γp→(K1(1400)K)Xp 2415 2785 1.5 4.0 K1→K∗π K∗→Kπ η′1(2300) γp→(K∗KS)Xp 2000 2600 0.46 1.5 K∗→K±π∓ KS→π+π− ϕ3(1850) γp→(K+K−)Xp 1720 1980 5.3 21 Y(2175) γp→(ϕf0(980))Xp 2060 2290 0.12 0.52 ϕ→K+K− f0(980)→K+K− Table 8: pythia-predicted numbers of events for various exclusive final states in a mass range appropriate for searching for various mesons. Estimates are based on 200 PAC days at 80% uptime at an average intensity of 5×107γ/s. Events per 10 MeV/c2 is an estimate of the number of events available for an amplitude analysis in each mass bin. We have also used pythia to simulate the expected yields of various hadronic final states. pythia reproduces known photoproduction cross sections relatively well; therefore, it is expected to be an acceptable estimator of the production rates of various topologies where we would like to search for new mesons. Using the large 5×109 event inclusive-photoproduction pythia sample, we can analyze the signal yield when we attempt to reconstruct and select various final state topologies. The signal selection is performed using a BDT, as discussed earlier, with a goal of 90% signal purity. We place loose requirements on the invariant masses of the intermediate resonances. The measured yield after reconstruction and selection can then be scaled to estimate the number of reconstructed signal events that our Phase IV running would produce. In Table 8 we show the various topologies studied. In addition, we measure the yield in a region of meson candidate X invariant mass to estimate the statistical precision of an amplitude analysis in that region. The number of events per 10 MeV/c2 mass bin is listed, and we observe that we meet our goal of 104 events per bin in most topologies. The K∗KS yield in Table 8 also loosely agrees with that in Figure 8, and the ratio of K1K to K∗K roughly matches that obtained by estimated cross sections and efficiencies as shown in Table 7. 
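As a rough cross-check of Eq. (9) and the scale of the Table 7 entries, the sketch below folds the quoted flux, target thickness, and live time into the yield formula. The 160 live days come from 200 PAC days at 80% usable beam; the overall efficiency assumed for the worked channel (three pions plus a factor for the recoil baryon) is our own guess at the bookkeeping, so the comparison only illustrates the order of magnitude.

```python
# Event-yield arithmetic following Eq. (9): N = eff * sigma * n_gamma * n_t * T.
n_gamma = 5e7                    # average Phase IV photon flux [1/s]
n_t     = 1.26                   # 30 cm LH2 target thickness [1/barn]
T       = 200 * 0.80 * 86400     # 200 PAC days at 80% usable beam [s]

# Rule of thumb quoted above: 1 microbarn at 1e7 photons/s gives ~1e6 events/day.
print(f"1 ub at 1e7 g/s : {1e-6 * 1.26 * 1e7 * 86400:.2e} events/day")

# Example: a 10 microbarn channel such as pi+ pi- pi+ with an assumed overall
# detection efficiency of 0.8**3 * 0.7 ~ 0.36 (three pions plus the recoil baryon).
eff, sigma = 0.8**3 * 0.7, 10e-6      # efficiency, cross section [barn]
N = eff * sigma * n_gamma * n_t * T
print(f"pi+ pi- pi+ yield ~ {N:.1e} events")   # ~3e9, cf. 3000 x 10^6 in Table 7
```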
v.2.3 Ξ yields Existing knowledge of Ξ photoproduction can be used to estimate the expected yields of Ξ states. Recent CLAS data for the Ξ(1320) are consistent with t-slope values ranging from 1.11 to 2.64 (GeV/c)−2 for photon energies between 2.75 and 3.85 GeV Guo:2007dw . Values for excited Ξ's are not well known, but a recent unpublished CLAS analysis of a high-statistics data sample (JLab proposal E05-017) indicates that the t-slope value flattens out above 4 GeV at a value of about 1.7 (GeV/c)−2 GoetzThesis . We have used a value of 1.4 (GeV/c)−2 for the Ξ−(1320) and 1.7 (GeV/c)−2 for the Ξ−(1820) in our simulations at 9 GeV. The most recent published CLAS analysis Guo:2007dw has determined a total cross section of about 15 nb and 2 nb for the Ξ−(1320) and Ξ−(1530), respectively, at Eγ=5 GeV. An unpublished analysis shows that the total cross section levels out above 3.5 GeV, but the energy range is limited at 5.4 GeV GoetzThesis . A total number of about 20,000 Ξ−(1320) events was observed for the energy range Eγ∈[2.69,5.44] GeV. The BDT analysis carried out using K+K+K−Λ pythia signal events suggests that the proposed GlueX run will result in a yield of about 90,000 of these events with 90% purity for K−Λ invariant mass in the Ξ(1820) mass region. Estimates using Eq. (9) lead us to expect yields of about 800,000 Ξ−(1320) and 100,000 Ξ−(1530) events. Such high statistics samples of exclusively reconstructed Ξ final states greatly enhance the possibility of determining the spin and parity of excited states. In summary, we request a production run consisting of 200 days of beam time at an average intensity of 5×107 γ/s for production Phase IV running of the GlueX experiment. It is anticipated that the Phase IV intensity will start around 107 γ/s, our Phase III intensity, and increase toward the GlueX design goal of 108 γ/s as we understand the detector capability for these high rates based on the data acquired at 107 γ/s. The data sample acquired will provide an order of magnitude improvement in statistical precision over the initial Phase II and III running of GlueX, which will allow an initial study of high-mass states containing strange quarks and an exploration of the Ξ spectrum. Vi Summary We propose an expansion of the present baseline GlueX experimental program to include an order-of-magnitude higher statistics by increasing the average tagged photon intensity by a factor of five and the beam time by a factor of two. The increase in intensity necessitates the implementation of the GlueX level-three software trigger. The program requires 200 days of beam time with 9 GeV tagged photons at an average intensity of 5×107 γ/s. We have demonstrated that the baseline GlueX detector design is capable of reconstructing particular final state topologies that include kaons. While the acceptance and purity may be limited without the addition of supplemental kaon identification hardware, the proposed run will provide a level of statistical precision sufficient to make an initial study of meson states with an s¯s component and to search for excited doubly-strange Ξ-baryon states. (1) J. J. Dudek, Phys. Rev. D 84, 074023 (2011). (2) GlueX Collaboration, "Mapping the Spectrum of Light Quark Mesons and Gluonic Excitations with Linearly Polarized Protons," Presentation to PAC 30, (2006). Available at: http://www.gluex.org/docs/pac30_proposal.pdf (3) GlueX Collaboration, "The GlueX Experiment in Hall D," Presentation to PAC 36, (2010). 
Available at: http://www.gluex.org/docs/pac36_update.pdf (4) GlueX Collaboration, "A study of meson and baryon decays to strange final states with GlueX in Hall D," Presentation to PAC 39, (2012). Available at: http://www.gluex.org/docs/pac39_proposal.pdf (5) J. J. Dudek, R. G. Edwards, M. J. Peardon, D. G. Richards and C. E. Thomas, Phys. Rev. Lett. 103, 262001 (2009). (6) J. J. Dudek, R. G. Edwards, M. J. Peardon, D. G. Richards and C. E. Thomas, Phys. Rev. D 82, 034508 (2010). (7) J. J. Dudek, R. G. Edwards, B. Joo, M. J. Peardon, D. G. Richards and C. E. Thomas, Phys. Rev. D 83, 111502 (2011). (8) R. G. Edwards, J. J. Dudek, D. G. Richards and S. J. Wallace, Phys. Rev. D 84, 074508 (2011). (9) J. J. Dudek and R. G. Edwards, Phys. Rev. D 85, 054016 (2012). (10) R. G. Edwards, N. Mathur, D. G. Richards and S. J. Wallace, Phys. Rev. D 87, 054506 (2013) [arXiv:1212.5236 [hep-ph]]. (11) J. Beringer et al. [Particle Data Group Collaboration], Phys. Rev. D 86, 010001 (2012). (12) M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 106, 072002 (2011). (13) G. S. Adams et al. [CLEO Collaboration], Phys. Rev. D 84, 112009 (2011). (14) E. I. Ivanov et al. [E852 Collaboration], Phys. Rev. Lett. 86, 3977 (2001).
Finding slope of tangent line to polar curve

You are given the polar curve r = e^θ. (a) List all of the points (r, θ) where the tangent line is horizontal. In entering your answer, list the points starting with the smallest value of r and limit yourself to 1 ≤ r ≤ 1000 (note the restriction on r!) and 0 ≤ θ < 2π.

Apr 05, 2018 · This Calculus 2 video tutorial explains how to find the tangent line equation in polar form. You need to find the first derivative dy/dx of the polar equation...

In the equation of the line y − y₁ = m(x − x₁) through a given point P₁, the slope m can be determined using the known coordinates (x₁, y₁) of the point of tangency, so b²x₁x + a²y₁y = b²x₁² + a²y₁², since b²x₁² + a²y₁² = a²b² is the condition that P₁ lies on the ellipse.

(b) Find the equation of the tangent line at any point on the curve. (c) Find the length of the curve from (0, 0) to (1, 1). (d) Find an equation for a circle tangent to the cissoid and the asymptote.

The tangent line appears to have a slope of 4 and a y-intercept at −4, therefore the answer is quite reasonable. Therefore, the line y = 4x − 4 is tangent to f(x) = x² at x = 2. Here is a summary of the steps you use to find the equation of a tangent line to a curve at an indicated point.

In this lesson, we will learn how to find the tangent line of polar curves. Just as we can find the tangent of Cartesian and parametric equations, we can do the same for polar equations. First, we will examine a generalized formula for taking the derivative, and apply it to finding tangents.

18. Find the slope of the tangent line to the polar curve r = sin 3θ at the point where θ = π/4. Solution: 1/2. 19. (a) Find the point(s) on the polar curve r = 4 sin θ at which there is a vertical tangent line. (b) Use the arc length formula for polar curves to find the circumference of the polar curve r = 4 sin θ. Solution: (a) (2√2, π/4).

This does look indeed like a tangent line that has a slope of negative one. So, hopefully this puts it all together: you're feeling a little bit more comfortable, you got a review of polar coordinates, but we've augmented that knowledge by starting to take some derivatives.

Mar 28, 2020 · Identify the value of the x-coordinate of the point. Try sketching the graph and the tangent line to get an estimate of a reasonable value for the slope. Find the derivative of the function. The slope of the tangent line depends on being able to find the derivative of the function. Write down the derivative of the function, simplifying if possible.

Objectives: In this tutorial, we derive the formula for finding the slope of a tangent line to a curve defined by an equation in polar coordinates. A couple of examples are worked to illustrate the use of this formula.

Question: Find the slope of the tangent line to the given polar curve at the point specified by the value of θ: r = cos(3θ), θ = π/4.

Free slope calculator - find the slope of a curved line, step by step.

Therefore, if we know the slope of a line connecting the center of our circle to the point (5, 3), we can use this to find the slope of our tangent line. Based on the general form of a circle, we know that (x − 2)² + (y + 1)² = 25 is the equation for a circle that is centered at (2, −1) and has a radius of 5.

The formula for the first derivative of a polar curve is given below. See also: slope of a curve, tangent line, parametric derivative formulas.

Jul 27, 2016 · Find the slope of the tangent line to the given polar curve at the point specified by the value of θ: r = 3 + 4 cos θ, θ = π/3?

The curve is given in polar form. The slope of the tangent at the point (x, y) is dy/dx. To find the slope of the tangent, we can either find dy/dx by first converting the polar form of the equation of the graph to rectangular form, or by using the formula used in my solution.

On the other hand, the slope of the line tangent to a point of a function coincides with the value of the derivative of the function at that point. So by differentiating the function of the curve and substituting the value of x of the point where the curve is tangent, we obtain the value of the slope m.

Finding the slope of the tangent line of a polar curve at given points. SOLVED! Polar curve equation: r² = 9 cos(2θ). Points: (0, π/4), (0, −π/4).

VII. Slope of the tangent line to a given polar curve. 1. Find the points on the curve where the tangent line is horizontal: r = 5(1 − cos θ). 2. Find the slope of the tangent to the given polar curve at the point specified by the value of θ: (a) r = 1/θ, θ = π; (b) r = sin 3θ, θ = π/6; (c) r = 2 − sin θ, θ = π/3.

VIII. "Find the equation of the tangent line in Cartesian coordinates of the curve given in polar coordinates by r = 10 cos(θ) at θ = π/6." I think I might need to convert to rectangular form first but I'm not sure...

May 31, 2018 · Section 3-7: Tangents with Polar Coordinates. We now need to discuss some calculus topics in terms of polar coordinates. We will start with finding tangent lines to polar curves. In this case we are going to assume that the equation is in the form r = f(θ).

In doing an exercise, it is often easier simply to express the polar equation parametrically, then find dy/dx, rather than to memorize the formula. EXAMPLE 39 (a) Find the slope of the cardioid r = 2(1 + cos θ) (see Figure N4–24). (b) Where is the tangent to the curve horizontal? BC ONLY.

The slope of the tangent line of a parametric curve defined by parametric equations x = f(t), y = g(t) is given by dy/dx = (dy/dt)/(dx/dt). A parametric curve has a horizontal tangent wherever dy/dt = 0 and dx/dt ≠ 0. It has a vertical tangent wherever dx/dt = 0 and dy/dt ≠ 0.

Find the slope of the line tangent to the polar curve at the given points. At each point where the curve intersects the origin, find the equation of the tangent line in polar coordinates: r = 4 sin 2θ, at the tips of the leaves.

Added Mar 5, 2014 by Sravan75 in Mathematics. Inputs the polar equation and a specific θ value. Outputs the tangent line equation, slope, and graph.

Jun 16, 2020 · The tangent line always has a slope of 0 at these points (a horizontal line), but a zero slope alone does not guarantee an extreme point. Here's how to find them: take the first derivative of the function to get f′(x), the equation for the tangent's slope.

To compute the slope of the tangent to a polar curve r = f(θ), one can differentiate x = f(θ) cos θ and y = f(θ) sin θ with respect to θ, and then use the relation dy/dx = (dy/dθ)/(dx/dθ).

Dec 23, 2016 · Finding the equation of a line tangent to a curve at a point always comes down to the following three steps: find the derivative and use it to determine the slope m at the given point; determine the y value of the function at the given x value; plug what we've found into the equation of a line.

Aug 13, 2015 · If r = f(θ) is the polar curve, then the slope at any given point on this curve with polar coordinates (r, θ) is [f′(θ) sin θ + f(θ) cos θ] / [f′(θ) cos θ − f(θ) sin θ].

Oct 07, 2016 · A horizontal tangent line means the slope is zero, which means the change in y is zero. Equate dy/dt to 0 and solve for the value of t. A vertical tangent means the slope is infinite and the change in x is zero.

Calculus in Polar Coordinates. Begin with the area of a sector of a circle. EX 1: Find the area inside r = 3 + 3 sin θ. EX 2: Find the area inside r = 3 sin θ and outside r = 1 + sin θ. Tangent line slope on a polar curve. EX 3: Find the slope of the tangent line to r = 2 − 3 sin θ at θ = π/6.

However, as above, when θ = π, the numerator is also 0, so we cannot conclude that the tangent line is vertical. Figure 10.2.1 shows points corresponding to θ equal to 0, ±π/3, 2π/3 and 4π/3 on the graph of the function. Note that when θ = π the curve hits the origin and does not have a tangent line.

Sorry for the delayed reply as well. I had to use Wolfram Alpha to find the solutions for both the numerator and denominator when they're = 0, and so am still unsure if there is actually a way to find an exact solution. As Subhotosh Khan's reply above indicates, maybe there isn't a way to find an exact derivative for each of these equations.

The derivative of a function will give us the slope of the tangent line to the graph of the function when we substitute a particular value of x. For example, if the function is y = x³ + 3x² − 12, the derivative is y′ = 3x² + 6x.

Find the slope of the line tangent to the polar curve rθ = 1 at θ = π/4. 6. Determine the equations of all lines that are tangent to the polar curve r = 3 − 3 cos θ and are horizontal. 7. Find the points on the polar curve r = 1 − 2 cos θ where the tangent line is horizontal and those where it is vertical.

Dec 07, 2016 · Define the vector equation of the line through the point above tangent to the curve at that point. Plot the graph of the curve and this tangent line on the same graph over the given interval. Identify the graph as a cardioid, limaçon, or rose. Find all points of intersection for each pair of curves in polar coordinates.

My Polar & Parametric course: https://www.kristakingmath.com/polar-and-parametric-course Learn how to find the tangent line to the polar curve.

As with parametric curves, there are curves that have several tangent lines at one point. For example, in the above example, such a point is (0, 0). The corresponding value(s) of θ can be found by solving the equation 1 + 2 cos θ = 0.

Ex: Determine the Slope of a Tangent Line to a Polar Curve at a Given Angle. Ex: Determine Where a Polar Curve Has a Horizontal Tangent Line. Area using Polar Coordinates: Part 1, Part 2, Part 3. Ex: Find the Area Bounded by a Polar Curve Over a Given Interval (Spiral). Ex: Find the Area of an Inner Loop of a Limaçon (Area Bounded by Polar Curve).

For the following exercises, find the slope of a tangent line to a polar curve r = f(θ). Let x = r cos θ = f(θ) cos θ and y = r sin θ = f(θ) sin θ, so the polar equation is now written in parametric form. Use the definition of the derivative and the product rule to derive the derivative of a polar equation.

A line that just touches a curve at a point, matching the curve's slope there. (From the Latin tangens, touching, as in the word "tangible".) At left is a tangent to a general curve. And below is a tangent to an ellipse.

Thus the derivative is: dy/dx = 2t/(12t²) = 1/(6t). Calculating Horizontal and Vertical Tangents with Parametric Curves. Recall that with functions, it was very rare to come across a vertical tangent.

The slope of the tangent line at time t is dy/dx = (dy/dt)/(dx/dt). The area under the curve between a = x(t₁) and b = x(t₂) is ∫_{t₁}^{t₂} y(t) (dx/dt) dt. In the following video, we derive these formulas, work out the tangent line to a cycloid, and compute the area under one span of the cycloid.
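Two of the problems collected above can be checked symbolically with the standard polar slope formula dy/dx = (r′ sin θ + r cos θ)/(r′ cos θ − r sin θ). The sketch below uses SymPy; the choice of tool and the variable names are ours, and the restriction 1 ≤ r ≤ 1000, 0 ≤ θ < 2π is the one stated in the r = e^θ problem.

```python
# Symbolic check of two of the problems collected above, using
# dy/dx = (dr/dtheta*sin(theta) + r*cos(theta)) / (dr/dtheta*cos(theta) - r*sin(theta)).
import sympy as sp

theta = sp.symbols('theta', real=True)

def polar_slope(r):
    """Slope dy/dx of the polar curve r(theta), via x = r cos(theta), y = r sin(theta)."""
    x, y = r * sp.cos(theta), r * sp.sin(theta)
    return sp.simplify(sp.diff(y, theta) / sp.diff(x, theta))

# (1) Horizontal tangents of r = e^theta on 0 <= theta < 2*pi:
#     dy/dtheta = e^theta (sin(theta) + cos(theta)) = 0, and e^theta is never zero,
#     so we only need sin(theta) + cos(theta) = 0.
angles = sp.solveset(sp.sin(theta) + sp.cos(theta), theta,
                     domain=sp.Interval.Ropen(0, 2 * sp.pi))
for a in sorted(angles, key=float):
    print(a, float(sp.exp(a)))        # theta = 3*pi/4 -> r ~ 10.55 ; theta = 7*pi/4 -> r ~ 244.15

# (2) Slope of r = 3 + 4*cos(theta) at theta = pi/3:
slope = polar_slope(3 + 4 * sp.cos(theta)).subs(theta, sp.pi / 3)
print(sp.simplify(slope))             # sqrt(3)/21 ~ 0.0825
```

So for the r = e^θ problem the two horizontal-tangent points with 1 ≤ r ≤ 1000 are (e^(3π/4), 3π/4) ≈ (10.55, 3π/4) and (e^(7π/4), 7π/4) ≈ (244.15, 7π/4), and the slope of r = 3 + 4 cos θ at θ = π/3 comes out to √3/21 ≈ 0.08.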
Universal Gravitation | HSC Physics

Universal Gravitation

Paul explains Universal Gravitation.

The field vector g

A field vector is a single vector that describes the strength and direction of a uniform vector field. For a gravitational field the field vector is g, which is defined in this way:

$$ g = \dfrac{F}{m} $$

\( \begin{aligned} \displaystyle \require{color} \text{where } F &= \text{force exerted on mass } m \\ m &= \text{mass in the field} \\ g &= \text{the field vector} \\ \end{aligned} \)

Vector symbols are indicated here in bold italics. The direction of the vector g is the same as the direction of the associated force of universal gravitation. Note that a net force applied to a mass will cause it to accelerate. Newton's Second Law describes this relationship:

$$ a = \dfrac{F}{m} $$

\( \begin{aligned} \displaystyle \require{color} \text{where } a &= \text{acceleration} \\ \end{aligned} \)

Hence, we can say that the field vector g also represents the acceleration due to gravity, and we can calculate its value at the Earth's surface as described below. The Law of Universal Gravitation says that the magnitude of the force of attraction between the Earth and an object on the Earth's surface is given by:

$$ F = G\dfrac{m_{E}m_{O}}{r^{2}_{E}} $$

\( \begin{aligned} \displaystyle \require{color} \text{where } m_{E} &= \text{the mass of the Earth} \\ m_{O} &= \text{mass of the object in kilograms} \\ r_{E} &= \text{radius of the Earth} \\ G &= \text{the gravitational constant} \\ \end{aligned} \)

From the defining equation for g, on the previous page, we can see that the force experienced by the mass can also be described by:

$$ F = m_{O}g $$

Equating these two we get \( m_{O}g = G \dfrac{m_{E}m_{O}}{r^{2}_{E}} \)

This simplifies to give: \( g = G \dfrac{m_{E}}{r^{2}_{E}} = \dfrac{6.672 \times 10^{-11} \times 5.974 \times 10^{24}}{(6.378 \times 10^{6})^2} \)

Hence, \( g \approx 9.80 \text{ m s}^{-2}\)

Variations in the value of g

Variation with geographical location

The actual value of the acceleration due to gravity, g, that will apply in a given situation will depend upon geographical location. Minor variations in the value of g around the Earth's surface occur because:

the Earth's crust or lithosphere shows variations in thickness and structure due to factors such as tectonic plate boundaries and dense mineral deposits. These variations can alter local values of g.

the Earth is not a perfect sphere, but is flattened at the poles. This means that the value of g will be greater at the poles, since they are closer to the centre of the Earth.

the spin of the Earth creates a centrifuge effect that reduces the effective value of g. The effect is greatest at the Equator and there is no effect at the poles.

As a result of these factors, the rate of acceleration due to gravity at the surface of the Earth varies from a minimum value at the Equator of 9.782 m s^-2 to a maximum value of 9.832 m s^-2 at the poles. The usual value used in equations requiring g is 9.8 m s^-2.

Variation with altitude

The formula for g shows that the value of g will also vary with altitude above the Earth's surface. By using a value of r equal to the radius of the Earth plus altitude, the following values can easily be calculated. It is clear from the table below that the effect of the Earth's gravitational field is felt quite some distance out into space.

$$ g = G \dfrac{m_{E}}{(r_{E}+\text{altitude})^2} $$

NOTE: as altitude increases, the value of g decreases, dropping to zero only when the altitude has an infinite value.
The variation of g with altitude above Earth's surface

Altitude (km)   g (m s^-2)   Comment
0               9.80         Earth's surface
8.8             9.77         Mt Everest Summit
80              9.54         Arbitrary beginning of space
200             9.21         Mercury capsule orbit altitude
250             9.07         Space shuttle minimum orbit altitude
400             8.68         Space shuttle maximum orbit altitude
1 000           7.32         Upper limit for low Earth orbit
400 000         0.19         Communications satellite orbit altitude

Variation with planetary body

The formula for g also shows that the value of g on planetary surfaces depends upon the mass and radius of the central body which, in examples so far, has been the Earth. Other planets and natural satellites (moons) have a variety of masses and radii, so that the value of g elsewhere in our solar system can be quite different from that on Earth. The following table presents a few examples.

$$ g = G \dfrac{m_{\text{planet}}}{r_{\text{planet}}^2} $$

A comparison of g on the surface of other planetary bodies

Body      Radius (km)   g (m s^-2)
Moon      1738          1.6
Mars      3397          3.7
Jupiter   71 492        24.8
Pluto     1 151         0.66
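The tabulated values follow directly from g = Gm/r². The short script below reproduces a few rows using the same G, Earth mass, and Earth radius as the worked calculation above; the masses used for the Moon and Mars (7.35 × 10^22 kg and 6.42 × 10^23 kg) are standard values that do not appear on this page, so treat those two lines as an illustration rather than part of the original worked example.

```python
# Reproduce a few rows of the tables above from g = G*M / (r + h)**2.
G = 6.672e-11                     # gravitational constant [N m^2 kg^-2]

def g_field(mass_kg, radius_m, altitude_m=0.0):
    """Gravitational field strength at a given altitude above a body's surface."""
    return G * mass_kg / (radius_m + altitude_m) ** 2

M_E, R_E = 5.974e24, 6.378e6      # Earth mass [kg] and radius [m], as used above

for h_km in (0, 8.8, 400, 1000):
    print(f"{h_km:>6} km : g = {g_field(M_E, R_E, h_km * 1e3):5.2f} m/s^2")
    # -> 9.80, 9.77, 8.68, 7.32, matching the altitude table

# Other bodies (masses are standard values, not given on this page):
print(f"Moon : g = {g_field(7.35e22, 1.738e6):4.2f} m/s^2")   # ~1.6
print(f"Mars : g = {g_field(6.42e23, 3.397e6):4.2f} m/s^2")   # ~3.7
```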
When should we get into limits in introductory calculus courses? All of the calculus textbooks I've used (teaching at community colleges) start with the first chapter covering limits. (Perhaps after a review chapter.) I think this order is wrong. Historically, Newton and Leibniz thought in terms of fluxions or infinitesimals. It took mathematicians 150 years to develop the logical (epsilon-delta) machinery of limits that puts calculus on a sound logical foundation. (And it turns out that this is not the only way to do so. Non-standard analysis uses infinitesimals in a logically rigorous way.) The big ideas of calculus are derivatives for rate of change, and integration for areas and volumes. I think the course makes more sense for students if we start with "slopes of curvy lines" and velocity. So I'm bringing in outside material to help them explore the basic concept of the derivative from many perspectives. (Our textbook only has two sections that do this. Most only have one. I spend three weeks on it. Much of my material comes from Boelkins' Active Calculus.) I give the limit definition of the derivative, but I say that for now we'll think of the limit as meaning that we get infinitely close. After we are done with simple derivatives, I go back to chapter one's limit exercises before doing trig derivatives, because we hit some unusual limits there (e.g. sinx/x). I'd like to know why textbooks cover limits first. Is it just because the mathematicians writing them are stuck at a higher level, and want to first establish the validity of what will be done later? I don't think that's good pedagogy. I think calculus is beautiful and powerful, and that I can convey this better by waiting to introduce limits after students see why they need them. Why cover limits first? mathematical-pedagogy calculus infinitesimals limits Sue VanHattum $\begingroup$ On a whim, I thought I might google the phrase Calculus without limits. Indeed, a PDF of potential interest shows up: hawkeyecollege.edu/webres/File/employees/faculty-directory/… $\endgroup$ – Benjamin Dickman Sep 5 '14 at 16:33 $\begingroup$ It's brilliant! I've seen the textbook using infinitesimals, but it didn't seem any easier for students than the limit treatment, and it's not the standard they'll see elsewhere. But this is great! Short enough for interested students to read. And a great way to show that you can "get away with" thinking about infinitely small bits. (There's a great proof of (sinx)' = cosx that uses infinitesimals. Takes it from my four-page explanation to about one page: thephysicsvirtuosi.com/posts/trigonometric-derivatives.html) $\endgroup$ – Sue VanHattum♦ Sep 5 '14 at 17:13 $\begingroup$ It's another case of the "question answered before it is asked" approach to education that plagues our system. For those who think learning is just remembering everything you're told, this makes some sense. For the rest of us who think the students actually have to be engaged in meeting and addressing problems, it does not. $\endgroup$ – rschwieb Sep 6 '14 at 13:05 $\begingroup$ @rschwieb, I'm not sure what 'it' refers to here. My approach, or the text's? $\endgroup$ – Sue VanHattum♦ Sep 6 '14 at 22:04 $\begingroup$ @SueVanHattum I was referring to most run-of-the-mill texts for students at that level and lower. $\endgroup$ – rschwieb Sep 6 '14 at 23:29 I'd like to know why textbooks cover limits first. I don't think there's any big mystery as to why commercial textbooks tend to be similar. 
It's a market mechanism known as the network effect, the same mechanism that makes Microsoft Windows so popular. Once people start to see something as a standard, anything different becomes non-viable in the marketplace. Stephen J. Gould wrote a nice essay about this in the field of biology; he uses the example of how biology texts propagate the meme of Lamarckian and Darwinian explanations of the giraffe's neck. To see that there is no objective reason for doing limits first, we just have to look at some older textbooks. Hutton 1807 uses fluxions. Davies 1836 uses limits, but also connects with infinitesimals. Granville 1904 uses limits. Thompson 1910 uses infinitesimals. Different authors did different things, which is how it should be. There are also some modern texts that don't start with limits and then move on to derivatives: Marsden 1981, Keisler 1976. At the risk of seeming self-promoting, here's a link to a book that I'm a coauthor of. There may have been a time ca. 1850-1965 when limits were believed to be the only possible rigorous foundation for calculus. That era ended when NSA essentially vindicated Leibnizian infinitesimals.[Blaszczyk 2012] Other than the network effect, a second and more cynical explanation would be that a course in calculus is used today as a filter in order to get rid of some of the students seeking a valuable credential. This is the phenomenon of credential creep. For example, I teach quite a few students who want to be physical therapists, and many DPT programs require them to take a year of calculus. Calculus is fundamentally an easy subject, but it can be made much harder by emphasizing epsilontics and by requiring these folks to learn a bag of tricks for integration. I'm not a cynic at heart, but I do have a very hard time coming up with any other explanation for why we require future physical therapists to learn how to do integrals using trig substitutions. Blaszczyk, Katz, and Sherry, "Ten Misconceptions from the History of Analysis and Their Debunking," 2012, http://arxiv.org/abs/1202.4153 Charles Davies, Elements of the Differential and Integral Calculus (1836), https://archive.org/details/elementsdiffere03davigoog Granville, Elements of the Differential and Integral Calculus (1904), https://archive.org/details/elementsdiffere01smitgoog Charles Hutton, A Course of Mathematics: In Two Volumes. For the Use of Academies as Well as Private Tuition (1807), https://archive.org/details/acoursemathemat02huttgoog Keisler, Elementary Calculus: An Infinitesimal Approach, 1976, http://www.math.wisc.edu/~keisler/calc.html Marsden, Calculus Unlimited, 1981, http://www.cds.caltech.edu/~marsden/books/Calculus_Unlimited.html Thompson, Calculus Made Easy, 1910, http://www.gutenberg.org/ebooks/33283 Ben CrowellBen Crowell $\begingroup$ I'll look at your text. I love Boelkins for my first unit. And I mostly use our official text (Anton) for the rest, though I put things in a different order. I'd love to put together my own textbook, though I don't know how many others would use it. $\endgroup$ – Sue VanHattum♦ Sep 6 '14 at 5:59 $\begingroup$ I think your approach is very... american. I was always told that we study for ourselves and not for our job: it is clearly a cultural difference between US and old Europe :-) $\endgroup$ – Siminore Sep 6 '14 at 10:07 $\begingroup$ @Siminore: If a student takes a year of calculus simply because he likes math, that's great, but requiring it is a different matter. 
Similarly, it's wonderful if a future physical therapist wants to learn Homeric Greek -- but I don't think it should be required. $\endgroup$ – Ben Crowell Sep 6 '14 at 23:13 $\begingroup$ @BenCrowell Actually the university system in my country used to be less focused on future job perspectives than it is in your country. $\endgroup$ – Siminore Sep 7 '14 at 8:23 $\begingroup$ How is assuming calculus is used for weeding out students cynical? That seems like a perfectly sensible use for it. Even with the (extremely miniscule amounts) of epsilon-delta definitions of limits it is, as taught in the US, a fairly easy subject for anyone willing to think a little and do some work. If you do more work you don't really even have to think. Having barrier subjects is quite usual I believe to make sure you don't spend time and money on students who aren't willing to do the work. $\endgroup$ – DRF May 4 '17 at 7:43 My currently preferred approach is to start the course with an introductory lecture explaining the difference between average velocity over a time interval (something we can always in principle measure using a stopwatch) and instantaneous velocity at a given instant of time (which we do not have a means to measure). This motivates the question of what exactly instantaneous velocity is, using the intuition that most of us have that the notion makes sense. (Of course, that in and of itself is a philosophical issue and one that I want to avoid in the course for the obvious reason of limited time!) Then I explain that one can approximate instantaneous velocity by measuring the average velocity over smaller and smaller intervals of time centered around the given instant of time. (In fact, I never actually use a centered time interval, but rather use the formulation with the instant of time as one endpoint of the time interval, so that the formulas I write down look like what the definition of derivative will be when we define it officially later on.) Then I say that the instantaneous velocity is the number that we would get in the limit where the time interval became closer and closer to $0$. And I emphasize that, of course, we can never actually make a measurement with a time interval of duration $0$ (both physically because we can't start and stop our stopwatch at the exact same instant of time and mathematically because using the same starting and ending time in the formula for average velocity yields the undefined expression "$0/0$".) This serves to motivate the introduction of limits, and in fact, in the courses I teach, typically this is more or less the definition of a limit. I use the heuristic and imprecise version of $\lim\limits_{x \rightarrow a} f(x) = L$ means that as $x$ gets closer and closer to $a$ without actually reaching $a$, the values $f(x)$ get closer and closer to $L$. For me, this seems to strike the right balance of motivating limits and discussing how to compute them in the types of cases that will come up throughout the semester, while still being able to quickly get to derivatives so that most of the semester is spent on the two fundamental notions of derivative and integral and, ultimately, the link between the two provided by the Fundamental Theorem of Calculus. I strongly prefer to follow pretty closely to the textbook's development in a course such as introductory calculus, where I feel that students are often not sufficiently mature to handle the discrepancy between the textbook presentation and the presentation given in lecture. 
And they are not equipped to handle the inevitable situation where the textbook has referred to concept X in Section A which appears before Section B and I want to cover the ideas in Section B before covering concept X, but too much of the presentation and choice of problems refer to or are based on understanding concept X. As for why I prefer to discuss limits before derivatives and integrals (neglecting the obvious cultural bias that that is the way I learned the subject), since I cannot speak for textbook publishers, to me there are two big pictures to take away from calculus. One is the study of change (derivatives) and accumulation (integrals) and the fact that the two problems are intimately related (Fund. Thm. of Calc.). The other is that derivatives and integrals, key ideas for understanding our dynamic natural world, are best understood computationally as the limiting process of simpler calculations that only require basic arithmetic (difference quotients for derivatives, Riemann sums for integrals). It should be noted that the subjective value judgment indicated by the use of the word "best" is not universally agreed upon. In my opinion, these two big pictures should both be presented and stressed to the students. I find it easiest, logically and pedagogically, to present derivatives and then later integrals to model independent processes and stress the point of view that these are both limiting processes. As such, I need to have discussed what a limit is. After defining integrals as limits of Riemann sums, I point out that the fact that integration, like differentiation, is a limiting process is one of two reasons why the two topics are both covered in a common course. And then I foreshadow what is to come by alluding to the fact that there is a deeper and more important link between these two concepts, which I explain a few lectures later when we are ready to discuss the Fundamental Theorem of Calculus. Michael JoyceMichael Joyce $\begingroup$ I tend to agree with your post here. I think the case can be made for the necessity of limits by discussing the speedometer on a car. By discussing how we want to measure rates of change over smaller and smaller durations of time it naturally leads to concept of a limit. Moreover, the fact that it will have the form $0/0$ is manifest. I then tell them that is why we are going to study limits with a focus on tricky ones which have this indeterminant form. I also try to connect it to geometry as many have physics-phobia... $\endgroup$ – James S. Cook Sep 7 '14 at 2:35 $\begingroup$ I agree very much that the derivative should motivate the definition of the limit, and not vice versa, but I would go farther: in an introductory calculus course, why introduce the limit as a separate concept at all? (I don't think 'continuity' is a satisfactory answer; no book I've seen gives any interesting discussion of continuity beyond what can be 'seen' from the intuitive description. Better to differentiate x^2sin(1/x) than to test x sin(1/x) for continuity!) $\endgroup$ – LSpice Sep 7 '14 at 6:33 $\begingroup$ @L Spice I think one good reason to do limits first and discuss continuity is to give the first example of linearity, products, quotients, composites. The inheritance for new functions from old. These are interesting for continuous functions. Also, just to have a language to ubiquitously say when the function is continuous you can just plug in the limit point. It's a baby example of structure and definitions. Is it entertaining? Probably not. 
$\endgroup$ – James S. Cook Sep 7 '14 at 17:59 $\begingroup$ @JamesS.Cook, differentiable functions also exhibit interesting behaviour under the operations you mention. Basically, I think that continuous but differentiable functions are not interesting objects to study in their own right in a calculus course. Saying that continuous functions are ones for which you can plug in the limit point is unconvincing for someone (like me) who argues that limits as independent objects also do not belong in an introductory calculus course! $\endgroup$ – LSpice Sep 7 '14 at 23:50 $\begingroup$ @LSpice maybe you should write a question which presents your view of what calculus I should cover, perhaps with a sample plan and pointed details of how definitions are altered as to avoid limits etc. It might be productive to understand our difference of opinion. $\endgroup$ – James S. Cook Sep 8 '14 at 2:52 It has become almost a dogma that the math curriculum should teach technical prerequisites to what will be covered later. The consequence is that zillions of high-school students learn algorithms for doing partial-fractions decompositions and will never take later courses in which that is used. In effect the broad public learns that mathematics consists of learning to apply meaningless algorithms. High-school teachers don't know how to alter the curriculum to something that makes more sense, and math professors don't want to do things for students who are so weak that they think mathematics consists of learning to apply meaningless algorithms, which is exactly the lesson that the high-school curriculum designed by math professors taught them. The students know (and, as this applies to most students, they are right) that if they just learn the meaningless algorithms they will get good grades, but if they heed any hints that there something else to math besides that, that won't help their grades and takes time away from things that would. The professors fail to realize that what they perceive as stupidity among the students is in fact deliberate strategic stupidity; it is in fact a good strategy for what the students want to do. So both the students and the professors are deliberately stupid. That is why following conventional curricula merely because they are conventional is contemptible. Stewart's textbook covers $\qquad$(1) $\displaystyle\lim_x \frac{f(x)}{g(x)}$ where $f(x),g(x)\text{ (both)} \to0$ and $\qquad$(2) $\displaystyle \lim_x \frac{f(x)}{g(x)}$ where $g(x)\to\ell\ne0$ and $\qquad$(3) $\displaystyle\lim_x \frac{f(x)}{g(x)}$ where $f(x),g(x) \text{ (both)}\to\infty$ and $\qquad$(4) $\displaystyle\lim_x (f(x) - g(x))$ where $f(x),g(x) \text{ (both)} \to\infty$ and $\qquad$(5) one sided limits of artificial piecewise defined functions and $\qquad$(6) infinite limits and $\qquad$(7) various ways in which limits fail to exist and a lot of other possibilities. Guess what students learn from this? They do not learn that case (1) above plays a central role in differential calculus. They write on the final exam that in a particular problem the limit does not exist because the numerator and denominator both approach $0$, and the fact that the whole of differential calculus could not exist if that were true does not occur to them; after all point (1) above; they do not learn that limits justify formulas about derivatives, since as far as they know, math is dogmatic rather than having justifications. 
The practice of teaching technical prerequisites first is hardly, if at all, distinguishable from teaching that mathematics is dogmatic. It is wrong. Probably at least $99.99\%$ of those who would tell you to cover limits first have no reason for saying that except that they've never given the question an instant's thought after observing that the definition of "derivative" in its most usually seen form relies on a concept of limits. Michael HardyMichael Hardy Regarding your parenthetical comment "And it turns out that limits are not the only way to do so. Non-standard analysis uses infinitesimals in a logically rigorous way" I would like to comment that the opposition limits vs infinitesimals implied here is not entirely accurate. Limits are present in both approaches; the true opposition is between epsilon-delta methods in the context of an Archimedean continuum, and infinitesimal methods in the context of a Bernoullian (i.e., infinitesimal-enriched, as practiced by Bernoulli) continuum. In Robinson's framework, the limit is defined in terms of the standard part function. Thus, $\lim_{x\to 0} f(x)$ is the standard part of $f(\epsilon)$ when $\epsilon$ is a nonzero infinitesimal. Mikhail KatzMikhail Katz $\begingroup$ I like your precision of language. If you can propose a minor edit to fix this in my post, I'd be grateful. I do want to set up an opposition between the standard (epsilon-delta) limits material and infinitesimals. Mainly because I love this proof: thephysicsvirtuosi.com/posts/trigonometric-derivatives.html $\endgroup$ – Sue VanHattum♦ Apr 24 '17 at 19:28 $\begingroup$ @SueVanHattum, thanks. I will try to make a precise edit to the question. It will have to be approved because I am still below 1000. $\endgroup$ – Mikhail Katz Apr 25 '17 at 7:31 $\begingroup$ I wanted to make less distinction than you do, so I made a smaller edit. Thanks. $\endgroup$ – Sue VanHattum♦ Apr 26 '17 at 13:42 $\begingroup$ @SueVanHattum, no problem. $\endgroup$ – Mikhail Katz Apr 26 '17 at 13:46 I think this book hasn't been mentioned yet: John C. Sparks: Calculus without Limit - Almost [Disclaimer: I have it on my bookshelf but have to admit that I haven't read it yet, so I can't really comment on whether it's good or not.] FrunobulaxFrunobulax I am not advocating any order but one reason to start with limits first is that the concept of continuity is extremely important and useful and limits are the natural way to introduce it. Sergio ParreirasSergio Parreiras $\begingroup$ Why is continuity important at the beginning of the course? $\endgroup$ – Sue VanHattum♦ Sep 6 '14 at 22:02 $\begingroup$ After asking SueVanHattum's I think very apposite question, I would ask another: who says that limits are the natural way—or, more to the point, more natural for whom? If for students, then I disagree; but if for the instructor, then who cares? $\endgroup$ – LSpice Sep 7 '14 at 6:35 $\begingroup$ @SueVanHattum : it is important at the beginning if you think the concepts of continuity/discontinuity are more important than rate of change (derivatives) or areas (integrals). To apply mathematics to real life (in physics, biology or economics) we have to make lots of simplifications and use approximations. Continuity is crucial to guarantee the approximations still "work" but not all phenomena are continuous. I would like my students to have a critical eye and not assume all functions are continuous. 
$\endgroup$ – Sergio Parreiras Sep 7 '14 at 14:24 $\begingroup$ @SergioParreiras: If not using limits how would you introduce/teach continuity? The notion of continuity, in some form, predates the notion of a limit by 2000 years, so certainly there are other ways to do it. The traditional informal description is that a function is continuous if you can draw it without picking up your pen. At a more formal level (which is irrelevant to the vast majority of students), there are also multiple ways of defining continuity. See Keisler, math.wisc.edu/~keisler/calc.html , p. 125, for a freshman-level example. In SIA it's an axiom. $\endgroup$ – Ben Crowell Sep 7 '14 at 15:56 $\begingroup$ You write that limits are the natural way to introduce continuity but what evidence do you have for such a hypothesis? The historical evidence points to the contrary conclusion; namely, Cauchy who invented continuity defined it without reference to limits, by requiring that every infinitesimal change in the $x$-variable should always produce an infinitesimal change in the $y$-variable. $\endgroup$ – Mikhail Katz Apr 23 '17 at 11:53 You ask why we should cover limits prior to derivatives when it would be easier to cover derivatives first and limits later, to show why limits are good to know. But why should we cover derivatives before [anything that uses derivatives]? I think it is easier to define derivatives when the students are "familiar" with limits, and the same goes for integrals with the knowledge of derivatives. What really matters is HOW the teacher presents the part that is taught. One should show the students an idea of where the course is about to go and why. I accidentally passed all the exams in high school and thought math was one of the most boring and useless subjects. Man, how wrong I was. At the university I was lucky to have the teacher I had. He followed the syllabus roughly in this order: Setting up the "vocabulary" (lemma, proof, set, ...) and "tools" (proof, ...). Operations on sets and basics of boolean logic. Sequences. Limit of an infinite sequence. (What happens at the end of the universe?) Sum of an infinite series. (What happens when we try to sum an infinite amount of numbers?) Functions. Limits of a function. (What happens at the end? And what happens really close to some point?) Derivatives. Integrals. He taught us in a way that was interesting and fun. I cannot forget his "mutated definitions" that even appeared in tests: he randomly changed quantifiers and relations in a definition and asked us to find a sequence/series/function that fit it. He forced us to think outside the box and use any tool available. Once you have the students' passion, you can either lead them through a blind phase (limits before derivatives) and they will follow you, or go directly to the finish (derivatives) and explain all the tools behind it later. CrowleyCrowley Not the answer you're looking for? Browse other questions tagged mathematical-pedagogy calculus infinitesimals limits or ask your own question. For calculus students, what should be the intuition or motivation behind series? Teaching limits of sequences before limits of functions in Calculus? Learning math through fun rather than rote learning Should we tell students to never replace parts of an expression by their limits when taking a limit? Why is the convergence of infinite series covered in Calculus II? Should we "program" calculus students, like the physicists seem to want us to?
Surrounding a subject and strangling it to death versus concentrating on the main point Why do no students know to change the limits of integration when doing substitutions? Would teaching nonstandard calculus in an introduction calculus course make it easier to learn? Is there a pre-calculus introduction to the formal definition of a limit?
PESSTO : survey description and products from the first data release by the Public ESO Spectroscopic Survey of Transient Objects (1411.0299) S. J. Smartt, S. Valenti, M. Fraser, C. Inserra, D. R. Young, M. Sullivan, A. Pastorello, S. Benetti, A. Gal-Yam, C. Knapic, M. Molinaro, R. Smareglia, K. W. Smith, S. Taubenberger, O. Yaron, J. P. Anderson, C. Ashall, C. Balland, C. Baltay, C. Barbarino, F.E. Bauer, S. Baumont, D. Bersier, N. Blagorodnova, S. Bongard, M. T. Botticella, F. Bufano, M. Bulla, E. Cappellaro, H. Campbell, F. Cellier-Holzem, T.-W. Chen, M. J. Childress, A. Clocchiatti, C. Contreras, M. Dall Ora, J. Danziger, T. de Jaeger, A. De Cia, M. Della Valle, M. Dennefeld, N. Elias-Rosa, N. Elman, U. Feindt, M. Fleury, E. Gall, S. Gonzalez-Gaitan, L. Galbany, A. Morales Garoffolo, L. Greggio, L. L. Guillou, S. Hachinger, E. Hadjiyska, P. E. Hage, W. Hillebrandt, S. Hodgkin, E. Y. Hsiao, P. A. James, A. Jerkstrand, T. Kangas, E. Kankare, R. Kotak, M. Kromer, H. Kuncarayakti, G. Leloudas, P. Lundqvist, J. D. Lyman, I. M. Hook, K. Maguire, I. Manulis, S. J. Margheim, S. Mattila, J. R. Maund, P. A. Mazzali, M. McCrum, R. McKinnon, M. E. Moreno-Raya, M. Nicholl, P. Nugent, R. Pain, M. M. Phillips, G. Pignata, J. Polshaw, M. L. Pumo, D. Rabinowitz, E. Reilly, C. Romero-Canizales, R. Scalzo, B. Schmidt, S. Schulze, S. Sim, J. Sollerman, F. Taddia, L. Tartaglia, G. Terreran, L. Tomasella, M. Turatto, E. Walker, N. A. Walton, L. Wyrzykowski, F. Yuan, L. Zampieri May 10, 2015 astro-ph.SR, astro-ph.IM The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. We describe the data reduction strategy and data products which are publicly available through the ESO archive as the Spectroscopic Survey Data Release 1 (SSDR1). PESSTO uses the New Technology Telescope with EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5mag for classification. Science targets are then selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. The EFOSC2 spectra cover 3345-9995A (at resolutions of 13-18 Angs) and SOFI spectra cover 0.935-2.53 micron (resolutions 23-33 Angs) along with JHK imaging. This data release contains spectra from the first year (April 2012 - 2013), consisting of all 814 EFOSC2 spectra and 95 SOFI spectra (covering 298 distinct objects), in standard ESO Phase 3 format. We estimate the accuracy of the absolute flux calibrations for EFOSC2 to be typically 15%, and the relative flux calibration accuracy to be about 5%. The PESSTO standard NIR reduction process does not yet produce high accuracy absolute spectrophotometry but the SOFI JHK imaging will improve this. Future data releases will focus on improving the automated flux calibration of the data products. Massive stars exploding in a He-rich circumstellar medium. V. Observations of the slow-evolving SN Ibn OGLE-2012-SN-006 (1502.04945) A. Pastorello, L. Wyrzykowski, S. Valenti, J. L. Prieto, S. Kozlowski, A. Udalski, N. Elias-Rosa, A. Morales-Garoffolo, J. P. Anderson, S. Benetti, M. Bersten, M. T. Botticella, E. Cappellaro, G. Fasano, M. Fraser, A. Gal-Yam, M. Gillone, M. L. Graham, J. Greiner, S. Hachinger, D. A. Howell, C. Inserra, J. Parrent, A. Rau, S. Schulze, S. J. Smartt, K. W. Smith, M. Turatto, O. Yaron, D. R. Young, M. Kubiak, M. K. Szymanski, G. Pietrzynski, I. Soszynski, K. Ulaczyk, R. Poleski, P. 
Pietrukowicz, J. Skowron, P. Mroz Feb. 17, 2015 astro-ph.SR We present optical observations of the peculiar Type Ibn supernova (SN Ibn) OGLE-2012-SN-006, discovered and monitored by the OGLE-IV survey, and spectroscopically followed by PESSTO at late phases. Stringent pre-discovery limits constrain the explosion epoch with fair precision to JD = 2456203.8 +- 4.0. The rise time to the I-band light curve maximum is about two weeks. The object reaches the peak absolute magnitude M(I) = -19.65 +- 0.19 on JD = 2456218.1 +- 1.8. After maximum, the light curve declines for about 25 days with a rate of 4 mag per 100d. The symmetric I-band peak resembles that of canonical Type Ib/c supernovae (SNe), whereas SNe Ibn usually exhibit asymmetric and narrower early-time light curves. Since 25 days past maximum, the light curve flattens with a decline rate slower than that of the 56Co to 56Fe decay, although at very late phases it steepens to approach that rate. An early-time spectrum is dominated by a blue continuum, with only a marginal evidence for the presence of He I lines marking this SN Type. This spectrum shows broad absorptions bluewards than 5000A, likely O II lines, which are similar to spectral features observed in super-luminous SNe at early epochs. The object has been spectroscopically monitored by PESSTO from 90 to 180 days after peak, and these spectra show the typical features observed in a number of SN 2006jc-like events, including a blue spectral energy distribution and prominent and narrow (v(FWHM) ~ 1900 km/s) He I emission lines. This suggests that the ejecta are interacting with He-rich circumstellar material. The detection of broad (10000 km/s) O I and Ca II features likely produced in the SN ejecta (including the [O I] 6300A,6364A doublet in the latest spectra) lends support to the interpretation of OGLE-2012-SN-006 as a core-collapse event. Spectroscopy of the Type Ia supernova 2011fe past 1000 days (1411.7599) S. Taubenberger, N. Elias-Rosa, W. E. Kerzendorf, S. Hachinger, J. Spyromilio, C. Fransson, M. Kromer, A. J. Ruiter, I. R. Seitenzahl, S. Benetti, E. Cappellaro, A. Pastorello, M. Turatto, A. Marchetti Dec. 15, 2014 astro-ph.SR In this letter we present an optical spectrum of SN 2011fe taken 1034 d after the explosion, several hundred days later than any other spectrum of a Type Ia supernova (disregarding light-echo spectra and local-group remnants). The spectrum is still dominated by broad emission features, with no trace of a light echo or interaction of the supernova ejecta with surrounding interstellar material. Comparing this extremely late spectrum to an earlier one taken 331 d after the explosion, we find that the most prominent feature at 331 d - [Fe III] emission around 4700 A - has entirely faded away, suggesting a significant change in the ionisation state. Instead, [Fe II] lines are probably responsible for most of the emission at 1034 d. An emission feature at 6300-6400 A has newly developed at 1034 d, which we tentatively identify with Fe I {\lambda}6359, [Fe I] {\lambda}{\lambda}6231, 6394 or [O I] {\lambda}{\lambda}6300, 6364. Interestingly, the features in the 1034-d spectrum seem to be collectively redshifted, a phenomenon that we currently have no convincing explanation for. We discuss the implications of our findings for explosion models, but conclude that sophisticated spectral modelling is required for any firm statement. 
Photometric and spectroscopic observations, and abundance tomography modelling of the type Ia supernova SN 2014J located in M82 (1409.7066) C. Ashall, P. Mazzali, D. Bersier, S. Hachinger, M. Phillips, S. Percival, P. James, K. Maguire Sept. 30, 2014 astro-ph.SR Spectroscopic and photometric observations of the nearby Type Ia Supernova (SN Ia) SN 2014J are presented. Spectroscopic observations were taken -8 to +10 d relative to B-band maximum, using FRODOSpec, a multi-purpose integral-field unit spectrograph. The observations range from 3900 AA to 9000 AA. SN 2014J is located in M82 which makes it the closest SN Ia studied in at least the last 28 years. It is a spectrosopically normal SN Ia with high velocity features. We model the spectra of SN 2014J with a Monte Carlo (MC) radiative transfer code, using the abundance tomography technique. SN 2014J is highly reddened, with a host galaxy extinction of E(B-V)=1.2 (R_V=1.38). It has a $\Delta$m_15(B) of 1.08$\pm$0.03 when corrected for extinction. As SN 2014J is a normal SN Ia, the density structure of the classical W7 model was selected. The model and photometric luminosities are both consistent with B-band maximum occurring on JD 2456690.4$\pm$0.12. The abundance of the SN 2014J behaves like other normal SN Ia, with significant amounts of silicon (12% by mass) and sulphur (9% by mass) at high velocities (12300 km s$^{-1}$) and the low-velocity ejecta (v<6500 km s$^{-1}$) consists almost entirely of $^{56}$Ni. The supernova CSS121015:004244+132827: a clue for understanding super-luminous supernovae (1310.1311) S. Benetti, M. Nicholl, E. Cappellaro, A. Pastorello, S. J. Smartt, N. Elias-Rosa, A. J. Drake, L. Tomasella, M. Turatto, A. Harutyunyan, S. Taubenberger, S. Hachinger, A. Morales-Garoffolo, T.-W. Chen, S.G. Djorgovski, M. Fraser, A. Gal-Yam, C. Inserra, P. Mazzali, M. L. Pumo, J. Sollerman, S. Valenti, D. R. Young, M. Dennefeld, L. Le Guillou, M. Fleury, P.-F. Leget March 17, 2014 astro-ph.SR We present optical photometry and spectra of the super luminous type II/IIn supernova CSS121015:004244+132827 (z=0.2868) spanning epochs from -30 days (rest frame) to more than 200 days after maximum. CSS121015 is one of the more luminous supernova ever found and one of the best observed. The photometric evolution is characterized by a relatively fast rise to maximum (~40 days in the SN rest frame), and by a linear post-maximum decline. The light curve shows no sign of a break to an exponential tail. A broad Halpha is first detected at ~ +40 days (rest-frame). Narrow, barely-resolved Balmer and [O III] 5007 A lines, with decreasing strength, are visible along the entire spectral evolution. The spectra are very similar to other super luminous supernovae (SLSNe) with hydrogen in their spectrum, and also to SN 2005gj, sometimes considered a type Ia interacting with H-rich CSM. The spectra are also similar to a subsample of H-deficient SLSNe. We propose that the properties of CSS121015 are consistent with the interaction of the ejecta with a massive, extended, opaque shell, lost by the progenitor decades before the final explosion, although a magnetar powered model cannot be excluded. Based on the similarity of CSS121015 with other SLSNe (with and without H), we suggest that the shocked-shell scenario should be seriously considered as a plausible model for both types of SLSN. [OI] 6300,6364 in the nebular spectrum of a subluminous Type Ia supernova (1308.3145) S. Taubenberger, M. Kromer, R. Pakmor, G. Pignata, K. Maeda, S. Hachinger, B. 
Leibundgut, W. Hillebrandt Aug. 20, 2013 astro-ph.SR In this letter a late-phase spectrum of SN 2010lp, a subluminous Type Ia supernova (SN Ia), is presented and analysed. As in 1991bg-like SNe Ia at comparable epochs, the spectrum is characterised by relatively broad [FeII] and [CaII] emission lines. However, instead of narrow [FeIII] and [CoIII] lines that dominate the emission from the innermost regions of 1991bg-like SNe, SN 2010lp shows [OI] 6300,6364 emission, usually associated with core-collapse SNe and never observed in a subluminous thermonuclear explosion before. The [OI] feature has a complex profile with two strong, narrow emission peaks. This suggests oxygen to be distributed in a non-spherical region close to the centre of the ejecta, severely challenging most thermonuclear explosion models discussed in the literature. We conclude that given these constraints violent mergers are presently the most promising scenario to explain SN 2010lp. How much H and He is "hidden" in SNe Ib/c? I. - low-mass objects (1201.1506) S. Hachinger, P. A. Mazzali, S. Taubenberger, W. Hillebrandt, K. Nomoto, D. N. Sauer Jan. 29, 2013 astro-ph.SR, astro-ph.HE H and He features in photospheric spectra have seldom been used to infer quantitatively the properties of Type IIb, Ib and Ic supernovae (SNe IIb, Ib and Ic) and their progenitor stars. Most radiative transfer models ignored NLTE effects, which are extremely strong especially in the He-dominated zones. In this paper, a comprehensive set of model atmospheres for low-mass SNe IIb/Ib/Ic is presented. Long-standing questions such as how much He can be contained in SNe Ic, where He lines are not seen, can thus be addressed. The state of H and He is computed in full NLTE, including the effect of heating by fast electrons. The models are constructed to represent iso-energetic explosions of the same stellar core with differently massive H/He envelopes on top. The synthetic spectra suggest that 0.06 - 0.14 M_sun of He and even smaller amounts of H suffice for optical lines to be present, unless ejecta asymmetries play a major role. This strongly supports the conjecture that low-mass SNe Ic originate from binaries where progenitor mass loss can be extremely efficient. The UV/optical spectra of the Type Ia supernova SN 2010jn: a bright supernova with outer layers rich in iron-group elements (1208.1267) S. Hachinger, P. A. Mazzali, M. Sullivan, R. Ellis, K. Maguire, A. Gal-Yam, D. A. Howell, P. E. Nugent, E. Baron, J. Cooke, I. Arcavi, D. Bersier, B. Dilday, P. A. James, M. M. Kasliwal, S. R. Kulkarni, E. O. Ofek, R. R. Laher, J. Parrent, J. Surace, O. Yaron, E. S. Walker Radiative transfer studies of Type Ia supernovae (SNe Ia) hold the promise of constraining both the time-dependent density profile of the SN ejecta and its stratification by element abundance which, in turn, may discriminate between different explosion mechanisms and progenitor classes. Here we present a detailed analysis of Hubble Space Telescope ultraviolet (UV) and ground-based optical spectra and light curves of the SN Ia SN 2010jn (PTF10ygu). SN 2010jn was discovered by the Palomar Transient Factory (PTF) 15 days before maximum light, allowing us to secure a time-series of four UV spectra at epochs from -11 to +5 days relative to B-band maximum. The photospheric UV spectra are excellent diagnostics of the iron-group abundances in the outer layers of the ejecta, particularly those at very early times. 
Using the method of 'Abundance Tomography' we have derived iron-group abundances in SN 2010jn with a precision better than in any previously studied SN Ia. Optimum fits to the data can be obtained if burned material is present even at high velocities, including significant mass fractions of iron-group elements. This is consistent with the slow decline rate (or high 'stretch') of the light curve of SN 2010jn, and consistent with the results of delayed-detonation models. Early-phase UV spectra and detailed time-dependent series of further SNe Ia offer a promising probe of the nature of the SN Ia mechanism. Studying the Diversity of Type Ia Supernovae in the Ultraviolet: Comparing Models with Observations (1208.4130) E. S. Walker, S. Hachinger, P. A. Mazzali, R. S. Ellis, M. Sullivan, A. Gal-Yam, D. A. Howell Aug. 20, 2012 astro-ph.CO, astro-ph.SR In the ultraviolet (UV), Type Ia supernovae (SNe Ia) show a much larger diversity in their properties than in the optical. Using a stationary Monte-Carlo radiative transfer code, a grid of spectra at maximum light was created varying bolometric luminosity and the amount of metals in the outer layers of the SN ejecta. This model grid is then compared to a sample of high-redshift SNe Ia in order to test whether the observed diversities can be explained by luminosity and metallicity changes alone. The dispersion in broadband UV flux and colours at approximately constant optical spectrum can be readily matched by the model grid. In particular, the UV1-b colour is found to be a good tracer of metal content of the outer ejecta, which may in turn reflect on the metallicity of the SN progenitor. The models are less successful in reproducing other observed trends, such as the wavelengths of key UV features, which are dominated by reverse fluorescence photons from the optical, or intermediate band photometric indices. This can be explained in terms of the greater sensitivity of these detailed observables to modest changes in the relative abundances. Specifically, no single element is responsible for the observed trends. Due to their complex origin, these trends do not appear to be good indicators of either luminosity or metallicity. Constraining Type Ia supernova models: SN 2011fe as a test case (1203.4839) F. K. Roepke, M. Kromer, I. R. Seitenzahl, R. Pakmor, S. A. Sim, S. Taubenberger, F. Ciaraldi-Schoolmann, W. Hillebrandt, G. Aldering, P. Antilogus, C. Baltay, S. Benitez-Herrera, S. Bongard, C. Buton, A. Canto, F. Cellier-Holzem, M. Childress, N. Chotard, Y. Copin, H. K. Fakhouri, M. Fink, D. Fouchez, E. Gangler, J. Guy, S. Hachinger, E. Y. Hsiao, C. Juncheng, M. Kerschhaggl, M. Kowalski, P. Nugent, K. Paech, R. Pain, E. Pecontal, R. Pereira, S. Perlmutter, D. Rabinowitz, M. Rigault, K. Runge, C. Saunders, G. Smadja, N. Suzuki, C. Tao, R. C. Thomas, A. Tilquin, C. Wu The nearby supernova SN 2011fe can be observed in unprecedented detail. Therefore, it is an important test case for Type Ia supernova (SN Ia) models, which may bring us closer to understanding the physical nature of these objects. Here, we explore how available and expected future observations of SN 2011fe can be used to constrain SN Ia explosion scenarios. We base our discussion on three-dimensional simulations of a delayed detonation in a Chandrasekhar-mass white dwarf and of a violent merger of two white dwarfs-realizations of explosion models appropriate for two of the most widely-discussed progenitor channels that may give rise to SNe Ia. 
Although both models have their shortcomings in reproducing details of the early and near-maximum spectra of SN 2011fe obtained by the Nearby Supernova Factory (SNfactory), the overall match with the observations is reasonable. The level of agreement is slightly better for the merger, in particular around maximum, but a clear preference for one model over the other is still not justified. Observations at late epochs, however, hold promise for discriminating the explosion scenarios in a straightforward way, as a nucleosynthesis effect leads to differences in the 55Co production. SN 2011fe is close enough to be followed sufficiently long to study this effect. NERO - A Post Maximum Supernova Radiation Transport Code (1105.3049) I. Maurer, A. Jerkstrand, P. A. Mazzali, S. Taubenberger, S. Hachinger, M. Kromer, S. Sim, W. Hillebrandt May 16, 2011 astro-ph.HE The interpretation of supernova (SN) spectra is essential for deriving SN ejecta properties such as density and composition, which in turn can tell us about their progenitors and the explosion mechanism. A very large number of atomic processes are important for spectrum formation. Several tools for calculating SN spectra exist, but they mainly focus on the very early or late epochs. The intermediate phase, which requires a NLTE treatment of radiation transport has rarely been studied. In this paper we present a new SN radiation transport code, NERO, which can look at those epochs. All the atomic processes are treated in full NLTE, under a steady-state assumption. This is a valid approach between roughly 50 and 500 days after the explosion depending on SN type. This covers the post-maximum photospheric and the early and the intermediate nebular phase. As a test, we compare NERO to the radiation transport code of Jerkstrand et al. (2011) and to the nebular code of Mazzali et al. (2001). All three codes have been developed independently and a comparison provides a valuable opportunity to investigate their reliability. Currently, NERO is one-dimensional and can be used for predicting spectra of synthetic explosion models or for deriving SN properties by spectral modelling. To demonstrate this, we study the spectra of the 'normal' SN Ia 2005cf between 50 and 350 days after the explosion and identify most of the common SN Ia line features at post maximum epochs. The nebular spectrum of the type Ia supernova 2003hv: evidence for a non-standard event (1105.1298) Paolo A. Mazzali, I. Maurer, M. Stritzinger, S. Taubenberger, S. Benetti, S. Hachinger May 6, 2011 astro-ph.SR The optical and near-infrared late-time spectrum of the under-luminous Type Ia supernova 2003hv is analysed with a code that computes nebular emission from a supernova nebula. Synthetic spectra based on the classical explosion model W7 are unable to reproduce the large \FeIII/\FeII\ ratio and the low infrared flux at $\sim 1$ year after explosion, although the optical spectrum of SN\,2003hv is reproduced reasonably well for a supernova of luminosity intermediate between normal and subluminous (SN\,1991bg-like) ones. A possible solution is that the inner layers of the supernova ejecta ($v \lsim 8000$\,\kms) contain less mass than predicted by classical explosion models like W7. 
If this inner region contains $\sim 0.5 \Msun$ of material, as opposed to $\sim 0.9 \Msun$ in Chandrasekhar-mass models developed within the Single Degenerate scenario, the low density inhibits recombination, favouring the large \FeIII/\FeII\ ratio observed in the optical, and decreases the flux in the \FeII\ lines which dominate the IR spectrum. The most likely scenario may be an explosion of a sub-Chandrasekhar mass white dwarf. Alternatively, the violent/dynamical merger of two white dwarfs with combined mass exceeding the Chandrasekhar limit also shows a reduced inner density. Violent mergers of nearly equal-mass white dwarf as progenitors of subluminous Type Ia supernovae (1102.1354) R. Pakmor, S. Hachinger, F. K. Roepke, W. Hillebrandt Feb. 7, 2011 astro-ph.SR The origin of subluminous Type Ia supernovae (SNe Ia) has long eluded any explanation, as all Chandrasekhar-mass models have severe problems reproducing them. Recently, it has been proposed that violent mergers of two white dwarfs of 0.9 M_sun could lead to subluminous SNe Ia events that resemble 1991bg-like SNe~Ia. Here we investigate whether this scenario still works for mergers of two white dwarfs with a mass ratio smaller than one. We aim to determine the range of mass ratios for which a detonation still forms during the merger, as only those events will lead to a SN Ia. This range is an important ingredient for population synthesis and one decisive point to judge the viability of the scenario. In addition, we perform a resolution study of one of the models. Finally we discuss the connection between violent white dwarf mergers with a primary mass of 0.9 M_sun and 1991bg-like SNe Ia. The latest version of the smoothed particle hydrodynamics code Gadget3 is used to evolve binary systems with different mass ratios until they merge. We analyze the result and look for hot spots in which detonations can form. We show that mergers of two white dwarfs with a primary white dwarf mass of ~0.9 M_sun and a mass ratio larger than about $0.8$ robustly reach the conditions we require to ignite a detonation and thus produce thermonuclear explosions during the merger itself. We also find that while our simulations do not yet completely resolve the hot spots, increasing the resolution leads to conditions that are even more likely to ignite detonations. (abridged) High luminosity, slow ejecta and persistent carbon lines: SN 2009dc challenges thermonuclear explosion scenarios (1011.5665) S. Taubenberger, S. Benetti, M. Childress, R. Pakmor, S. Hachinger, P. A. Mazzali, V. Stanishev, N. Elias-Rosa, I. Agnoletto, F. Bufano, M. Ergon, A. Harutyunyan, C. Inserra, E. Kankare, M. Kromer, H. Navasardyan, J. Nicolas, A. Pastorello, E. Prosperi, F. Salgado, J. Sollerman, M. Stritzinger, M. Turatto, S. Valenti, W. Hillebrandt Jan. 10, 2011 astro-ph.SR SN 2009dc shares similarities with normal Type Ia supernovae, but is clearly overluminous, with a (pseudo-bolometric) peak luminosity of log(L) = 43.47 [erg/s]. Its light curves decline slowly over half a year after maximum light, and the early-time near-IR light curves show secondary maxima, although the minima between the first and second peaks are not very pronounced. Bluer bands exhibit an enhanced fading after ~200 d, which might be caused by dust formation or an unexpectedly early IR catastrophe. The spectra of SN 2009dc are dominated by intermediate-mass elements and unburned material at early times, and by iron-group elements at late phases. 
Strong C II lines are present until ~2 weeks past maximum, which is unprecedented in thermonuclear SNe. The ejecta velocities are significantly lower than in normal and even subluminous SNe Ia. No signatures of CSM interaction are found in the spectra. Assuming that the light curves are powered by radioactive decay, analytic modelling suggests that SN 2009dc produced ~1.8 solar masses of 56Ni assuming the smallest possible rise time of 22 d. Together with a derived total ejecta mass of ~2.8 solar masses, this confirms that SN 2009dc is a member of the class of possible super-Chandrasekhar-mass SNe Ia similar to SNe 2003fg, 2006gz and 2007if. A study of the hosts of SN 2009dc and other superluminous SNe Ia reveals a tendency of these SNe to explode in low-mass galaxies. A low metallicity of the progenitor may therefore be an important pre-requisite for producing superluminous SNe Ia. We discuss a number of explosion scenarios, ranging from super-Chandrasekhar-mass white-dwarf progenitors over dynamical white-dwarf mergers and Type I 1/2 SNe to a core-collapse origin of the explosion. None of the models seem capable of explaining all properties of SN 2009dc, so that the true nature of this SN and its peers remains nebulous. Hydrogen and helium in the late phase of SNe IIb (1007.1881) I. Maurer, P. Mazzali, S. Taubenberger, S. Hachinger July 12, 2010 astro-ph.HE Supernovae of Type IIb contain large fractions of helium and traces of hydrogen, which can be observed in the early and late spectra. Estimates of the hydrogen and helium mass and distribution are mainly based on early-time spectroscopy and are uncertain since the respective lines are usually observed in absorption. Constraining the mass and distribution of H and He is important to gain insight into the progenitor systems of these SNe. We implement a NLTE treatment of hydrogen and helium in a three-dimensional nebular code. Ionisation, recombination, (non-)thermal electron excitation and H$\alpha$ line scattering are taken into account to compute the formation of H$\alpha$, which is by far the strongest H line observed in the nebular spectra of SNe IIb. Other lines of H and He are also computed but are rarely identified in the nebular phase. Nebular models are computed for the Type IIb SNe 1993J, 2001ig, 2003bg and 2008ax as well as for SN 2007Y, which shows H$\alpha$ absorption features at early times and strong H$\alpha$ emission in its late phase, but has been classified as a SN Ib. We suggest to classify SN 2007Y as a SN IIb. Optical spectra exist for all SNe of our sample, and there is one IR nebular observation of SN 2008ax, which allows an exploration of its helium mass and distribution. We develop a three-dimensional model for SN 2008ax. We obtain estimates for the total mass and kinetic energy in good agreement with the results from light-curve modelling found in the literature. We further derive abundances of He, C, O, Ca and $^{56}$Ni. We demonstrate that H$\alpha$ absorption is probably responsible for the double-peaked profile of the [O {\sc i}] $\lambda\lambda$ 6300, 6363 doublet in several SNe IIb and present a mechanism alternative to shock interaction for generating late-time H$\alpha$ emission of SNe IIb. Nebular emission-line profiles of Type Ib/c Supernovae - probing the ejecta asphericity (0904.4632) S. Taubenberger, S. Valenti, S. Benetti, E. Cappellaro, M. Della Valle, N. Elias-Rosa, S. Hachinger, W. Hillebrandt, K. Maeda, P. A. Mazzali, A. Pastorello, F. Patat, S. A. Sim, M. 
Turatto April 30, 2009 astro-ph.CO In order to assess qualitatively the ejecta geometry of stripped-envelope core-collapse supernovae, we investigate 98 late-time spectra of 39 objects, many of them previously unpublished. We perform a Gauss-fitting of the [O I] 6300, 6364 feature in all spectra, with the position, full width at half maximum (FWHM) and intensity of the 6300 Gaussian as free parameters, and the 6364 Gaussian added appropriately to account for the doublet nature of the [O I] feature. On the basis of the best-fit parameters, the objects are organised into morphological classes, and we conclude that at least half of all Type Ib/c supernovae must be aspherical. Bipolar jet-models do not seem to be universally applicable, as we find too few symmetric double-peaked [O I] profiles. In some objects the [O I] line exhibits a variety of shifted secondary peaks or shoulders, interpreted as blobs of matter ejected at high velocity and possibly accompanied by neutron-star kicks to assure momentum conservation. At phases earlier than ~200d, a systematic blueshift of the [O I] 6300, 6364 line centroids can be discerned. Residual opacity provides the most convincing explanation of this phenomenon, photons emitted on the rear side of the SN being scattered or absorbed on their way through the ejecta. Once modified to account for the doublet nature of the oxygen feature, the profile of Mg I] 4571 at sufficiently late phases generally resembles that of [O I] 6300, 6364, suggesting negligible contamination from other lines and confirming that O and Mg are similarly distributed within the ejecta. The underluminous Type Ia Supernova 2005bl and the class of objects similar to SN 1991bg (0711.4548) S. Taubenberger, S. Hachinger, G. Pignata, P. A. Mazzali, C. Contreras, S. Valenti, A. Pastorello, N. Elias-Rosa, O. Bärnbantner, H. Barwig, S. Benetti, M. Dolci, J. Fliri, G. Folatelli, W. L. Freedman, S. Gonzalez, M. Hamuy, W. Krzeminski, N. Morrell, H. Navasardyan, S. E. Persson, M. M. Phillips, C. Ries, M. Roth, N. B. Suntzeff, M. Turatto, W. Hillebrandt Nov. 28, 2007 astro-ph Optical observations of the Type Ia supernova (SN Ia) 2005bl in NGC 4070, obtained from -6 to +66 d with respect to the B-band maximum, are presented. The photometric evolution is characterised by rapidly-declining light curves and red colours at peak and soon thereafter. With M_B,max = -17.24 the SN is an underluminous SN Ia, similar to the peculiar SNe 1991bg and 1999by. This similarity also holds for the spectroscopic appearance, the only remarkable difference being the likely presence of carbon in pre-maximum spectra of SN 2005bl. A comparison study among underluminous SNe Ia is performed, based on a number of spectrophotometric parameters. Previously reported correlations of the light-curve decline rate with peak luminosity and R(Si) are confirmed, and a large range of post-maximum Si II lambda6355 velocity gradients is encountered. 1D synthetic spectra for SN 2005bl are presented, which confirm the presence of carbon and suggest an overall low burning efficiency with a significant amount of leftover unburned material. Also, the Fe content in pre-maximum spectra is very low, which may point to a low metallicity of the precursor. Implications for possible progenitor scenarios of underluminous SNe Ia are briefly discussed. Exploring the spectroscopic diversity of Type Ia Supernovae (astro-ph/0604472) S. Hachinger, P. A. Mazzali, S. 
Benetti July 6, 2006 astro-ph The velocities and equivalent widths (EWs) of a set of absorption features are measured for a sample of 28 well-observed Type Ia supernovae (SN Ia) covering a wide range of properties. The values of these quantities at maximum are obtained through interpolation/extrapolation and plotted against the decline rate, and so are various line ratios. The SNe are divided according to their velocity evolution into three classes defined in a previous work of Benetti et al.: low velocity gradient (LVG), high velocity gradient (HVG) and FAINT. It is found that all the LVG SNe have approximately uniform velocities at B maximum, while the FAINT SNe have values that decrease with increasing Delta m_15(B), and the HVG SNe have a large spread. The EWs of the Fe-dominated features are approximately constant in all SNe, while those of Intermediate mass element (IME) lines have larger values for intermediate decliners and smaller values for brighter and FAINT SNe. The HVG SNe have stronger Si II 6355-A lines, with no correlation with Delta m_15(B). It is also shown that the Si II 5972 A EW and three EW ratios, including one analogous to the R(Si II) ratio introduced by Nugent et al., are good spectroscopic indicators of luminosity. The data suggest that all LVG SNe have approximately constant kinetic energy, since burning to IME extends to similar velocities. The FAINT SNe may have somewhat lower energies. The large velocities and EWs of the IME lines of HVG SNe appear correlated with each other, but are not correlated with the presence of high-velocity features in the Ca II infrared triplet in the earliest spectra for the SNe for which such data exist.
Understanding Wikipedia's explanation of the transformation matrix I have been staring at this for 2 hours trying to understand the last step; I can't figure out what they mean when they put the $\vec e_i$ to the left of the matrix. A vector v can be represented in the basis vectors $ E = [\vec e_1 \vec e_2 \ldots \vec e_n]$ with coordinates $[v]_E = [v_1 v_2 \ldots v_n]$: $$\vec v = v_1 \vec e_1 + v_2 \vec e_2 + \cdots + v_n \vec e_n = \sum v_i \vec e_i = E [v]_E$$ $$A(\vec v) = A \left( \sum {v_i \vec e_i} \right) = \sum {v_i A(\vec e_i)} = [A(\vec e_1) A(\vec e_2) \ldots A(\vec e_n)] [v]_E =\; A \cdot [v]_E = [\vec e_1 \vec e_2 \ldots \vec e_n] \begin{bmatrix} a_{1,1} & a_{1,2} & \ldots & a_{1,n} \\ a_{2,1} & a_{2,2} & \ldots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \ldots & a_{n,n} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n\end{bmatrix} $$ As far as I can tell, this $ E = [\vec e_1 \vec e_2 \ldots \vec e_n]$ is just the matrix with the basis vectors as columns. But that does not make sense to me. They finish everything with: The $a_{i,j}$ elements of matrix A are determined for a given basis E by applying A to every $\vec e_j = [0 0 \ldots (v_j=1) \ldots 0]^T$. (What is $v_j$ doing inside this "thing"?) And then they end with: And observing the response vector $A \vec e_j = a_{1,j} \vec e_1 + a_{2,j} \vec e_2 + \cdots + a_{n,j} \vec e_n = \sum a_{i,j} \vec e_i$. This equation defines the wanted elements, $a_{i,j}$, of the j-th column of the matrix A. For more info, https://en.wikipedia.org/wiki/Transformation_matrix#Finding_the_matrix_of_a_transformation linear-algebra Kolmin user1user1 $\begingroup$ $[v]_E$ should be $[v_1\ v_2\ \ldots\ v_n]^T$. That is, it should be a column matrix. $\endgroup$ – user137731 Oct 13 '15 at 15:20 $\begingroup$ @Bye_World I see that as well... it's actually the e's on the left in the last equality that startled me. $\endgroup$ – user1 Oct 13 '15 at 15:25 $\begingroup$ Let $ E = [\vec e_1 \vec e_2 \ldots \vec e_n]$ be a basis. Let $\vec v$ be a vector; then writing $$v= [\vec e_1 \vec e_2 \ldots \vec e_n] \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n\end{bmatrix} $$ is just a confusing way to say that, in basis $E$, $\vec v$ is expressed as the vector $[v_1 v_2 \ldots v_n]$. A much clearer notation is to write $$[v]_E = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n\end{bmatrix} $$ or $[v]_E = [v_1 v_2 \ldots v_n]$. $\endgroup$ – Ramiro Oct 13 '15 at 16:23 $\begingroup$ The expression $\vec e_j = [0 0 \ldots (v_j=1) \ldots 0]^T$ is again just a confusing notation to say that, in the basis $E$, $e_j$ is represented as a vector having all entries $0$ except in position $j$, where the entry is $1$. $\endgroup$ – Ramiro Oct 13 '15 at 16:25 Am I right in assuming that this is the step you're having trouble with? $$A \cdot [v]_E = [\vec e_1 \vec e_2 \ldots \vec e_n] \begin{bmatrix} a_{1,1} & a_{1,2} & \ldots & a_{1,n} \\ a_{2,1} & a_{2,2} & \ldots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \ldots & a_{n,n} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n\end{bmatrix}$$ Obviously $[v]_E = \begin{bmatrix}\vec e_1 & \vec e_2 & \ldots & \vec e_n\end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n\end{bmatrix}$, right? Try multiplying it out if you don't understand this part.
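(A quick numerical sanity check of that multiplication, added for illustration: a minimal NumPy sketch in which the 3x3 matrix A and the coordinates of v are made-up numbers, not anything from the question.)

import numpy as np

# A hypothetical matrix A and coordinate vector [v]_E in the basis E (made-up numbers).
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])
v_E = np.array([1., 2., 3.])

# Written in its own basis, each e_j is a standard unit column, so the block
# matrix [e_1 e_2 ... e_n] is just the identity matrix.
E = np.eye(3)

# E @ v_E reproduces the coordinates of v, i.e. v = E [v]_E.
print(np.allclose(E @ v_E, v_E))                      # True

# The j-th column of A is the image of the j-th basis vector: A e_j = column j of A.
for j in range(3):
    print(np.allclose(A @ E[:, j], A[:, j]))          # True three times

# Because E is the identity here, it commutes with A, which is the point of the step above.
print(np.allclose(A @ (E @ v_E), E @ (A @ v_E)))      # True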
That means that we just need to prove that $$\begin{bmatrix} a_{1,1} & a_{1,2} & \ldots & a_{1,n} \\ a_{2,1} & a_{2,2} & \ldots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \ldots & a_{n,n} \end{bmatrix}\left(\begin{bmatrix}\vec e_1 & \vec e_2 & \ldots & \vec e_n\end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n\end{bmatrix}\right) = [\vec e_1 \vec e_2 \ldots \vec e_n] \begin{bmatrix} a_{1,1} & a_{1,2} & \ldots & a_{1,n} \\ a_{2,1} & a_{2,2} & \ldots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \ldots & a_{n,n} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n\end{bmatrix}$$ I.e. that the block matrix of basis vectors commutes with the matrix $A$. 
The quoted expression $\vec e_j = [0 0 \ldots (v_j=1) \ldots 0]^T$ tells us that the vectors $e_i$ are also expressed with respect to the basis $E$ (I'm not at all sure why they labeled the $j$th component of $e_j$ as $v_j$ -- that seems to be a typo, but otherwise it's pretty clear what they meant). So because $e_1 = 1e_1 + 0e_2 + \cdots + 0e_n$, we see that $[e_1]_E = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0\end{bmatrix}$ and likewise $[e_2]_E = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0\end{bmatrix}$, etc. But then the matrix $\begin{bmatrix} e_1 & e_2 & \cdots & e_n\end{bmatrix}$ is just the identity matrix. And the identity matrix commutes with every other $n\times n$ matrix. So of course $A(Iv) = IAv$. $\begingroup$ I think it's $E[v]_{E}$ after "Obviously" $\endgroup$ – user1 Feb 13 '16 at 6:23 $\begingroup$ Yeah, it looks like I made a mistake there. But either way a matrix whose columns are a basis represented with respect to themselves is always the identity matrix and thus you can stick $\pmatrix{e_1 & \cdots e_n}$ wherever you like. IMHO, Wikipedia uses awful notations here. $\endgroup$ – user137731 Feb 13 '16 at 14:51 Interpret the $[\vec e_1 \vec e_2 \ldots \vec e_n]$ not as a matrix, but as a row of "arrows". Each vector $\vec e_i$ is just an arrow in space. Now when you multiply this "line" of "arrows" by a "column" of numbers you get: $x_1 \vec e_1 + x_2 \vec e_2 + \cdots + x_n \vec e_n$ - some big "arrow", which is the result vector. lesniklesnik $\begingroup$ I'm sorry, I don't understand this. $\endgroup$ – user1 Sep 23 '15 at 4:01 Not the answer you're looking for? Browse other questions tagged linear-algebra or ask your own question. Is there any other way to prove this statement? Proving a basis for inner product space V when $||e_j-v_j||< \frac{1}{\sqrt{n}}$. What is the role of inequality in this problem? The matrix entries depend on the bases of the domain and range? Vector whose inner product is positive with every vector in given basis of $\mathbb{R}^n$ Question about a spanning set Eigenspaces are in direct sum Find determinant of the matrix $P$. How can we decompose the identity matrix given a set of orthonormal vectors? Axler textbook question
E-beam evaporators - recommendations? Condensed matter experimentalists often need to prepare nanoscale thickness films of a variety of materials. One approach is to use "physical vapor deposition" - in a good vacuum, a material of interest is heated to the point where it has some nonzero vapor pressure, and that vapor collides with a substrate of interest and sticks, building up the film. One way to heat source material is with a high voltage electron beam, the kind of thing that used to be used at lower intensities to excite the phosphors on old-style cathode ray tube displays. My Edwards Auto306 4-pocket e-beam system is really starting to show its age. It's been a great workhorse for quick things that don't require the cleanroom. Does anyone out there have recommendations for a system (as inexpensive as possible of course) with similar capabilities, or a vendor you like for such things? Discussions of quantum mechanics In a sure sign that I'm getting old, I find myself tempted to read some of the many articles, books, and discussions about interpretations of quantum mechanics that seem to be flaring up in number these days. (Older physicists seem to return to this topic, I think because there tends to be a lingering feeling of dissatisfaction with just about every way of thinking about the issue.) To be clear, the reason people refer to interpretations of quantum mechanics is that, in general, there is no disagreement about the results of well-defined calculations, and no observed disagreement between such calculations and experiments. There are deep ontological questions here about what physicists mean by something (say the wavefunction) being "real". There are also fascinating history-of-science stories that capture the imagination, with characters like Einstein criticizing Bohr about whether God plays dice, Schroedinger and his cat, Wigner and his friend, Hugh Everett and his many worlds, etc. Three of the central physics questions are: Quantum systems can be in superpositions. We don't see macroscopic quantum superpositions, even though "measuring" devices should also be described using quantum mechanics. Is there some kind physical process at work that collapses superpositions that is not described by the ordinary Schroedinger equation? What picks out the classical states that we see? Is the Born rule a consequence of some underlying principle, or is that just the way things are? Unfortunately real-life is very busy right now, but I wanted to collect some recent links and some relevant papers in one place, if people are interested. From Peter Woit's blog, I gleaned these links: Adam Becker's What is Real?: The Unfinished Quest for the Meaning of Quantum Physics. David Lindley's Where does the Weirdness Go? Philip Ball's Beyond Weird Dieter Zeh's writings on this topic A very lengthy discussion about this on Scott Aronson's blog Going down the google scholar rabbit hole, I also found these: This paper has a clean explication of the challenge in whether decoherence due to interactions with large numbers of degrees of freedom really solves the outstanding issues. This is a great review by Zurek about decoherence. This is a subsequent review looking at these issues. And this is a review of "collapse theories", attempts to modify quantum mechanics beyond Schroedinger time evolution to kill superpositions. No time to read all of these, unfortunately. 
APS March Meeting talks online Thanks to a commenter (and ZapperZ) for bringing to my attention that a bunch of invited talks from this year's March APS meeting are now online. One big invited session was the industrial one, Physics that Changed the World: Oxide-confined VCSELS - video here The Ubiquitous SQUID: History and Applications - video here How Organic LEDs Revolutionized Displays (and Maybe Lighting) - video here The Magnetic Hard Disk Drive - How Information is Stored in the Cloud - video here The Double-Heterostructure Concept in Lasers, LED's, and Solar Cells - video here The other was the Kavli Symposium: Einstein, Gravitational Waves and a New Science - video here Discovery of the chiral Majorana fermion and its application to topological quantum computing - video here Neuromorphic Computing - video here Frugal science: A physicist view on tackling global health and education challenges - video here When a Weed is a Flower: Reimagining Our Classification System - video here And the big bilayer graphene talk: Magic-angle graphene superlattices - video here In general the APS Physics YouTube channel has a lot of cool stuff! Stephen Hawking, science communicator An enormous amount has already been written by both journalists and scientists (here too) on the passing of Stephen Hawking. Clearly he was an incredibly influential physicist with powerful scientific ideas. Perhaps more important in the broad scheme of things, he was a gifted communicator who spread a fascination with science to an enormous audience, through his books and through the careful, clever use of his celebrity (as here, here, here, and here). While his illness clearly cost him dearly in many ways, I don't think it's too speculative to argue that it was a contributor to his success as a popularizer of science. Not only was he a clear, expository writer with a gift for conveying a sense of the beauty of some deep ideas, but he was in some ways a larger-than-life heroic character - struck down physically in the prime of life, but able to pursue exotic, foundational ideas through the sheer force of his intellect. Despite taking on some almost mythic qualities in the eyes of the public, he also conveyed that science is a human endeavor, pursued by complicated, interesting people (willing to do things like place bets on science, or even reconsider their preconceived ideas). Hawking showed that both science and scientists can be inspiring to a broad audience. It is rare that top scientists are able to do that, though a combination of their skill as communicators and their personalities. In physics, besides Hawking the ones that best spring to mind are Feynman (anyone who can win a Nobel and also have their anecdotes described as the Adventures of a Curious Character is worth reading!) and Einstein. Sometimes there's a bias that gifted science communicators who care about public outreach are self-aggrandizing publicity hounds and not necessarily serious intellects (not that the two have to be mutually exclusive). The outpouring of public sympathy on the occasion of Hawking's passing shows how deep an impact he had on so many. Informing and inspiring people is a great legacy, and hopefully more scientists will be successful on that path thanks to Hawking. APS March Meeting, day 3 and summary thoughts Besides the graphene bilayer excitement, a three other highlights from today: David Cobden of the University of Washington gave a very nice talk about 2d topological insulator response of 1T'-WTe2. 
Many of the main results are in this paper (arxiv link). This system in the single-layer limit has very clear edge conduction while the bulk of the 2d layer is insulating, as determined by a variety of transport measurements. There are also new eye-popping scanning microwave impedance microscopy results from Yongtao Cui's group at UC Riverside that show fascinating edge channels, indicating tears and cracks in the monolayer material that are otherwise hard to see. Steve Forrest of the University of Michigan gave a great presentation about "How Organic Light Emitting Diodes Revolutionized Displays (and maybe lighting)". The first electroluminescent organic LED was reported about thirty years ago, and it had an external quantum efficiency of about 1%. First, when an electron and a hole come together in the device, they only have a 1-in-4 chance of producing a singlet exciton, the kind that can readily decay radiatively. Second, it isn't trivial to get light out of such a device because of total internal reflection. Adding in the right kind of strong spin-orbit-coupling molecule, it is possible to convert those triplets to singlets and thus get nearly 100% internal quantum efficiency. In real devices, there can be losses due to light trapped in waveguided modes, but you can create special substrates to couple that light into the far field. Similarly, you can create modified substrates to avoid losses due to unintentional plasmon modes. The net result is that you can have OLEDs with about 70% external quantum efficiencies. OLED displays are a big deal - the global market was about $20B/yr in 2017, and will likely displace LCD displays. OLED-based lighting is also on the way. It's an amazing technology, and the industrial scale-up is very impressive. Barry Stipe from Western Digital also gave a neat talk about the history and present state of the hard disk drive. Despite the growth of flash memory, 90% of all storage in cloud data centers remains in magnetic hard disks, for capacity and speed. The numbers are really remarkable. If you scale all the parts of a hard drive up by a factor of a million, the disk platter would be 95 km in diameter, a bit would be about the size of your finger, and the read head would be flying above the surface at an altitude of 4 mm, and to get the same data rate as a drive, the head would have to be flying at 0.1 c. I hadn't realized that they now hermetically seal the drives and fill them with He gas. The He is an excellent thermal conductor for cooling, and because it has a density 1/7 that of air, the Reynolds number is lower for a given speed, meaning less turbulence, meaning they can squeeze additional, thinner platters into the drive housing. Again, an amazing amount of science and physics, plus incredible engineering. Some final thoughts (as I can't stay for the rest of the meeting): In the old days, some physicists seemed to generate an intellectual impression by cultivating resemblance to Einstein. Now, some physicists try to generate an intellectual impression by cultivating resemblance to Paul McEuen. After many years of trying, the APS WiFi finally works properly and well! This was the largest March Meeting ever (~ 12000 attendees). This is a genuine problem, as the meeting is growing by several percent per year, and this isn't sustainable, especially in terms of finding convention centers and hotels that can host. 
There are serious discussions about what to do about this in the long term - don't be surprised if a survey is sent to some part of the APS membership about this. Superconductivity in graphene bilayers - why is this exciting and important As I mentioned here, the big story of this year's March Meeting is the report, in back-to-back Nature papers this week (arxiv pdf links in this sentence), of both Mott insulator and superconductivity in graphene bilayers. I will post more here later today after seeing the actual talk on this (See below for some updates), but for now, let me give the FAQ-style report. Skip to the end for the two big questions: Moire pattern from twisted bilayer graphene, image from NIST. What's the deal with graphene? Graphene is the name for a single sheet of graphite - basically an atomically thin hexagonal chickenwire lattice of carbon atoms. See here and here. Graphene is the most popular example of an enormous class of 2d materials. The 2010 Nobel Prize in physics was awarded for the work that really opened up that whole body of materials for study by the physics community. Graphene has some special electronic properties: It can easily support either electrons or holes (effective positively charged "lack of electrons") for conduction (unlike a semiconductor, it has no energy gap, but it's a semimetal rather than a metal), and the relationship between kinetic energy and momentum of the charge carriers looks like what you see for massless relativistic things in free space (like light). What is a bilayer? Take two sheets of graphene and place one on top of the other. Voila, you've made a bilayer. The two layers talk to each other electronically. In ordinary graphite, the layers are stacked in a certain way (Bernal stacking), and a Bernal bilayer acts like a semiconductor. If you twist the two layers relative to each other, you end up with a Moire pattern (see image) so that along the plane, the electrons feel some sort of periodic potential. What is gating? It is possible to add or remove charge from the graphene layers by using an underlying or overlying electrode - this is the same mechanism behind the field effect transistors that underpin all of modern electronics. What is actually being reported? If you have really clean graphene and twist the layers relative to each other just right ("magic angle"), the system becomes very insulating when you have just the right number of charge carriers in there. If you add or remove charge away from that insulating regime, the system apparently becomes superconducting at a temperature below 1.7 K. Why is the insulating behavior interesting? It is believed that the insulating response in the special twisted case is because of electron-electron interactions - a Mott insulator. Think about one of those toys with sliding tiles. You can't park two tiles in the same location, so if there is no open location, the whole set of tiles locks in place. Mott insulators usually involve atoms that contain d electrons, like NiO or the parent compounds of the high temperature copper oxide superconductors. Mott response in an all carbon system would be realllllly interesting. Why is the superconductivity interesting? Isn't 1.7 K too cold to be useful? The idea of superconductivity-near-Mott has been widespread since the discovery of high-Tc in 1987. If that's what's going on here, it means we have a new, highly tunable system to try to understand how this works. 
High-Tc remains one of the great unsolved problems in (condensed matter) physics, and insights gained here have the potential to guide us toward greater understanding and maybe higher temperatures in those systems. Why is this important? This is a new, tunable, controllable system to study physics that may be directly relevant to one of the great open problems in condensed matter physics. This may be generalizable to the whole zoo of other 2d materials as well. Why should you care? It has the potential to give us deep understanding of high temperature superconductivity. That could be a big deal. It's also just pretty neat. Take a conductive sheet of graphene, and another conducting sheet of graphene, and if you stack them juuuuuust right, you get an insulator or a superconductor depending on how many charge carriers you stick in there. Come on, that's just wild. Update: A few notes from seeing the actual talk. Pablo painted a picture: In the cuprates, the temperature (energy) scale is hundreds of Kelvin, and the size scale associated with the Mott insulating lattice is fractions of a nm (the spacing between Cu ions in the CuO2 planes). In ultracold atom optical lattice attempts to look at Mott physics, the temperature scale is nK (and cooling is a real problem), while the spatial scale between sites is more like a micron. In the twisted graphene bilayers, the temperature scale is a few K, and the spatial scale is about 13.4 nm (for the particular magic angle they use). The way to think about what the twist does: In real space, it creates a triangular lattice of roughly Bernal-stacked regions (the lighter parts of the Moire pattern above). In reciprocal space, the Dirac cones at the K and K' points of the two lattices become separated by an amount given by \(k_{\theta} \approx K \theta\), where \(\theta\) is the twist angle, and we've used the small angle approximation. When you do that and turn on interlayer coupling, you hybridize the bands from the upper and lower layers. This splits off the parts of the bands that are close in energy to the dirac point, and at the magic angles those bands can be very very flat (like bandwidths of ~ 10 meV, as opposed to multiple eV of the full untwisted bands). Flat bands = tendency to localize. The Mott phase then happens if you park exactly one carrier (one hole, for the superconducting states in the paper) per Bernal-patch-site. Most persuasive reasons they think it's really a Mott insulating state and not something else, besides the fact that it happens right at half-filling of the twist-created triangular lattice: Changing the angle by a fraction of a degree gets rid of the insulating state, and applying a magnetic field (in plane or perpendicular) makes the system become metallic, which is the opposite of what tends to happen in other insulating situations. (Generally magnetic fields tend to favor localization.) They see spectroscopic evidence that the important number of effective carriers is determined not by the total density, but by how far away they gate the system from half-filling. At the Mott/superconducting border, they see what looks like Josephson-junction response, as if the system breaks up into superconducting regions separated by weak links. The ratio of superconducting Tc to the Fermi temperature is about 0.5, which makes this about as strongly coupled (and therefore likely to be some weird unconventional superconductor) as you ever see. 
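Since the geometry here is simple, the quoted length scales are easy to check. The back-of-the-envelope sketch below is mine (not code from the papers) and assumes only the standard graphene lattice constant a ≈ 0.246 nm and the 1.05 degree magic angle mentioned above.

```python
import math

# Back-of-the-envelope check of the twisted-bilayer numbers quoted above.
a_nm = 0.246                      # graphene lattice constant (assumed standard value)
theta = math.radians(1.05)        # the "magic angle" quoted in the talk

# Moire superlattice period for a small twist angle
L_moire = a_nm / (2 * math.sin(theta / 2))     # comes out ~13.4 nm

# Separation of the two layers' Dirac points: k_theta ~ K * theta,
# with K = 4*pi/(3*a) the Dirac-point wavevector of a single sheet.
K = 4 * math.pi / (3 * a_nm)                   # ~17 nm^-1
k_theta = K * theta                            # ~0.31 nm^-1

print(f"moire period ~ {L_moire:.1f} nm")
print(f"K            ~ {K:.1f} nm^-1")
print(f"k_theta      ~ {k_theta:.2f} nm^-1")
```

The ~13.4 nm moiré period quoted in the talk drops right out of the small-angle geometry, which is a nice consistency check on the picture described above.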
Pablo makes the point that this could be very general - for any combo of van der Waals layered materials, there are likely to be magic angles. Increasing the interlayer coupling increases the magic angle, and could then increase the transition temperature. Comments by me: This is very exciting, and has great potential. Really nice work. I wonder what would happen if they used graphite as a gate material rather than a metal layer, given what I wrote here. It should knock the disorder effects down a lot, and given how flat the bands are, that could really improve things. There are still plenty of unanswered questions. Why does the superconducting state seem more robust on the hole side of charge neutrality as well as on the hole side of half-filling? This system is effectively a triangular lattice - that's a very different beast than the square lattice of the cuprates or the pnictides. That has to matter somehow. Twisting other 2d materials (square lattice MXenes?) could be very interesting. I predict there will be dozens of theory papers in the next two months trying to predict magic twist angles for a whole zoo of systems. APS March Meeting 2018, day 2 Day 2 of the meeting was even more scattered than usual for me, because several of my students were giving talks, all in different sessions spread around. That meant I didn't have a chance to stay too long on any one topic. A few highlights: Jeff Urban from LBL gave an interesting talk about different aspects of the connection between electronic transport and thermal transport. The Wiedemann-Franz relationship is a remarkably general expression based on a simple idea - when charge carriers move, they transport some (thermal) energy as well as charge, so thermal conductivity and electrical conductivity should be proportional to each other. There are a bunch of assumptions that go into the serious derivation, though, and you could imagine scenarios where you'd expect large deviations from W-F response, particularly if scattering rates of carriers have some complicated energy dependence. Urban spoke about hybrid materials (e.g., mixtures of inorganic components and conducting polymers). He then pointed out a paper I'd somehow missed last year about apparent W-F violation in the metallic state of vanadium dioxide. VO2 is a "bad metal", with an anomalously low electrical conductivity. Makes me wonder how W-F fares in other badly metallic systems. Ali Hussain of the Abbamonte group at Illinois gave a nice talk about (charge) density fluctuations in the strange metal phase (and through the superconducting transition) of the copper oxide superconductor BSCCO. The paper is here. They use a particular technique (momentum-resolved electron energy loss spectroscopy) and find that it is peculiarly easy to create particle-hole excitations over a certain low energy range in the material, almost regardless of the momentum of those excitations. There are also systematics with how this works as a function of doping (carrier concentration in the material), with optimally doped material having a particularly temperature-independent response. Albert Fert spoke about spin-Hall physics, and the conversion of spin currents into charge currents and vice versa. One approach is the inverse Edelstein effect (IEE). You have a stack of materials, where a ferromagnetic layer is on the top. Driving the ferromagnetic layer into FMR (ferromagnetic resonance), you can pump a spin current vertically downward (say) into the stack.
Then, because of Rashba spin-orbit coupling, that vertical spin current can drive a lateral charge current (leading to the buildup of a lateral voltage) in a two-dimensional electron gas living at an interface in the stack. One can use the interface between Bi and Ag (see here). One can get better results if there is some insulating spacer to keep free conduction electrons not at the interface from interfering, as in LAO/STO structures. Neat stuff, and it helped clarify for me the differences between the inverse spin Hall effect (3d charge current from 3d spin current) and the IEE (2d charge current from 3d spin current). Alexander Govorov of Ohio also gave a nice presentation about the generation of "hot" electrons from excitation of plasmons. Non-thermally distributed electrons and holes can be extremely useful for a variety of processes (energy harvesting, photocatalysis, etc.). At issue is what the electronic distribution really looks like. Relevant papers are here and here. There was a nice short talk similar in spirit by Yonatan Dubi earlier in the day. As I explained yesterday, my trip to the APS is even more scattered than in past years, but I'll try to give some key points. Because of meetings and discussions with some collaborators and old friends, I didn't really sit down and watch entire sessions, but I definitely saw and heard some interesting things. Markus Raschke of Colorado gave a nice talk about the kinds of ultrafast and nonlinear spectroscopy you can do if you use a very sharp gold tip as a plasmonic waveguide. The tip has a grating milled onto it a few microns away from the sharp end, so that hitting the grating with a pulsed IR laser excites a propagating surface plasmon mode that is guided down to the really sharp point. One way to think about this: When you use the plasmon mode to confine light down to a length scale \(\ell\) comparable to the radius of curvature of the sharp tip, then you effectively probe a wavevector \(k_{\mathrm{eff}} \sim 2\pi/\ell\). If \(\ell\) is a couple of nm, then you're dealing with \(k_{\mathrm{eff}}\) values associated in free space with x-rays (!). This lets you do some pretty wild optical spectroscopies. Because the waveguiding is actually effective over a pretty broad frequency range, that means that you can get very short pulses down there, and the intense electric field can lead to electron emission, generating the shortest electron pulses in the world. Andrea Young of UCSB gave a very pretty talk about looking at even-denominator fractional quantum Hall physics in extremely high quality bilayer graphene. Using ordinary metal electrodes apparently limits how nice the effects can be in the bilayer, because the metal is polycrystalline and that disorder in local work function can actually matter. By using graphite as both the bottom gate and the top gate (that is, a vertical stack of graphite/boron nitride/bilayer graphene/boron nitride/graphite), it is possible to tune both the filling fraction (ratio of carrier density to magnetic field) in the bilayer and the vertical electric field across the bilayer (which can polarize the states to sit more in one layer or the other). Capacitance measurements (e.g., between the top gate and the bottom gate, or between either gate and the bilayer) can show extremely clean quantum Hall data. Sankar Das Sarma of Maryland spoke about the current status of trying to use Majorana fermions in semiconductor wire/superconductor electrode structures for topological quantum computing.
For a review of the topic overall, see here. This is the approach to quantum computing that Microsoft is backing. The talk was vintage Das Sarma, which is to say, full of amusing quotes, like "Physicists' record at predicting technological breakthroughs is dismal!" and "Just because something is obvious doesn't mean that you should not take it seriously." The short version: There has been great progress in the last 8 years, from the initial report of possible signatures of effective Majorana fermions in individual InSb nanowires contacted by NbTiN superconductors, to very clean looking data involving InAs nanowires with single-crystal, epitaxial Al contacts. However, it remains very challenging to prove definitively that one has Majoranas rather than nearly-look-alike Andreev bound states. In case you are interested in advanced (beyond-first-year) undergraduate labs and how to do them well, you should check out the University of Minnesota's site, as well as the ALPhA group from the AAPT. There is also an analogous group working on projects to integrate computation into the undergraduate physics curriculum. One potentially very big physics news story that I heard about during the day, but won't be here to see the relevant talk: [Update: Hat tip to a colleague who pointed out that there is a talk tomorrow morning that will cover this!] There are back-to-back brand new papers in Nature today by Yuan Cao et al. from the Jarillo-Herrero group at MIT. (The URLs don't work yet for the articles, but I'll paste in what Nature has anyway.) The first paper apparently shows that when you take two graphene layers and rotationally offset them from graphite-like stacking by 1.05 degrees (!), the resulting bilayer is alleged to be a Mott insulator. The idea appears to be that the lateral Moire superlattice that results from the rotational offset gives you very flat minibands, so that electron-electron interactions are enough to lock the carriers into place when the number density of carriers is tuned correctly. The second paper apparently (since I can't read it yet) shows that as the carrier density is tuned away from the Mott insulator filling, the system becomes a superconductor (!!), with a critical temperature of 1.7 K. This isn't particularly high, but the idea of tuning carrier density away from a Mott state and getting superconductivity is basically the heart of our (incomplete) understanding of the copper oxide high temperature superconductors. This is very exciting, as summarized in this News and Views commentary and this news report. It's that time of year again: The running of the physicists annual APS March Meeting, a gathering of thousands of (mostly condensed matter) physicists. These are (sarcasm mode on) famously rowdy conferences (/sarcasm). This year the meeting is in Los Angeles. I came to the 1998 March Meeting in LA, having just accepted a fall '98 postdoctoral fellow position at Bell Labs, and shortly after the LA convention center had been renovated. At the time, the area around the convention center was really a bit of a pit - very few restaurants, few close hotels, and quite a bit of vacant and/or low-end commercial property. Fast forward 20 years, and now the area around the meeting looks a lot more like a sanitized Times Square, with big video advertisements and tons of high end flashy stores. 
Anyway, I will try again to write up some of what I see until I have to leave on Thursday morning, though this year between DCMP business, department chair constraints, and other deadlines, I might be more concise or abbreviated. (As I wrote last year, if you're at the meeting and you don't already have a copy, now is the perfect time to swing by the Cambridge University Press exhibit at the trade show and pick up my book :-) ).
Microgravity Science and Technology, December 2013, Volume 25, Issue 4, pp 251–265.
Evaporation Rates and Bénard-Marangoni Supercriticality Levels for Liquid Layers Under an Inert Gas Flow
H. Machrafi, N. Sadoun, A. Rednikov, S. Dehaeck, P. C. Dauby, P. Colinet. First Online: 28 November 2013.
In this work, we propose an approximate model of evaporation-induced Bénard-Marangoni instabilities in a volatile liquid layer with a free surface along which an inert gas flow is externally imposed. This setting corresponds to the configuration foreseen for the ESA "EVAPORATION PATTERNS" space experiment, which involves HFE-7100 and nitrogen as working fluids. The approximate model consists in replacing the actual flowing gas layer by an "equivalent" gas at rest, with a thickness that is determined in order to yield comparable global evaporation rates. This allows studying the actual system in terms of an equivalent Pearson's problem (with a suitably defined wavenumber-dependent Biot number at the free surface), allowing one to estimate how far above critical the system is for given control parameters. Among these, a parametric analysis is carried out as a function of the liquid-layer thickness, the flow rate of the gas, its relative humidity at the inlet, and the ambient pressure and temperature.
Keywords: Evaporation rate, Supercriticality, Marangoni instability, Microgravity experiment.
The authors gratefully acknowledge financial support from ESA and BELSPO PRODEX projects. PC acknowledges financial support of the Fonds de la Recherche Scientifique - FNRS.
Appendix: Relevant Fluid Properties. This section gives the material property values required in the present study.
Pure-Fluid Properties. The values tabulated below are taken partly from 3M™ Novec™ HFE-7100 and HFE-7300 Engineered Fluid data sheets (3M 2012) and Gas Encyclopedia (Airliquide 2012) (the latter for N\(_{2}\)), and partly from private communications within the ESA Topical Team (see Table 8). [Table 8: Properties of the pure fluids involved (at 1 atm, when relevant), listing molar weight (kg/mol), critical temperature (K), critical pressure (MPa), boiling temperature (K), vapor pressure (kPa), heat of evaporation (kJ/kg), density (kg/m\(^{3}\)), thermal conductivity (W/m K), specific heat (kJ/kg K), dynamic viscosity (mPa s and μPa s) and the temperature derivative of surface tension \(\gamma_{T}\) (N/m K) for the working fluids; apart from isolated entries (0.0085\(^{a}\), 0.815\(^{a}\), and \(\gamma_{T}=-1.1410\times 10^{-4}\) N/m K), the numerical values are not reproduced here. \(^{a}\)The corresponding properties have not been found in the literature; the values from other cases are formally used instead.] For the saturation pressure as a function of temperature, we use $$ p_{sat}(T)=p_{sat} ({T_{0}})\text{exp}\left( {-\frac{L_{0} M}{R}\left( {\frac{1}{T}-\frac{1}{T_{0} }} \right)} \right) $$ where M is the molecular mass, \(R\approx 8.31~\text{J/(mol K)}\) is the universal gas constant, \(T_{0}\) is some reference temperature, and \(L_{0}\) is the latent heat of evaporation at this temperature. At other temperatures, L is evaluated according to \(L = L_{0} + (c_{pl} - c_{pg1})(T_{0} - T)\).
Gas Mixture Properties. The following ones are explicitly needed in the present study: \(\rho_{g}\) (density), \(\lambda_{g}\) (thermal conductivity), \(c_{pg}\) (specific heat, here per unit mass) and \(D_{g}\) (diffusion coefficient). The gas phase is a mixture of two pure components "1" and "2" (here HFE vapor and N\(_{2}\), respectively), and the corresponding supplementary subscripts will refer to the corresponding pure-substance properties (e.g. \(\lambda_{g1}\), \(c_{pg2}\), \(M_{1}\), etc.).
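Before moving on to the mixture correlations, a minimal numerical sketch of the pure-component saturation-pressure relation above may be useful. The HFE-7100 constants in it are rough, datasheet-style assumptions on my part (the Table 8 entries are not reproduced above), not numbers taken from the paper.

```python
import math

# Minimal sketch of the constant-latent-heat saturation-pressure fit above, for HFE-7100.
# All reference values are assumed, ballpark datasheet-style numbers (not the paper's).
M  = 0.250      # kg/mol, molar mass of HFE-7100 (assumed)
R  = 8.314      # J/(mol K), universal gas constant
T0 = 298.15     # K, reference temperature
p0 = 26.9e3     # Pa, assumed vapor pressure at T0
L0 = 112e3      # J/kg, assumed latent heat of evaporation near T0

def p_sat(T):
    """Clausius-Clapeyron-type fit with constant latent heat, as in the appendix."""
    return p0 * math.exp(-(L0 * M / R) * (1.0 / T - 1.0 / T0))

for T in (288.15, 298.15, 308.15, 334.15):   # the last is roughly the ~61 C boiling point
    print(f"T = {T:6.2f} K   p_sat ~ {p_sat(T)/1e3:6.1f} kPa")
```

With these assumed constants the boiling-point value lands within roughly 10% of 1 atm, which is about the accuracy one can expect from a constant-latent-heat fit of this kind.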
Let \(x_{g}\) and \(N_{g}\) denote the molar and mass fractions of the first substance (here HFE) in the gas mixture, easily expressible through one another: $$\begin{array}{@{}rcl@{}} x_{g} &=&\frac{N_{g}}{N_{g} +({1-N_{g}}){M_{1}}/{M_{2} }}, \notag \\ N_{g} &=&\frac{x_{g} }{x_{g} +( {1-x_{g} }){M_{2}}/{M_{1}}} \end{array} $$
Density, \(\rho_{g}\). Using the perfect-gas laws, one obtains $$\rho_{g} ( {x_{g} ,T,p})=\frac{p}{RT}( {( {M_{1} -M_{2} })x_{g} +M_{2} }) $$
Specific Heat, \(c_{pg}\). In the perfect-gas approximation, we have $$c_{pg} =c_{pg1} N_{g} +c_{pg2} ({1-N_{g}}) $$ which can subsequently be expressed in terms of \(x_{g}\), if needed.
Thermal Conductivity, \(\lambda_{g}\). It is evaluated according to the Wassiljewa (1904) formula. At moderate pressure (less than or equal to 1 atm), the latter reads $$\lambda_{g} ({x_{g} ,T})=\frac{\lambda_{g1}(T)}{1+\frac{1-x_{g} }{x_{g} }\varphi_{12} ( T )}+\frac{\lambda_{g2} (T)}{1+\frac{x_{g} }{1-x_{g} }\varphi_{21} (T)} $$ Several correlations are currently available in the literature that differ by the method defining the functions \(\varphi_{12}\) and \(\varphi_{21}\). In the present study, we stick to the Lindsay and Bromley (1950) method, as used by Perry and Green (1997). Assuming the case of nonpolar molecules, one then has $$\begin{array}{@{}rcl@{}} \varphi_{12} (T)&=&\frac{1}{4}\left( {1+\sqrt{\frac{\mu_{g1} ( T )}{\mu_{g2} ( T)}\left( {\frac{M_{2} }{M_{1} }} \right)^{3/4}\frac{T+1.5T_{b1} }{T+1.5T_{b2} }}} \right)^{2} \notag \\ &&\qquad\times\frac{T+1.5\sqrt{T_{b1} T_{b2} }}{T+1.5T_{b1} }, \notag \\ [12pt] \varphi_{21} (T)&=&\frac{1}{4}\left( {1+\sqrt{\frac{\mu_{g2} ( T )}{\mu_{g1} ( T )}\left( {\frac{M_{1} }{M_{2} }} \right)^{3/4}\frac{T+1.5T_{b2} }{T+1.5T_{b1} }}} \right)^{2} \notag \\ &&\qquad\times\frac{T+1.5\sqrt{T_{b1} T_{b2} }}{T+1.5T_{b2} } \end{array} $$ Here \(\mu_{g}\) is the dynamic viscosity. The subscript b refers to the boiling point of the corresponding component at 1 atm. The study by Tondon and Saxena (1968) (where 96 mixtures are considered) shows that this method reproduces the experimental data with an average deviation of 2.20%. It is assumed that the gas dynamic viscosity and thermal conductivity do not depend on the pressure.
Diffusion Coefficient, \(D_{g}\). At moderate pressures (less than or equal to 1 atm), the diffusivity of a binary gas mixture can be deduced from the relation due to Slattery and Bird (1958) (see also Bird et al. 2007, page 521). The result is independent of the system composition, varies as the inverse of the pressure and approximately as the square of the temperature. For nonpolar gas pairs, one has $$\begin{array}{@{}rcl@{}} D_g ({T,p})&=&2.745 \times 10^{-4}\frac{1}{p} \left( {\frac{1}{M_1 }+\frac{1}{M_2 }} \right)^{{1}/{2}}\left( {\frac{T}{\sqrt{T_{cr1} T_{cr2} }}} \right)^{1.823}\notag\\[8pt]&&\times( {T_{cr1} T_{cr2} } )^{{5}/{12}} ( {p_{cr1} p_{cr2} } )^{{1}/{3}} \end{array} $$ Here the molar weight must be in g/mol, the pressure in atm, the diffusion coefficient in cm\(^{2}\)/s, and the temperature in K. The subscript "cr" refers to the critical properties of the mixture components. Table 9 shows the values of \(D_{g}\) for cases 1–5 calculated using this formula; a small numerical sketch of this correlation is given after the reference list below. [Table 9: The values of the diffusion coefficient in the gas for cases 1–5; the surviving column headers are HFE-7100 at 25 \(^{\circ}\)C & 1 atm and HFE-7100 at 25 \(^{\circ}\)C & 0.4 atm, with row \(D_{g}\) [m\(^{2}\)/s]; the only entry surviving extraction is 6.980×10\(^{-6}\).]
3M: 3M™ Novec™ 7100 Engineered Fluid - Data Sheet. http://solutions.3m.com/wps/portal/3M/en_US/3MNovec/Home/ (2012). Accessed 1 October 2012
Airliquide: http://encyclopedia.airliquide.com/encyclopedia.asp (2012).
Accessed 1 October 2012
Bird, R.B., Stewart, W.E., Lightfoot, E.N.: Transport Phenomena, Revised 2nd edn. Wiley, New York (2007)
Chauvet, F., Dehaeck, S., Colinet, P.: Threshold of Bénard-Marangoni instability in drying liquid films. Europhys. Lett. 99(3), 34001 (2012)
Colinet, P., Legros, J.C., Velarde, M.G.: Nonlinear Dynamics of Surface Tension Driven Instabilities. Wiley, Berlin (2001)
ESA: http://www.esa.int/SPECIALS/HSF_Research/SEMLVK0YDUF_0.html (2012). Accessed 8 October 2012
Haut, B., Colinet, P.: Surface-tension-driven instabilities of a pure liquid layer evaporating into an inert gas. J. Colloid Interface Sci. 285, 296–305 (2005)
Iorio, C.S., Kabov, O.A., Legros, J.C.: Thermal patterns in evaporating liquid. Microgravity Sci. Technol. 19, 27–29 (2007)
Lindsay, A.L., Bromley, L.A.: Thermal conductivity of gas mixtures. Ind. Eng. Chem. 42(8), 1508–1511 (1950)
Machrafi, H., Rednikov, A., Colinet, P., Dauby, P.C.: Bénard instabilities in a binary-liquid layer evaporating into an inert gas. J. Colloid Interface Sci. 349, 331–353 (2010)
Machrafi, H., Rednikov, A., Colinet, P., Dauby, P.C.: Bénard instabilities in a binary-liquid layer evaporating into an inert gas: stability of quasi-stationary and time-dependent reference profiles. Eur. Phys. J. Spec. Top. 192, 71–81 (2011)
Mancini, H., Maza, D.: Pattern formation without heating in an evaporative convection experiment. Europhys. Lett. 66(6), 812–818 (2004)
Pearson, J.R.A.: On convection cells induced by surface tension. J. Fluid Mech. 4, 489–500 (1958)
Perry, R.H., Green, D.W.: Perry's Chemical Engineers' Handbook, 7th edn. McGraw-Hill, New York (1997)
Schatz, M.F., Neitzel, G.P.: Experiments on thermocapillary instabilities. Annu. Rev. Fluid Mech. 33, 93–127 (2001)
Slattery, J.C., Bird, R.B.: Calculation of the diffusion coefficient of dilute gases and of the self-diffusion coefficient of dense gases. A.I.Ch.E. J. 4(2), 137–142 (1958)
Tondon, P.K., Saxena, S.C.: Calculation of thermal conductivity of polar-nonpolar gas mixtures. Appl. Sci. Res. 19(1), 163–170 (1968)
Wassiljewa, A.: Wärmeleitung in Gasgemischen. I. Z. Physik. 5, 737–742 (1904)
© Springer Science+Business Media Dordrecht 2013
1. Thermodynamique des Phénomènes Irréversibles, Institut de Physique B5a, Université de Liège, Liège 1, Belgium. 2. TIPs – Fluid Physics, Université Libre de Bruxelles, Bruxelles, Belgium. 3. Laboratoire de Mécanique des Fluides Théorique et Appliquée, Faculté de Physique, USTHB, Alger, Algeria.
Machrafi, H., Sadoun, N., Rednikov, A. et al. Microgravity Sci. Technol. (2013) 25: 251. https://doi.org/10.1007/s12217-013-9355-8. Received 31 October 2012. Accepted 01 November 2013. First Online 28 November 2013. Publisher Name: Springer Netherlands.
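As a quick check of the Slattery-Bird correlation quoted in the appendix (and of the single surviving Table 9 entry), here is a minimal numerical sketch for the HFE-7100/N2 pair. The molar masses and critical constants below are assumed, datasheet-style values of my own choosing, not necessarily those used by the authors, so the agreement should be read as approximate.

```python
# Numerical sketch of the Slattery-Bird correlation for D_g from the appendix,
# applied to the HFE-7100 / N2 pair. Constants below are assumed literature values.
M1, Tc1, pc1_atm = 250.0, 468.5, 22.0    # HFE-7100: g/mol, K, atm (assumed)
M2, Tc2, pc2_atm = 28.0, 126.2, 33.5     # N2:       g/mol, K, atm (assumed)

def D_g_cm2_per_s(T, p_atm):
    """Slattery-Bird estimate; T in K, p in atm, result in cm^2/s."""
    return (2.745e-4 / p_atm
            * (1.0 / M1 + 1.0 / M2) ** 0.5
            * (T / (Tc1 * Tc2) ** 0.5) ** 1.823
            * (Tc1 * Tc2) ** (5.0 / 12.0)
            * (pc1_atm * pc2_atm) ** (1.0 / 3.0))

D = D_g_cm2_per_s(298.15, 1.0) * 1e-4    # convert cm^2/s to m^2/s
print(f"D_g(25 C, 1 atm) ~ {D:.2e} m^2/s")
```

With these assumptions the 25 °C, 1 atm estimate comes out at about 7.0×10⁻⁶ m²/s, close to the surviving Table 9 value; reducing the pressure to 0.4 atm simply scales it up by 1/0.4, as the 1/p dependence of the correlation implies.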
Editing Republican Party You are not logged in, consider registering for an account. If you don't, your IP address will be recorded in this page's edit history. <!-- This message to merge to the Democratic Party is intended as a joke. Please do not replace with actual template. Thank you. --> <center> <table cellpadding="5"><tr><td bgcolor="FFFFBB">[[Image:crash_merge.jpg|100px]]</td> <td bgcolor="FFFFBB"> '' Some dumb fool created 2 articles about the same thing. <br/>Therefore, '''this article''' or '''section''' should be [[:category:Articles to be merged|merged]] with [[Democratic Party]].<br>If you are the author, consider merging the contents so we don't have to do it later.'' </td></tr></table> </center> <!-- This message to merge to the Democratic Party is intended as a joke. Please do not replace with actual template. Thank you. --> [[Image:Dumbo.GIF|thumb|right|250px|The Republican mascot is an elephant]] [[Image:Goposaur_xlg.gif|thumb|right|250px|The new seal proposed for the 2012 elections.]] [[Image:republicanhq.jpg|thumb|A rare photo of the Republican Party HQ. The picture was found in a very dark forest, and both the camera and the photographer were nowhere to be found. They are still missing.]] {{wikipedia}} '''<math>\mathfrak {The\ Republican\ Party}</math>''', also known affectionately as the '''[[God]] & [[Oil]] Party''' (GOP) is a division of [[Halliburton]] and a subsidiary of the Federalist Society, formerly known as the Federal [[Government]] of the United States. Its main function is to boost the sales of Israeli flag lapel pins and magnets. As of [[2007]], despite receiving a good ol' Texas '[[George Dubya Bush|thumpin]]' it is one of the largest [[Misanthropy|misanthropic]] organizations in the [[Bloodbath pitch|world]]. Well known for its [[fascist|Ultra-Orthodox]] [[communist|Jewish-Leninist]] practice of [[trotsky|democratic centrism]], in which no party member is allowed to disagree with the central organization. In fact [[me|I]] knew a guy who tried to disagree once. They later found his dead body floating in the [[Iraq|Potomac River]]. His corpse displayed signs of [[yaoi|torture]] and [[anal sex|wild violation]]. This causes [[grue|Republicans]] great stress and results in bi-weekly mass Republican [[Jeff Gannon|orgies]] in Washington, DC. ===Rockefeller Republicans=== [[Image:Unity.jpg|thumb|left|250px|Republican Party Logo before the [[Teabag|tea bagger movement]]. The guy on the right is Karl Rove...or Lee Raymond. I don't know which.]] These guys are the ones that are slowly being killed off by the sectarian death squads. They are mostly well-educated and wealthy suburbanites and are quite moderate, a lot of them are even pro-choice and pro-[[gay]]. Some of them are very, ''very'' pro-[[gay]]. These guys were in power before Pat Robertson started to making bizzare clicks and grunts that awoke his hordes of zombies. Currently, they are listed as an "Endangered Species" by the Endangered Species Act due to hunting by the radical cleric Jerry Falwell and population reduction due to loss of habitat that is made faster every year by expansion of [[democrat]] communities into the Rockefellers' natural range. Many of members of this sub-species live in Log Cabins. [[Image:Acou.JPG|right|thumb|150px|Ann Coulter: scientific name: bimbodium transexuale whoreonius. Heroine of the Whore and Dumb movement of the early 2000's]] === Republican Republicans === This sub-species dominates the modern republican party. 
Their natural range is the [[Bible Belt]] and the [[Mormon|Jello Belt]]. They are sometimes confused with zombies on election day as any resident of a town of less than 100,000 people and that is more than 300 miles inland would say. However, this strain of zombieism is different than any others as its members/hosts are still metabolically active and reproduce. In fact, they fuck like rabbits. Because of the emerging overpopulation of zombies attributed to this, a new, compassionate campaign of "spay and neuter your fundy" has begun. Also, the common [[mink]] has been introduced to act as predators in areas just outside of these creatures' home range, such as Colorado and Ohio. This strategy of zombie management has sustained some success in the last year or so. In fact, wildlife experts are becoming cautiously optimistic that this vermin species can be contained. === Logcabin Republicans === The section of the Republican party who are openly [[gay]]. Most Republicans are gay but not all are openly gay. The reason that they are called "log cabin" republicans is because they like to engage in [[sodomy]] within the secure and silent confines of a wilderness locale (a log cabin). Lesbians are excluded from being Log Cabin Republicans because "they like pussy too much" (O'Reilly 122). Note that heterosexual women often have a great deal in common with gay Republican men. One theory is that they both take it up the butt. Thus the one defining aspect of being a Republican is engaging in anal sex. Often noted as the origamial founding members of the party. However, They are often accepted by the regular Republicans, due to the fact that they didn't choose to be gay, but they chose to be Republicans. They can unite their hate against the Democrats, who in their words are 'Assholes.' === Others === [[Image:Sealrepugparty.jpg|thumb|right|250px|Alternative Republican Party Logo....these [[Internet Meme|running gags]] doing it for ya?]] *'''Women Republicans''': Chicks down with beating fellow sisters in abortion rallies, and be like Sarah Palin who loves her children, doesn't eat them or turn their stem cells for medicinal purposes like child-killer pro-choice Democrats. Most of them don't realize that their men want them to keep quiet and just get them a beer already! *'''[[Uncle Tom|Black Republicans]]''': A rare breed of Republicans are [[black people|American Africans]], who dislike affirmative action, doing away with racial segregation and political correctness to fight the alledged false claims of [[racism]]. *'''[[Rednecks|"Real American" Republicans]]''': The dirty, ugly, fat and inbred white trash. Predominantly NRA members who drink in excess. Usually live in old trailer parks or run down shacks. Favorite foods include "Turducken" and "Chicken Fried Bacon". All suffer from obesity and love to chug gravy. Hobbies also inlucde: banjo playing, watching NASCAR, and incest. ==History== ===Origins=== The Republican Party was founded by Your Lord and Savior [[Republican Jesus|Jesus Christ]], well-known for His [[free]]-[[pyramid scheme|market]](my favorite market is one where London Broil still sells for $2/lb...or 30p/100g for you [[communist]] [[homosexuals]]), [[fun|pro-gun]], [[prostitution|anti-abortion]] (so you can shoot the sheriff when he comes and arrests you for abortion), anti-[[tax]], [[slavery|anti-poor]], anti-[[your mom|whore]], [[terrorist|anti-Liberal]] stance. The poor are fat and lazy that just live because Democrats steal Republicans money to "give" to the lazy poor. 
===Glory Days=== [[Image:John_Mccain23.jpeg|thumb|A typical Republican.]] The Republicans later took over control of the [[Roman Empire]], which led to a thousand years of unprecedented peace and prosperity. The sun never set on the Roman Empire, from their [[Australia]]n [[Koala]] mines, to their [[Asian chicks]] farms in exotic [[Taiwan]]. ===Decline=== However, [[faggot]]s [[Ted Haggard|with plump, tempting asses]] took over control of the Roman Empire, leading to the decline of the Republic Party and the Roman Empire for many centuries. The Roman Empire was eventually destroyed, whereas while the Republican tree was destroyed, it had a bunch of seeds somewhere. These were emptied out of a bird in America, where the new Republican tree was planted. Needless to say, there was much rejoicing. ===Being Born Again=== In the early 1900s, [[George Washington]] re-established Republican values by drive-by killing [[Queen Victoria]] and her queer son, [[Jack the Ripper]], single handedly causing the death of millions of [[Native Americans]] from sheer fright. Thus was [[America]] founded, under the unfailing leadership of [[Empire|Republican might]], except for eight years, when women were required by law to sexually satisfy the deviant pleasures of their corrupt leader, [[Newt Gingrich|Bill Clinton]]. [[Image:Monkeydog.jpg|right|thumb|200px|Bubba Joe "Monkeylips" Moran Jr., Republican candidate for office, 2012.]] ===Today=== Republicans are the most loyal to their own of any political party out there. Even if you raped over 9000 boys, you would still be applauded by the National Committee for your bravery in showing those 14 year-olds the hazards of being so goddamned sexy. Also Republicans do not believe in Science, Truth, Psycho-ology, gravity, common sense, and free will, unlike their radical counter parts the Democrats. ==A Portrait of the Republican Party== Unlike other [[politicians]], [[Feudal Lords|Republicans]] require the money of the poor to survive. They get their money through illegal drug sales, prostitution, human trafficing, and baking cookies. The Republican party is also known as the party of [[pyramid scheme|tax cuts]] and the [[slavery|ownership society]]. This group is know for their utter disgust and contempt for any country other than the United States ("United States" meaning "[[rednecks|Oklahoma]], [[megatexas|Texas]], [[white people|Idaho]], [[Mormons|Utah]], [[KFC|Indiana]], [[KKK|Kansas]], [[white trash|Tennessee]], [[Kentuckistan]], [[Wal-Mart|Arkansas]], [[Asians|Alabama]], and other Southern states") doing something for its own benefit by cleansing the world of [[homosexual]]s, [[liberals]], [[abortion]]ists, [[Catholic]]s, [[agnostics]], [[atheists]], [[blacks]], [[Indians|natives]], [[emo]]s, [[grues]], [[Arabs]], [[Work|union workers]], [[feminist]]s, [[metrosexual]]s, [[dog|sloths]], [[cat]], [[Mexicans]], [[Hispanic]]s and anyone who doesn't like [[chicken]] [[fried chicken]]. Republicans generally spend their time saving random countries from people [[Saddam Hussein|they don't like]]. Additionally, Republicans are known for a psychotic addiction to war and are the only party to support a president who thinks that Africa is a country, not a continent. Republicans often virulently vilify Senator [[Robert Byrd]] as a racist, but in another hand they have advocated the conservation and utilization of [[Robert Bird]]s as an "all-American species". 
[[Image:Roosevelt01.jpg|left|thumb|156px|[[Teddy Roosevelt]], laughing at what has become of the Republican Party.]] === Social Habits === A ton of them live in [[Montana]], mainly were former residents of [[Orange County]]. However, the locals refer them as a bunch of Californian [[liberal]] posers. Time for a lynch mob gathering. ====Home Life==== Typically, Republicans organize in units called "families", and attend each other's "churches", which consist of women chatting over Peach Schnapps and snorting [[cocaine|coke]] while slaving over ordering the servants (minorities) around, and the men getting tweaked on [[meth]] and watching [[Sesame Street|Sodomy Street]]. It is not uncommon for Republican Children to read fictitious tales such as Sesame Street - Color Me Mine, Barney's Weather Book, and the [[Holy Bible: Revised Neocon Edition]] What we do not learn is that more than 75% of all Republicans are homosexuals, who did homosexual things. Like the US's current and former President Abraham Lincoln and George W. Bush. These two have been having an affair since early 2004. When shown picture evidence President Bush denies it, while President Lincoln admits it. This is not a shocker for all the priests and pastors of the party who are now open pedophiles. ====A Day In The Life Of A Republican==== Republican daily life revolves around the [[hell|office]] and [[church]] (a sort of [[heaven]] for them). The average Republican will usually wake up at six in the morning and report to work by five in the morning, even if work is only a 5 minute walk from home. Many believe it has something to do with their internal sonar, called "gaydar", but experiments to confirm this have been mixed, with a correlation rate between 47.736% and 50.432%. Anyway, after 9 hours of saving the world if they have not been constantly abusing [[Mexicans]], [[teenagers]] and [[losers]], the Republican becomes tired. Unless of course, the said Republican is a [[loser]], then [[it]] is coming home from being [[UnNews:China outlaws reincarnation, Tibetan monks just found out.|abused]] for 10 hours. At any rate, the Republican was stuffing its face all day. After a hard day of yelling and eating, the republican packs up the children and goes to church. At church, the Republican speaks in tongues, shaves its pubes and handles snakes while the children are taught to forget everything they learned in science class, unless that knowledge is important in weapons making. At church they also take part in occultist rituals and human sacrafice. They often repeat this for 6 days a week, except for Saturday after church, when the "coon-hunting" occurs. Source: [http://en.wikipedia.org/wiki/Der_St%C3%BCrmer Der Stürmer] ==[[Conspiracy|Agenda]]== [[Image:Bush3cardwg6.jpg|thumb|167px|Gambling with our future is a regular activity of the typical Republican.]] ===[[Lying|Hypocrisy]]=== *The Republican Party exists, in part, to show how hypocritical people can be. For example, the GOP opposes gay marriage, but its leaders are some of the most flaming [[homosexual]]s on the planet, including [[Loser|Rick Santorum]], [[prostitute|Jeff Gannon]], [[Rudy Giuliani]], [[Criminal|Tom DeLay]], [[Faggot|Sam Brownback]], and [[Sith Lord|Karl Rove]]. However, it should be noted there is no evidence that Karl Rove is actually a [[gay]] person. He might just be very unappealing to [[women]], and it is fair to say that even the most tasteless butch [[homo|queer]] in the world would probably want no part of Karl Rove, either. 
He could just be asexual or not even human. *The Republicans also enjoy lighting themselves on fire for the country "[[Amerika]]". Much to their disappointment, [[chanting|nationalism]] is a form of [[pride]] and pride is a [[Chocolate|deadly sin]] and all Republicans are religious so they're all going to [[hell]] (''see'' [[Christian logic]]). *They accuse the liberal Democrats for creating the "race politics industry" for the horrible thing affirmative action has done to white people and males, banned the freedom of religion by activity for Wal-marts to stop any banners saying "Merry Christmas" during the holiday sales, and finally the public educational/pop cultural/ mass media promotion of [[philias]] such as homosexuality by "the Gays", inter-racial marriage and premarital sexual relations. ==="[[Slavery|Freedom]]"=== *The GOP claims to support freedom but gleefully fucks with brain-dead people who just want to become properly dead people. -Note if the brain-dead were allowed to become actually dead, people may suggest that Dumbya be euthanized. *In fairness, the GOP works to defend the interest of [[Red States|all brain-dead people]] as well as giving large and regressive tax rebates to those who make more than $300,000 a year (sweaty fat fucks and whores). *Everybody deserves a tax cut except the poor, those bastards need to pay. *To preserve the freedom of speech like the right for any white guy to shout "nigger", "faggot" and "retard" in public without getting sued, being punched, or even arrested for a hate crime. But they got the FCC to ban any uttering of "Hell", "god damn" and "Jesus Christ", as well the immoral nature of a woman breast feeding in the public is a lot worse than a male [[rape|disciplining]] his wife. [[Image:Slime Away.jpg|thumb]] ===[[Ethics|Their Fears]]=== * Arguing with Mutes * Logic * World Peace. * [[Gay People|Gay people]] <ref>Unless they're Republicans.</ref> * [[Democrats]]. * [[Socialism]], unless it is for themselves. * Having someone intelligent for president. * Anything not white. * SpongeBob SquarePants <ref>Sheldon J. Plankton however, is a Republican.</ref> * [[Minorities]]. * [[Black People]] <ref>Specifically [[Ving Rhames]].</ref> * Freedom. * Cute things. * Everything that isn't America. * Canada, Europe or New Zealand. * 95% of Americans <ref>Including illegal aliens from the planet Zorgon.</ref> * Love <ref>Unless it's "Christian".</ref> * Hippies. * Displays of affection * Kindness * Enough voters to stop the [[selling]] and buying that constitutes a working day in Washington. * A brain. ex: [[Dolly Parton]] and [[Thomas the Tank Engine]]. * Every superhero from [[Marvel Comics]] (because they are all in New York, and New York is a Democrat commune). * Women <ref>Except if they are barefoot and [[pregnant]].</ref> * Sex <ref>They feel a bit short in this area.</ref> * Liberal media. * Poor people. * Elephant hunters. * SpongeBob and Patrick enslaving the world and forcing us to make cameras that make everything into gay minorities. * The Wrath of [[Rush Limbaugh]] * Change * Tyler Perry * Rednecks (their poor) * [[train|Choo-choos]] ===[[begging|Wealth Exploration]]=== The Republican Party reserves the right to invade other countries, using the resources of the US Government, and the United Nations (see [[pussy]]). Countries identified for invasion will be subject to an initial survey to identify oil, natural gas, precious metals, and any other known threats to the USA's security, such as more gay hookers to be hired. 
==Main Objective== Their intention is to sell the world to the Devil and then the Rich and Republican politicians will be relaxing in paradise. Everyone else will live in the miserable world including the fools who fell for the Republicans. == The Glorious Republican Platform == [[Image:Elvisrepublicans.JPG‎|thumb|right|250px|The Republican Platform--Best Selling stand up comedy album of all time--now available on CD and Audio Cassette!]] [[Image:NaziTownHall.jpg|thumb|The Republican National Convention, St. Paul, 2008]] [[Image:Obama_nope.jpg|right|thumb|250px|The views of the Republican Party, 2009]] ====[[Theocracy|Traditional Values]]==== * Protect public school teachers' rights to use supermarket-grade meat grinders on disruptive sheeple and to lead the class in prayer five times a day while facing East, towards [[Iraq]]... the glorious Republican [[Jihad]] must live on! * Pretend we care about abortion. * Protecting Republican like values in debate through [[pedophilia]], mass extermination of homosexuals, and torture reform. * Enshrine a "[[Fred Phelps|Anti-Sodomy]]" amendment in the Constitution. * Cut taxes for the rich. * Raise taxes for the poor. They need to get off their ass and work. * Hate foreigners. * Pretend we care about abortion. * Hate the liberals. * Support Israel. We must help them bomb the piss out of the Palestinian. * Oops, that is the Democratic Platform as well! * Defend Freedom & Liberty from the threats of due process and trial by jury * Pretend we care about abortion * Hate women. Only like hot country girls. (except for [[Sarah Palin]] and [[Ann Coulter]]). * Hate gays. ([[Larry Craig]] would dispute this, but it's not polite to talk with your mouth full). * FOR GOD'S SAKE, Pretend we care about abortion (So we can get more than two votes), and care about [[babies]]. * Hate all members of the LGBT community (But not the Log Cabin Republican, their campaign contribution checks clear, and there will be plenty of room in the concentration camps for them too when the time comes). * Call anyone who doesn't agree with you a terrorist (Politicians who selectively disagree with themselves are exempt, this is called campaigning, unless they're a Democrat, then they flip-flopped). * Hate black people (Yes, [[Kanye West]] Bush doesn't care about them either.) * Pretend we care about abortion. * Lower education standards. School sucks. * Preach about God * Fund more money into Fox News and that dumbass [[Bill O'Reilly]]. * Pretend we care about abortion. * Erect a monument to that dumbass Bill O'Reilly. * And hate gays. Hate bisexuals too. * Pretend we care about abortion. * Protecting the oppressed white population from the evils of [[reverse discrimination]]. * Build more huge [[car]]s with tailfins that guzzle gasoline ====[[Slavery|Promoting Economic Expansion and Growth]]==== {{Wikipediapar|corporate welfare|corporate welfare}} *Lower taxes AND raise deficit spending. No foreseeable problem. *Tax decreases so that they can kick gramma out of [[Social Security]] and make her homeless,and then to beat her to death as a form of entertainment. A truly compassionate, entertaining, and conservative approach to fiscal matters. * Dump the [[American]] greenback currency in favor of condor gonads and manatee hides. * Destroying public works or selling them for their new [[Maybach]] so that their [[nigger|manservant]] can drive them around. 
* destroy all nonwhites and everyone who has an income of less than $1 million through "tax cuts" (their actually 100% raises in taxes) * Increasing government spending to fund a wasteful [[bureaucracy|military-industrial complex]] and give [[whore|welfare]] to needy corporations. * Send in more illegal aliens to work in our factories, rather than shipping the factories to Mexico. "Free trade" is so much fun! [[Image:GOPJEZUS.jpg|left|thumb|180px|Yes! For $335,000.00, you can be driven around in this! ...but you gotta do something about welfare or you will be stuck with your 2 year old Bentley.]] ====[[Racism|Protecting Our Communities]]==== * Eliminate the $60,000,000,000 a month trade deficit by making blacks the nation's #1 product export through government subsidy and [[Holocaust|controlled]] lynching. * Build the [[Iron Curtain|El Paso Wall]] to keep out brown people while still expecting to win Southwestern Purple states by siphoning the Hispanic vote with [[Mormons|magical panties]]. * Repeal the Civil Rights Act of 1964, as it violates States' Rights, undermines Freedom of Association, and destroys our cultural heritage. * Believe that blondes are the only women that are god looking. ====[[Big Brother|Strong Leadership]]==== * Invading countries around the world for weekend [[Dachau|camping]] trips. * Pick a country, doesn't matter which one, after the war, no one will care. And the Democrats will keep fighting said wars, but feel bad about it. * Always pick a scapegoat at the last minute. * Talk about Jesus Christ chose America as "God's chosen people", that also worked in [[Nazi Germany]] to get rid of "[[Jews|Christ killers]]" in the 1930's. * Maybe that CIA-hired Chilean dictator [[Pinochet]] had a good point in getting rid of "liberals", hippies, socialists and Anarchists. ==Modern Usage== [[Image:Kenblackwellgov.jpg‎|thumb|A black Republican|200px]] Res, Rei, Latin n. Thing. *Publican \Pub"li*can\, n. [L. publicanus: cf. F. publicain. See [[Public]].]. (Rom. Antiq.) A farmer of the taxes and public revenues; hence, a collector of toll or tribute. Other etymology is a combination of Pube/Licking. *Republican \Re-pub"li*can\, n. 1. A collector of toll or tribute (taxes) that keeps coming back again, and again, and again, ... ; 2. A remover of the taxes and public revenues (mainly into their or their friends' pockets) ... ; 3. A political party that continuously puts pubes on soda cans, a practice started by [[Clarence Thomas]] from the latter, shunned etymology of Pube Licking as "Pube Licking Things", Res Publica. The term "Republican" refers to a form of [[psychosis]] brought on by excessive [[baby|bed wetting]] and [[sex]] with livestock and very hot bikini clad men who drive [[300|Chrysler 300]]s. The first known Republican was King Frou-frou the [[Impotence|Impotent]] of [[Belgium]] (AD 1443-1465), best remembered for coining the phrase "Finger lickin' good". He was lynched by his subjects for failure to pay excessive [[library]] fines. [[Image:RMurdoch.jpg|thumb|left|You must be this rich to benefit from the Republican Party.]] The term "Republics" is used to refer to the culture of the [[Republican Party]]. This typically consists of middle-aged white males, who are known to thump [[Rush Limbaugh]], and tune into the [[Bible]] on the local Clear Channel radio station at 12:00 Noon EST. In recent times there has been a falling out of the term "Republican" to mean the Republican Party in the vernacular. It is more oft used in the form "That party was Republican." 
Used in the nominative adjectival... adjectivivial... to describe things, it is a party where B.Y.O.P. is included in the invitation. B.Y.O.W. is commonly understood to mean "Bring Your Own Whore", although it has been more common for there to be an exchange of prostitutes at a Republican Party. It has become fashionable to trade prostitutes for favors at Republican Parties. A vibrant culture of [[men|women]] exchange has evolved in recent years. Of particular interest is the "Hooker Exchange," wherein a hooker is left in the coat of the Republican, checked at the coat check, and exchanged by an exchange of the coats. ===Conspiracy?=== Republicans tend to drink more pomegranate than orange juice. There's no Jewish conspiracy, it's all a BIGGER conspiracy of the filthy RICH, which courageous White Knight [[Rand Paul]] has infiltrated by posing as a scumbag. [[Dole|The Fucking Expensive Fruit Party]] of America. ===The LICAN=== [[Image:gop.png|thumb|right|250px|The Grand Oil Party]] There are various [[conspiracy]] theories about the Republican Party's connection with an international [[oil]] cartel, the [[Illuminati]], and the church of [[Ass|Beavis]] Christ. However, there is a much darker goal of the Republican Party that has been hidden in the name of the party itself: Re-pub-lican. [[Image:Rushfat.jpg|thumb|left|150px|....and this fat fuck too.]] The name reveals a program of establishing a federally implemented, funded (although I don't know where the party of tax cuts is going to come up with the money) and administrated Liquor Control Administration Network, known as the LICAN. The Republicans are modeling this network after [[Canada]]'s Liquor Control Board of [[Ontario]]. The sale of liquor would be nationalized under this program, and only [[government]] run pubs would be able to sell [[alcohol]]. The first phase of the program is a temporary [[prohibition]] during which bars, pubs, and liquor stores would be shut down. The second phase, known as the re-pub phase will be the establishment of government run pubs and bars. ==== How it fits into a plan for a permanent Republican majority ==== Similar to the movie [[Warm piss water|Strange Brew]], the Republicans are secretly planning a mind control program through the sale of really cheap [[beer]] (only $1.79 for a can, $3 for a forty and $3.99 for a 6-pack!). Yes, this is part of Karl Rove's and Tom Delay's plan for a [[Monarchy|permanent republican majority]]. The LICAN, after courting several cheap beer companies such as Coors and Budweiser, has chosen Pabst Smears Blue Ribbon to be the national beer. ==== Criticism==== Many Democrats, who have been privy to the secret funding or the LICAN, have expressed outrage. The most outspoken of these has been [[Massachusetts]] [[Senator]] [[Ted Kennedy]], who has said: "[[No one]] with any taste whatsoever, would drink that garbage. Real [[men]] drink [[Guinness]] and Stolichnaya." =="Freedom"== [[Image:Nra_freedom2.jpg|left|thumb|150px|"Freedom" in action. ]] "[[Freedom]]" is the word that Republicans use when they run out of ideas. For example, the war to liberate [[Iraqistan]] is now a [[retard|clusterfuck]]. So, instead of talking about [[WMD|WMDs]] the GOP talks about freedom. Also, "Freedom" is a battle cry for republicans everywhere, even if they believe that women should wear burkas. [[Image:HumanRightsOVerTime.jpg|thumb|As you can see freedom is for fetuses and "freedom" is what you get when you are born. 
To see what I mean, click on this graph.|200px]] This is odd, since freedom contradicts all the things the GOP supports. For example, freedom means [[gay]] people can marry. It also means [[women]] want to "choose" while pregnant out of wedlock. It means that [[black people]] shouldn't have to pick cotton. It means that [[Mexicans]] should be free to cross a [[water|river]] and get a job. It also kind of assumes that [[Arabs]] want freedom, which is a bit like saying that [[dogs]] love piano recitals. Everyone knows that Arabs only want "freedom". So I guess Republicans are half-right. However, the GOP objects for the freedom for gays to be "out of the closet" like that gay German guy [[Bruno]], women to use the pill before they "do it", black people to drink from the same public water fountain (AIDS in the pool!), Mexicans for speaking Spanish in a land where foreigners should say it in English and even A-rabs for practicing a pagan devil-worship occult known as [[Islam]]. ==Official Propaganda Spewer== {{main|Fox News}} Fox News serves as the official Republican propaganda spewer. Fox News criticizes the big 3 TV networks for [[obscenity]], [[profanity]] and [[vulgarity]] on their prime time schedules. But what about FOX network? And most of all, Fox News is owned by an Australian, fair dinkum mate. ==Mascots== [[Image:Vote_republican.jpg|thumb|left|250px|A poster used by Ronald Reagan during his 1984 reelection campaign]] [[Image:Elep4.gif|thumb|right|Look how much fun this elephant is having! Sell your soul to the Republican Part TODAY!]] Darth Vader is the current mascot for the Republican party and has been since he was discovered by a conservative talent agency in 1977. His work has been considered a cornerstone to the GOP victories in 1978, 1980, 1982, 1984, 1988, 1994, 1998, 2000, 2002, 2004, and 2010. (that's like a 2:1 Win-loss record!!) However, after the formally mentioned [[Nancy Pelosi|"thumping"]] last fall, Karl Rove became [[emo|depressed]] and [[angry]]. After a binge on [[fat|vanilla ice cream]], Karl Rove blamed Darth Vader's emerging [[Anakin Skywalker|"girly man"]] image on the GOP's losses and has been debating having him replaced. The current front-runner to replace him, [[Sephiroth]], has been criticized by [[crazy|party activists]] for where he has spirit, he looks too much like a [[homosexual]]. During the next election, they officially chose the mascot to be, in fact, [[your mom]]. Sucks for [[you]]. However, Karl Rove has stated that there has been physically effeminate manly-men before. Some examples include Jeff [[Ganon|Gannon]], Ted Haggard and Kuja (sorta). == Tea Bagging== [[Image:Tea-party-repub.jpg|thumb|right|Tea bagging elephants too.]] After losing the presidential election Republicans found a new way to get their way. Tea bagging. This includes watching [[Glenn Beck]], gathering in central area and tea bagging each other, or burning Qurans. ==Footnotes== <references/> ==See also== *[[A Republican Party]] *[[Fascist-Republican Party]] *[[God-Fearing Republicans]] *[[HowTo:Be a republican]] *[[Republican Jesus]] ==External links== *[http://www.thefrown.com/frowners/becomerepublican.swf Become a Republican in 10 easy steps!] {{Politicalparty}} [[Category:Politics of the United States]] [[Category:Evil Organizations]] [[Category:Nasty Right Wing Bastards]] [[Category:Cleanup]] [[ja:共和党 (アメリカ)]] NOTE: This is a mirror site, the original page (if it still exists) is here. Editing texts on this site will not change the page text on any other server. 
viXra.org > Statistics
Previous months: 2010 - 1003(10) - 1004(7) - 1005(4) - 1006(1) - 1007(2) - 1008(4) - 1010(1) - 1011(1) 2011 - 1105(2) - 1107(1) - 1111(1) - 1112(1) 2012 - 1203(1) - 1204(2) - 1205(1) - 1208(1) - 1210(1) - 1211(6) - 1212(1) 2013 - 1301(2) - 1304(3) - 1306(2) - 1307(1) - 1310(2) 2014 - 1402(1) - 1403(3) - 1404(2) - 1405(2) - 1407(1) - 1409(4) - 1410(4) - 1411(13) - 1412(4) 2015 - 1503(1) - 1505(2) - 1506(2) - 1507(3) - 1508(3) - 1509(1) - 1511(2) - 1512(6) 2016 - 1601(6) - 1602(3) - 1603(4) - 1604(2) - 1605(1) - 1607(5) - 1608(1) - 1609(4) - 1610(1) - 1611(1) - 1612(2) 2017 - 1701(4) - 1702(3) - 1703(5) - 1704(11) - 1705(12) - 1706(7) - 1707(2) - 1708(2) - 1709(1) - 1710(3) - 1711(5) - 1712(6) 2018 - 1801(5) - 1802(3) - 1803(4) - 1804(4) - 1805(3) - 1806(5) - 1807(2) - 1808(1) - 1809(3) - 1810(5) - 1811(4) - 1812(2) 2019 - 1901(3) - 1903(1) - 1904(2) - 1905(4) - 1906(1) - 1907(1)
Any replacements are listed farther down
[247] viXra:1907.0077 [pdf] submitted on 2019-07-04 06:22:03
Expansions of Maximum and Minimum from Generalized Maxwell Distribution
Authors: Jianwen Huang, Xinling Liu, Jianjun Wang
Comments: 13 Pages.
The generalized Maxwell distribution is an extension of the classic Maxwell distribution. In this paper, we concentrate on the joint distributional asymptotics of normalized maxima and minima. Under optimal normalizing constants, asymptotic expansions of the joint distribution and density for normalized partial maxima and minima are established. These expansions are used to deduce speeds of convergence of the joint distribution and density of normalized maxima and minima tending to their corresponding ultimate limits. Numerical analyses are provided to support our results.
Category: Statistics
Formulation of the Classical Probability and Some Probability Distributions Due to Neutrosophic Logic and Its Impact on Decision Making (Arabic Version)
Authors: Rafif Alhabib, Moustafa Mzher Ranna, Haitham Farah, A.A. Salama
Comments: 169 Pages.
The essential core of our research is the application of neutrosophic logic to a part of classical probability theory, by presenting classical probability and some probability distributions according to neutrosophic logic, and then studying the effect of using this logic on the decision-making process, with a continuous comparison between classical logic and neutrosophic logic through the studies and results. The thesis comprises five chapters.
Atherosclerosis is an Infectious Disease
Authors: Ilija Barukčić
Comments: 16 pages. Copyright © 2019 by Ilija Barukčić, Jever, Germany. All rights reserved.
Aim: Rheumatoid arthritis (RA) is associated with increased risk of coronary artery disease (CAD). Studies reported that anti-rheumatic drug usage is associated with decreased risk of CAD events in RA patients. This study was conducted to investigate the effect of some anti-inflammatory drugs (etanercept, leflunomide, etoricoxib) on the development of CAD events among patients with RA using anti-rheumatic drugs in comparison with nonusers. Methods: A systematic review of CAD events was performed in RA patients who used leflunomide, etanercept and etoricoxib, compared with RA patients who did not use these drugs. The exclusion relationship and the causal relationship k were used to test the significance of the result. A p-value of < 0.05 was treated as significant. Results: Among RA patients, use of leflunomide (p (EXCL) = 0,999022483; X2 (EXCL) = 0,06; k = -0,03888389; p-value ( k | HGD) = 0,00037588), etanercept and etoricoxib was associated with significantly decreased incidence of CAD.
The use leflunomide, etanercept and etoricoxib excludes cardiac events in RA patients. Conclusion The results of study provide further support for the infectious hypothesis of atherosclerosis. Key words: atherosclerosis, rheumatoid arthritis, therapy, causal relationship Glyphosate and Non-Hodgkin Lymphoma: no Causal Relationship Objective: Herbicides are used worldwide by both residential and agricultural users. Due to the statistical analysis of some epidemiologic studies the International Agency for Research on Cancer classified the broad-spectrum herbicide glyphosate (GS) in 2015, as potentially carcinogenic to humans especially with respect to non-Hodgkin lymphoma (NHL). In this systematic review and re-analysis, the relationship between glyphosate and NHL was re- investigated. Methods: A systematic review and re-analysis of studies which investigated the relationship between GS and NHL was conducted. The method of the conditio sine qua non relationship, the method of the conditio per quam relationship, the method of the exclusion relationship and the mathematical formula of the causal relationship k were used to proof the hypothesis. Significance was indicated by a p-value of less than 0.05. Results: The studies analyzed do not provide any direct and indirect evidence that NHL is caused GS. Conclusion: In this re-analysis, no causal relationship was apparent between glyphosate and NHL and its subtypes. Keywords: Glyphosate, Non-Hodgkin lymphoma, no causal relationship Conjecture Sur Les Familles Exponentielles Authors: Idriss olivier BADO Comments: 5 Pages. in this article we will establish some properties of random variables and then we will propose a conjecture related to the exponential family. This conjecture seems interesting to me. Our results are based on the consideration of continuous random variables $X_{i}$ defined on the same space $\Omega$ and the same super-extra density law of parameter $\theta_{i} $ and canonique function $T$ Let $n\in \mathbb{N}^{*}$ Considering the random variable $J$ and $I$ a subsect of $\{1,2,..n\}$ such that : $ X_{J}=\inf_{i\in I}(X_{i})$ we show that : $$\forall i\in I:\mathbb{P}( J=i)=\frac{\theta_{i}\prod_{j\in I}c(\theta_{j})}{\sum_{j\in I}\theta_{j}}\int_{T(\Omega)}e^{-x}dx$$. We conjecture that if the density of $ X_{i}$ is $ c(\theta_{i})e^{-\theta_{i}T(x)}\mathbf{1}_{\Omega}(x)$ Hence $\exists h,r$ two functions h such that $$ \forall i\in I:\mathbb{P}( J=i)=\frac{r(\theta_{i})\prod_{j\in I}h(\theta_{j})}{\sum_{j\in I}r(\theta_{j})}\int_{T(\Omega)}e^{-x}dx$$ Modelling Passive Forever Churn via Bayesian Survival Analysis Authors: Gavin Steininger This paper presents an approach to modelling passive forever churn (i.e., the probability that a user never returns to a game that does not require them to cancel it). The approach is based on parametric mixture models (Weibull, Gamma, and Log-normal) for return times. The model and data are inverted using Bayesian methods (MCMC and DIC) to get parameter estimates, uncertainties, as well as determine the return time distribution for retained users. The inversion scheme is tested on three groups of simulated data sets and one observed data set. The simulated data are generated with each of the parametric models. Each data set is censored to six time horizons, creating 18 data sets. All data sets are inverted with all three parametric models and the DIC is used to select the return time distribution. 
For all data sets the true return time distribution (i.e., the one that is used to simulate the data) has the best DIC value; for 16 inversions the true return time distribution is found to be significantly better than the other options. For the observed data set inversion, the scheme is able to accurately estimate the \% of users that did return (before the game transitioned into open beta) to given 14 days of observations. Remark on Possible Use of Quadruple Neutrosophic Numbers for Realistic Modelling of Physical Systems Authors: Victor Christianto, Florentin Smarandache Comments: 6 Pages. This paper has been submitted to Axioms journal (MDPI). Comments are welcome During mathematical modeling of real technical system we can meet any type and rate model uncertainty. Its reasons can be incognizance of modelers or data inaccuracy. So, classification of uncertainties, with respect to their sources, distinguishes between aleatory and epistemic ones. The aleatory uncertainty is an inherent data variation associated with the investigated system or its environment. Epistemic one is an uncertainty that is due to a lack of knowledge of quantities or processes of the system or the environment. In this short communication, we discuss quadruple neutrosophic numbers and their potential application for realistic modelling of physical systems, especially in the reliability assessment of engineering structures. The Theorems of Rao--Blackwell and Lehmann--Scheffe, Revisited Authors: Hazhir Homei It has been stated in the literature that for finding uniformly minimum-variance unbiased estimator through the theorems of Rao-Blackwell and Lehmann-Scheffe, the sufficient statistic should be complete; otherwise the discussion and the way of finding uniformly minimum-variance unbiased estimator should be changed, since the sufficiency assumption in the Rao-Blackwell and Lehmann-Scheffe theorems limits its applicability. So, it seems that the sufficiency assumptions should be expressed in a way that the uniformly minimum-variance unbiased estimator be derivable via the Rao-Blackwell and Lehmann-Scheffe theorems. Deriving Spin Signatures and the Components of Movement from Trackman Data for Pitcher Evaluation Authors: Glenn Healey We derive spin signatures and the components of movement from Trackman data. Smoking is the Cause of Lung Cancer Comments: 26 pages. Copyright © 2019 by Ilija Barukčić, Jever, Germany. All rights reserved. Published 15.2.2019 by Journal of Drug Delivery and Therapeutics, 9(1-s), 148-160. https://doi.org/10.22270/jddt.v9i1-s.2273 Objective: The aim of this study is to re-evaluate the relationship between smoking and lung cancer. Methods: In order to clarify the relationship between cigarette smoking and lung cancer, a review and meta-analysis of appropriate studies with a total sample size of n = 48393 was conducted. The p-value was set to p < 0,05. Results. It was not possible to reject the null-hypothesis H0: without smoking no lung cancer. Furthermore, the null-hypothesis H0: No causal relationship between smoking and lung cancer was rejected. Conclusions Compared to the results from previous studies, the results of this study confirm previously published results. According the results of this study, without smoking no lung cancer. Smoking is the cause of lung cancer. Keywords: Smoking, lung cancer, causal relationship Index of Unfairness Comments: Comments: 63 pages. Copyright © 2019 by Ilija Barukčić, Jever, Germany. All rights reserved. 
Published by Objective: Objective scientific knowledge for many authors more valuable than true subjective belief is determined by research on primary data but a renewed analysis of already recorded or published data is common too. Ever since, an appropriate experimental or study design is an important and often a seriously underappreciated aspect of the informativeness and the scientific value of any (medical) study. The significance of study design for the reliability of the conclusions drawn and the ability to generalize the results from the sample investigated for the whole population cannot be underestimated. In contrast to an inappropriate statistical evaluation of a medical study, it is difficult to correct errors in study design after the study has been completed. Various mathematical aspects of study design are discussed in this article. Methods: In assessing the significance of a fair study design of a medical study, important measures of publication bias are introduced. Methods of data or publication bias analysis in different types of studies are illustrated through examples with fictive data. Formal mathematical requirements of a fair study design which can and should be fulfilled carefully with regard to the planning or evaluation of medical research are developed. Results. Various especially mathematical aspects of a fair study design are discussed in this article in detail. Depending on the particular question being asked, mathematical methods are developed which allow us to recognize data which are self-contradictory and to exclude these data from systematic literature reviews and meta-analyses. As a result, different individual studies can be summed up and evaluated with a higher degree of certainty. Conclusions This article is intended to give the reader guidance in evaluating the design of studies in medical research even ex post which should enable the reader to categorize medical studies better and to assess their scientific quality more accurately. Keywords: study design, quality, study, study type, measuring technique, publication bias Some Neutrosophic Probability Distributions In this paper, we introduce and study some neutrosophic probability distributions, The study is done through generalization of some classical probability distributions as Poisson distribution, Exponential distribution and Uniform distribution, this study opens the way for dealing with issues that follow the classical distributions and at the same time contain data not specified accurately. Compressed Monte Carlo for Distributed Bayesian Inference Authors: L. Martino, V. Elvira Bayesian models have become very popular over the last years in several fields such as statistics, signal processing, and machine learning. Bayesian inference needs the approximation of complicated integrals involving the posterior distribution. For this purpose, Monte Carlo (MC) methods such as Markov Chain Monte Carlo (MCMC) and Importance Sampling (IS) algorithms, are often employed. In this work, we introduce a compressed MC (C-MC) scheme in order to compress the information obtained previously by MC sampling. The basic C-MC version is based on the stratification technique, well-known for variance reduction purposes. Deterministic C-MC schemes are also presented, which provide very good performance. The compression problem is strictly related to moment matching approach applied in different filtering methods, often known as Gaussian quadrature rules or sigma-point methods. 
The connections to herding algorithms and quasi-Monte Carlo perspective are also discussed. C-MC is particularly useful in a distributed Bayesian inference framework, when cheap and fast communications with a central processor are required. Numerical results confirm the benefit of the introduced schemes, outperforming the corresponding benchmark methods. On the Distributional Expansions of Powered Extremes from Maxwell Distribution Authors: Jianwen Huang, Jianjun Wang, Zhongquan Tan, Jingyao Hou, Hao Pu In this paper, asymptotic expansions of the distributions and densities of powered extremes for Maxwell samples are considered. The results show that the convergence speeds of normalized partial maxima relies on the powered index. Additionally, compared with previous result, the convergence rate of the distribution of powered extreme from Maxwell samples is faster than that of its extreme. Human Papillomavirus is the Cause of Human Prostate Cancer Comments: 31 pages. Copyright © 2018 by Ilija Barukčić, Jever, Germany. All rights reserved. Published by Background: Human papillomavirus (HPV) has an important role in the oncogenesis of several malignant diseases. Some observational studies demonstrated the presence of HPV even in human prostate cancer (PC) while other studies failed on this point. The relationship between HPV infection and PC remains unclear. The aim of the present meta-analysis study is to investigate whether HPV serves as a cause or as the cause of PC. Methods: The PubMed database was searched for suitable articles. Previously published expert reviews and systematic meta-analysis were used as an additional source to identify appropriate articles. Articles selected for this meta-analysis should fulfill the following inclusion criteria: (a) no data access barrier, (b) polymerase chain reaction (PCR) DNA based identification of HPV. The method of the conditio sine qua non relationship was used to prove the hypotheses whether being married is a necessary condition (a conditio sine qua non) of PC. In other words, without being married no PC. The method of the conditio per quam relationship (sufficient condition) was used to prove the hypotheses if HPV is present in human prostate tissues then PC is present too. The mathematical formula of the causal relationship k was used to prove the hypothesis, whether there is a cause effect relationship between HPV and PC. Significance was indicated by a p-value (two sided) of less than 0.05. Results: In toto more than 136 000 000 cases and controls were re-analysed while more than 33 studies were considered for a meta-analysis. Several studies support the hypotheses without being married no PC. All the studies considered for a re-analyses support the null-hypotheses if HPV then PC, while the cause effect relationship between HPV and PC was highly significant. Conclusions: Human papillomavirus is the cause of human prostate cancer. Neutrosophic Decision Making & Neutrosophic Decision Tree (Arabic Version). Authors: A. A. Salama, Rafif Alhabib In this research, we present neutrosophic decision-making, which is an extension of the classical decision-making process by expanding the data to cover the non-specific cases ignored by the classical logic, which in fact support the decision-making problem. 
The lack of information besides its inaccuracy is an important constraint affecting The effectiveness of the decision-making process, and we will rely on the decision tree model, which is one of the most powerful mathematical methods used to analyze many decisionmaking problems, where we extend it according to the neutrosophic logic by adding some indeterminate data (in the absence of probability) or by substituting the classical probabilities with the neutrosophic probabilities (in case of probability). We call this extended model the neutrosophic decision tree, which results in its use to reach the best decision among the available alternatives because it is based on data that is more general and accurate than the classical model. Modeling Distributional Time Series by Transformations Authors: Zhicheng Chen Probability distributions play a very important role in many applications. This paper describes a modeling approach for distributional time series. Probability density functions (PDFs) are approximated by real-valued vectors via successively applying the log-quantile-density (LQD) transformation and functional principal component analysis (FPCA); state-space models (SSMs) for real-valued time series are then applied to model the evolution of PCA scores, corresponding results are mapped back to the PDF space by the inverse LQD transformation. SITUATIONAL UNDERLYING VALUE (SUV) - "Proof of Principle" for a Statistic to Measure Clutch Performance by Individuals in Major League Baseball, Professional Football (NFL) and NCAA Men's College Basketball Authors: Raymond H Gallucci Comments: 126 Pages. Combines previously separate vixra entries 1809.0411 + 0410 + 0409 into one document. In Situational Underlying Value (SUV) for Baseball, Football and Basketball: A Statistic to Measure Individual Performance in Team Sports, an all-encompassing, overall statistic to measure performance by individual players in the team sports of major league baseball, professional football (NFL), and NCAA men's college basketball was developed. This work supplements and extends the development and initial demonstrations of the use of the SUV statistic for these three team sports by tracking the performance of three specific teams in these three sports over a significant portion of their most recent seasons: (1) for major league baseball, 54 of the 162 games played by the Seattle Mariners in 2017; (2) for professional football, five of the 16 games played by the Seattle Seahawks in 2017; and (3) for NCAA Men's College Basketball, the five games played by the Loyola of Chicago Ramblers in the 2018 NCAA Division I Men's Basketball Tournament. The SUV statistics for the players who participated in these games are tracked and accumulated for comparison among themselves and, for those who participated in a significant portion of these games, further compared against the traditional statistics for each team over the entire season (or, in the case of the Loyola of Chicago Ramblers, the complete five games of the Basketball Tournament). The goal is to examine the efficacy of this one overarching statistic, the SUV, in representing player performance "in the clutch" vs. more subjective interpretation of the myriad of different "traditional" statistics currently used. Anomalies between the SUV and traditional statistics results are examined and explained, to the extent practicable given the scope of the SUV analysis (partial seasons). 
Whether or not this effort proves successful is left to the reader's conclusion based on t he results and comparisons performed. A Third Note on Bell's Theorem Authors: Han Geurdes In the paper it is demonstrated that Bell's formula for {+1,-1} measurement functions is inconsistent. Epstein Barr Virus and Atrial Fibrillation – a Causal Link? Comments: 17 pages. Copyright © 2018 by Ilija Barukčić, Jever, Germany. All rights reserved. Published by Modern Health Science, 2019; 2(1): 1-15. https://doi.org/10.30560/mhs.v2n1p1 Objective: Atrial fibrillation (AF) is very frequent and clinically significant arrhythmia. The incidence of atrial fibrillation is continuously rising. Meanwhile several risk factors for AF development have been identified but the etiology is not cleared. Methods: A systematic review and re-analysis of studies which investigated the relationship between AF and some risk factors was conducted. The method of the conditio sine qua non relationship, the method of the conditio per quam relationship, the method of the exclusion relationship and the mathematical formula of the causal relationship k were used to proof the hypothesis. Significance was indicated by a p-value of less than 0.05. Results: The studies analysed were able to provide direct and indirect evidence that AF is caused by a process of inflammation while a direct identification of the factor causing AF was not possible. Still, it appears to be very probable that Epstein-Barr virus (EBV) is the cause of AF. Conclusion: Atrial fibrillation (AF) is caused by an inflammatory process. Keywords: Epstein-Barr virus, atrial fibrillation, causal relationship Situational Underlying Value (Suv) Statistic for Major League Baseball – Defense Authors: Raymond HV Gallucci Comments: 55 Pages. Third in the SUV series, addressing defensive SUV (and revised pitching SUV) for major league baseball, including "proof of principle." In Situational Underlying Value for Baseball, Football and Basketball – A Statistic (SUV) to Measure Individual Performance in Team Sports, an all-encompassing, overall statistic to measure "clutch" performance by individual players in the team sports of major league baseball, professional football, and NCAA men's college basketball was developed, called "Situational Underlying Value" (SUV). [1] Based on the concept of "run expectancy," it assigns an SUV to each base as a function of the number of outs, including a negative value for making an out. There, and in "Proof of Principle" for Situational Underlying Value (SUV) – A Statistic to Measure Clutch Performance by Individuals in the Team Sports of Major League Baseball, [2] it was applied exclusively to hitters and pitchers. Its derivation is explained in Reference 1, with additional discussion provided in Reference 2. Reference 1 provides two example games to show how the SUVs accrue for each team's hitters and pitchers. The goal of this third work, which includes "Proof of Principle" as in Reference 2, is to track the performance of a team's individual fielders defensively, including an enhancement to the approach for pitching previously developed in Reference 2, over the same substantial portion of an entire season. One-third of the 2017 season, i.e., 54 games, have again been selected for the Seattle Mariners, starting with Game 002 and tracking every third game up through Game 161. The SUVs are based on the play-by-play descriptions provided in "Baseball Reference," https://www.baseball-reference.com/. 
[3] Summary SUV analyses for all 54 games are provided below, with a roll-up of cumulative SUVs every six games. Also, the actual play-by-plays are included for every third game analyzed, starting from Game 005 (i.e., for Games 005, 014, 023, 032, 041, 050, 059, 068, 077, 086, 095, 104, 113, 122, 131, 140, 149 and 158), providing a total of 18 games. For the rest (which are only summarized), the reader should consult Reference 3. There is an important change to the above table for defensive tracking, also applied to the enhanced pitching analysis, namely the reversal of all the SUV signs. This enables "positive" defensive outcomes (outs) to be assigned positive SUVs, while "negative" outcomes (reaching base or advancing) are assigned negative SUVs. The assignment of defensive SUVs is somewhat more involved than that for hitting. Specific assignments, most frequently encountered, are presented below.

Stochastic Spline Functions with Unequal Time Steps
Authors: Stephen P. Smith
A piecewise quadratic spline is introduced as a time series coming with unequal time steps, where the second derivative of the spline at the junction points is impacted by random Brownian motion. A measurement error is also introduced, and this changes the spline into semi-parametric regression. This makes a total of two dispersion parameters to be estimated by a proposed REML analysis that unitizes the K-matrix. The spline itself has only three location effects, which are treated as fixed and must be estimated. A proposed prediction of a future observation beyond the spline's end point is presented, coming with a prediction error variance.

Boring Boolean Circuit
Authors: Thinh D. Nguyen
We survey the problem of deciding whether a given Boolean circuit is boring.

Gastric Cancer and Epstein-Barr Virus Infection. A Systematic Review of ISH Based Studies.
Comments: 22 Pages. Copyright © 2018 by Ilija Barukčić, Jever, Germany. Published by Modern Health Science, 2018; Vol. 1, No. 2 ( https://j.ideasspread.org/index.php/mhs/article/view/160 )
Background: Epstein-Barr virus (EBV) has an important role in the oncogenesis of several malignant diseases. Reports have demonstrated even the presence of Epstein-Barr virus in gastric carcinoma (GC). However, the pathogenic role of EBV in GC is uncertain. The present investigation was carried out to investigate a possible causal relationship between GC and EBV. Statistical Analysis: The method of the conditio sine qua non relationship was used to prove the hypothesis whether gastric cancer is a necessary condition (a conditio sine qua non) of the presence of EBV in human gastric tissues. In other words, without GC no EBV positivity in the human stomach. The mathematical formula of the causal relationship k was used to prove the hypothesis whether there is a cause-effect relationship between gastric cancer and EBV. Significance was indicated by a p-value (two sided) of less than 0.05. Results: In toto, 26 ISH based studies with a sample size of N = 11860 were re-analyzed. All the studies analyzed support the null-hypothesis without GC no EBV positivity in human gastric tissues. In other words, gastric cancer itself is a conditio sine qua non of EBV positivity in human gastric cancer, while the cause-effect relationship between gastric cancer and EBV was highly significant. Conclusions: Epstein-Barr virus is neither a cause nor the cause of human gastric cancer.
Keywords: Gastric cancer, Epstein-Barr virus, cause effect relationship, causality Sunburn and Malignant Melanoma Comments: 25 Pages. pp. 25. Copyright © 2018 by Ilija Barukčić, Jever, Germany. Published by Background: Unfortunately, despite recent scientific advances the cause of malignant melanoma is not identified. The incidence of malignant melanoma increases and malignant melanoma continues to represent a significant individual and public health challenge. Objectives: In this systematic review studies were re-analyzed so that some new inferences can be drawn. Statistical Analysis: The method of the conditio per quam relationship was used to proof the hypothesis whether the presence of human papillomavirus guarantees the presence of malignant melanoma. In other words, if human papillomavirus is present, then malignant melanoma is present too. The mathematical formula of the causal relationship k was used to proof the hypothesis, whether there is a cause effect relationship between human papillomavirus and malignant melanoma. Significance was indicated by a p-value of less than 0.05. Results: The studies analyzed support the null-hypothesis that the presence of human papillomavirus guarantees the presence of malignant melanoma. In other words, human papillomavirus is a conditio per quam of malignant melanoma while the cause effect relationship between human papillomavirus and malignant melanoma was highly significant. Conclusions: Human papillomavirus is a sufficient condition of malignant melanoma. Human papillomavirus is a cause of malignant melanoma. Keywords: Human papillomavirus, malignant melanoma, cause effect relationship, causality "PROOF of PRINCIPLE" for Situational Underlying Value (Suv) – a Statistic to Measure Clutch Performance by Individuals in the Team Sports of Major League Baseball, Professional Football (NFL) and Ncaa Men's College Basketball (Part 1) Comments: 42 Pages. Part 1 of Three-Part Document, Separated due to Size Limitations NOTE: Due to size limitations, this has been published in three separate parts, with the abstract and references to all three parts included with each. This is Part 1, directed exclusively to Major League Baseball. In Situational Underlying Value for Baseball, Football and Basketball – A Statistic (SUV) to Measure Individual Performance in Team Sports, an all-encompassing, overall statistic to measure "clutch" performance by individual players in the team sports of major league baseball, professional football (NFL), and NCAA men's college basketball was developed, called "Situational Underlying Value" (SUV). This work supplements and extends the development and initial demonstrations of the use of the SUV statistic for these three team sports by tracking the performance of three specific teams in these three sports over a significant portion of their most recent seasons: (1) for major league baseball, 54 of the 162 games played by the Seattle Mariners in 2017; (2) for professional football, five of the 16 games played by the Seattle Seahawks in 2017; and (3) for NCAA Men's College Basketball, the five games played by the Loyola of Chicago Ramblers in the 2018 NCAA Division I Men's Basketball Tournament. 
The SUV statistics for the players who participated in these games are tracked and accumulated for comparison among themselves and, for those who participated in a significant portion of these games, further compared against the traditional statistics for each team over the entire season (or, in the case of the Loyola of Chicago Ramblers, the complete five games of the Basketball Tournament). The goal is to examine the efficacy of this one overarching statistic, the SUV, in representing player performance "in the clutch" vs. more subjective interpretation of the myriad of different "traditional" statistics currently used. Anomalies between the SUV and "traditional" statistics results are examined and explained, to the extent practicable given the scope of the SUV analysis (partial seasons). Whether or not this effort proves successful is left to the reader's conclusion based on the results and comparisons performed. Comments: 59 Pages. Part 2 of Three-Part Document, Separated due to Size Limitation NOTE: Due to size limitations, this has been published in three separate parts, with the abstract and references to all three parts included with each. This is Part 2, directed exclusively to Professional Football (NFL). In Situational Underlying Value for Baseball, Football and Basketball – A Statistic (SUV) to Measure Individual Performance in Team Sports, an all-encompassing, overall statistic to measure "clutch" performance by individual players in the team sports of major league baseball, professional football (NFL), and NCAA men's college basketball was developed, called "Situational Underlying Value" (SUV). This work supplements and extends the development and initial demonstrations of the use of the SUV statistic for these three team sports by tracking the performance of three specific teams in these three sports over a significant portion of their most recent seasons: (1) for major league baseball, 54 of the 162 games played by the Seattle Mariners in 2017; (2) for professional football, five of the 16 games played by the Seattle Seahawks in 2017; and (3) for NCAA Men's College Basketball, the five games played by the Loyola of Chicago Ramblers in the 2018 NCAA Division I Men's Basketball Tournament. The SUV statistics for the players who participated in these games are tracked and accumulated for comparison among themselves and, for those who participated in a significant portion of these games, further compared against the traditional statistics for each team over the entire season (or, in the case of the Loyola of Chicago Ramblers, the complete five games of the Basketball Tournament). The goal is to examine the efficacy of this one overarching statistic, the SUV, in representing player performance "in the clutch" vs. more subjective interpretation of the myriad of different "traditional" statistics currently used. Anomalies between the SUV and "traditional" statistics results are examined and explained, to the extent practicable given the scope of the SUV analysis (partial seasons). Whether or not this effort proves successful is left to the reader's conclusion based on the results and comparisons performed. NOTE: Due to size limitations, this has been published in three separate parts, with the abstract and references to all three parts included with each. This is Part 3, directed exclusively to NCAA Men's College Basketball. 
In Situational Underlying Value for Baseball, Football and Basketball – A Statistic (SUV) to Measure Individual Performance in Team Sports, an all-encompassing, overall statistic to measure "clutch" performance by individual players in the team sports of major league baseball, professional football (NFL), and NCAA men's college basketball was developed, called "Situational Underlying Value" (SUV). This work supplements and extends the development and initial demonstrations of the use of the SUV statistic for these three team sports by tracking the performance of three specific teams in these three sports over a significant portion of their most recent seasons: (1) for major league baseball, 54 of the 162 games played by the Seattle Mariners in 2017; (2) for professional football, five of the 16 games played by the Seattle Seahawks in 2017; and (3) for NCAA Men's College Basketball, the five games played by the Loyola of Chicago Ramblers in the 2018 NCAA Division I Men's Basketball Tournament. The SUV statistics for the players who participated in these games are tracked and accumulated for comparison among themselves and, for those who participated in a significant portion of these games, further compared against the traditional statistics for each team over the entire season (or, in the case of the Loyola of Chicago Ramblers, the complete five games of the Basketball Tournament). The goal is to examine the efficacy of this one overarching statistic, the SUV, in representing player performance "in the clutch" vs. more subjective interpretation of the myriad of different "traditional" statistics currently used. Anomalies between the SUV and "traditional" statistics results are examined and explained, to the extent practicable given the scope of the SUV analysis (partial seasons). Whether or not this effort proves successful is left to the reader's conclusion based on the results and comparisons performed. Studying the Random Variables According to Neutrosophic Logic. ( Arabic ) Authors: Rafif Alhabib, Moustaf Amzherranna, Haitham Farah, A.A. Salama We present in this paper the neutrosophic randomized variables, which are a generalization of the classical random variables obtained from the application of the neutrosophic logic (a new nonclassical logic which was founded by the American philosopher and mathematical Florentin Smarandache, which he introduced as a generalization of fuzzy logic especially the intuitionistic fuzzy logic ) on classical random variables. The neutrosophic random variable is changed because of the randomization, the indeterminacy and the values it takes, which represent the possible results and the possible indeterminacy.Then we classify the neutrosophic randomized variables into two types of discrete and continuous neutrosophic random variables and we define the expected value and variance of the neutrosophic random variable then offer some illustrative examples. Human Cytomegalovirus: The Cause Of IgA Nephropathy. Comments: 15 pages. Copyright © 2018 by Ilija Barukčić, Jever, Germany. Published by:. Objective: The present publication investigates the relationship between the presence of Human cytomegalovirus (HCMV) in renal tissue and IgA Nephropathy (IgAN). Methods: A systematic review and re-analysis of studies which investigated the relationship between HCMV and IgAN was conducted aimed to answer the following question. Is there a cause-effect relationship between HCMV and IgAN? 
The method of the conditio sine qua non relationship was used to prove the hypothesis that the presence of HCMV guarantees the presence of IgAN. In other words, without HCMV no IgAN. The mathematical formula of the causal relationship k was used to prove the hypothesis whether there is a cause-effect relationship between HCMV and IgAN. Significance was indicated by a p-value of less than 0.05. Results: The studies analysed were able to provide strict evidence that HCMV is a necessary condition (a conditio sine qua non), a sufficient condition and a necessary and sufficient condition of IgAN. Furthermore, the cause-effect relationship between HCMV and IgAN (N=37, k = +0.514619883, p value (k) = 0.001746216) was highly significant. Conclusions: On the basis of published data and ongoing research, sufficient evidence is given to conclude that HCMV is the cause of IgA Nephropathy. Keywords: Cytomegalovirus, IgA Nephropathy, Causal relationship

Human Papillomavirus – the Cause of Prostate Cancer
Comments: 9 Pages. Copyright © 2018 by Ilija Barukčić, Jever, Germany. Published by:
BACKGROUND: Several observational studies have investigated the relationship between human papillomavirus (HPV) infection and the risk of prostate cancer (PC) and have suggested conflicting results about this relationship. However, the relationship between HPV infection and PC remains unclear. The aim of the present meta-analysis study is to investigate whether HPV serves as a cause of PC. METHODS: The PubMed database was searched for suitable articles. Previously published expert reviews and systematic reviews were used as an additional source to identify appropriate articles. Articles selected for this meta-analysis should fulfil the following inclusion criteria: (a) no data access barrier, (b) PCR DNA based identification of HPV. RESULTS: The studies analysed were able to provide evidence that without being married there is no (HPV infection of a man / prostate cancer). The X² value of the total of 20 articles indicated a significant causal relationship between HPV and PC. In other words, if HPV infection of the human prostate, then prostate cancer. CONCLUSION: In conclusion, human papillomavirus is the cause of prostate cancer. KEYWORDS: Human papillomavirus, prostate cancer, causal relationship, causality

A Boundary Value Problem of the Partial Differential-Integral Equations and Their Applications
Authors: Kim Ju Gyong, Ju Kwang Son
We study the boundary value problem of partial differential-integral equations that have many applications in finance and insurance. We solve a boundary value problem of the partial differential-integral equations by using the solution of the conjugate equation and the reflection method, and apply it to determine the probability of company bankruptcy in insurance mathematics.

On the Representation Theorem for the Stochastic Differential Equations with Fractional Brownian Motion and Jump by Probability Measures Transform
Authors: Kim Ju Gyong, Ju Kwang Son
In this paper we prove a Girsanov theorem for fractional Brownian motion and jump measures and consider a representation form for the stochastic differential equations in the transformed probability space.
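Several of the re-analysis abstracts listed here (for example, the HCMV / IgA nephropathy study above, which reports N, k and a p-value) test a "conditio sine qua non" relationship and a "causal relationship k" on 2x2 contingency tables. A minimal sketch of how such a table-level check might look, assuming k is computed like the phi (Matthews) correlation coefficient of the table and that its significance is assessed through the usual X² = N·k² relation with one degree of freedom; the original papers define their own formulas, so the function names and statistics below are illustrative assumptions rather than the authors' exact method.

```python
import numpy as np
from scipy.stats import chi2

def causal_k(table):
    """Phi-like association coefficient for a 2x2 table [[a, b], [c, d]].

    Rows: exposure present / absent; columns: outcome present / absent.
    This mirrors, but is not guaranteed to equal, the "causal relationship k"
    used in the abstracts above (assumption).
    """
    (a, b), (c, d) = np.asarray(table, dtype=float)
    num = a * d - b * c
    den = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

def conditio_sine_qua_non(table):
    """Reads "without exposure, no outcome" as: the unexposed-with-outcome
    cell c must be zero (illustrative, deterministic reading)."""
    (_, _), (c, _) = np.asarray(table, dtype=float)
    return c == 0

def k_with_p_value(table):
    """Two-sided p-value for k via X^2 = N * k^2 with 1 degree of freedom."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    k = causal_k(t)
    return k, chi2.sf(n * k * k, df=1)

# Toy 2x2 table of exposure (rows) versus outcome (columns) counts.
table = [[20, 5],
         [2, 30]]
k, p = k_with_p_value(table)
print(f"k = {k:.3f}, p = {p:.2e}, sine qua non holds: {conditio_sine_qua_non(table)}")
```

With the toy counts above nearly all outcomes fall in the exposed row, so k comes out strongly positive; swapping the two rows flips its sign.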
Topological Detection of Wideband Anomalies Authors: Russell Leidich Successive real-valued measurements of any physical chaotic oscillator can serve as entropy inputs to a random number generator (RNG) with correspondingly many whole numbered outputs of arbitrarily small bias, assuming that no correlation exists between successive such measurements apart from what would be implied by their probability distribution function (AKA the oscillator's analog "generator", which is constant over time and thus asymptotically discoverable). Given some historical measurements (a "snapshot") of such an oscillator, we can then train the RNG to expect inputs distributed uniformally among the real intervals defined by those measurements and spanning the entire real line. Each interval thus implies an index in sorted order, starting with the leftmost which maps to zero; the RNG does nothing more than to perform this mapping. We can then replace that first oscillator with a second one presumed to abide by the same generator. It would then be possible to characterize the accuracy of that presumption by quantifying the ensuing change in quality of the RNG. Randomness quality is most accurately expressed via dyspoissonism, which is a normalized equivalent of the log of the number of ways in which a particular distribution of frequencies (occurrence counts) of masks (whole numbers) can occur. Thus the difference in dyspoissonism between the RNG output sets will serve to estimate the information divergence between theirrespective generators, which in turn constitutes a ranking quantifier for the purpose of anomaly detection. Foundation of Neutrosophic Crisp Probability Theory This paper deals with the application of Neutrosophic Crisp sets (which is a generalization of Crisp sets) on the classical probability, from the construction of the Neutrosophic sample space to the Neutrosophic crisp events reaching the definition of Neutrosophic classical probability for these events. Then we offer some of the properties of this probability, in addition to some important theories related to it. We also come into the definition of conditional probability and Bayes theory according to the Neutrosophic Crisp sets, and eventually offer some important illustrative examples. This is the link between the concept of Neutrosophic for classical events and the neutrosophic concept of fuzzy events. These concepts can be applied in computer translators and decision-making theory. Epstein-Barr Virus is the Cause of Multiple Sclerosis. Comments: 24 Pages. pp. 28. Copyright © 2018 by Ilija Barukčić, Jever, Germany. Published by: International Journal of Current Medical and Pharmaceutical Research ( http://journalcmpr.com/issues/epstein-barr-virus-cause-multiple-sclerosis ) Objective: This systematic review assesses once again the causal relationship between Epstein-Barr virus (EBV) and multiple sclerosis (MS) for gaining a better understanding of the pathogenesis of this disease. Methods: A systematic review and meat-analysis of some studies is provided aimed to answer among other questions the following question. Is there a cause effect relation-ship between Epstein-Barr virus and multiple sclerosis? The method of the conditio sine qua non relationship was used to proof the hypothesis without Epstein-Barr virus no multiple sclerosis. In other words, if multiple sclerosis is present, then Epstein-Barr virus is present too. 
The mathematical formula of the causal relationship k was used to proof the hypothesis, whether there is a cause effect relationship between Epstein-Barr virus and multiple sclerosis. Significance was indicated by a p-value of less than 0.05. Result: The studies analyzed were able to provide evidence that Epstein-Barr virus is a necessary condition (a conditio sine qua non) of multiple sclerosis. Furthermore, the studies analyzed provide impressive evidence of a cause-effect relationship between Epstein-Barr virus and multiple sclerosis. Conclusion: Epstein-Barr virus the cause of multiple sclerosis. Keywords Epstein-Barr virus, multiple sclerosis, cause effect relationship, causality Causal Inference for Survival Analysis Authors: Vikas Ramachandra In this paper, we propose the use of causal inference techniques for survival function estimation and prediction for subgroups of the data, upto individual units. Tree ensemble methods, specifically random forests were modified for this purpose. A real world healthcare dataset was used with about 1800 patients with breast cancer, which has multiple patient covariates as well as disease free survival days (DFS) and a death event binary indicator (y). We use the type of cancer curative intervention as the treatment variable (T=0 or 1, binary treatment case in our example). The algorithm is a 2 step approach. In step 1, we estimate heterogeneous treatment effects using a causalTree with the DFS as the dependent variable. Next, in step 2, for each selected leaf of the causalTree with distinctly different average treatment effect (with respect to survival), we fit a survival forest to all the patients in that leaf, one forest each for treatment T=0 as well as T=1 to get estimated patient level survival curves for each treatment (more generally, any model can be used at this step). Then, we subtract the patient level survival curves to get the differential survival curve for a given patient, to compare the survival function as a result of the 2 treatments. The path to a selected leaf also gives us the combination of patient features and their values which are causally important for the treatment effect difference at the leaf. Elements of Geostatistics Authors: Abdelmajid Ben Hadj Salem Comments: 47 Pages. In French. It is a short lectures of Geostatistics giving some elements of this field for third-year students of the Geomatics license of the Faculty of Sciences of Tunis. A Geometric-Probabilistic Problem About the Lengths of the Segments Intersected in Straights that Randomly Cut a Triangle. Authors: Jesús Álvarez Lobo Comments: 6 Pages. Journal Citation: arXiv:1602.03005v1 If a line cuts randomly two sides of a triangle, the length of the segment determined by the points of intersection is also random. The object of this study, applied to a particular case, is to calculate the probability that the length of such segment is greater than a certain value. Parvovirus B19 the Cause of Systemic Sclerosis. Comments: 28 Pages. pp. 28. Copyright © 2017 by Ilija Barukčić, Jever, Germany. Published by: Objective: Parvovirus B19 appears to be associated with several diseases, one among those appears to be systemic sclerosis. Still, there is no evidence of a causal link be-tween parvovirus B19 and systemic sclerosis. Methods: To explore the cause-effect relationship between Parvovirus B19 and sys-temic sclerosis, a systematic review and re-analysis of studies available and suitable was performed. 
The method of the conditio sine qua non relationship was used to prove the hypothesis without Parvovirus B19 infection no systemic sclerosis. The mathematical formula of the causal relationship k was used to prove the hypothesis whether there is a cause-effect relationship between Parvovirus B19 and systemic sclerosis. Significance was indicated by a p-value of less than 0.05. Result: The data analyzed support the null-hypothesis that without Parvovirus B19 infection no systemic sclerosis. In the same respect, the studies analyzed provide evidence of a (highly) significant cause-effect relationship between Parvovirus B19 and systemic sclerosis. Conclusion: This study supports the conclusion that Parvovirus B19 is the cause of systemic sclerosis. Keywords: Parvovirus B19, systemic sclerosis, causal relationship

A Nonconvex Penalty Function with Integral Convolution Approximation for Compressed Sensing
Authors: Feng Zhang, Jianjun Wang, Wendong Wang, Jianwen Huang, Changan Yuan
In this paper, we propose a novel nonconvex penalty function for compressed sensing using integral convolution approximation. It is well known that an unconstrained optimization criterion based on the $\ell_1$-norm easily underestimates the large components in signal recovery. Moreover, most methods perform well only when the measurement matrix satisfies the restricted isometry property (RIP) or only for highly coherent measurement matrices, and both conditions cannot be established at the same time. We introduce a new solver to address both of these concerns by adopting a framework of the difference between two convex functions with integral convolution approximation. What's more, to better boost the recovery performance, a weighted version of it is also provided. Experimental results suggest the effectiveness and robustness of our methods through several signal reconstruction examples in terms of success rate and signal-to-noise ratio (SNR).

Fusobacterium (Nucleatum) the Cause of Human Colorectal Cancer.
Comments: 37 pages. Copyright © 2018 by Ilija Barukčić, Horandstrasse, Jever, Germany. Published by: Journal of Biosciences and Medicines Vol.6 No.3, March 14, 2018
Objective: Accumulating evidence indicates that the gut microbiome has an increasingly important role in human disease and health. Fusobacterium nucleatum has been identified in several studies as the leading gut bacterium present in colorectal cancer (CRC). Still, it is not clear if Fusobacterium plays a causal role. Methods: To explore the cause-effect relationship between Fusobacterium nucleatum and colorectal cancer, a systematic review and re-analysis of published studies was performed. The method of the conditio sine qua non relationship was used to prove the hypothesis without Fusobacterium nucleatum infection no colorectal cancer. The mathematical formula of the causal relationship k was used to prove the hypothesis whether there is a cause-effect relationship between Fusobacterium nucleatum and colorectal cancer. Significance was indicated by a p-value of less than 0.05. Result: The data analyzed support the null-hypothesis that without Fusobacterium nucleatum infection no colorectal cancer. In the same respect, the studies analyzed provide evidence of a highly significant cause-effect relationship between Fusobacterium nucleatum and colorectal cancer. Conclusion: The findings of this study suggest that Fusobacterium (nucleatum) is the cause of colorectal cancer.
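The nonconvex-penalty abstract above starts from the observation that l1-based criteria underestimate large signal components. A tiny numerical sketch of that bias, comparing plain soft thresholding (the proximal operator of the l1 penalty) with hard thresholding on a made-up sparse vector; the threshold and the data are invented for illustration, and nothing here reproduces the paper's integral-convolution penalty.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: shrinks every entry toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """Keeps entries whose magnitude exceeds lam unchanged, zeroes the rest."""
    return np.where(np.abs(x) > lam, x, 0.0)

x = np.array([10.0, -6.0, 0.3, 0.0, -0.2])  # sparse vector with two large entries
lam = 0.5

print("soft:", soft_threshold(x, lam))  # large entries pulled toward zero by lam
print("hard:", hard_threshold(x, lam))  # large entries preserved exactly
```

The soft-thresholded output moves every surviving entry toward zero by exactly the threshold, which is the kind of bias that nonconvex penalties are designed to reduce.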
Life Expectancy – Lying with Statistics
Authors: Robert Bennett
Biased statistics can arise from computational errors, belief in non-existent or unproven correlations... or acceptance of premises proven scientifically invalid. It is the latter that will be examined here for the case of human life expectancy, whose values are well known... and virtually never challenged as to their basic assumptions. Whether the false premises are accidental, a case of overlooking the obvious... or whether they may be deliberate distortions serving a subliminal agenda... is beyond the scope of this analysis.

Benchmarking and Improving Recovery of Number of Topics in Latent Dirichlet Allocation Models
Authors: Jason Hou-Liu
Comments: Pages.
Latent Dirichlet Allocation (LDA) is a generative model describing the observed data as being composed of a mixture of underlying unobserved topics, as introduced by Blei et al. (2003). A key hyperparameter of LDA is the number of underlying topics k, which must be estimated empirically in practice. Selecting the appropriate value of k is essentially selecting the correct model to represent the data; an important issue concerning the goodness of fit. In the current work we examine a series of metrics from the literature on a quantitative basis by performing benchmarks against a generated dataset with a known value of k and evaluate the ability of each metric to recover the true value, varying over multiple levels of topic resolution in the Dirichlet prior distributions. Finally, we introduce a new metric and heuristic for estimating k and demonstrate improved performance over existing metrics from the literature on several benchmarks.

Essay on Statistics
Authors: Samir Ait-Amrane
Comments: 12 Pages. In French.
In this paper, we explain some basic notions of statistics, first in the case of one variable, then in the case of two variables, while organizing ideas and drawing a parallel between some statistical and probabilistic formulas that are alike. We also say a brief word about econometrics, time series and stochastic processes, and provide some bibliographical references where these notions are explained clearly.

A Novel Closed Form for Truncated Distributions with Integer Parameters & Hyper-Normal Probabilistic-Weighted Correlation
Authors: Jason Lind
Development of a novel closed form, for integer bounds, of the truncated distribution, and an application of it to a weighting function that favors values further from the origin.

Stochastic Functions of Blueshift vs. Redshift
Authors: Cres Huang
Viewing the random motions of objects, an observer might think there is a 50-50 chance that an object would move toward or away. That might be intuitive; however, it is far from the truth. This study derives the probability functions of the Doppler blueshift and redshift effects in signal detection. The fact is, Doppler redshift detection is highly dominant in space, surface, and linear observation. Under the conditions of no quality loss of radiation over distance and an observer with perfect vision, there is more than a 92% probability of detecting redshift in three-dimensional observation, 87% in surface observation, and 75% in linear observation. In cosmic observation, only 7.81% of the observers in the universe will detect blueshift of radiation from any object, on average. The remaining 92.19% of the observers in the universe will detect redshift. It is universal for all observers, aliens or Earthlings, at all locations of the universe.
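The LDA benchmarking abstract above is about recovering the number of topics k empirically. A minimal sketch of one common baseline for that task, scanning candidate values of k and comparing held-out perplexity with scikit-learn; the toy count matrix is simulated here, and this is not the metric or heuristic proposed in that paper.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Toy document-term count matrix: 40 "documents" over a 25-term vocabulary.
X = rng.poisson(lam=1.0, size=(40, 25))
train, test = X[:30], X[30:]

for k in (2, 4, 8):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(train)
    # Lower held-out perplexity suggests a better-fitting number of topics.
    print(k, round(lda.perplexity(test), 1))
```

On a real corpus one would use an actual document-term matrix and typically cross-validate over several splits rather than rely on a single train/test cut.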
A Review of Multiple Try MCMC Algorithms for Signal Processing Authors: Luca Martino Many applications in signal processing require the estimation of some parameters of interest given a set of observed data. More specifically, Bayesian inference needs the computation of a-posteriori estimators which are often expressed as complicated multi-dimensional integrals. Unfortunately, analytical expressions for these estimators cannot be found in most real-world applications, and Monte Carlo methods are the only feasible approach. A very powerful class of Monte Carlo techniques is formed by the Markov Chain Monte Carlo (MCMC) algorithms. They generate a Markov chain such that its stationary distribution coincides with the target posterior density. In this work, we perform a thorough review of MCMC methods using multiple candidates in order to select the next state of the chain, at each iteration. With respect to the classical Metropolis-Hastings method, the use of multiple try techniques foster the exploration of the sample space. We present different Multiple Try Metropolis schemes, Ensemble MCMC methods, Particle Metropolis-Hastings algorithms and the Delayed Rejection Metropolis technique. We highlight limitations, benefits, connections and dierences among the different methods, and compare them by numerical simulations. A Possible Alternative Model of Probability Theory? Authors: D Williams A possible alternative (and non-standard) model of probability is presented based on non-standard "dx-less" integrals. The possibility of other such models is discussed. Helicobacter Pylori is the Cause of Human Gastric Cancer. Comments: 50 pages. Copyright © 2018 by Ilija Barukcic, Jever, Germany. Published by: Modern Health Science, Vol. 1, No. 1, 2018. https://doi.org/10.30560/mhs.v1n1p43 Objective: Many times a positive relationship between Helicobacter pylori infection and gastric cancer has been reported, yet findings are inconsistent. Methods: A literature search in PubMed was performed to re-evaluate the relationship between Helicobacter pylori (HP) and carcinoma of human stomach. Case control studies with a least 500 participants were consider for a review and meta-analysis. The meta-/re-analysis was conducted using conditio-sine qua non relationship and the causal relationship k. Significance was indicated by a p-value of less than 0.05. Result: All studies analyzed provide impressive evidence of a cause effect relationship between H. pylori and gastric cancer (GC). Two very great studies were able to make the proof that H. pylori is a necessary condition of human gastric cancer. In other words, without H. pylori infection no human gastric cancer. Conclusion: Our findings indicate that Helicobacter pylori (H. pylori) is the cause of gastric carcinoma. Human Papillomavirus - A Cause Of Human Prostate Cancer. Comments: 57 pages. Copyright © 2017 by Ilija Barukčić, Horandstrasse, Jever, Germany. Published by: Objective: A series of different studies detected Human papillomavirus (HPV) in malignant and nonmalignant prostate tissues. However, the results of studies on the relationship between HPV infections and prostate cancer (PCa) remain controversial. Methods: A systematic review and re-analysis of some polymerase-chain reaction (PCR) based case-control studies was performed aimed to answer the following question: Is there a cause effect relationship between human papillomavirus (HPV) and prostatic cancer? 
The method of the conditio per quam relationship was used to proof the hypothesis: if presence of human papillomavirus (HPV) in human prostate tissues then presence of prostate carcinoma. The mathematical formula of the causal relationship k was used to proof the hypothesis, whether there is a cause effect relationship between human papillomavirus (HPV) and prostate cancer. Significance was indicated by a p-value of less than 0.05. Result: Only one of the studies analyzed failed to provide evidence that there is a cause-effect relationship between human papillomavirus (HPV) and prostate cancer. Two studies were highly significant on this point. The majority of the studies analyzed support the hypothesis that human papillomavirus (HPV) is a sufficient condition of prostate cancer. In other words, if presence of human papillomavirus (HPV) in human prostate tissues then presence of prostate cancer. Conclusion: Human papillomavirus (HPV) is a cause of prostate cancer. Human Papillomavirus - The Cause Of Human Cervical Cancer. Comments: 56 pages. Copyright © 2018 by Ilija Barukčić, Jever, Germany. Published by: 2018, Journal of Biosciences and Medicines Vol.6 No.4, April 30, 2018 Objective: Cervical cancer is the second most prevalent cancer in females worldwide. Infection with human papillomavirus (HPV) is regarded as the main risk factor of cervical cancer. Our objective was to conduct a qualitative systematic review of some case-control studies to examine the role of human papillomavirus (HPV) in the development of human cervical cancer beyond any reasonable doubt. Methods: We conducted a systematic review and re-analysis of some impressive key studies aimed to answer the following question. Is there a cause effect relationship between human papillomavirus (HPV) and cervical cancer? The method of the conditio sine qua non relationship was used to proof the hypothesis whether the presence of human papillomavirus (HPV) guarantees the presence of cervical carcinoma. In other words, if human papillomavirus (HPV) is present, then cervical carcinoma is present too. The mathematical formula of the causal relationship k was used to proof the hypothesis, whether there is a cause effect relationship between human papillomavirus (HPV) and cervical carcinoma. Significance was indicated by a p-value of less than 0.05. Result: One study was able to provide strict evidence that human papillomavirus (HPV) is a conditio sine qua non (a necessary condition) of cervical carcinoma while the other studies analyzed failed on this point. The studies analyzed provide impressive evidence of a cause-effect relationship between human papillomavirus (HPV) and cervical carcinoma. Conclusion: Human papillomavirus (HPV) is the cause of cervical carcinoma. A New Definition Of Standard Deviation. ISSN 1751-3030 Authors: Ramesh Chandra Bagadi In this research investigation, the author has detailed a novel definition of Standard Deviation. Surround Codes, Densification, and Entropy Scan Performance Herein we present the "surround" function, which is intended to produce a set of "surround codes" which enhance the sparsity of integer sets which have discrete derivatives of lesser Shannon entropy than the sets themselves. In various cases, the surround function is expected to provide further entropy reduction beyond that provided by straightforward delta (difference) encoding alone. 
We then present the simple concept of "densification", which facilitates the elimination of entropy overhead due to masks (symbols) which were considered possible but do not actually occur in a given mask list (set of symbols). Finally we discuss the ramifications of these techniques for the sake of enhancing the speed and sensitivity of various entropy scans.

Total Intra Similarity And Dissimilarity Measure For The Values Taken By A Parameter Of Concern. {Version 2}. ISSN 1751-3030
In this research investigation, the author has detailed a novel method of finding the 'Total Intra Similarity And Dissimilarity Measure For The Values Taken By A Parameter Of Concern'. The advantage of such a measure is that using this measure we can clearly distinguish the contribution of intra-aspect variation and inter-aspect variation when both are bound to occur in a given phenomenon of concern. This measure provides the same advantages as those provided by the popular F-statistic measure.

Human Papilloma Virus a Cause of Malignant Melanoma?
Comments: 11 pages. Copyright © 2017 by Ilija Barukčić, Jever, Germany. Published by:
Background: The aim of this study is to work out a possible relationship between human papillomavirus (HPV) and malignant melanoma. Objectives: This systematic review and re-analysis of Roussaki-Schulze et al.'s available retrospective study of twenty-eight melanoma biopsy specimens and of the control group of 6 patients is performed so that some new inference can be drawn. Materials and methods: Roussaki-Schulze et al. obtained data from twenty-eight human melanoma biopsy specimens and from six healthy individuals. The presence and types of HPV DNA within biopsy specimens were determined by polymerase chain reaction (PCR). Statistical Analysis: In contrast to Roussaki-Schulze et al., the method of the conditio per quam relationship was used to prove the hypothesis that the presence of human papillomavirus (HPV) guarantees the presence of malignant melanoma. In other words, if human papillomavirus (HPV) is present, then malignant melanoma must also be present. The mathematical formula of the causal relationship k was used to prove the hypothesis whether there is a cause-effect relationship between human papillomavirus (HPV) and malignant melanoma. Significance was indicated by a p-value of less than 0.05. Results: Based on the data published by Roussaki-Schulze et al. we were able to provide evidence that the presence of human papillomavirus (HPV) guarantees the presence of malignant melanoma. Contrary to expectation, the data of Roussaki-Schulze et al., based on a very small sample size, failed to provide significant evidence that human papillomavirus (HPV) is a cause or the cause of malignant melanoma. Conclusions: Human papillomavirus (HPV) is a conditio per quam of malignant melanoma.

Anomaly Detection and Approximate Matching via Entropy Divergences
Comments: 20 Pages. Released under the following license: https://creativecommons.org/licenses/by/4.0
The Jensen-Shannon divergence (JSD) quantifies the "information distance" between a pair of probability distributions. (A more generalized version, which is beyond the scope of this paper, is given in [1]. It extends this divergence to arbitrarily many such distributions. Related divergences are presented in [2], which is an excellent summary of existing work.) A couple of novel applications for this divergence are presented herein, both of which involve sets of whole numbers constrained by some nonzero maximum value. (We're primarily concerned with discrete applications of the JSD, although it's defined for analog variables.) The first of these, which we can call the "Jensen-Shannon divergence transform" (JSDT), involves a sliding "sweep window" whose JSD with respect to some fixed "needle" is evaluated at each step as said window moves from left to right across a superset called a "haystack". The second such application, which we can call the "Jensen-Shannon exodivergence transform" (JSET), measures the JSD between a sweep window and an "exosweep", that is, the haystack minus said window, at all possible locations of the latter. The JSET turns out to be exceptionally good at detecting anomalous contiguous subsets of a larger set of whole numbers. We then investigate and attempt to improve upon the shortcomings of the JSD and the related Kullback-Leibler divergence (KLD).

Mathematical Probabilistic Closure
Authors: Paris Samuel Miles-Brenden
Comments: 3 Pages. None.

A Note on the Block Restricted Isometry Property Condition
Authors: Jianwen Huang, Jianjun Wang, Wendong Wang
In this work, the sufficient condition for the recovery of block sparse signals that satisfy $b=\Phi x+\xi$ is investigated. We prove that every block s-sparse signal can be reconstructed by the $l_2/l_1$-minimization method in the noise-free situation and is stably reconstructed in the noisy measurement situation, if the sensing matrix fulfils the restricted isometry property with $\delta_{ts|\mathcal{I}}<t/(4-t)$ for $0<t<4/3$, $ts\geq2.$

Centroid of A Given Set Of Numbers In Any Prime Metric Basis Of Concern
Comments: 1 Page.
In this research investigation, the author has detailed a Novel Technique of finding the Centroid of a given Set of Numbers in Any Prime Metric Basis of concern.

Centroid Of A Given Set Of Numbers In Prime Metric Basis
In this Technical Note, the author has detailed the evaluation of the Centroid of a given set of numbers in Prime Metric Basis.

Imputing Missing Distributions by LQD Transformation and RKHS-Based Functional Regression
Data loss is a big problem in many online monitoring systems, for various reasons. Copula-based approaches are effective methods for missing data imputation; however, such methods are highly dependent on a reliable distribution of the missing data. This article proposes a functional regression approach for missing probability density function (PDF) imputation. PDFs are first transformed to a Hilbert space by the log quantile density (LQD) transformation. The transformed results of the response PDFs are approximated by the truncated Karhunen–Loève representation. The corresponding representation in the Hilbert space of a missing PDF is estimated by a vector-on-function regression model in reproducing kernel Hilbert space (RKHS), and is then mapped back to the density space by the inverse LQD transformation to obtain an imputation for the missing PDF. To address errors caused by the numerical integration in the inverse LQD transformation, the original PDFs are aided by a PDF of the uniform distribution. The effect of the added uniform distribution on the imputed result of a missing PDF can be separated by the warping function-based PDF estimation technique.

Statistical Methods in Astronomy
Authors: James P. Long, Rafael S. De Souza
Comments: 9 Pages.
To appear in Wiley StatsRef: Statistics Reference Online
We present a review of data types and statistical methods often encountered in astronomy. The aim is to provide an introduction to statistical applications in astronomy for statisticians and computer scientists. We highlight the complex, often hierarchical, nature of many astronomy inference problems and advocate for cross-disciplinary collaborations to address these challenges.

The Recursive Future And Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure Considered To Exhaustion {File Closing Version+2} ISSN 1751-3030
In this research investigation, the author has presented a Recursive Past Equation and a Recursive Future Equation based on the Ananda-Damayanthi Normalized Similarity Measure considered to Exhaustion [1] (please see the addendum of [1] as well).

The Recursive Future And Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure Considered To Exhaustion {File Closing Version}

One Step Forecasting Model (Advanced Model - Version 5)
In this research investigation, the author has presented an Advanced Forecasting Model.

Statistical Characterization of the Time to Reach Peak Heat Release Rate for Nuclear Power Plant Electrical Enclosure Fires
Authors: Raymond Gallucci
Since publication of NUREG/CR-6850 (EPRI 1011989), EPRI/NRC-RES Fire PRA Methodology for Nuclear Power Facilities in 2005, phenomenological modeling of fire growth to peak heat release rate (HRR) for electrical enclosure fires in nuclear power plant probabilistic risk assessment (PRA) has typically assumed an average 12-minute rise time. One previous analysis using the data from NUREG/CR-6850 from which this estimate derived (Gallucci, "Statistical Characterization of Cable Electrical Failure Temperatures Due to Fire, with Simulation of Failure Probabilities") indicated that the time to peak HRR could be represented by a gamma distribution with alpha (shape) and beta (scale) parameters of 8.66 and 1.31, respectively. Completion of the test program by the US Nuclear Regulatory Commission (USNRC) for electrical enclosure heat release rates, documented in NUREG/CR-7197, Heat Release Rates of Electrical Enclosure Fires (HELEN-FIRE) in 2016, has provided substantially more data from which to characterize this growth time to peak HRR. From these, the author develops probabilistic distributions that enhance the original NUREG/CR-6850 results for both qualified (Q) and unqualified (UQ) cables. The mean times to peak HRR are 13.3 and 10.1 min for Q and UQ cables, respectively, with a mean of 12.4 min when all data are combined, confirming that the original NUREG/CR-6850 estimate of 12 min was quite reasonable. Via statistical-probabilistic analysis, the author shows that the time to peak HRR for Q and UQ cables can again be well represented by gamma distributions with alpha and beta parameters of 1.88 and 7.07, and 3.86 and 2.62, respectively. Working with the gamma distribution for All cables given the two cable types, the author performs simulations demonstrating that manual non-suppression probabilities, on average, are 30% and 10% higher than the use of a 12-min point estimate when the fire is assumed to be detected at its start and halfway between its start and the time it reaches its peak, respectively. This suggests that adopting a probabilistic approach enables more realistic modeling of this particular fire phenomenon (growth time).
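As an illustration of the gamma-distributed rise times discussed in the enclosure-fire abstract above, the following minimal Python sketch reads the quoted alpha/beta values as shape/scale (which reproduces the quoted means of roughly 13.3 and 10.1 minutes) and contrasts a fixed 12-minute point estimate with averaging over the distribution. The exponential manual-suppression model and its rate are hypothetical placeholders, not the fire-PRA curves used in the report.

```python
# Sketch: gamma-distributed time to peak HRR vs. a 12-minute point estimate.
# Shape/scale interpretation of (1.88, 7.07) and (3.86, 2.62) is an assumption
# consistent with the quoted means; the suppression model below is made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
q_cables = stats.gamma(a=1.88, scale=7.07)    # qualified cables
uq_cables = stats.gamma(a=3.86, scale=2.62)   # unqualified cables
print("mean time to peak (Q, UQ):", q_cables.mean(), uq_cables.mean())

def p_nonsuppression(minutes, lam=0.1):
    """Hypothetical model: probability the fire is NOT manually suppressed
    within the available time, with rate lam per minute."""
    return np.exp(-lam * np.asarray(minutes))

p_point = p_nonsuppression(12.0)                          # fixed rise time
samples = q_cables.rvs(size=100_000, random_state=rng)    # gamma rise times
p_prob = p_nonsuppression(samples).mean()                 # averaged over the distribution
print("non-suppression, 12-min point estimate:", round(float(p_point), 3))
print("non-suppression, gamma-averaged       :", round(float(p_prob), 3))

# Refitting a gamma distribution to (synthetic) rise-time data:
data = q_cables.rvs(size=200, random_state=rng)
shape_hat, loc_hat, scale_hat = stats.gamma.fit(data, floc=0)
print("fitted shape/scale:", round(shape_hat, 2), round(scale_hat, 2))
```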
The Recursive Future And Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure Considered To Exhaustion {Latest Super Ultimate Version}

The Recursive Future And Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure Considered To Exhaustion {Latest Correct Version}

The Recursive Future And Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure Considered To Exhaustion (New Version 4)
In this research investigation, the author has presented a Recursive Past Equation and a Recursive Future Equation based on the Ananda-Damayanthi Normalized Similarity Measure considered to Exhaustion [1].

The Recursive Future And Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure Considered To Exhaustion (New Version)

The Recursive Future And Past Equation Based On The Ananda Damayanthi Normalized Similarity Measure With Error Formulation Included

The Recursive Future And Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure Considered To Exhaustion

The Recursive Past And Future Equation Based On The Ananda-Damayanthi Similarity Measure And Its Series Considered To Exhaustion
In this research investigation, the author has presented a Recursive Past Equation and a Recursive Future Equation based on the Ananda-Damayanthi Similarity Measure and its series considered to Exhaustion [1].

The Recursive Future And Past Equation Based On The Ananda-Damayanthi Similarity Measure Considered To Exhaustion
In this research investigation, the author has presented a Recursive Past Equation and a Recursive Future Equation based on the Ananda-Damayanthi Similarity Measure considered to Exhaustion [1].

The Recursive Future And Past Equation Based On The Ananda Damayanthi Normalized Similarity Measure
In this research investigation, the author has presented a Recursive Future Equation based on the Ananda-Damayanthi Normalized Similarity Measure [1].

Measuring Pitcher Similarity: Technical Details
Authors: G. Healey, S. Zhao, D. Brooks
Given the speed and movement for pitches thrown by a set of pitchers, we develop a measure of pitcher similarity.

Most Similar Pitcher Match Tables for 2016
Tables of the most similar pitcher matches for 2016.

Parsimonious Adaptive Rejection Sampling
Authors: L. Martino
Monte Carlo (MC) methods have become very popular in signal processing during the past decades. The adaptive rejection sampling (ARS) algorithms are well-known MC techniques which efficiently draw independent samples from univariate target densities. The ARS schemes yield a sequence of proposal functions that converge toward the target, so that the probability of accepting a sample approaches one. However, sampling from the proposal pdf becomes more computationally demanding each time it is updated. We propose the Parsimonious Adaptive Rejection Sampling (PARS) method, where a better trade-off between acceptance rate and proposal complexity is obtained. Thus, the resulting algorithm is faster than the standard ARS approach.

The Recursive Past Equation. The Recursive Future Equation
In this research investigation, the author has presented a Recursive Past Equation. Also, a Recursive Future Equation is presented.

One Step Forecasting Model {Simple Model} (Version 6)
In this research investigation, the author has presented two Forecasting Models.

One Step Forecasting Model {Advanced Model} (Version 3)
In this research investigation, the author has presented two Forecasting Models.
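For readers unfamiliar with the accept/reject mechanism that ARS and the PARS variant above make adaptive, here is a plain, non-adaptive rejection-sampling sketch in Python. The target and proposal densities and the bounding constant are illustrative choices for this sketch, not anything taken from the paper.

```python
# Plain rejection sampling: the fixed-proposal mechanism that ARS/PARS refine.
import math
import random

def target_pdf(x):
    # Unnormalised target: a two-component Gaussian mixture.
    return 0.7 * math.exp(-0.5 * (x - 1.0) ** 2) + 0.3 * math.exp(-0.5 * (x + 2.0) ** 2)

def proposal_sample():
    return random.gauss(0.0, 3.0)           # proposal: N(0, 3^2)

def proposal_pdf(x):
    return math.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * math.sqrt(2.0 * math.pi))

# Envelope constant: the ratio target/proposal peaks near x = 1 at about 5.8
# for this pair, so M = 10 gives a valid (if loose) bound.
M = 10.0

def rejection_sample(n):
    out, proposed = [], 0
    while len(out) < n:
        x = proposal_sample()
        proposed += 1
        if random.random() <= target_pdf(x) / (M * proposal_pdf(x)):
            out.append(x)
    return out, len(out) / proposed          # samples and empirical acceptance rate

samples, acc_rate = rejection_sample(5000)
print("acceptance rate:", round(acc_rate, 3))
```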
In this research investigation, the author has presented a Forecasting Model.

One Step Forecasting Model {Version 4}
In this research investigation, the author has presented two one-step forecasting models.

One Step Forecasting Model (Version 2)

An Indirect Nonparametric Regression Method for One-Dimensional Continuous Distributions Using Warping Functions
Distributions play a very important role in many applications. Inspired by the newly developed warping transformation of distributions, an indirect nonparametric distribution-to-distribution regression method is proposed in this article for predicting correlated one-dimensional continuous probability density functions.

Remark On Variance Bounds
Authors: R. Sharma, R. Bhandari
It is shown that the formula for the variance of combined series yields surprisingly simple proofs of some well-known variance bounds.

Group Importance Sampling for Particle Filtering and MCMC
Authors: L. Martino, V. Elvira, G. Camps-Valls
Comments: 32 Pages. Related Matlab demos at https://github.com/lukafree/GIS.git
Importance Sampling (IS) is a well-known Monte Carlo technique that approximates integrals involving a posterior distribution by means of weighted samples. In this work, we study the assignment of a single weighted sample which compresses the information contained in a population of weighted samples. Part of the theory that we present as Group Importance Sampling (GIS) has already been employed implicitly in different works in the literature. The provided analysis yields several theoretical and practical consequences. For instance, we discuss the application of GIS within the Sequential Importance Resampling (SIR) framework and show that Independent Multiple Try Metropolis (I-MTM) schemes can be interpreted as a standard Metropolis-Hastings algorithm, following the GIS approach. We also introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS. The first one, named the Group Metropolis Sampling (GMS) method, produces a Markov chain of sets of weighted samples. All these sets are then employed for obtaining a unique global estimator. The second one is the Distributed Particle Metropolis-Hastings (DPMH) technique, where different parallel particle filters are jointly used to drive an MCMC algorithm. Different resampled trajectories are compared and then tested with a proper acceptance probability. The novel schemes are tested in different numerical experiments and compared with several benchmark Monte Carlo techniques. Three descriptive Matlab demos are also provided.

Clustering Based On Natural Metric
In this research article, the author has detailed a Novel Scheme of Clustering Based on Natural Metric.

Situational Underlying Value (SUV) - A Single Statistic to Measure "Clutchiness" in Team Sports
Situational Underlying Value (SUV) arose from an attempt to develop an all-encompassing statistic for measuring "clutchiness" for individual baseball players. It was to be based on the "run expectancy" concept, whereby each base with a certain number of outs is "worth" some fraction of a run. Hitters/runners reaching these bases would acquire the "worth" of that base, with the "worth" being earned by the hitter if he reached a base or advanced a runner, or by the runner himself if he advanced "on his own" (e.g., stolen base, wild pitch). After several iterations, the version for SUV Baseball presented herein evolved, and it is demonstrated via two games.
Subsequently, the concept was extended to professional football and NCAA Men's Basketball, both with two example games highlighting selected individual players. As with Major League Baseball, these are team games where individual performance may be hard to gauge with a single statistic. This is the goal of SUV, which can be used as a measure both for the team and for individual players.

The Intrinsic Value of a Pitch
The deployment of sensors that characterize the trajectory of pitches and batted balls in three dimensions provides the opportunity to assign an intrinsic value to a pitch that depends on its physical properties and not on its observed outcome. We exploit this opportunity by utilizing a Bayesian framework to map five-dimensional PITCHf/x velocity, movement, and location vectors to pitch intrinsic values. HITf/x data is used by the model to obtain intrinsic quality-of-contact values for batted balls that are invariant to the defense, ballpark, and atmospheric conditions. Separate mappings are built to accommodate the effects of count and batter/pitcher handedness. A kernel method is used to generate nonparametric estimates for the component probability density functions in Bayes' theorem, while cross-validation enables the model to adapt to the size and structure of the data.

A Systematic Analysis of Soccer Forecast Models and Prediction Leagues
Authors: Malte Braband
Comments: 16 pages, 4 figures, 2 tables, in German. Jugend Forscht project 130423.
This paper analyses the question of how to systematically reach the top flight of soccer prediction leagues. In a first step several forecast models are compared and it is shown how most models can be related to the Poisson model. Some of the relations are new. Additionally, a method has been developed which allows the outcome probabilities of soccer championships to be evaluated numerically instead of by simulation. The main practical result for the example of the 2014 soccer World Championship was that the forecast models were significantly better than the human participants of a large public prediction league. However, the differences between the forecast models were small, both qualitatively and quantitatively. But it is quite unlikely that a large prediction league will be won by a forecast model, although the forecast models almost all belonged to the top flight of the prediction league.

The "SUV" Statistic for Baseball and Football – Situational Underlying Value
Authors: Raymond H.V. Gallucci
SUV – Situational Underlying Value – for professional baseball (MLB) is a concept based on the more traditional one of "run expectancy." This is a statistical estimate of the number of runs expected to result from a base runner or multiple runners given his/their presence at a particular base, or bases, and the number of outs in an inning. Numerous baseball websites discuss this concept; one can find dozens more with a simple internet search on "run expectancy." SUV for professional football (NFL) is not as readily conceived as that for baseball, although the concept of each position on the field with down and yards to go has been examined for the possibility of assigning point values (from here on referred to as SUVs). Quantification of this concept is taken from "Expected Points and Expected Points Added Explained," by Brian Burke, December 7, 2014. Example applications to a pair of professional baseball games (MLB) and a pair of professional football games (NFL) are included that illustrate how the SUV is used.
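The soccer-forecasting abstract above notes that most of the compared forecast models can be related to the Poisson model. The following short Python sketch shows the basic independent-Poisson match model in that spirit: given expected goals for each team, outcome probabilities follow by summing over a truncated score grid. The example goal means (1.8 and 1.1) are illustrative only, not estimates from that paper.

```python
# Minimal independent-Poisson match model: expected goals -> outcome probabilities.
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

def outcome_probabilities(lam_home, lam_away, max_goals=10):
    """Return (P[home win], P[draw], P[away win]) under independent Poisson scores."""
    p_home = p_draw = p_away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, lam_home) * poisson_pmf(a, lam_away)
            if h > a:
                p_home += p
            elif h == a:
                p_draw += p
            else:
                p_away += p
    return p_home, p_draw, p_away

print(outcome_probabilities(1.8, 1.1))  # illustrative expected-goal inputs
```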
Universal One Step Forecasting Model For Dynamical State Systems (Version 4)
In this research investigation, the author has presented a Novel Forecasting Model based on Locally Linear Transformations, Element Wise Inner Product Mapping, and De-Normalization of the Normalized States for predicting the next instant of a Dynamical State given that its sufficient history is known.

Picking A Least Biased Random Sample Of Size n From A Data Set of N Points With n < N
In this research investigation, a Statistical Algorithm is detailed that enables us to pick a Least Biased Random Sample of Size n from a Data Set of N Points with n < N.

A Tutorial on Simplicity and Computational Differentiation for Statisticians
Automatic differentiation is a powerful collection of software tools that are invaluable in many areas including statistical computing. It is well known that automatic differentiation techniques can be applied directly by a programmer in a process called hand coding. However, the advantages of hand coding with certain applications are less appreciated, but these advantages are of paramount importance to statistics in particular. Based on the present literature, the variance component problem using restricted maximum likelihood is an example where hand coding derivatives was very useful relative to automatic or algebraic approaches. Some guidelines for hand coding backward derivatives are also provided, and emphasis is given to techniques for reducing space complexity and computing second derivatives.

Alternate Approach of Comparison for Selection Problem
Authors: Nikhil Shaw
In computer science, a selection algorithm is an algorithm for finding the kth smallest number in a list or array; such a number is called the kth order statistic. This includes the cases of finding the minimum, maximum, and median elements. There are O(n) (worst-case linear time) selection algorithms, and sublinear performance is possible for structured data; in the extreme, O(1) for an array of sorted data. Selection is a subproblem of more complex problems like the nearest neighbor and shortest path problems. Many selection algorithms are derived by generalizing a sorting algorithm, and conversely some sorting algorithms can be derived as repeated application of selection. Although this new algorithm has a worst case of O(n^2), the average case is near-linear time for an unsorted list.

Epstein Bar Virus (Ebv) a Cause of Human Breast Cancer.
Epstein-Barr Virus (EBV) has been widely proposed as a possible candidate virus for the viral etiology of human breast cancer, still the most common malignancy affecting females worldwide. Due to possible problems with PCR analyses (contamination), the lack of uniformity in study design and the insufficient mathematical/statistical methods used by the different authors, the findings of several EBV (polymerase chain reaction (PCR)) studies contradict each other, making it difficult to determine the EBV etiology for breast cancer. In this present study, we performed a re-investigation of some of the known studies. To place our results in context, this study supports the hypothesis that EBV is a cause of human breast cancer.

Epstein Bar Virus. a Main Cause of Hodgkin's Lymphoma.
Comments: 7 Pages. Copyright © 2017 by Ilija Barukčić, Jever, Germany. Published by: Journal of Biosciences and Medicines, 2018, Vol.6 No.1, pp. 75-100.
Epstein-Barr virus (EBV), a herpes virus which persists in memory B cells in the peripheral blood for the lifetime of a person, is associated with some malignancies. Many studies suggested that the Epstein-Barr virus contributes to the development of Hodgkin's lymphoma (HL) in some cases too. Despite intensive study, the role of Epstein-Barr virus in Hodgkin's lymphoma remains enigmatic. It is the purpose of this publication to prove that the Epstein-Barr virus is a main cause of Hodgkin's lymphoma (k = +0.739814235, p-value = 0.000000000000138).

Helicobacter Pylori – the Cause of Human Gastric Cancer.
Comments: 9 Pages. Copyright © 2016 by Ilija Barukčić, Jever, Germany. Published by Journal of Biosciences and Medicines, Vol.5 No. 2, p. 1-9. https://doi.org/10.4236/jbm.2017.52001
Background: Many studies documented an association between a Helicobacter pylori infection and the development of human gastric cancer. None of these studies were able to identify Helicobacter pylori as a cause or as the cause of human gastric cancer. The basic relation between gastric cancer and Helicobacter pylori still remains uncertain. Objectives: This systematic review and re-analysis of Naomi Uemura et al.'s available long-term, prospective study of 1526 Japanese patients is performed so that some new and meaningful inference can be drawn. Materials and Methods: Data obtained by Naomi Uemura et al., who conducted a long-term, prospective study of 1526 Japanese patients with a mean follow-up of about 7.8 years and endoscopy at enrolment and subsequently between one and three years after enrolment, were re-analysed. Statistical analysis used: The method of the conditio sine qua non relationship was used to prove the hypothesis: without a Helicobacter pylori infection, no development of human gastric cancer. The mathematical formula of the causal relationship was used to prove the hypothesis whether there is a cause-effect relationship between a Helicobacter pylori infection and human gastric cancer. Significance was indicated by a p-value of less than 0.05. Results: Based on the data published by Uemura et al. we were able to provide evidence that without a Helicobacter pylori infection there is no development of human gastric cancer. In other words, a Helicobacter pylori infection is a conditio sine qua non of human gastric cancer. In the same respect, the data of Uemura et al. provide significant evidence that a Helicobacter pylori infection is the cause of human gastric cancer. Conclusions: Without a Helicobacter pylori infection no development of human gastric cancer. Helicobacter pylori is the cause (k = +0.07368483, p-value = 0.00399664) of human gastric cancer.

Epstein-Barr Virus (Ebv) – a Main Cause of Rheumatoid Arthritis.
Comments: 5 Pages. Copyright © 2016 by Ilija Barukčić, Jever. Published by: Romanian journal of rheumatology, 27 (4), (2018), 148-163. https://rjr.com.ro/
Objective. Many studies presented some evidence that EBV might play a role in the pathogenesis of rheumatoid arthritis. Still, there are conflicting reports concerning the existence of EBV in the synovial tissue of patients suffering from rheumatoid arthritis. Material and methods. Takeda et al. designed a study to detect EBV DNA in synovial tissues obtained at synovectomy or arthroplasty from 32 patients with rheumatoid arthritis (RA) and 30 control patients (no rheumatoid arthritis). In this study, the data as published by Takeda et al. were re-analysed. Results. EBV infection of human synovial tissues is a conditio per quam of rheumatoid arthritis. And much more than this. There is a highly significant causal relationship between an EBV infection of human synovial tissues and rheumatoid arthritis (k = +0.546993718, p-value = 0.00001655). Conclusion. These findings suggest that EBV infection of human synovial tissues is a main cause of rheumatoid arthritis.

Standard Deviation for PDG Mass Data
Authors: M. J. Germuska
This paper analyses the data for the masses of elementary particles provided by the Particle Data Group (PDG). It finds evidence that the best mass estimates are not based solely on statistics but also on overall consistency, which sometimes results in skewed minimum and maximum mass limits. The paper also points out some other quirks that result in minimum and maximum mass limits which are far from the statistical standard deviation. A statistical method is proposed to compute the standard deviation in such cases and when PDG does not provide any limits.

Subnormal Distribution Derived from Evolving Networks with Variable Elements
Authors: Minyu Feng, Hong Qu, Zhang Yi, Jürgen Kurths
During the last decades, power-law distributions have played significant roles in analyzing the topology of scale-free (SF) networks. However, in the observation of degree distributions of practical networks and other unequal distributions such as wealth distribution, we uncover that, instead of decreasing monotonically, most real distributions have a peak at the beginning, which cannot be accurately described by a power-law. In this paper, in order to break the limitation of the power-law distribution, we provide detailed derivations of a novel distribution called the Subnormal distribution from evolving networks with variable elements, together with its concrete statistical properties. Additionally, simulations of fitting the subnormal distribution to the degree distribution of evolving networks, a real social network, and personal wealth distribution are displayed to show the fitness of the proposed distribution.

Multiple Imputation Procedures Using the Gabrieleigen Algorithm
Authors: Marisol García-Peña, Sergio Arciniegas-Alarcón, Wojtek Krzanowski, Décio Barbin
GabrielEigen is a simple deterministic imputation system without structural or distributional assumptions, which uses a mixture of regression and lower-rank approximation of a matrix based on its singular value decomposition. We provide multiple imputation (MI) alternatives based on this system, by adding random quantities and generating approximate confidence intervals with different widths for the imputations using cross-validation (CV). These methods are assessed by a simulation study using real data matrices in which values are deleted randomly at different rates, and also in a case where the missing observations have a systematic pattern. The quality of the imputations is evaluated by combining the variance between imputations (Vb) and their mean squared deviations from the deleted values (B) into an overall measure (Tacc). It is shown that the best performance occurs when the interval width matches the imputation error associated with GabrielEigen.

The Recycling Gibbs Sampler for Efficient and Fast Learning
Monte Carlo methods are essential tools for Bayesian inference. Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in statistical signal processing, machine learning and statistics, employed to draw samples from complicated high-dimensional posterior distributions.
The key point for the successful application of the Gibbs sampler is the ability to draw efficiently from the full-conditional pdfs. In the general case this is not possible, and it requires the generation of auxiliary samples that are wasted, since they are not used in the final estimators. In this work, we show that these auxiliary samples can be employed within the Gibbs estimators, improving their efficiency with no extra cost. This novel scheme arises naturally after pointing out the relationship between the Gibbs sampler and the chain rule used for sampling purposes. Numerical simulations confirm the excellent performance of the novel scheme.

Uses of Sampling Techniques & Inventory Control with Capacity Constraints
Authors: editors Sachin Malik, Neeraj Kumar, Florentin Smarandache
The main aim of the present book is to suggest some improved estimators using auxiliary and attribute information in the case of simple random sampling and stratified random sampling, and some inventory models related to capacity constraints. This volume is a collection of five papers, written by six co-authors (listed in the order of the papers): Dr. Rajesh Singh, Dr. Sachin Malik, Dr. Florentin Smarandache, Dr. Neeraj Kumar, Mr. Sanjey Kumar & Pallavi Agarwal. In the first chapter the authors suggest an estimator using two auxiliary variables in stratified random sampling for estimating the population mean. In the second chapter they propose a family of estimators for estimating population means using known values of some population parameters. In the third chapter an almost unbiased estimator using the known value of some population parameter(s), together with the known population proportion of an auxiliary variable, is used. In the fourth chapter the authors investigate a fuzzy economic order quantity model for a two-storage facility. The demand, holding cost, ordering cost, and storage capacity of the own warehouse are taken as trapezoidal fuzzy numbers. In the fifth chapter a two-warehouse inventory model deals with deteriorating items, with a stock-dependent demand rate and a model affected by inflation under the pattern of time value of money over a finite planning horizon. Shortages are allowed and partially backordered depending on the waiting time for the next replenishment. The purpose of this model is to minimize the total inventory cost by using the genetic algorithm. This book will be helpful for researchers and students who are working in the field of sampling techniques and inventory control.

Error Bounds on the Loggamma Function Amenable to Interval Arithmetic
Comments: 13 Pages. This work is licensed under a Creative Commons Attribution 4.0 International License.
Unlike other common transcendental functions such as log and sine, James Stirling's convergent series for the loggamma ("logΓ") function suggests no obvious method by which to ascertain meaningful bounds on the error due to truncation after a particular number of terms. ("Convergent" refers to the fact that his original formula appeared to converge, but ultimately diverged.) As such, it remains an anathema to the interval arithmetic algorithms which underlie our confidence in its various numerical applications. Certain error bounds do exist in the literature, but involve branches and procedurally generated rationals which defy straightforward implementation via interval arithmetic. In order to ameliorate this situation, we derive error bounds on the loggamma function which are readily amenable to such methods.
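For context on the truncation issue the loggamma abstract above addresses, the following Python sketch evaluates the classical (asymptotic) Stirling series for ln Γ(x) on the positive real axis and uses the common heuristic of taking the magnitude of the first omitted term as an error estimate. This is plain floating-point arithmetic with a textbook heuristic, not the interval-arithmetic bound derived in the paper.

```python
# Truncated Stirling series for log-gamma with a first-omitted-term error estimate.
import math

# Coefficients B_{2n} / (2n (2n-1)) for n = 1..5, from Bernoulli numbers
# 1/6, -1/30, 1/42, -1/30, 5/66.
STIRLING_COEFFS = [1.0 / 12, -1.0 / 360, 1.0 / 1260, -1.0 / 1680, 1.0 / 1188]

def loggamma_stirling(x, terms=4):
    """Truncated Stirling series for ln Gamma(x); intended for fairly large x."""
    series = (x - 0.5) * math.log(x) - x + 0.5 * math.log(2.0 * math.pi)
    for n in range(terms):
        series += STIRLING_COEFFS[n] / x ** (2 * n + 1)
    # Heuristic error estimate: magnitude of the first omitted term.
    error_estimate = abs(STIRLING_COEFFS[terms] / x ** (2 * terms + 1))
    return series, error_estimate

for x in (8.0, 20.0, 100.0):
    approx, err = loggamma_stirling(x)
    print(x, approx, "estimated error:", err, "actual error:", abs(approx - math.lgamma(x)))
```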
The Formulating of Some Probable Concepts and Theories Using the Technique of Neutrosophic and Its Impact on Decision Making Process
Authors: A. A. Salama, Rafif alhbeib
The importance of this research lies in opening new horizons in probability theory, which we call classical neutrosophic probability theory, whose foundations were laid by A. A. Salama and Florentin Smarandache and which results from applying neutrosophic logic to classical probability theory. Salama and Smarandache defined the classical neutrosophic set by three sub-components of the classical universal set (the sample space) and three components of the fuzzy set: truth, falsity and neutrality (indeterminacy). Extending the concepts of Salama and Smarandache, we study the probability of these new sets, derive the properties of this probability, and compare it with classical probability. These ideas can help researchers and be of great benefit to them in the future in finding new algorithms for solving decision-support problems. Research problem: The development of science has confronted probability theory with a large number of new problems that are not explained within the framework of the classical theory. Probability theory has lacked general or specific methods that explain precisely the phenomena occurring at a given time, so the study data had to be extended and described precisely in order to obtain more realistic probabilities and make more correct decisions; here the role of neutrosophic logic appears, providing two kinds of neutrosophic sets that generalize the fuzzy and classical concepts of sets and events, which are the first building block in the study of neutrosophic probabilities. Research objectives: This study aims to: (1) present the theory of neutrosophic sets of the classical and fuzzy types; (2) present and define the neutrosophic probability of neutrosophic sets; (3) build tools for developing neutrosophic probability and study its properties; (4) present probabilistic definitions and theorems according to the new neutrosophic logic; (5) compare the results obtained using neutrosophic probability with classical probability; (6) examine the consequences of using neutrosophic probabilities for the decision-making process.

From Jewish Verbal and General Intelligence to Jewish Achievement: A Doubly Right Wing Issue
Authors: Sascha Vongehr
Comments: 6 Pages. 2 Figures
Ashkenazim Jews (AJ) comprise roughly 30% of Nobel Prize winners, 'elite institute' faculty, etc. Mean AJ intelligence quotients (IQ) fail to explain this, because AJ are only 2.2% of the US population. The growing anti-Semitic right wing supports conspiracy theories with this. However, deviations depend on means. This lifts the right wing of the AJ IQ distribution. Alternative mechanisms such as intellectual AJ culture or in-group collaboration, even if real, must be regarded as included through their IQ-dependence. Antisemitism is thus opposed in its own domain of discourse; it is an anti-intelligence position inconsistent with eugenics.

Missing Value Imputation in Multi-Environment Trials: Reconsidering the Krzanowski Method
Authors: Sergio Arciniegas-Alarcón, Marisol García-Peña, Wojtek Krzanowski
We propose a new methodology for multiple imputation when faced with missing data in multi-environmental trials with genotype-by-environment interaction, based on the imputation system developed by Krzanowski that uses the singular value decomposition (SVD) of a matrix. Several different iterative variants are described; differential weights can also be included in each variant to represent the influence of different components of the SVD in the imputation process. The methods are compared through a simulation study based on three real data matrices that have values deleted randomly at different percentages, using as a measure of overall accuracy a combination of the variance between imputations and their mean square deviations relative to the deleted values. The best results are shown by two of the iterative schemes that use weights belonging to the interval [0.75, 1]. These schemes provide imputations of higher quality when compared with other multiple imputation methods based on the Krzanowski method.

The Reliability of Intrinsic Batted Ball Statistics: Appendix
Given information about batted balls for a set of players, we review techniques for estimating the reliability of a statistic as a function of the sample size. We also review methods for using the estimated reliability to compute the variance of true talent and to generate forecasts.

From Unbiased Numerical Estimates to Unbiased Interval Estimates
Authors: Baokun Li, Gang Xiang, Vladik Kreinovich, Panagios Moscopoulos
One of the main objectives of statistics is to estimate the parameters of a probability distribution based on a sample taken from this distribution.

Analysis of Sunflower Data from a Multi-Attribute Genotype × Environment Trial in Brazil
Authors: Marisol García-Peña, Sergio Arciniegas-Alarcón, Kaye Basford, Carlos Tadeu dos Santos Dias
In multi-environment trials it is common to measure several response variables or attributes to determine the genotypes with the best characteristics. Thus it is important to have techniques to analyse multivariate multi-environment trial data. The main objective is to complement the literature on two multivariate techniques, the mixture maximum likelihood method of clustering and three-mode principal component analysis, used to analyse genotypes, environments and attributes simultaneously. In this way, both global and detailed statements about the performance of the genotypes can be made, highlighting the benefit of using three-way data in a direct way and providing an alternative analysis for researchers. We illustrate using sunflower data with twenty genotypes, eight environments and three attributes. The procedures provide an analytical approach which is relatively easy to apply and interpret in order to describe the patterns of performance and associations in multivariate multi-environment trials.

Considerations Regarding The Scientific Language and "Literary Language"
Authors: Florentin Smarandache
As in nature nothing is absolute, evidently there will not exist a precise border between the scientific language and "the literary" one (the language used in literature): thus there will be zones where these two languages intersect.

Replacements of recent Submissions

[110] viXra:1901.0170 [pdf] replaced on 2019-01-19 04:36:08
Estimating Variances and Covariances in a Non-stationary Multivariate Time Series Using the K-matrix
A second order time series model is described, and generalized to the multivariate situation. The model is highly flexible, and is suitable for non-parametric regression with unequal time steps. The resulting K-matrix is described, leading to its possible factorization and differentiation using general purpose software that was recently developed. This makes it possible to estimate variance matrices in the multivariate model corresponding to the signal and noise components of the model, by restricted maximum likelihood.
A nested iteration algorithm is presented for conducting the maximization, and the methods are illustrated on a 4-variate time series with 89 observations.

Non-Parametric Regression or Smoothing on a Two Dimensional Lattice Using the K-Matrix
A two-dimensional lattice model is described that is able to treat border effects in a coherent way. The model belongs to a class of non-parametric regression models, coming with three smoothness parameters that are estimated by cross validation. The techniques use the K-matrix, which is a typically large and sparse matrix that is also symmetric and indefinite. The K-matrix is subjected to factorization and algorithmic differentiation, using optimized software, thereby permitting estimation of the smoothness parameters and estimation of the two-dimensional surface. The techniques are demonstrated on real data.

Bayesian models have become very popular over the last years in several fields such as signal processing, statistics and machine learning. Bayesian inference needs the approximation of complicated integrals involving the posterior distribution. For this purpose, Monte Carlo (MC) methods, such as Markov Chain Monte Carlo (MCMC) and Importance Sampling (IS) algorithms, are often employed. In this work, we introduce the theory and practice of a Compressed MC (C-MC) scheme, in order to compress the information contained in a cloud of samples. C-MC is particularly useful in a distributed Bayesian inference framework, when cheap and fast communications with a central processor are required. In its basic version, C-MC is strictly related to the stratification technique, a well-known method used for variance reduction purposes. Deterministic C-MC schemes are also presented, which provide very good performance. The compression problem is strictly related to the moment-matching approach applied in different filtering methods, often known as Gaussian quadrature rules or sigma-point methods. The connections to herding algorithms and the quasi-Monte Carlo perspective are also discussed. Numerical results confirm the benefit of the introduced schemes, outperforming the corresponding benchmark methods.

Authors: Jianwen Huang, Xinling Liu, Jianjun Wang, Zhongquan Tan, Jingyao Hou, Hao Pu
In this paper, asymptotic expansions of the distributions and densities of powered extremes for Maxwell samples are considered. The results show that the convergence speeds of normalized partial maxima rely on the powered index. Additionally, compared with previous results, the convergence rate of the distribution of the powered extreme from Maxwell samples is faster than that of its extreme. Finally, numerical analysis is conducted to illustrate our findings.

Epstein-Barr Virus is the Cause of Rheumatoid Arthritis
Comments: 20 Pages. Published by: Romanian journal of rheumatology, 27 (4), (2018), 148-163. https://rjr.com.ro/
Aim: Many studies presented some evidence that EBV might play a role in the pathogenesis of rheumatoid arthritis. Still, there are conflicting reports concerning the existence of EBV in the synovial tissue of patients suffering from rheumatoid arthritis. This systematic review assesses the causal relationship between Epstein-Barr virus (EBV) and rheumatoid arthritis (RA) for gaining a better understanding of the pathogenesis of RA. Methods: A systematic review and meta-analysis is provided, aimed at answering, among other questions, the following question. Is there a cause-effect relationship between Epstein-Barr virus and rheumatoid arthritis? The method of the conditio sine qua non relationship was used to prove the hypothesis: without Epstein-Barr virus, no rheumatoid arthritis. In other words, if rheumatoid arthritis is present, then Epstein-Barr virus is present too. The mathematical formula of the causal relationship k was used to prove the hypothesis whether there is a cause-effect relationship between Epstein-Barr virus and rheumatoid arthritis. Significance was indicated by a p-value of less than 0.05. Results: The studies analysed were able to provide convincing evidence that Epstein-Barr virus is a necessary condition (a conditio sine qua non) of rheumatoid arthritis. Furthermore, the studies analysed provide impressive evidence of a cause-effect relationship between Epstein-Barr virus and rheumatoid arthritis. Conclusion: EBV infection of human synovial tissues is both a conditio sine qua non and a conditio per quam of rheumatoid arthritis. In other words, Epstein-Barr virus is the cause of rheumatoid arthritis.

Comments: 55 Pages. Minor formatting revision.
Autoregressive and Rolling Moving Average Processes using the K-matrix with Discrete but Unequal Time Steps
The autoregressive and rolling moving average time series models are described with discrete time steps that may be unequal. The standard time series is described, as well as a two-dimensional spatial process that is separable into two one-dimensional processes. The K-matrix representations for each of these are presented, which can then be subjected to standard matrix handling techniques.

"PROOF of PRINCIPLE" for Situational Underlying Value (Suv) – a Statistic to Measure Clutch Performance by Individuals in the Team Sports of Major League Baseball, Professional Football (NFL) and Ncaa Men's College Basketball
Comments: 41 Pages. NOTE: Due to size limitations, this has been published in three separate parts, with the abstract and references to all three parts included with each. This is Part 1, directed exclusively to Major League Baseball.
In Situational Underlying Value for Baseball, Football and Basketball – A Statistic (SUV) to Measure Individual Performance in Team Sports, an all-encompassing, overall statistic to measure "clutch" performance by individual players in the team sports of major league baseball, professional football (NFL), and NCAA men's college basketball was developed, called "Situational Underlying Value" (SUV). This work supplements and extends the development and initial demonstrations of the use of the SUV statistic for these three team sports by tracking the performance of three specific teams in these three sports over a significant portion of their most recent seasons: (1) for major league baseball, 54 of the 162 games played by the Seattle Mariners in 2017; (2) for professional football, five of the 16 games played by the Seattle Seahawks in 2017; and (3) for NCAA Men's College Basketball, the five games played by the Loyola of Chicago Ramblers in the 2018 NCAA Division I Men's Basketball Tournament. The SUV statistics for the players who participated in these games are tracked and accumulated for comparison among themselves and, for those who participated in a significant portion of these games, further compared against the traditional statistics for each team over the entire season (or, in the case of the Loyola of Chicago Ramblers, the complete five games of the Basketball Tournament). The goal is to examine the efficacy of this one overarching statistic, the SUV, in representing player performance "in the clutch" vs. more subjective interpretation of the myriad of different "traditional" statistics currently used. Anomalies between the SUV and "traditional" statistics results are examined and explained, to the extent practicable given the scope of the SUV analysis (partial seasons). Whether or not this effort proves successful is left to the reader's conclusion based on the results and comparisons performed.

Comments: 41 Pages. Updates previous version, including re-evaluation of calculational process for baseball errors.

[99] viXra:1805.0433 [pdf] replaced on 2018-05-27 16:50:11
Human Cytomegalovirus: The Cause Of Glioblastoma Multiforme.
Comments: 15 pages. Copyright © 2018 by Ilija Barukčić, Jever, Germany. Published by: Modern Health Science, 2018; Vol. 1, No. 2 ( https://j.ideasspread.org/index.php/mhs/article/view/152 )
Objective: The relationship between Human cytomegalovirus (HCMV) and glioblastoma multiforme (GBM) is investigated. Methods: A systematic review and re-analysis of some impressive key studies was conducted, aimed at answering the following question. Is there a cause-effect relationship between HCMV and GBM? The method of the conditio sine qua non relationship was used to prove the hypothesis whether the presence of HCMV guarantees the presence of GBM. In other words, without HCMV no GBM. The mathematical formula of the causal relationship k was used to prove the hypothesis whether there is a cause-effect relationship between HCMV and GBM. Significance was indicated by a p-value of less than 0.05. Results: The studies analysed were able to provide strict evidence that HCMV is a necessary condition (a conditio sine qua non) of GBM. Furthermore, the cause-effect relationship between HCMV and GBM (k = +1, p-value < 0.0001) was highly significant. Conclusion: Without HCMV no GBM. HCMV is the cause of GBM. Keywords: Cytomegalovirus, Glioblastoma multiforme, Causal relationship

Risk-Deformed Regulation: What Went Wrong with NFPA 805
Comments: 50 Pages. Adds reference to recent publication.
Before proceeding, and lest opponents of nuclear power think this paper lends support to their efforts to shut down the nuclear industry, I must state the following. NFPA 805 will have been successful in that plants transitioning to it will be as safe as or safer than prior to transition. Plants that made no changes will have at least assessed their fire risks and be more knowledgeable of potential weaknesses that could compromise safety. Having found none, they will not have the need for changes. Plants that made effective changes will be safer than before. If you are one who believes the end justifies the means, then this "bottom line" is all that matters and you need read no further. However, if you are one who believes the means are also important, then you are the audience that I address. I am in no way contending that adoption of NFPA 805 compromised safety – I, too, believe that plants will be as safe or safer as a result of the transition. Why I wrote this paper is to express frustration over the "compromises" allowed by the NRC, and the "short-cuts" and "deviations" taken by the nuclear industry, to fulfill the promise of a "sea change" in fire protection at nuclear power plants through risk-informed, performance-based regulation. And, while no diminution of safety will have occurred, it is possible there were missed opportunities to improve safety if changes might have been made, or different changes substituted for those that were made, if not for these "compromises," "short-cuts" and "deviations." I must confess to being guilty of false optimism in December 2006 when I wrote "perhaps the single achievement most responsible for the improved regulatory environment for fire protection at commercial nuclear power plants has been the modification to 10CFR50.48 that allows licensees to 'maintain a fire protection program that complies with NFPA 805 as an alternative to complying with [past, purely deterministic regulations]'" (Gallucci, "Thirty-Three Years of Regulating Fire Protection at Commercial U.S. Nuclear Power Plants: Dousing the Flames of Controversy," Fire Technology, Vol. 45, pp. 355-380, 2009).

Human Cytomegalovirus is the Cause of Atherosclerosis.
Comments: 26 pages. Copyright © 2018 by Ilija Barukcic, Jever, Germany. Published by: ...
BACKGROUND: Cytomegalovirus (CMV) infection has been supposed to play an important role in the pathogenesis of atherosclerosis (AS). Although many authors proved the presence of viral DNA in arterial wall tissue, the role of CMV in the origin and progress of atherosclerosis still remains unclear and no definite consensus has been reached. Whether CMV may be involved in the development of AS has not yet been established. METHODS: The purpose of this study was to investigate whether CMV and AS are causally related. Literature was searched through the electronic database PubMed. Data were accurately assessed and analyzed. RESULTS: The meta-analysis results showed that CMV infection and AS are causally connected. CONCLUSIONS: Cytomegalovirus is the cause of atherosclerosis. Keywords: Cytomegalovirus, atherosclerosis, causal relationship

Mycobacterium Avium Subspecies Paratuberculosis is the Cause of Crohn's Disease
Objective: This systematic review assesses the causal relationship between Mycobacterium avium subspecies paratuberculosis (MAP) and Crohn's disease (CD). Methods: A systematic review and meta-analysis of some impressive PCR-based studies is provided, aimed at answering, among other questions, the following question. Is there a cause-effect relationship between Mycobacterium avium subspecies paratuberculosis and Crohn's disease? The method of the conditio per quam relationship was used to prove the hypothesis whether the presence of Mycobacterium avium subspecies paratuberculosis guarantees the presence of Crohn's disease. In other words, if Crohn's disease is present, then Mycobacterium avium subspecies paratuberculosis is present too. The mathematical formula of the causal relationship k was used to prove the hypothesis whether there is a cause-effect relationship between Mycobacterium avium subspecies paratuberculosis and Crohn's disease. Significance was indicated by a p-value of less than 0.05. Result: The studies analyzed (number of cases and controls N=1076) were able to provide evidence that Mycobacterium avium subspecies paratuberculosis is a necessary condition (a conditio sine qua non) and a sufficient condition of Crohn's disease. Furthermore, the studies analyzed provide impressive evidence of a cause-effect relationship between Mycobacterium avium subspecies paratuberculosis and Crohn's disease. Conclusion: Mycobacterium avium subspecies paratuberculosis is the cause of Crohn's disease.

Rank Regression with Normal Residuals using the Gibbs Sampler
Yu (2000) described the use of the Gibbs sampler to estimate regression parameters where the information available in the form of dependent variables is limited to rank information, and where the linear model applies to the underlying variation beneath the ranks. The approach uses an imputation step, which constitutes nested draws from truncated normal distributions where the underlying variation is simulated as part of a broader Bayesian simulation. The method is general enough to treat rank information that represents ties or partial orderings.

Comments: 46 Pages. (to appear) Digital Signal Processing, 2018.
Human Papilloma Virus - A Cause Of Malignant Melanoma.
Background: The aim of this study is to evaluate the possible relationship between human papillomavirus (HPV) and malignant melanoma. Objectives: In this systematic review we re-analysed the study of Roussaki-Schulze et al. and the study of La Placa et al. so that some new inferences can be drawn. Materials and methods: Roussaki-Schulze et al. obtained data from 28 human melanoma biopsy specimens and from 6 healthy individuals. La Placa et al. investigated 51 primary melanoma (PM) and 20 control skin samples. The HPV DNA was determined by polymerase chain reaction (PCR). Statistical Analysis: The method of the conditio per quam relationship was used to prove the hypothesis whether the presence of human papillomavirus (HPV) guarantees the presence of malignant melanoma. In other words, if human papillomavirus (HPV) is present, then malignant melanoma is present too. The mathematical formula of the causal relationship k was used to prove the hypothesis whether there is a cause-effect relationship between human papillomavirus (HPV) and malignant melanoma. Significance was indicated by a p-value of less than 0.05. Results: Based on the data as published by Roussaki-Schulze et al. and the data of La Placa et al., the presence of human papillomavirus (HPV) guarantees the presence of malignant melanoma. In other words, human papillomavirus (HPV) is a conditio per quam of malignant melanoma. In contrast to the study of La Placa et al. and contrary to expectation, the study of Roussaki-Schulze et al., which is based on a very small sample size, failed to provide evidence of a significant cause-effect relationship between human papillomavirus (HPV) and malignant melanoma. Conclusions: Human papillomavirus (HPV) is a necessary condition of malignant melanoma. Human papillomavirus (HPV) is a cause of malignant melanoma.

Human Papilloma Virus a Cause of Malignant Melanoma.
Background: The aim of this study is to evaluate the possible relationship between human papillomavirus (HPV) and malignant melanoma. Objectives: In this systematic review we re-analysed the study of Roussaki-Schulze et al. and La Placa et al. so that some new inferences can be drawn. Materials and methods: Roussaki-Schulze et al. obtained data from 28 human melanoma biopsy specimens and from 6 healthy individuals. La Placa et al. investigated 51 primary melanoma (PM) and 20 control skin samples. The HPV DNA was determined by polymerase chain reaction (PCR). Statistical Analysis: The method of the conditio per quam relationship was used to prove the hypothesis whether the presence of human papillomavirus (HPV) guarantees the presence of malignant melanoma.
In other words, if human papillomavirus (HPV) is present, then malignant melanoma must be present too. The mathematical formula of the causal relationship k was used to test the hypothesis of whether there is a cause-effect relationship between human papillomavirus (HPV) and malignant melanoma. Significance was indicated by a p-value of less than 0.05. Results: Based on the data as published by Roussaki-Schulze et al. and La Placa et al., the presence of human papillomavirus (HPV) guarantees the presence of malignant melanoma. In other words, human papillomavirus (HPV) is a conditio per quam of malignant melanoma. In contrast to the study of La Placa et al. and contrary to expectation, the study of Roussaki-Schulze et al., which is based on a very small sample size, failed to provide evidence of a significant cause-effect relationship between human papillomavirus (HPV) and malignant melanoma. Conclusions: Human papillomavirus (HPV) is a necessary condition of malignant melanoma. Human papillomavirus (HPV) is a cause of malignant melanoma.

The Jensen-Shannon divergence (JSD) quantifies the "information distance" between a pair of probability distributions. (A more generalized version, which is beyond the scope of this paper, is given in [1]. It extends this divergence to arbitrarily many such distributions. Related divergences are presented in [2], which is an excellent summary of existing work.) A couple of novel applications for this divergence are presented herein, both of which involve sets of whole numbers constrained by some nonzero maximum value. (We're primarily concerned with discrete applications of the JSD, although it's defined for analog variables.) The first of these, which we can call the "Jensen-Shannon divergence transform" (JSDT), involves a sliding "sweep window" whose JSD with respect to some fixed "needle" is evaluated at each step as said window moves from left to right across a superset called a "haystack". The second such application, which we can call the "Jensen-Shannon exodivergence transform" (JSET), measures the JSD between a sweep window and an "exosweep", that is, the haystack minus said window, at all possible locations of the latter. The JSET turns out to be exceptionally good at detecting anomalous contiguous subsets of a larger set of whole numbers. We then investigate and attempt to improve upon the shortcomings of the JSD and the related Kullback-Leibler divergence (KLD).

Block-Sparse Compressed Sensing via ℓ2/ℓ1 Minimization Based on Redundant Tight Frames (基于冗余紧框架的l2l1极小化块稀疏压缩感知)
Authors: 张枫; 王建军
Compressed sensing is one of the research hotspots of (approximately) sparse signal processing: it goes beyond the Nyquist/Shannon sampling rate and achieves efficient acquisition and robust reconstruction of signals. This paper uses the ℓ2/ℓ1 minimization method and Block D-RIP theory to study block-sparse signals under redundant tight frames. The results show that when the Block D-RIP constant $\delta_{2k|\mathcal{I}}$ satisfies $0 < \delta_{2k|\mathcal{I}} < 0.2$, the ℓ2/ℓ1 minimization method can robustly reconstruct the original signal, while improving the existing reconstruction conditions and error bounds. Based on a discrete Fourier transform (DFT) dictionary, we performed a series of simulation experiments that fully confirm the theoretical results. This work obtains a sharp sufficient condition on the block restricted isometry property for the recovery of the sparse signal. Under this assumption, the sparse signal with block structure can be stably recovered in the noisy case and the block-sparse signal can be assuredly reconstructed in the noise-free case. Besides, in order to show that the condition is sharp, we offer an example. As a byproduct, for $t=1$, the result enhances the bound on the block restricted isometry constant $\delta_{s|\mathcal{I}}$ in Lin and Li (Acta Math. Sin. Engl. Ser. 29(7): 1401-1412, 2013).
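As a rough illustration of the sliding-window divergence described in the JSDT abstract above, the following Python sketch computes the Jensen-Shannon divergence between the histogram of a fixed "needle" and the histogram of each window position in a "haystack" of whole numbers. It is a minimal sketch assuming NumPy; the function names, smoothing constant and toy data are illustrative only and do not come from the cited paper.

import numpy as np

def jsd(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two discrete distributions (natural log).
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def jsd_transform(haystack, needle, max_value):
    # Slide a window of len(needle) across 'haystack' (whole numbers in
    # [0, max_value]) and return the JSD between window and needle histograms.
    bins = np.arange(max_value + 2)
    needle_hist = np.histogram(needle, bins=bins)[0]
    w = len(needle)
    out = []
    for i in range(len(haystack) - w + 1):
        window_hist = np.histogram(haystack[i:i + w], bins=bins)[0]
        out.append(jsd(window_hist, needle_hist))
    return np.array(out)

# Example: an anomalous run of large values shows up as a dip in the transform.
rng = np.random.default_rng(0)
haystack = rng.integers(0, 8, size=200)
haystack[90:110] = rng.integers(6, 8, size=20)
needle = rng.integers(6, 8, size=20)
print(jsd_transform(haystack, needle, max_value=7).argmin())  # near index 90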
Since the publication of NUREG/CR-6850 (EPRI 1011989), EPRI/NRC-RES Fire PRA Methodology for Nuclear Power Facilities, in 2005, phenomenological modeling of fire growth to peak heat release rate (HRR) for electrical enclosure fires in nuclear power plant probabilistic risk assessment (PRA) has typically assumed an average 12-minute rise time. [1] One previous analysis using the data from NUREG/CR-6850 from which this estimate was derived (Gallucci, "Statistical Characterization of Cable Electrical Failure Temperatures Due to Fire, with Simulation of Failure Probabilities") indicated that the time to peak HRR could be represented by a gamma distribution with alpha (shape) and beta (scale) parameters of 8.66 and 1.31, respectively. [2] Completion of the test program by the US Nuclear Regulatory Commission (USNRC) for electrical enclosure heat release rates, documented in NUREG/CR-7197, Heat Release Rates of Electrical Enclosure Fires (HELEN-FIRE), in 2016, has provided substantially more data from which to characterize this growth time to peak HRR. [3] From these, the author develops probabilistic distributions that enhance the original NUREG/CR-6850 results for both qualified (Q) and unqualified (UQ) cables. The mean times to peak HRR are 13.3 and 10.1 min for Q and UQ cables, respectively, with a mean of 12.4 min when all data are combined, confirming that the original NUREG/CR-6850 estimate of 12 min was quite reasonable. Via statistical-probabilistic analysis, the author shows that the time to peak HRR for Q and UQ cables can again be well represented by gamma distributions, with alpha and beta parameters of 1.88 and 7.07, and 3.86 and 2.62, respectively. Working with the gamma distribution for all cables given the two cable types, the author performs simulations demonstrating that manual non-suppression probabilities, on average, are 30% and 10% higher than the use of a 12-min point estimate when the fire is assumed to be detected at its start and halfway between its start and the time it reaches its peak, respectively. This suggests that adopting a probabilistic approach enables more realistic modeling of this particular fire phenomenon (growth time).

Comments: IET Electronics Letters, Volume 53, Issue 16, Pages: 1115-1117, 2017
Monte Carlo (MC) methods have become very popular in signal processing during the past decades. The adaptive rejection sampling (ARS) algorithms are a well-known MC technique which efficiently draws independent samples from univariate target densities. The ARS schemes yield a sequence of proposal functions that converge toward the target, so that the probability of accepting a sample approaches one. However, sampling from the proposal pdf becomes more computationally demanding each time it is updated. We propose the Parsimonious Adaptive Rejection Sampling (PARS) method, where an efficient trade-off between acceptance rate and proposal complexity is obtained. Thus, the resulting algorithm is faster than the standard ARS approach.

Distributions play a very important role in many applications. Inspired by the newly developed warping transformation of distributions, an indirect nonparametric distribution-to-distribution regression method is proposed in this article for distribution prediction. Additionally, a hybrid approach that fuses the predictions obtained by the proposed method and the conventional method, respectively, is further developed for reducing risk when the predictor is contaminated.
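To make the probabilistic treatment of the fire growth time in the NUREG/CR-7197 abstract above concrete, the sketch below samples times to peak HRR from the gamma distributions quoted there (shape/scale pairs 1.88/7.07 and 3.86/2.62) and compares an exponential manual non-suppression probability averaged over that distribution with the value at a fixed 12-minute point estimate. This is a minimal sketch: the exponential suppression model, the suppression rate and the detection-time assumption used here are illustrative choices, not values taken from the cited report.

import numpy as np

rng = np.random.default_rng(42)

# Gamma (shape, scale) parameters for the time to peak HRR [min], as quoted above.
params = {"qualified": (1.88, 7.07), "unqualified": (3.86, 2.62)}

LAMBDA = 0.1   # assumed manual suppression rate [1/min] -- illustrative only
N = 100_000    # Monte Carlo samples

def p_nonsup(t):
    # Exponential non-suppression probability for an available time t [min].
    return np.exp(-LAMBDA * t)

for label, (shape, scale) in params.items():
    t_peak = rng.gamma(shape, scale, size=N)   # sampled growth times
    mc_mean = p_nonsup(t_peak).mean()          # averaged over the distribution
    point = p_nonsup(12.0)                     # 12-min point estimate
    print(f"{label}: mean t_peak = {t_peak.mean():.1f} min, "
          f"P_ns (MC) = {mc_mean:.3f}, P_ns (12 min) = {point:.3f}")

The exact 30% and 10% figures quoted in the abstract additionally depend on when the fire is assumed to be detected, which this sketch does not model.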
Bayesian methods and their implementations by means of sophisticated Monte Carlo techniques have become very popular in signal processing over the last years. Importance Sampling (IS) is a well-known Monte Carlo technique that approximates integrals involving a posterior distribution by means of weighted samples. In this work, we study the assignation of a single weighted sample which compresses the information contained in a population of weighted samples. Part of the theory that we present as Group Importance Sampling (GIS) has been employed implicitly in different works in the literature. The provided analysis yields several theoretical and practical consequences. For instance, we discuss the application of GIS into the Sequential Importance Resampling framework and show that Independent Multiple Try Metropolis schemes can be interpreted as a standard Metropolis-Hastings algorithm, following the GIS approach. We also introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS. The first one, named Group Metropolis Sampling method, produces a Markov chain of sets of weighted samples. All these sets are then employed for obtaining a unique global estimator. The second one is the Distributed Particle Metropolis-Hastings technique, where different parallel particle filters are jointly used to drive an MCMC algorithm. Different resampled trajectories are compared and then tested with a proper acceptance probability. The novel schemes are tested in different numerical experiments such as learning the hyperparameters of Gaussian Processes, two localization problems in a wireless sensor network (with synthetic and real data) and the tracking of vegetation parameters given satellite observations, where they are compared with several benchmark Monte Carlo techniques. Three illustrative Matlab demos are also provided.

Importance Sampling (IS) is a well-known Monte Carlo technique that approximates integrals involving a posterior distribution by means of weighted samples. In this work, we study the assignation of a single weighted sample which compresses the information contained in a population of weighted samples. Part of the theory that we present as Group Importance Sampling (GIS) has been employed implicitly in different works in the literature. The provided analysis yields several theoretical and practical consequences. For instance, we discuss the application of GIS into the Sequential Importance Resampling framework and show that Independent Multiple Try Metropolis schemes can be interpreted as a standard Metropolis-Hastings algorithm, following the GIS approach. We also introduce two novel Markov Chain Monte Carlo techniques based on GIS. The first one, named Group Metropolis Sampling method, produces a Markov chain of sets of weighted samples. All these sets are then employed for obtaining a unique global estimator. The second one is the Distributed Particle Metropolis-Hastings technique, where different parallel particle filters are jointly used to drive an MCMC algorithm. Different resampled trajectories are compared and then tested with a proper acceptance probability. The novel schemes are tested in different numerical experiments such as learning the hyperparameters of Gaussian Processes, the localization problem in a wireless sensor network and the tracking of vegetation parameters given satellite observations, where they are compared with several benchmark Monte Carlo techniques. Three illustrative Matlab demos are also provided.
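The Group Importance Sampling abstracts above build on populations of weighted samples. As background, the following minimal sketch shows the standard self-normalized importance sampling estimator that such schemes compress and combine; it is a generic textbook construction, not the GIS algorithm itself, and the toy target, proposal and variable names are illustrative.

import numpy as np

rng = np.random.default_rng(1)

# Unnormalized log-target: a standard normal restricted to x > 1 (toy example).
log_target = lambda x: np.where(x > 1.0, -0.5 * x**2, -np.inf)

# Proposal: N(0, 2^2); draw samples and compute importance weights.
N = 50_000
x = rng.normal(0.0, 2.0, size=N)
log_proposal = -0.5 * (x / 2.0) ** 2 - np.log(2.0)
log_w = log_target(x) - log_proposal

# Normalize the weights in a numerically stable way.
log_w -= log_w.max()
w = np.exp(log_w)
w /= w.sum()

# Self-normalized IS estimate of E[x] under the target, and the effective sample size.
mean_est = np.sum(w * x)
ess = 1.0 / np.sum(w**2)
print(f"estimated mean = {mean_est:.3f}, effective sample size = {ess:.0f}")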
Importance Sampling (IS) is a well-known Monte Carlo technique that approximates integrals involving a posterior distribution by means of weighted samples. In this work, we study the assignation of a single weighted sample which compresses the information contained in a population of weighted samples. Part of the theory that we present as Group Importance Sampling (GIS) has already been employed implicitly in different works in the literature. The provided analysis yields several theoretical and practical consequences. For instance, we discuss the application of GIS into the Sequential Importance Resampling (SIR) framework and show that Independent Multiple Try Metropolis (I-MTM) schemes can be interpreted as a standard Metropolis-Hastings algorithm, following the GIS approach. We also introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS. The first one, named the Group Metropolis Sampling (GMS) method, produces a Markov chain of sets of weighted samples. All these sets are then employed for obtaining a unique global estimator. The second one is the Distributed Particle Metropolis-Hastings (DPMH) technique, where different parallel particle filters are jointly used to drive an MCMC algorithm. Different resampled trajectories are compared and then tested with a proper acceptance probability. The novel schemes are tested in different numerical experiments such as learning the hyperparameters of Gaussian Processes (GP), the localization problem in a sensor network and the tracking of the Leaf Area Index (LAI), where they are compared with several benchmark Monte Carlo techniques. Three descriptive Matlab demos are also provided.

The "SUV" Statistic for Baseball, Football and Basketball – Situational Underlying Value
Authors: Raymond H. V. Gallucci, P.E.
Comments: 90 Pages. Minor revision for typographical errors
Situational Underlying Value (SUV) - A Single Statistic to Measure 'Clutchiness' in Team Sports
Comments: 90 Pages. Revised to update SUV allocation for baseball errors.
Comments: 90 Pages. Includes inadvertent omissions of SUV football tables for second and third downs with more than 10 yards to go.

This paper proposes an alternative approach to solving the selection problem, comparable to the best-known algorithm, Quickselect. In computer science, a selection algorithm is an algorithm for finding the Kth smallest number in an unordered list or array. Selection is a subproblem of more complex problems like the nearest neighbor and shortest path problems. Previously known approaches work on the same principle of optimizing a sorting algorithm and returning the Kth element. This algorithm uses a window method to prune and compare numbers to find the Kth smallest element. The average time complexity of the algorithm is linear, with a worst case of O(n^2).

The Recycling Gibbs Sampler for Efficient Learning
Comments: 30 Pages. Published in Digital Signal Processing, Volume 74, 2018
Monte Carlo methods are essential tools for Bayesian inference. Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning, and statistics, employed to draw samples from complicated high-dimensional posterior distributions. The key point for the successful application of the Gibbs sampler is the ability to draw samples efficiently from the full-conditional probability density functions.
Since in the general case this is not possible, in order to speed up the convergence of the chain, it is required to generate auxiliary samples whose information is eventually disregarded. In this work, we show that these auxiliary samples can be recycled within the Gibbs estimators, improving their efficiency with no extra cost. This novel scheme arises naturally after pointing out the relationship between the standard Gibbs sampler and the chain rule used for sampling purposes. Numerical simulations involving simple and real inference problems confirm the excellent performance of the proposed scheme in terms of accuracy and computational efficiency. In particular, we give empirical evidence of performance in a toy example, in the inference of Gaussian process hyperparameters, and in learning dependence graphs through regression.
Comments: 30 Pages. Published in Digital Signal Processing, 2017
Comments: 26 Pages. The MATLAB code of the numerical examples is provided at http://isp.uv.es/code/RG.zip.
Comments: 8 pages, 2 figures, 26 references

Ashkenazim Jews (AJ) comprise roughly 30% of Nobel Prize winners, 'elite institute' faculty, etc. Mean intelligence quotients (IQ) fail to explain this, because AJ are only 2.2% of the US population; the maximum possible would be 13% of high achievement, and would require IQs above 165. The growing anti-Semitic right wing supports conspiracy theories with this. However, standard deviations (SD) depend on means. An AJ-SD of 17 is still lower than the coefficient of variation suggests, but lifts the right wing of the AJ-IQ distribution sufficiently to account for high achievement. We do not assume threshold IQs or smart fractions. Alternative mechanisms such as intellectual AJ culture or ethnocentrism must be regarded as included through their IQ-dependence. Antisemitism is thus opposed in its own domain of discourse; it is an anti-intelligence position inconsistent with eugenics. We discuss the relevance for 'social sciences' as sciences and that human intelligence co-evolved for (self-)deception.
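The statistical point in the last abstract is how strongly the upper tail of a normal distribution responds to its standard deviation. The short sketch below computes the fraction above a fixed threshold for two SD values of a generic normal distribution; the mean, threshold and SD values are freely chosen for illustration and are not taken from the paper.

from scipy.stats import norm

threshold = 165.0   # illustrative high-score threshold
mean = 110.0        # assumed group mean, for illustration only

for sd in (15.0, 17.0):
    frac = norm.sf(threshold, loc=mean, scale=sd)   # upper-tail probability
    print(f"SD = {sd}: fraction above {threshold:.0f} = {frac:.2e}")

Even this small change in SD shifts the upper-tail fraction by a factor of several, which is the sensitivity the abstract relies on.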
Expand each of the following: (1) \(\cos (\mathrm{X}-\mathrm{Y})\) (2) \(\sin \left(\mathrm{A}-20^{\circ}\right)\) (3) \(\sin \left(2 \alpha+45^{\circ}\right)\)
(1) \(\cos (X-Y)=\cos X \cdot \cos Y+\sin X \cdot \sin Y\)
(2) \(\sin \left(\mathrm{A}-20^{\circ}\right)=\sin \mathrm{A} \cdot \cos 20^{\circ}-\cos \mathrm{A} \cdot \sin 20^{\circ}\)
(3)
\[
\begin{aligned}
\sin \left(2 \alpha+45^{\circ}\right) &= \sin 2 \alpha \cdot \cos 45^{\circ}+\cos 2 \alpha \cdot \sin 45^{\circ} \\
&= \sin 2 \alpha \cdot\left(\frac{\sqrt{2}}{2}\right)+\cos 2 \alpha \cdot\left(\frac{\sqrt{2}}{2}\right) \\
&= \left(\frac{\sqrt{2}}{2}\right)(\sin 2 \alpha+\cos 2 \alpha)=\frac{\sqrt{2}(\sin 2 \alpha+\cos 2 \alpha)}{2}
\end{aligned}
\]
Related questions:
Expand and then calculate each of the following: \(\sum_{p=2}^7 p^2\)
Expand and then calculate each of the following: \(\sum_{k=3}^{10} 4\)
Expand and then calculate each of the following: \(\sum_{i=0}^5 2^{i-1}\)
Expand and then calculate each of the following: \(\sum_{k=1}^6 k\)
Expand and then calculate each of the following: \(\sum_{r=3}^8\left(\frac{r(r+1)}{2}\right)\)
Expand and then calculate each of the following: \(\sum_{r=1}^5(3 r-5)\)
Expand the sequence and find the value of the series: $\sum_{n=1}^{6}2^n$
How do I expand the series $\sum_{k=1}^{6}0^k$?
Expand and calculate \(\sum_{m=3}^{12} 6\)
Expand and Simplify: \((3 x-2)(x-5)^{2}\)
Calculate \(\sum_{i=0}^{10} 3^{4-i}\)
Calculate \(\sum_{m=2}^{100}(7-2 m)\)
Expand \(\log_{3}5x\)
Expand $(3-2x)^{-3}$ up to and including the term in $x^3$, simplifying the coefficients.
Using the laws of logarithms, expand \(\log _4\left(\frac{5}{x}\right)\)
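As a worked illustration of the "expand and then calculate" questions listed above, one of the sums can be written out term by term:
\[
\sum_{p=2}^{7} p^{2} = 2^{2}+3^{2}+4^{2}+5^{2}+6^{2}+7^{2} = 4+9+16+25+36+49 = 139 .
\]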
For each poster contribution there will be one poster wall available (A0 size). Posters can be put up for the full duration of the workshop. Stable single light bullets in cold Rydberg gases Bai, Zhengyang Realizing single light bullets and vortices that are stable in high dimensions is a long-standing goal in the study of nonlinear optical physics. On the other hand, the storage and retrieval of such stable high dimensional optical pulses may offer a variety of applications. Here we present a scheme to generate such optical pulses in a cold Rydberg atomic gas. By virtue of electromagnetically induced transparency, strong, long-range atom-atom interaction in Rydberg states is mapped to light fields, resulting in a giant, fast-responding nonlocal Kerr nonlinearity and the formation of light bullets and vortices carrying orbital angular momenta, which have extremely low generation power, very slow propagation velocity, and can stably propagate, with the stability provided by the combination of local and the nonlocal Kerr nonlinearities. We demonstrate that the light bullets and vortices obtained can be stored and retrieved in the system with high efficiency and fidelity. Our study provides a new route for manipulating high-dimensional nonlinear optical processes via the controlled optical nonlinearities in cold Rydberg gases. Electron dynamics in twisted light modes of relativistic intensity In the past two decades, twisted light beams have been extensively studied according to their unique properties. A Laguerre-Gaussian (LG) laser beam, for instance, describes such a twisted mode that can be obtained as a higher-order solution to the paraxial wave equation. In contrast to common laser beams, these twisted beams are characterized by their well-defined orbital angular momentum (OAM). As a result, they can enable completely new insights into the dynamics of a physical system, thus leading to a wide range of applications in quantum information, spectroscopy, etc. It is therefore important to understand in detail how particles behave in such field configurations. The present work addresses this question by studying the interaction of electrons with different circularly-polarized LG modes in the relativistic intensity regime. Three-dimensional particle-in-cell simulations indicate that the electron dynamics are not only very sensitive to the LG mode parameters, but also to the helicity s of the laser radiation. In particular, the present contribution will report on the generation of twisted electron beams. It turns out that the number of twisted electron beams cannot adopt an arbitrary value. Instead, the number is fully determined by the angular momentum of the LG mode m and the helicity of the laser s. Beyond this quantization, these beams are additionally characterized by durations of the order of hundreds of attoseconds [1]. [1] C. Baumann and A. Pukhov, Phys. Plasmas 25, 083114 (2018) "Coherent transitions" and Rabi-type oscillations between modes of classical light in fiber-like structures Bogatskaya, Anna We apply approaches and concepts from quantum mechanics to the classical problem of light beam propagation in the elements of integrated optical circuits. We consider here fiber-like structures in opaque media as potential wells between complex-shaped barriers. 
This allowed us to construct an analogy between coherent oscillations in a quantum system and the redistribution in space of the field strength of a classical wave in the framework of the slow-varying amplitude approximation for the wave equation. We have demonstrated capability to control the mode composition of a classical light in a fiber-like structure with heterogeneous boundaries. The proposed description for the field spatial redistribution was based on the analogy with Rabi-type oscillations in quantum mechanics. The fundamental analogy between optics and quantum mechanics was recently used by us in order to analyze an important applied problem of radio communication blackout during the spacecraft reentry [1]. [1] Anna Bogatskaya, Nikolay Klenov, Maxim Tereshonok, Sergey Adjemov, and Alexander Popov, J. Phys. D: Appl. Phys. 51, 185602 (2018) Differential cross sections for single ionization of Li by protons and O8+ ions Bondarev, Andrey COLTRIMS (cold target recoil-ion momentum spectroscopy) or reaction microscope technique [1,2] provides a way to measure fully differential cross sections for ionization in ion-atom collisions. Such cross sections contain complete information on ionization dynamics and serve as unique tests of theory. Successful implementation of the MOTReMi (magneto-optical trap reaction microscope) technique [3] extends the range of possible targets for kinematically complete experiments from atomic helium and molecular hydrogen to lithium atom. Moreover, in MOTReMi a target can be prepared in an excited state and at lower temperature. The latter results in better momentum resolution of measurements. Up to now, differential cross sections for single ionization of lithium atom in collisions with various projectiles were measured in a number of experiments [4,5,6]. Recently a semiclassical non-perturbative method based on the Dirac equation for calculation of differential cross sections for ionization in ion-atom collisions was developed. The method was described in detail in Ref. [7] and applied to antiproton-impact ionization of atomic hydrogen. In this report, we present results of differential ionization cross section calculation in collisions of protons and bare oxygen nuclei with lithium atom. The obtained results are compared with experimental data and available theoretical predictions. __________ [1] R. Dörner et al., Phys. Rep. 330, 95 (2000). [2] J. Ullrich et al., Rep. Prog. Phys. 66, 1463 (2003). [3] R. Hubele et al., Rev. Sci. Instrum. 86, 033105 (2015). [4] A. C. LaForge et al., J. Phys. B 46, 031001 (2013). [5] R. Hubele et al., Phys. Rev. Lett. 110, 133201 (2013). [6] E. Ghanbari-Adivi et al., J. Phys. B 50, 215202 (2017). [7] A. I. Bondarev et al., Phys. Rev. A 95, 052709 (2017). Comparative study of single and double ionization dynamics from single and double excitations in helium Borisova, Gergana D. The helium atom has established itself as an exemplary system to study two-electron effects both theoretically and experimentally. Here, we present theoretical results from our study of the two-electron dynamics in a helium atom interacting with strong laser fields. We employ a numerical quantum-mechanical model based on solving the one-dimensional time-dependent Schrödinger equation for two electrons, where the ionization dynamics of the excited states are the main focus of this work. 
The theoretical method ensures direct access to the time-dependent population of the relevant atomic states during and right after the interaction with the near-infrared (NIR) laser pulse. A partition technique applied to the wave function grid is used for the quantitative study of strong-field ionization dynamics of the initially prepared bound excited states. We find that both the singly excited states (SES) and the doubly excited states (DES) predominantly ionize in the process of single ionization. In the DES however, the second electron can be driven out of its bound state orbital, leading to an enhanced double ionization yield. High-order above treshold ionization beyond the electric dipole approximation Brennecke, Simon Laser-induced electron diffraction (LIED) is a tool for imaging atomic and molecular structural changes with subangstrom spatial and few-femtosecond temporal resolutions. This technique is based on recolliding electrons in linearly polarized laser pulses leading to high-order above-threshold ionization (HATI) and its interpretation is usually carried out in electric-dipole approximation. Here, we present a detailed analysis of effects beyond the electric-dipole approximation in the HATI region. To this end, photoelectron momentum distributions from strong-field ionization are calculated by numerical solution of the one-electron time-dependent Schrödinger equation for model atoms and small molecules including leading order non-dipole effects. For high-energy electrons from rescattering we observe the following two major deviations from the dipole approximation: (i) The minima and maxima resulting from interference between short and long rescattering trajectories are shifted along the field propagation direction; (ii) an asymmetry in the signal strength of electrons emitted in the forward/backward directions appears. Taken together, the two non-dipole effects give rise to a considerable average forward momentum component of the order of 0.1 a.u. for realistic laser parameters. We develop a non-dipole classical three-step model including the beyond-dipole Lorentz force and incorporating accurate quantum-mechanical cross sections to interpret the shift of the boundary and also the asymmetry in the outer part of the distribution. Additionally, a non-dipole quantum-orbit model provides the foundation for a quantitative understanding of the complete HATI region at least for short-range potentials and offers a transparent interpretation of the underlying physics. Different time scales in plasmonically enhanced high-order harmonic generation Chomet, Heloise We investigate high-order harmonic generation in inhomogeneous media for reduced dimensionality models. We perform a phase-space analysis, in which we identify specific features caused by the field inhomogeneity. We compute high-order harmonic spectra using the numerical solution of the time-dependent Schrodinger equation, and provide an interpretation in terms of classical electron trajectories. We show that the dynamics of the system can be described by the interplay of high-frequency and slow-frequency oscillations, which are given by Mathieu's equations. The latter oscillations lead to an increase in the cutoff energy, and, for small values of the inhomogeneity parameter, take place over many driving-field cycles. In this case, the two processes can be decoupled and the oscillations can be described analytically. 
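Several of the contributions above rely on numerical solutions of the time-dependent Schrödinger equation for an atom in a strong laser field. As a point of reference, the following is a minimal one-dimensional split-operator sketch with a soft-core potential in the dipole approximation (atomic units). It is a generic illustration, not the codes used in these works; the grid, pulse and soft-core parameters are arbitrary, and in practice the true ground state would be prepared, e.g., by imaginary-time relaxation.

import numpy as np

# Grid and soft-core Coulomb potential (atomic units).
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
V0 = -1.0 / np.sqrt(x**2 + 2.0)

# Initial state: rough Gaussian guess for the ground state.
psi = np.exp(-x**2 / 2.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Few-cycle linearly polarized pulse in the dipole approximation.
E0, omega, cycles, dt = 0.05, 0.057, 4, 0.05
T = 2.0 * np.pi * cycles / omega
def E_field(t):
    return E0 * np.sin(np.pi * t / T)**2 * np.cos(omega * t) if 0.0 <= t <= T else 0.0

# Split-operator propagation: exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2).
expK = np.exp(-0.5j * k**2 * dt)
t = 0.0
while t < T:
    V = V0 + x * E_field(t + 0.5 * dt)
    expV = np.exp(-0.5j * V * dt)
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))
    t += dt

print("norm after propagation:", np.sum(np.abs(psi)**2) * dx)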
Influence of topological edge states on the harmonic generation in linear chains Drüeke, Helena The two topological phases of a linear chain of ions differ in their harmonic yields by several orders of magnitude due to the difference in the destructive interference of all valence band and edge state electrons. A program to solve the time-dependent Kohn-Sham equations has been developed. It allows for the simulation of a linear chain in an intense laser field in an all-electron, self-consistent way. The robustness of the differing harmonic yield was investigated with respect to finite-size and disorder effects. A remarkable robustness was observed, which might allow applications that steer topological electronics by all-optical means or control strong-field-based light sources electronically.

High Harmonics generation from excited states of atoms Efimov, Dmitry We study high-harmonic spectra generated by atoms during their interaction with a strong, short laser field. We show that initial excitation of the atom substantially modifies the shape of the HHG spectrum.

Classical features in high-order above-threshold ionization of molecular hydrogen cation: ab initio vs classical trajectory method Fetić, Benjamin B. Feti\'{c}$^1$ and D. B. Milo\v{s}evi\'{c}$^{1,2}$ 1Faculty of Science, University of Sarajevo, Zmaja od Bosne 35, 71000 Sarajevo, Bosnia and Herzegovina 2Academy of Sciences and Arts of Bosnia and Herzegovina, Bistrik 7, 71000 Sarajevo, Bosnia and Herzegovina Corresponding author: [email protected] In the last two decades the classical trajectory approach has played a crucial role in understanding the basic physical mechanism of high-order above-threshold ionization (HATI) (for a historical review and more see [1]). This approach has come to be known as Simple man's theory or the three-step model [2] and is well understood for atomic targets. According to this model for atomic targets, an atom is ionized and, at some instant of time, the liberated electron is driven back to the parent ion at which it elastically scatters in an arbitrary direction. In the case of diatomic molecular targets, an electron can be liberated from any of the atomic centers, so that it can be "born" in the vicinity of one or the other atomic center. Therefore, we expect some novel features in the molecular HATI spectra in comparison to the atomic HATI. In this work we will explore some of these features obtained using numerical solution of the 3D time-dependent Schrödinger equation [3] and solutions of Newton's equation of motion for an electron in a linearly polarized laser field. References: [1] W. Becker, S. P. Goreslavski, D. B. Milošević and G. G. Paulus, J. Phys. B: At. Mol. Opt. Phys. 51, 162002 (2018). [2] P. B. Corkum, Phys. Rev. Lett. 71, 1994 (1993), for HHG, and: G. G. Paulus, W. Becker, W. Nicklich, and H. Walther, J. Phys. B 27, L703 (1994), for HATI. [3] B. Fetić and D. B. Milošević, Phys. Rev. E 95, 053309 (2017).

Coupled Coherent States for Indistinguishable Bosons Green, James The coupled coherent states (CCS) trajectory guided Gaussian method of multidimensional quantum dynamics has been well established as a theoretical technique used to treat systems of distinguishable particles. In the present work we look at extending the CCS formalism to treat systems of indistinguishable bosons (CCSB) via second quantisation. The relevant working equations of CCSB are presented, alongside application to two model problems.
The first is a system-bath problem consisting of a tunnelling mode coupled to a harmonic bath, previously studied by CCS and other methods in a distinguishable representation in 20 dimensions. The harmonic bath is composed of identical oscillators, and may be second quantised for use with CCSB. The cross-correlation function for the dynamics of the system and Fourier transform spectrum compare extremely well with a benchmark calculation. The second model problem involves 100 bosons in a shifted harmonic trap. Oscillations in the 1-body density are calculated and shown to compare favourably to a multiconfigurational time-dependent Hartree for bosons calculation.

Complex Scaling in Non-Hermitian 2-level Hamiltonian Hofmann, Cornelia We numerically model a two-level system of helium with the ground state and an autoionizing excited state. It has been suggested to model this two-level Hamiltonian in a complex scaled form [1], where both the lifetime of the excited state and the action of a coupling laser pulse are treated in complex values. This system exhibits an exceptional point, such that for a specific field amplitude and carrier frequency the two states coalesce in the Floquet picture. We numerically solve the corresponding time-dependent Schrödinger equation and study the dynamic evolution of the wave packet populations in both states. The results indicate that the dynamics of the system are qualitatively the same, independent of whether the complex scaling of the laser coupling is applied or not. [1] Kaprálová-Žďánská, P. R., & Moiseyev, N. (2014). Helium in chirped laser fields as a time-asymmetric atomic switch. The Journal of Chemical Physics, 141(1), 014307. https://doi.org/10.1063/1.4885136

Scaling relations of the time-dependent Dirac equation describing multiphoton ionization Ivanova, Irina Approximate scaling laws with respect to the nuclear charge are introduced for the time-dependent Dirac equation describing hydrogen-like ions subject to laser fields within the dipole approximation. In particular, scaling relations with respect to the laser wavelengths and peak intensities are discussed. The validity of the scaling relations is investigated for two-, three-, four-, and five-photon ionization of hydrogen-like ions with the nuclear charges ranging from Z=1 to 92 by solving the corresponding time-dependent Dirac equations adopting the properly scaled laser parameters. Good agreement is found and thus the approximate scaling relations are shown to capture the dominant effect of the response of highly-charged ions to intense laser fields compared to that of atomic hydrogen. On the other hand, the remaining differences are shown to allow for the identification and quantification of additional, purely relativistic effects in light-matter interaction.

Pseudopotential of Many-Electron Atoms Jobunga, Eric Ouma Atoms form the basic building blocks of molecules and condensed matter. Other than the hydrogen atom, all the others have more than one electron; these electrons interact with each other besides interacting with the nucleus. Electron-electron correlation forms the basis of difficulties encountered in many-body problems. Accurate treatment of the correlation problem is likely to unravel some nice physical properties of matter embedded in the correlation. In an effort to tackle this many-body problem, two complementary parameter-free pseudopotentials for $n$-electron atoms and ions are suggested in this study.
Using one of the pseudopotentials, near-exact values of the groundstate ionization potentials of helium, lithium, and beryllium atoms have been calculated. The other pseudopotential also proves to be capable of yielding reasonable and reliable quantum physical observables within non-relativistic quantum mechanics.

High-Harmonic Generation in a Su-Schrieffer-Heeger Chain Jürß, Christoph High-harmonic spectra for two topological phases of a one-dimensional, linear chain were investigated previously using time-dependent density functional theory [1]. A significant difference in the dipole strength between the two topological phases was observed and explained by destructive interference of the emitted light from the electrons in the valence band. We obtain similar results as we couple the tight-binding based Su-Schrieffer-Heeger model to an external field. Edge states and spectra in this model are quite robust against random fluctuations of the system. Additionally, the bulk-surface correspondence is investigated by focusing the laser on certain areas of the chain. [1] D. Bauer, K. K. Hansen, Phys. Rev. Lett. 120, 177401 (2018)

Non-adiabatic quantum trajectories in Strong Field Ionisation Kaushal, Jivesh The initial conditions for electron trajectories at the exit from the tunnelling barrier are often used in strong field models, for example to bridge the first and second steps of the well-known 3-step model. Our Analytical R-Matrix (ARM) theory does not rely on the 3-step model or the concept of the tunnelling barrier in coordinate space. Defining initial conditions for electron trajectories at the barrier exit is, strictly speaking, not necessary to calculate standard observables in this formalism. Not necessary, but possible. The opportunity to evaluate such initial conditions emerges as a corollary of analysing sub-barrier kinematics, which includes the interplay of laser and Coulomb fields on the sub-cycle scale. We apply our results to discuss the difference in such initial conditions for co- and counter-rotating electrons liberated during strong field ionisation. We also study their impact on subsequent non-adiabatic quantum trajectories emerging for different electron orientations in the atom.

Derivation of the quantum-mechanical kinetic energy operator for triatomic molecules with coordinate-dependent nuclear masses Khoma, Mykhaylo The aim of our study is to derive an effective kinetic energy operator (KEO) for a triatomic system which accounts for nonadiabatic effects represented by introducing coordinate-dependent reduced nuclear masses. The purpose of such an operator is to simulate a full $N$-body level of theory but staying in the paradigm of the adiabatic theory (i.e. utilizing the concept of adiabatic potential energy surfaces). The derivation has been carried out in the spirit of the work of Herman and Asgharian [1], resulting in effective vibrational and rotational coordinate-dependent contributions to the reduced nuclear mass of the diatomic system. For a triatomic system we start from the \emph{ad-hoc} Cartesian KEO in the body-fixed (BF) frame \begin{equation}\label{T3D-ini} T_{Cart} = -\frac{\hbar^2}{2} \sum_{\alpha=x,y,z} \frac{1}{\mu_{r}^{(\alpha)}} \frac{\partial^2}{\partial {\alpha}^2} -\frac{\hbar^2}{2} \sum_{\alpha=X,Y,Z} \frac{1}{\mu_{R}^{(\alpha)}} \frac{\partial^2}{\partial {\alpha}^2}, \end{equation} where $\vec{r}(x,y,z)$ and $\vec{R}(X,Y,Z)$ are the Jacobi coordinate vectors.
Our task is to transform the $T_{Cart}$ into an expression $T_{mol}$ in generalized (non-orthogonal) molecular coordinates in an arbitrarily oriented space-fixed (SF) frame. The main difficulties in the construction of the $T_{mol}$ in the SF frame are related to the presence of different mass prefactors (reduced masses) ${\mu_{r}^{(x,y,z)}}$ and ${\mu_{R}^{(X,Y,Z)}}$ in the expression $T_{Cart}$. We have proposed a new scheme for the derivation of the $T_{mol}$ based on the chain rule method using infinitesimal variations of the generalized coordinates [2]. Within this method, the triatomic KEO $T_{mol}=K_V + K_{VR}$ has been derived, where $K_V$ and $K_{VR}$ are the vibrational and rovibrational parts of the total KEO. It was found that $K_{VR}$ (and therefore $T_{mol}$) preserves the total rotational invariance with respect to the Euler rotations. This allows constructing a compact representation for an effective nonadiabatic Hamiltonian for triatomic molecules [3]. Preliminary applications for the rovibrational spectrum of H$_3^+$ have already been performed [4]. \vspace{5mm} \small \noindent [1] R. M. Herman, A. Asgharian, \emph{J. Mol. Spectr.} {\bf 19} {305} (1966) \noindent [2] M. Khoma, R. Jaquet, \emph{J. Chem. Phys}. {\bf 147} 114106 (2017) \noindent [3] M. Khoma, R. Jaquet, \emph{J. Math. Chem.} (submitted) \noindent [4] R. Jaquet, M. Khoma, \emph{J. Phys. Chem. A} {\bf 121} 7016 (2017); R. Jaquet, M. Khoma, \emph{Mol. Phys.} {\bf 116} {3507} (2018)

Scaling Laws for a Hydrogen-like Ion in an Intense Laser Field Khujakulov, Anvar We study hydrogen-like ions interacting with intense ultra-short laser pulses. Within the dipole approximation the time-dependent Schr\"{o}dinger equation (TDSE) of hydrogen-like ions can be mapped onto that of hydrogen by proper scaling [L. B. Madsen and P. Lambropoulos Phys.\,Rev.\,A \textbf{59}, 4574 (1999)]. Relativistic effects must be taken into account if hydrogen-like ions with increasing nuclear charges are considered. In contrast to the non-relativistic regime, strict analytic scaling relations are not known for the Dirac equation. The validity of the non-relativistic scaling relations applied to the solution of the Dirac equation is investigated in the quasi-static regime. For this purpose, the time-independent Schr\"{o}dinger and Dirac equations are solved for hydrogen-like ions in a static electric field using the complex-scaling method. Possible improvements of the scaling relations are discussed.

Strong field physics in the Coulomb system using complex classical trajectories Koch, Werner Complex-valued semiclassical methods hold out the promise of providing insight into the physical nature of a dynamical system while treating classically allowed and classically forbidden processes on the same footing. Their fundamental elegance notwithstanding, these methods have been severely hampered by the numerical and conceptual difficulties introduced by the complexification. Recent progress in the understanding of the topology of complex space and complex time eliminates these problems, allowing for stable, long-time wave packet reconstruction from the complex trajectory manifolds. We apply this approach to the laser-driven dynamics of the ground state of the Coulomb system and present wave functions and spectra in good agreement with numerical quantum results. Individual ionization and recollision processes can be identified and their contributions to the final results can be studied independently or in combination.
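For reference, the non-relativistic scaling invoked in the two scaling-law contributions above (Madsen and Lambropoulos) can be stated compactly: measuring lengths in units of $1/Z$ and times in units of $1/Z^{2}$, the TDSE of a hydrogen-like ion with nuclear charge $Z$ maps onto that of hydrogen provided the laser parameters are rescaled as
\[
\tilde{\omega} = \frac{\omega}{Z^{2}}, \qquad
\tilde{E}_{0} = \frac{E_{0}}{Z^{3}}, \qquad
\tilde{I} = \frac{I}{Z^{6}} .
\]
In other words, a given multiphoton process in a hydrogen-like ion requires photon energies larger by $Z^{2}$ and intensities larger by $Z^{6}$ than the corresponding process in hydrogen; deviations from this scaling quantify the relativistic effects discussed in the abstracts above. (This is the standard textbook scaling, restated here only as orientation for the reader.)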
Manifesting Berry phase in graphene without magnetic field Koochakikelardeh, Hamed We theoretically explore the electron dynamics of graphene superlattices created by strong circularly-polarized ultrashort pulses. The conduction-band population distribution in the reciprocal space forms an interferogram with discontinuities related to the topological (Berry) fluxes at the Dirac points. One of the fundamental problems of topological physics of graphene is a direct observation of the Berry phase. This is related to the fact that the only realistic possibility of observing this phase is self-referenced interferometry of electronic waves in the reciprocal space. However, the Berry phase is ±π; the self-referenced interferometry doubles it to ±2π, which does not produce any discontinuities in the interference fringes. The Bragg scattering from the superlattices creates diffraction and "which way" interference in the reciprocal space reducing the Berry phase and making it directly observable in the electron interferograms. Our finding is an essential step in control and observation of ultrafast electron dynamics in topological solids and may open up a route to all-optical switching, ultrafast memories, and room temperature superconductivity technologies. Ground-State Energy of Heavy Diatomic Homonuclear Quasimolecules Kotov, Artem Few-electron diatomic quasimolecules represent the simplest molecular systems. One of the most interesting cases is heavy quasimolecules in which $Z > 173$ ($Z = Z_1 + Z_2$ is the total nuclear charge). The electromagnetic field strength in such systems can be high enough to approach the critical field strength in the Schwinger mechanism $E_{c}=m^2c^3 / (\hbar e)\simeq 1.3\cdot 10^{16} \, \text{V}/\text{cm}$, i.e. spontaneous electron-positron pair production becomes possible [1]. In other words, the lowest-lying electronic state is close to ``dive'' into the Dirac negative-energy continuum at small enough internuclear distances [2]. In this case the parameter $\alpha Z$ is not small ($\alpha$ is the fine-structure constant) so calculations should be done to all orders in $\alpha Z$. We present relativistic calculations of the ground-state energy of one- and two-electron diatomic uranium quasimolecule valid to all orders in $\alpha Z$. The Dirac equation with the two-center potential is solved numerically using the dual-kinetic-balance method [3]. The self-energy and vacuum-polarization corrections are calculated in the monopole approximation. The results obtained are compared with the results of the previous calculations [4-7]. For two-electron heavy quasimolecules we evaluate also the electron-electron interaction effects. The one-photon exchange is calculated for the two-center potential and the two-photon exchange --- in the monopole approximation of the potential. The higher-order corrections are calculated within the Breit approximation. To the best of our knowledge, this is the most accurate up-to-date evaluation of the two-electron quasimolecular binding energies. ----- [1] Y. B. Zeldovich and V. S. Popov, Sov. Phys. Usp. 14, 673 (1972). [2] W. Greiner, B. Müller, and J. Rafelski, Quantum electrodynamics of strong fields (Springer- Verlag, Berlin, 1985). [3] E. D. Rozenbaum, D. A. Glazov, V. M. Shabaev, K. E. Sosnova, and D. A. Telnov, Phys. Rev. A 89, 012514 (2014). [4] I. I. Tupitsyn and D. V. Mironova, Opt. Spectrosc. 117, 351 (2014). [5] D. V. Mironova, I. I. Tupitsyn, V. M. Shabaev, and G. Plunien, Chem. Phys. 449, 10 (2015). [6] A. N. Artemyev and A. 
Surzhykov, Phys. Rev. Lett. 114, 243004 (2015). [7] A. Roenko and K. Sveshnikov, Phys. Rev. A 97, 012113 (2018). Quantum Dynamics on a Torus Kraus, Michael The trajectory of a classical particle confined to a torus can be mapped onto the dynamics of two independent oscillators. Periodic closed orbits occur when the frequencies of these two oscillators are commensurable, otherwise the motion is quasiperiodic. We explored the quantum/classical behavior of this system utilizing the method of Bohm trajectories, and studied the influence of the type of classical behavior on the observables that are determined quantum mechanically. Learning ionization control landscape using artificial neural network Kumar Giri, Sajal Bright terahertz-radiation source from two-color mid-infrared laser interacting with a microplasma target Liseykina, Tatyana A way of a considerable enhancement of the terahertz (THz) radiation from atomic gases, irradiated by an intense mid-infrared laser light has been recently suggested (\textit{V. A. Tulsky, M. Baghery, U. Saalmann, S. V. Popruzhenko, arXiv:1810.08834v1}). In particular, theoretical analysis of both the single-atom-ionization dynamics and the collective motion of the laser-generated plasma (restricted to the 1D model) confirmed that the application of the circularly polarized laser light with 2-4 micrometer wavelength may result in considerable increase of the conversion efficiency of the infrared radiation into the emitted THz waves. In this work the results of the 2D particle-in-cell modeling of the THz response in the case of a two-color mid-infrared laser pulse propagating in argon will be presented. High-harmonic spectroscopy of Floquet-Bloch bands Medisauskas, Lukas High harmonic generation (HHG) in solid state materials is know to be largely determined by the band structure. However, a strong electromagnetic field not only drives the electronic dynamics, but also modifies the underlying electronic states via the AC Stark effect. In solids, this can lead to the formation of Floquet-Bloch bands, which can have properties that are very different from their field-free counterparts. We investigate such ``laser-dressed'' bands in a model dielectric exposed to a strong and low frequency field such as used in HHG experiments. By solving the time-dependent Schroedinger equation and using an expansion into photon-number states, we reveal the underlying field-dressed bands. Furthermore, we show that they lead to harmonics that follow field-dressed band-gap as pulse intensity changes. Modelling of self-sustained QED cascades in slow varying electromagnetic field Mironov, Arseniy QED cascades are chains of consequent non-linear processes of photon emission by ultra-relativistic electrons and $e^-e^+$ pair photoproduction in presence of some electromagnetic field. Self-sustained QED cascades driven by external field are one of the brightest intense-field QED effects that might be observed at the new laser facilities once ultra-high intensities of order of $10^{23}$--$10^{24}$ W/cm$^2$ will be available. Modern techniques for simulation of QED cascades rely on semi-classical approach. It implies that particles propagate along classical trajectories, while quantum events of emission and photoproduction occur spontaneously and are localized. Taking this as a basis we develop a qualitative theory for the initial stage of cascade formation for a general class of slow varying electromagnetic fields of electric type ($E^2-H^2>0$). 
We will discuss some aspects of our theory and cascade simulations. For instance, our theory allows us to derive a general condition for QED cascade initiation. The research was supported by the MEPhI Academic Excellence Project and the Russian Fund for Basic Research (Grant No. 16-02-00963a).

Shape transition in two-electron quantum dots in a magnetic field Nazmitdinov, Rashid We found that the interplay of the classical and quantum properties of two-electron quantum dots leads to a quantum shape transition from a lateral to a vertical localization of electrons in low-lying excited states at relatively strong Coulomb interaction with alteration of the magnetic field. In contrast, in that regime in the ground states the electrons always form a ring-type distribution in the lateral plane.

Generalized perspective on chiral measurements without magnetic interactions Ordonez, Andres We present a unified description of several methods of chiral discrimination based exclusively on electric-dipole interactions. It includes photoelectron circular dichroism (PECD), enantio-sensitive microwave spectroscopy (EMWS), photoexcitation circular dichroism (PXCD) and photoelectron-photoexcitation circular dichroism (PXECD). We show that, in spite of the fact that the physics underlying the appearance of a chiral response is very different in all these methods, the enantio-sensitive and dichroic observable in all cases has a unique form. It is a polar vector given by the product of (i) a molecular pseudoscalar and (ii) a field pseudovector specified by the configuration of the electric fields interacting with the isotropic ensemble of chiral molecules. The molecular pseudoscalar is a rotationally invariant property, which is composed of different molecule-specific vectors and in the simplest case is a triple product of such vectors. The key property that enables the chiral response is the non-coplanarity of the vectors forming such a triple product. The key property that enables chiral detection without relying on the chirality of the electromagnetic fields is the vectorial nature of the dichroic and enantio-sensitive observable. Our compact and general expression for this observable shows what ultimately determines the efficiency of the chiral signal. We extend these methods to arbitrary polarizations of the electric fields used to induce and probe the chiral response. arXiv:1802.06540

New Approach to Generation and Amplification of the THz Radiation in Plasma Created by Intense Two-Color Laser Fields Popov, Alexander A new approach to the problem of generation and amplification of electromagnetic radiation of the terahertz frequency band in strongly nonequilibrium plasma channels created by high-intensity laser radiation in gases is discussed. This approach is based on two-color laser-induced THz background production in different nonlinear processes during the pulse or in the after-pulse regime and its further amplification in the plasma channel with population inversion formed by the laser pulse. Special attention is paid to the case of nearly-resonant background formation in aluminum vapor irradiated by a Ti:Sa laser pulse and its second harmonic.

Semiclassical description of HHG in H$_2^+$ Rodríguez-Hernández, Fermín

Going beyond the perturbative regime: Post-Marcus methods based on the Generalized Quantum Master Equation Schubert, Alexander We present a modified approach for simulating electronically nonadiabatic dynamics based on the Nakajima-Zwanzig generalized quantum master equation (GQME).
The key feature of the modified approach over previously proposed GQME-based approaches is that it benefits from the fact that the Nakajima-Zwanzig formalism does not require casting the overall Hamiltonian in system-bath form, which is arguably neither natural nor convenient in the case of the Hamiltonian that governs nonadiabatic dynamics. Within the modified approach, the effect of the nuclear degrees of freedom on the time evolution of the electronic reduced density operator is fully captured by a memory kernel super-operator. A methodology for calculating the memory kernel from projection-free inputs is developed. Simulating the electronic dynamics via the modified approach, with a memory kernel obtained using exact or approximate methods, can be more cost-effective and/or lead to more accurate results than direct application of those methods. The modified approach is demonstrated on a benchmark spin-boson model with a memory kernel which is calculated within the Ehrenfest mean-field method.

Extension of the strong-field approximation to describe simultaneous double ionization of $\text{C}_{\text{60}}$ Schubert, Ingmar A formalism to extend the strong-field approximation (SFA) to large molecular systems is presented. Using this formalism, the single-electron ionization yields for a rigid-rotor model of fullerene ($\text{C}_{\text{60}}$) in a half-cycle pulse are calculated and compared to the full numerical solution of the time-dependent Schrödinger equation within the single-active-electron approximation. This newly extended SFA is then used to calculate the yields for simultaneous two-electron ionization of $\text{C}_{\text{60}}$.

CEP dependence of the enhanced ionization of HeH$^+$ driven by intense ultrashort laser pulses Schulz, Bruno We study the electronic motion of the enhanced ionization of $HeH^+$ by solving the six-dimensional time-dependent Schroedinger equation within the fixed nuclei approximation. We demonstrate that the electronic motion is much richer than indicated by the laser pulse, which is a direct consequence of the Stark-shifted energies. Moreover, we show that there is a huge carrier-envelope phase effect due to the asymmetry of the Stark-shift for a short laser pulse. This manifests in an almost vanishing population of the first excited state at R=3.7 for CEP zero, while for the CEP $\pi$ case the population of the excited state is 60 %. Due to this relatively simple change of the CEP the electronic motion is completely different.

Further developments of semiclassical two-step model: Multielectron polarization effects and ionization of hydrogen molecule Shvetsov-Shilovskiy, Nikolay We extend the semiclassical two-step model to include a multielectron polarization-induced dipole potential. We investigate the imprints of multielectron effects in the momentum distributions of photoelectrons ionized by a linearly polarized laser pulse. We predict narrowing of the longitudinal momentum distributions due to electron focusing by the induced dipole potential. The polarization of the core also modifies interference structures in the photoelectron momentum distributions. Specifically, the number of fanlike interference structures in the low-energy part of the electron momentum distribution may be altered. We analyze the mechanisms underlying these effects. Accounting for the multielectron dipole potential seems to improve the agreement between theory and experiment. Furthermore, we extend the semiclassical two-step model to the hydrogen molecule.
In the simplest case of the molecule oriented along the polarization direction of a linearly polarized laser field, we predict significant deviations of the photoelectron momentum distributions and the energy spectra from the case of atomic hydrogen. For the hydrogen molecule the energy spectrum falls off slower with increasing energy, and the holographic interference fringes are more pronounced than for the hydrogen atom at the same parameters of the laser pulse.

Phase-dependent photoemission from Xenon and C60 molecule investigated by the "phase-of-the-phase spectroscopy" Skruszewicz, Slawomir Strong field ionization of atomic and molecular systems with tailored laser fields provides insight into the ionization and electron rescattering dynamics in atomic and molecular systems. We investigate the ionization and rescattering dynamics in Xenon and the $C_{60}$ molecule at sw-IR wavelengths by applying the recently introduced "phase-of-the-phase" (PoP) spectroscopy [1,2]. PoP quantifies the amplitude and the phase lag of the electron yield changes as a function of the relative phase of the

Reconstructing real-time quantum dynamics in strong and short laser fields Stooß, Veit In light-matter interaction, information about the dynamics of the matter part is encoded in the ultrafast, time-dependent, motion of the electron distribution, in other words, the electronic response. We present a method which allows the retrieval of the entire holographic (amplitude and phase) time-resolved information about the state-specific electronic response of bound electron systems by recording the transient absorption spectrum of just one single ultrashort probe signal [1]. Most importantly, this still holds for the case of coherently excited systems interacting with multiple laser pulses, which therefore may exhibit non-trivial time dependence due to the interaction with strong external fields. Our finding is applied to the time-domain observation of excited-state dynamics of a two-electron system during the interaction with a strong laser pulse in the few-femtosecond regime. We directly resolve in real time the strong-field driven dynamics, including Rabi-cycling as well as the competition of strong-field ionization with autoionization. The presented approach generally allows for single-shot measurements of real-time-resolved response functions for non-equilibrium states of matter. [1] V.Stooß et al., PRL accepted, September 2018

Extraction of laser-coherent information from a photoelectron spectrum of a complex target using the phase-of-the-phase Tulsky, Vasily A commonly used way to obtain information about the inner structure of an atomic-scale target is the application of an intense laser to it in order to produce and analyze the spectrum of outcoming electrons. However, if a target is complex, this spectrum contains a significant (or even dominant) fraction of electrons in the total signal that is influenced by laser-incoherent scattering or is produced by thermal emission. A recently developed phase-of-the-phase (PoP) technique (see [1-6] and references therein) makes it possible to reveal the coherent part of the signal, subtracting the rest. To demonstrate this, we introduce a model of the spectrum obtained after application of an intense laser to an argon atom trapped inside a helium droplet. Photoelectrons produced from argon may experience multiple elastic scattering on helium atoms of the droplet before reaching the detector, and a fraction of such laser-incoherent electrons may be dominant.
Nevertheless, the PoP technique can successfully retrieve the features produced by a small remaining portion of laser-coherent electrons in the total signal. References [1] S. Skruszewicz, J. Tiggesbäumker, K.-H. Meiwes-Broer, M. Arbeiter, Th. Fennel, and D. Bauer, Phys. Rev. Lett. 115, 043001 (2015) [2] N. Eicke and M. Lein, J. of Mod. Opt., 64:10-11, 981-986 (2016) [3] M. A. Almajid, M. Zabel, S. Skruszewicz, J. Tiggesbäumker and D. Bauer, J. Phys. B 50, 19 (2017) [4] L. Seiffert, J. Köhn, C. Peltz, M. F. Kling and T. Fennel, J. Phys. B 50, 224001 (2017) [5] M. Kübel, C. Burger, R. Siemering et al, Mol. Phys. 115:15-16, 1835-1845 (2017) [6] V.A. Tulsky, M.A. Almajid, D. Bauer, https://arxiv.org/abs/1808.05167 (2018) Ionization and excitation with quasi sub-cycle laser pulses Witzel, Bernd We have studied short-pulse laser ionization (< 7 fs, 750 nm) and excitation with polarization-gated laser pulses. The laser pulse used is composed of a circularly polarized section at the beginning and the end of the pulse and an experimentally defined linearly polarized central part. The duration of the central part can be chosen continuously with an experimental setup [1] and can be made smaller than a period of laser light. Processes taking place only in linearly polarized light are limited in time by the width of the gate. Due to quantum mechanical selection rules ($\Delta m = \pm 1$), multiphoton excitation of Rydberg states with high angular momentum is only possible with linearly polarized light. We show that polarization gating allows us to study excitation and ionization with quasi sub-cycle laser pulses. The method allows us to determine the shortest temporal window needed for the excitation processes. [1] C. Marceau, G. Gingras and B. Witzel, Optics Express 19, 3576 (2011) Fragmentation of HeH$^+$ by intense ultrashort laser pulses Wustelt, Philipp The helium hydride molecular ion, HeH$^+$, is the simplest heteronuclear polar molecule and serves as a benchmark system for the investigation of multi-electron molecules and molecules with a permanent dipole. We specifically address the question: How does the permanent dipole of HeH$^+$ affect the fragmentation dynamics in intense ultrashort laser pulses? We study the laser-induced fragmentation (non-ionizing dissociation, single ionization and double ionization) of an ion beam of helium hydride and an isotopologue at various wavelengths and intensities. These results are interpreted using reduced-dimensionality solutions of the time-dependent Schrödinger equation and with simulations based on dressed surface hopping.
Journal of Cheminformatics An integrated quantitative structure and mechanism of action-activity relationship model of human serum albumin binding Angela Serra, Serli Önlü, Pietro Coretto & Dario Greco (ORCID: orcid.org/0000-0001-9195-9003) Journal of Cheminformatics volume 11, Article number: 38 (2019) Traditional quantitative structure-activity relationship models usually neglect the molecular alterations happening in the exposed systems (the mechanism of action, MOA) that mediate between structural properties of compounds and phenotypic effects of an exposure. Here, we propose a computational strategy that integrates molecular descriptors and MOA information to better explain the mechanisms underlying biological endpoints of interest. By applying our methodology, we obtained a statistically robust and validated model to predict the binding affinity to human serum albumin. Our model is also able to provide new avenues for the interpretation of the chemical-biological interactions. Our observations suggest that integrated quantitative models of structural and MOA-activity relationships are promising complementary tools in the arsenal of strategies aiming at developing new safe- and useful-by-design compounds. Quantitative structure-activity relationship (QSAR) models are increasingly applied in various fields, such as toxicity assessment and drug design [1]. QSAR models developed and validated in line with the Organization for Economic Co-Operation and Development (OECD) criteria [2] are recognized in silico tools for providing reliable activity data, bypassing long and laborious experimental assays. On the basis that structurally similar molecules have similar biological activities, classical QSAR models attempt to predict activity as a function of structural properties numerically defined as molecular descriptors (MDs) [1, 3]. MDs provide extensive chemical information, such as the presence and count of different sub-structures, functional groups, connectivity between atoms, and topological and geometrical characteristics, which are relevant for predictive studies. Furthermore, 3D alignment-free molecular descriptors, based on two, three and four linear algebraic forms, have been introduced to codify novel and orthogonal chemical information [4, 5]. Traditional QSAR models usually neglect the primary biological fingerprint of the exposure, consisting of the ensemble of molecular alterations happening at various cellular compartments of the exposed biological system, hereafter denoted as the mechanism of action (MOA). However, the relationship between structural properties and phenotypic effects of an exposure is indirectly mediated by its MOA. Systematically integrating MOA information, such as gene expression or external bioassay data, into QSAR modelling would expand our understanding of the chemical-biological interactions, hence paving the way to the development of the next generations of safe- and useful-by-design compounds [6, 7]. In recent years, the implementation of omics technologies in toxicology studies has ignited the new field of toxicogenomics [8]. In this context, in-depth molecular profiling opened new possibilities to outline the biosignature or MOA of exposures at an unprecedented granularity. However, to date, this information has been seldom utilized in combination with structural properties of the compounds to predict their effects [9,10,11]. 
Indeed, Li et al. developed a methodology that jointly analyzes the chemical structural information and the gene expression profiles of cells treated by drugs. By means of a clustering methodology, they identified the most structurally similar sets of chemicals and the minimum set of genes related to chemical structural features [9]. Low et al. [10] used a machine learning methodology based on multiple nonlinear classifiers that integrates chemical descriptors and toxicogenomic data to classify drug molecules based on their hepatotoxicity (toxic or non-toxic) effect in rats. Perualila-Tan et al. [11] proposed a statistical methodology that combines transcriptomic data and chemical information to predict a biological response by means of gene expression and infer if the response is caused by the presence or absence of a particular chemical sub-structure. These approaches are limited to binary classification problems (toxic/non-toxic) and to the identification of correlations between MDs and MOA features. However, when modelling a continuous response variable, integrative regression models are a preferred option. Among the wide range of linear and nonlinear regression models, Lasso-based methods have the advantage of generating easy-to-interpret models, since they automatically perform feature selection and have fewer parameters to be estimated compared to nonlinear models, such as random forests, support vector regressors or neural networks. Here, considering the OECD criteria [2], we propose a computational approach that combines MDs and MOA information to develop integrated quantitative structure and mechanism of action-activity relationship (QSMARt) models with the potential to better explain the role of specific structural properties in a bio-mechanistic way. To the best of our knowledge, the present study is the first report on an integrated QSMARt model to predict the binding affinity to HSA. Dataset preparation Curated experimental binding affinity data of drug and drug-like molecules to HSA (\(logK_{HSA}\); the binding constant obtained from the retention time on an immobilized HSA column using affinity chromatography) were obtained from [12]. All structures (as 3D SDF files) were retrieved from PubChem [13] and processed by the software DRAGON v. 7.0 [14] for the calculation of 5,325 MDs. An unsupervised feature reduction was applied to filter the constant (\(> 80\%\)) and highly intercorrelated descriptors (pairwise correlation among all pairs of descriptors \(> 95\%\)) prior to training/test set splitting, and variable selection [15]. Thus, a data matrix comprising 1,198 MDs was generated (hereafter denoted as A). Transcriptomic data for drug treatments were retrieved from the Connectivity Map (CMap) build v2.0 repository [16]. Three human cell lines were available in the CMap project: prostate cancer (PC3), breast cancer (MCF7), and leukemia (HL60). The transcriptomic datasets were analyzed independently for each cell line. Raw data was imported into R v. 3.4 by using the justRMA function from the Bioconductor utilities [17] to annotate probes to Ensembl genes (by using the hthgu133ahsensgcdf (v. 22.0.0) annotation file from the brainarray website http://brainarray.mbni.med.umich.edu/), and to quantile normalize the resulting expression matrix. Next, the experimental batch effect due to technical variables was estimated and removed using the ComBat algorithm implemented in the sva package [18]. 
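The unsupervised descriptor filtering step described above (dropping near-constant and highly intercorrelated DRAGON descriptors before any splitting or variable selection) is a generic operation; a minimal sketch of how it could look is given below. This is only an illustration using the 80%/95% thresholds quoted in the text, not the authors' actual script (their R code is provided as Additional file 3), and the file name in the usage comment is hypothetical.

```python
import pandas as pd
import numpy as np

def filter_descriptors(md: pd.DataFrame, const_frac=0.80, corr_cut=0.95) -> pd.DataFrame:
    """Drop descriptors whose most frequent value covers more than const_frac of compounds,
    then drop one member of every descriptor pair with |pairwise correlation| > corr_cut."""
    # 1. near-constant filter
    keep = [c for c in md.columns
            if md[c].value_counts(normalize=True, dropna=False).iloc[0] <= const_frac]
    md = md[keep]
    # 2. correlation filter: scan the upper triangle and drop the second member of each pair
    corr = md.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > corr_cut).any()]
    return md.drop(columns=to_drop)

# Usage with a hypothetical file of DRAGON descriptors (one row per compound):
# A = filter_descriptors(pd.read_csv("dragon_descriptors.csv", index_col=0))
```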
Linear models followed by eBayes pairwise comparisons [19] were performed to compute the log fold-change of each gene in each drug-control pair. Of the 88 chemicals in the curated dataset [12], 59 were identified with reported gene expression data for at least two cell lines of the CMap dataset (MCF7 and PC3). The list of drugs used in this analysis is available in Additional file 1. Consequently, two data matrices of log-fold changes for 11,868 genes in MCF7 (hereafter denoted as B) and PC3 cell lines (hereafter denoted as C) were generated, respectively. Finally, MDs (A) and gene expression profiles (B and C) were collated to create a single dataset (hereafter denoted as X) of 59 drugs and 24,934 features (1198 MDs and 11,868 genes for each cell line) for modeling the \(logK_{HSA}\). Modeling and validation QSMARt modeling was performed based on the lasso method [20] and power transformation of the MDs (\(\alpha\)) and genes (\(\gamma\)), respectively. 20% of the dataset X was kept as the test set and not used in the model selection phase. The remaining 80% of the data (training set) was further split 100 times in random training (90%) and validation (10%) sets by using a random split validation algorithm (RSVA). The splitting was performed based on the y-response variable, which was divided into three bins, from which the compounds were randomly assigned to the training or validation sets. The detailed methodology is available in Additional file 2. R scripts are available as Additional file 3. Next, the lasso method is used to fit a linear model to the training set for 100 different values of the lasso penalty estimated from the training matrix [21]. The lasso penalization value leading to the smallest mean squared error (MSE; \(\lambda =0.166\)) was considered (Additional file 4: Fig. S1). Only the features (MDs and/or genes) with non-zero coefficients were selected to derive the final model. Once the optimal features and parameters were identified, the entire training set was used to build the final model and the test set was only then used for external validation. The following model was considered to predict the \(logK_{HSA}\): $$\begin{aligned} y = X(\alpha , \gamma )\beta + \epsilon \end{aligned}$$ where \(X(\alpha ,\gamma )\) is the matrix obtained by binding the matrices \(A(\alpha )\), \(B(\gamma )\), and \(C(\gamma )\), \(A(\alpha ) = (|a_{ij}|^{\alpha })\), \(B(\gamma ) = (|b_{ij}|^\gamma )\), \(C(\gamma ) = (|c_{ij} |^\gamma )\) (for \(\alpha >0\) and \(\gamma >0\)), \(\beta\) is the vector of coefficients, and \(\epsilon\) is the stochastic error, respectively. The same power transformation (\(\gamma\)) was used both for the MCF7 (B) and PC3 cell line (C). Considering \(\alpha\) and \(\gamma\) fixed, with \(\beta\) the only structural/genomic parameter to be estimated, is conceptually equivalent to replacing the original sample measurements X with \(X(\alpha ,\gamma )\). For fixed \(\alpha\) and \(\gamma\), the following lasso-type estimator is considered: $$\begin{aligned} {\hat{\beta }}= \mathop {argmin}_{\beta } || y - X(\alpha , \gamma )\beta ||^{2}_{2} + \lambda ||\beta ||_1 \end{aligned}$$ where \(|| \cdot ||_2\) is the Euclidean norm, \(||\cdot ||_1\) is the \(l^1\) norm, and \(\lambda\) is the lasso penalty. The parameters \((\alpha ,\beta , \gamma )\) were tuned to minimize the MSE on the training set. 
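The estimator in Eq. 2 amounts to an ordinary lasso fit on the power-transformed, column-bound matrix \(X(\alpha,\gamma)\). A minimal sketch using scikit-learn is shown below; it is an illustration rather than the published implementation, the variable names are hypothetical, and scikit-learn scales the squared-error term by 1/(2n), so its penalty parameter is not numerically identical to the \(\lambda\) of Eq. 2.

```python
import numpy as np
from sklearn.linear_model import Lasso

def power_transform(A, B, C, alpha, gamma):
    """Build X(alpha, gamma) = [ |A|^alpha , |B|^gamma , |C|^gamma ] as in Eq. 1."""
    return np.hstack([np.abs(A) ** alpha, np.abs(B) ** gamma, np.abs(C) ** gamma])

def fit_qsmart(A, B, C, y, alpha, gamma, lam):
    """Lasso-type estimator of Eq. 2 for a fixed (alpha, gamma, lambda) triple."""
    X = power_transform(A, B, C, alpha, gamma)
    # note: sklearn's `alpha` argument is the lasso penalty, scaled by 1/(2*n_samples)
    model = Lasso(alpha=lam, max_iter=100000).fit(X, y)
    selected = np.flatnonzero(model.coef_)   # indices of features with non-zero coefficients
    return model, selected

# Example call with the values reported in the text / final model (alpha=1.25, gamma=1.75, lambda=0.166):
# model, sel = fit_qsmart(A, B, C, y, alpha=1.25, gamma=1.75, lam=0.166)
```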
The RSVA was performed for a grid of nine distinct \(\alpha\) and \(\gamma\) values (\(\alpha ,\gamma ={0.1,0.25,0.50,0.75,1,1.25,1.50,1.75,2}\)) for all 81 possible pairs of \((\alpha _i, \gamma _i)\) with \(i=1, \ldots , 81\). For each of the 81 combinations, the relevant set of features \(f_t = \beta (\alpha _t,\gamma _t)\) (at \(t=1,2,\ldots ,81\)) associated with non-zero coefficients was identified, validated (the 60th percentile values of the distributions of the internal metrics computed on the multiple splits were considered), and used to train models on the whole training set. Next, the generated models were used to predict the \(logK_{HSA}\) on the test set. Following these steps, a population of candidate models was generated. Goodness of fit, robustness, and predictive performance of the candidate models were evaluated based on up-to-date internal and external validation parameters and criteria (Additional file 1) [22,23,24,25,26,27,28]. Comparison with single-view models In order to validate the QSMARt model, the same procedure was applied to the MDs and MOA features separately. The RSVA procedure was performed on nine \(\alpha ={0.1,0.25,0.50,0.75,1,1.25,1.50,1.75,2}\) values for the MDs and nine \(\gamma ={0.1,0.25,0.50,0.75,1,1.25,1.50,1.75,2}\) values for the MOA features. Furthermore, these two parameters, together with the \(\lambda\) penalty value, were optimized independently for the MDs and MOA to identify the optimal setup that minimizes the MSE on the training set. These analyses led to 9 models for the MDs and 9 models for the MOA features. For each model, the relevant set of features, associated with non-zero coefficients, was identified and validated with the same approach described before. Goodness of fit, robustness, and predictive performance of the candidate models were evaluated based on up-to-date internal and external validation parameters and criteria (Additional file 1) [22,23,24,25,26,27,28]. In particular, distributions of the internal validation metrics computed with the RSVA procedure with 100 random splits were compared to identify which model overall gives the best predictive performance. Applicability domain Based on the idea of consensus decision [29], different approaches were used to compute the applicability domain (AD) of the identified models. In particular, AD was computed by means of the leverage method [30], the standardization approach [31], the Euclidean [32] and city-block distance methods [33], and the k-nearest neighbours method [34]. In the leverage method, the response outliers were determined as those with standardized residuals of the predicted activity value exceeding \(\pm 3.0\). The leverage value (h) measures the distance from the centroid of the modeled space. A warning leverage (critical hat value, h*) [30] was used to identify structural/MOA influential compounds (\(h > h^*\) denoting high-leverage chemicals). In a Williams plot [30], the leverage values were mapped against the standardized residuals to define the structural/MOA and the response spaces visually. Finally, the AD was reported as the percentile coverage for the training (\(AD_{Train}\)) and test (\(AD_{Test}\)) set, respectively. 
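For the leverage method just described, the quantities plotted in a Williams plot can be computed directly from the model matrix. The sketch below is a generic illustration: the warning-leverage formula h* = 3(p+1)/n is the conventional choice from the cited QSAR literature and is an assumption here, since the text only reports the resulting value (h* = 0.438) in the figure captions.

```python
import numpy as np

def leverages(X_train, X_query):
    """Hat values h_i = x_i^T (X_train^T X_train)^{-1} x_i for the query compounds."""
    G = np.linalg.pinv(X_train.T @ X_train)      # pseudo-inverse for numerical safety
    return np.einsum("ij,jk,ik->i", X_query, G, X_query)

def williams_plot_data(X_train, y_train, y_pred_train, X_test, y_test, y_pred_test):
    """Leverages and standardized residuals for training and test compounds."""
    n, p = X_train.shape
    h_star = 3.0 * (p + 1) / n                   # conventional warning leverage (assumed)
    res = y_train - y_pred_train
    s = np.std(res, ddof=p + 1)                  # residual standard deviation
    # Compounds with h > h_star lie outside the structural/MOA domain;
    # |standardized residual| > 3 flags response outliers.
    return {
        "h_train": leverages(X_train, X_train),
        "h_test": leverages(X_train, X_test),
        "std_res_train": res / s,
        "std_res_test": (y_test - y_pred_test) / s,
        "h_star": h_star,
    }
```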
Moreover, the Insubria graph [15] of leverage values against calculated/predicted activity values was used to visualize the interpolated (\(h < h^*\) denoting chemicals inside the structural/MOA AD of the training set) and extrapolated (\(h > h^*\) characterizing chemicals outside the structural/MOA AD) predictions for all the datasets considered in this study. In this case, the response AD was the prediction range of the model. The standardization approach [31] is based on the assumption that, in the case of a normal distribution, 99.7% of the population will remain within the range mean \(\pm 3.0\) standard deviations (SD). Thus, all molecular descriptors are first standardized. Afterwards, any compound outside this zone is considered dissimilar to the majority of the compounds. Thus, if the standardized value for descriptor i of compound k is more than 3, then the compound should be an X-outlier (if in the training set) or outside the AD (if in the test set) based on descriptor i. In the distance-based methods [32,33,34], the distance between the chemical and the center of the training data set is computed. The threshold, for both the Euclidean and city-block distances, is the largest distance between the training set data points and the center of the training data set. Furthermore, the distance between the test samples and the center of the dataset is computed. The test points with a distance greater than the computed threshold are considered outliers. The AD was reported as the percentile coverage of the test set (\(AD_{Test}\)). In the k-nearest neighbours method [34], the distance between every training compound and its k-nearest neighbours in the training set is computed. A threshold is calculated as the largest of these distances. Subsequently, the distance between every test compound and its k-nearest neighbours in the training set is computed. If the calculated distance value of a test set compound is within the defined threshold, then the prediction for that compound is considered reliable. In this method the k value was set to 3. The AD was reported as the percentile coverage of the test set (\(AD_{Test}\)). The final consensus value on the training compounds is computed as the mean of the leverage and standardization methods, while the consensus on the test set is computed as the mean of all the different approaches. Selection of the final model Among the generated candidate models, the one with the best compromise between statistical robustness, predictive performance, widest AD, and smallest dimension was selected as the final model. To this end, all 81 alternative specifications were filtered based on multiple up-to-date statistical acceptance criteria (highlighted in Additional file 1). Only the models satisfying both the internal and external validation requirements and providing 100% \(AD_{Test}\) coverage with the consensus method were considered eligible. Moreover, the transformation parameters (\(\alpha ^*\), \(\gamma ^*\)) achieving the best predictive performance were selected as the set of indices of eligible solutions by solving \((\alpha ^*,\gamma ^*) = arg min_t ( E(\alpha _t,\gamma _t);\) \(t \in I)\) with \(I \subseteq \left\{ 1, \ldots ,81 \right\}\). Finally, the model satisfying all eligibility criteria, consisting of the smallest number of structural/MOA features, and with the widest \(AD_{Train}\) coverage, was selected as the ultimate model. 
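The selection step just described (filter the 81 candidate specifications by the validation criteria and consensus AD coverage, then prefer lower error, fewer features and wider training-set coverage) can be expressed compactly. The sketch below is only an illustration: the candidate table is a hypothetical data structure, and the 0.6 threshold is the one quoted later in the Results for the Q²/R² family of metrics.

```python
import pandas as pd

def select_final_model(candidates: pd.DataFrame,
                       q2_like_cols=("Q2", "R2_train", "R2_test"),
                       q2_thr=0.6) -> pd.Series:
    """candidates: one row per (alpha, gamma) pair with its validation metrics,
    consensus test-set AD coverage, MSE and number of selected features."""
    eligible = candidates[
        candidates[list(q2_like_cols)].ge(q2_thr).all(axis=1)   # validation criteria
        & (candidates["AD_test_coverage"] >= 1.0)               # 100% consensus AD coverage
    ]
    # best predictive performance first, then fewest features, then widest training AD
    ranked = eligible.sort_values(
        by=["MSE", "n_features", "AD_train_coverage"],
        ascending=[True, True, False],
    )
    return ranked.iloc[0]
```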
Application of the final model The optimal model was applied to a set of external compounds for which the \(logK_{HSA}\) is unavailable. For this purpose, an independent set of 799 drugs from the CMap dataset [16] with gene expression data available on both MCF7 and PC3 cell lines was considered. SDF files for these compounds were retrieved from PubChem [13] and fed into DRAGON v. 7.0 [14] to generate molecular descriptors. Gene expression data was preprocessed similarly to the dataset of 59 compounds, as described above. The list of drugs in the external dataset is available in Additional file 1. The TSNE projection technique [35] was used to visualize the distribution of the albumin and the external datasets based on the six MDs/MOA features of the QSMARt model as well as the three MOA features and three MDs. QSMARt predictive model for the binding affinity to HSA Here, we built an integrated model (QSMARt) comprising molecular descriptors and MOA features to predict binding affinity to human serum albumin. To this end, we derived 81 candidate models by applying a Lasso penalty parameter optimisation. The 81 models and their evaluation metrics are reported in Additional file 1. The full lists of selected MDs and genes, along with their occurrence frequencies, are available in Additional file 1. Upon rigorous evaluation based on the OECD validation principles [2], we selected a final model of six structural/MOA features: three molecular descriptors and three gene expression patterns (Eq. 3). $$\begin{aligned} LogK_{HSA}&= -0.372 + 0.012 |Mor23i|^{1.25} - 0.042 |N-072|^{1.25}\\&\quad+ 0.139 |ALOGP|^{1.25} -2.980 |MCF7\_ENSG00000112115|^{1.75}\\&\quad-0.075|PC3\_ENSG00000197646|^{1.75} -0.216|PC3\_ENSG00000276644|^{1.75} \end{aligned}$$ A good concordance between the predicted and experimental data is shown in Fig. 1. Our model fulfils the criteria regarding the goodness of fit and the internal and external validation requirements, as shown in Additional file 1. Moreover, the final hybrid model statistics passed all the recommended thresholds except the CCC metric (Additional file 1). Indeed, the \(Q_{L10-Out}\), \(R^2_{tr}\), \(R^2_{te}\), \(Q^2_{F_1}\), \(Q^2_{F_2}\), \(Q^2_{F_3}\) are greater than 0.6, but the \(CCC_{Te}\) value is smaller than 0.85. Next, we defined the AD of our model based on the consensus strategy. Notably, all chemicals of the test set are inside the AD spaces (Additional file 1), suggesting that all the predictions were reliably interpolated. For visualization purposes we show the AD computed by means of the leverage approach [30] in Fig. 2. Fig. 1 Predicted \(logK_{HSA}\) versus experimental \(logK_{HSA}\) values of training set (black) and test set (red) chemicals. Fig. 2 Standardized residuals versus leverage values of training set (black) and test set (red) chemicals (Williams plot). Dashed lines indicate the \(3.0\sigma\) interval. The vertical line is set at the warning leverage (critical hat value, \(h^* = 0.438\)). Impact of the integration approach and comparison to sub-models Next, in order to evaluate the impact of the integration strategy, we compared our QSMARt integrated final model with the two models obtained by applying our approach to the MDs and genes separately. We ran the RSVA methodology for the same nine \(\alpha\) and \(\gamma\) values and we obtained 9 models for the MDs, while only two models were obtained by using the genes alone, since no fitting was obtained for 7 \(\gamma\) values. 
The best models for the MDs and genes respectively are the following: $$\begin{aligned} LogK_{HSA}&= -0.335 + 0.077 |Mor23i|^{0.11} - 0.012 |R8s.|^{0.11} + 0.007 |C.040|^{0.11} \\&\quad- 0.061 |N-072|^{0.11} + 0.062 |ALOGP|^{0.11} - 0.003 |CATS3D_06_AP .|^{0.11} \\&\quad+ 0.0006 |piPC08|^{0.11} - 0.010 |GATS2i|^{0.11} + 0.001 |SpMax1_Bh.v.|^{0.11} \end{aligned}$$ $$\begin{aligned} LogK_{HSA}&= {} 0.042 + 3.84 |MCF7\_ENSG00000185950|^{0.16} \\&\quad-11.163 |MCF7\_ENSG00000112115|^{0.16} -0.758 |MCF7\_ENSG00000135100|^{0.16} \\&\quad+0.193 |PC3\_ENSG00000128228|^{0.16} + 0.0007 |PC3\_ENSG00000168209|^{0.16}\\&\quad+ 0.040|PC3\_ENSG00000110619|^{0.16} - 0.755|PC3\_ENSG00000064687 |^{0.16} \\&\quad-1.310|PC3\_ENSG00000168875|^{0.16} -9.301|PC3\_ENSG00000276644 |^{0.16} \end{aligned}$$ As evidenced in Additional file 1, the QSMARt model is characterized by overall better values of all the relevant diagnostic statistics. This analysis, hence, highlighted an overall better statistical performance of the integrated QSMARt model (Eq. 3) over the two competitor models (Eq. 4 and Eq. 5). In particular, the QSMARt model consists of less features, since it uses only 3 MDs and 3 genes, while the other two models use 9 MDs and 9 genes, respectively. Furthermore, the model coming from the genes does not show any predictive capability on the test set (\(R^2_{test} = 0.10\)) although its \(R^2_{train} = 0.61\). On the other hand the model obtained by using only MDs has good predictive capabilities, even thought they are smaller than the one obtained by the QSMARt model. Furthermore, when comparing the distributions of the \(Q^2\), \(Q^2F_1\),\(Q^2F_2\), \(Q^2F_3\) and CCC metrics that are computed with the RSVA method, the performances of the QSMARt model are better than those of the other two models (Additional file 5: Fig. S2). Mechanistic interpretation of the features included in the QSMARt model Mechanistic interpretation of the molecular descriptors included in a model is an OECD principle of QSAR validation [2]. The hybrid model was built with the following MDs: Mor23i, N-072, and ALOGP. Mor23i is a measure of the pair-wise interatomic distance and ionization potential [36]. Ionization potential is the amount of energy required to extract one electron from a chemical system, i.e., a measure of the capability of a molecule to give the corresponding cation. Mor23i and \(logK_{HSA}\) are positively correlated (Eq. 3), implying that the higher the ionization potential, the higher the HSA binding affinity. This further suggests that electron-pair acceptors (Lewis-acids) have higher binding affinity to HSA. Given the positive coefficient of Mor23i in our model equation, compounds with more acidic properties have higher binding affinity to HSA. On the other hand, due to the mathematical background, the distance between two influential atoms may majorly define the descriptor [36]. Therefore, a more detailed interpretation could be useful for molecular design purposes. N-072 is a descriptor counting the nitrogen-centered fragments of RCO-N< or > N-X=X in a chemical structure, where R is any group bound through carbon, X is any electronegative atom, such as oxygen, nitrogen, sulfur, phosphorus, and halogens, - is single and = is double bonds, respectively [3]. The negative coefficient in the final model indicates that chemicals with N-072 fragments show less affinity to HSA binding. Similarly, N-072 was reported elsewhere as affecting the relative fluorescence intensity ratio [37]. 
ALOGP is a measure of hydrophobicity as the logarithm of the n-octanol/water partition coefficient. Based on the Ghose-Crippen method [38], it is calculated as the summation of atomic contributions to overall molecular hydrophobicity. Clearly, having a positive coefficient in the model equation, ALOGP explains an increased affinity to HSA binding. It has already been reported in relation to binding affinity to HSA [12, 39]. Furthermore, earlier studies on the crystallographic structure of HSA and binding affinity evidenced that the binding sites of HSA are mainly composed of hydrophobic residues, further revealing that hydrophobicity is a major property encoding the binding affinity, as reviewed in [40, 41]. Three genes are included in the final QSMARt model, namely Interleukin 17A (IL17A) from the MCF7, and Programmed Cell Death 1 Ligand 2 (PD-L2) and Dachshund Family Transcription Factor 1 (DACH1) from the PC-3 transcriptomic datasets, respectively. The expression of IL17A is documented in the MCF7 cell line, where it has been tested as a target of chemotherapeutic strategies aiming at altering the autophagic ability of breast cancer cell lines [42]. Alteration of the expression of PD-L2, a ligand of PD-1, has been observed in prostate cancer in response to anti-PD-1 therapy [43]. DACH1 is a transcription factor expressed in prostate cancer, where its low expression is associated with higher malignant potential [44]. Interestingly, all three genes have known immunomodulatory properties, either as pro-inflammatory (IL17A) or immunosuppressive (PD-L2 and DACH1). Since their QSMARt model coefficients are negative, the impact of drugs in altering their expression is inversely related to HSA binding affinity. These results, for instance, suggest that the serum supplementation in the cell culture medium and the compound dosages should be mutually adjusted when testing drugs in vitro, such as in the CMap experiments. Next, we considered the correlation between the three gene expression patterns and the three MDs included in the QSMARt model (Fig. 3). All the genes in the model were negatively correlated with Mor23i and ALOGP, and positively correlated with N-072. These results imply that potentially less acidic (lower values of Mor23i) and less lipophilic compounds (lower values of ALOGP) have a higher impact in altering the expression of these three genes. Altogether, according to the QSMARt model, compounds with higher values of ionization potential and hydrophobicity, fewer nitrogen-centered fragments, and lower expression alteration of the immunomodulatory genes IL17A, PD-L2 and DACH1, have higher binding affinity to HSA. Taken together, these results provide an extended mechanistic interpretation of the interactions of chemicals and biological systems by providing direct associations between specific structural and biological properties of the exposure. Fig. 3 Correlation graph of the six MDs/MOA features of the QSMARt model. Vertex colors represent the sign of the associated beta value, while edge colors show the sign of the correlation of the features across the X dataset. Application of the QSMARt model Finally, we tested the performance of our QSMARt model in predicting the \(logK_{HSA}\) for an independent set of 799 compounds extracted from the CMap dataset. With 741 chemicals in the AD, our model provided a remarkable prediction coverage of 93% (Fig. 4). 
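Since Eq. 3 is a plain linear function of six power-transformed features, applying it to a new compound, as done here for the 799 external drugs, takes only a few lines. The coefficients below are copied from Eq. 3; the feature values in the usage comment are purely illustrative and do not correspond to any real compound.

```python
COEFS = {  # feature name -> (exponent, coefficient), taken from Eq. 3
    "Mor23i":               (1.25,  0.012),
    "N-072":                (1.25, -0.042),
    "ALOGP":                (1.25,  0.139),
    "MCF7_ENSG00000112115": (1.75, -2.980),
    "PC3_ENSG00000197646":  (1.75, -0.075),
    "PC3_ENSG00000276644":  (1.75, -0.216),
}
INTERCEPT = -0.372

def predict_logK_HSA(features: dict) -> float:
    """Evaluate Eq. 3 for one compound, given its three MDs and three log fold-changes."""
    return INTERCEPT + sum(
        coef * abs(features[name]) ** power for name, (power, coef) in COEFS.items()
    )

# Illustrative call with made-up feature values:
# predict_logK_HSA({"Mor23i": 0.8, "N-072": 1, "ALOGP": 2.5,
#                   "MCF7_ENSG00000112115": 0.05,
#                   "PC3_ENSG00000197646": 0.10,
#                   "PC3_ENSG00000276644": 0.02})
```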
It is worth emphasizing that no external chemicals falling outside the structural/MOA feature domain were identified. However, 58 drugs appeared outside the model prediction range and were further investigated. For this, we inspected the distribution of the different subsets of compounds in a projected space based on the six MDs/MOA features of the QSMARt model (Fig. 5a) as well as the three MOA features (Fig. 5b) and three MDs (Fig. 5c) considered separately. This analysis evidenced that the external set chemicals falling outside the model prediction range show fewer structural commonalities with the rest of the compounds (Fig. 5c) but overlap with the others in the genomic space (Fig. 5b). Thus, we further investigated the values of the MDs for the external dataset and, as we can see from Fig. 5d, drugs falling outside the prediction range of our model have higher values of the ALOGP MD. Fig. 4 Predicted \(logK_{HSA}\) by Eq. 3 versus leverage values of training set (black), test set (red), and external set (green) chemicals (Insubria graph). Dashed lines indicate the model prediction range. The vertical line is set at the warning leverage (critical hat value, \(h^* = 0.438\)). Fig. 5 TSNE projection of the drugs in the albumin and external dataset. The projection was performed by using the set of genes and MDs (a), only the genes (b) and only the MDs (c) in the optimal hybrid model. The outliers are in the border area of the dataset for the molecular descriptors (c), while they are similar to the rest of the external set for the gene log-fold changes (b). Likewise, the outliers still appear on the border for the combined two sets of features (a). In panel (d) the values of the three MDs are plotted (y axis) for the drugs in the albumin and external dataset (x axis). The drugs are ordered based on their predicted \(logK_{HSA}\) value. Drugs from the external set that fall in the model prediction range are marked in gray, while the ones that are outside the range are marked in blue. Drugs in the training set are marked in black while drugs in the test set are marked in red. Biological relevance of the QSMARt model In order to better understand the possible impact of the QSMARt model, we investigated its performance on drugs grouped by the Anatomical Therapeutic Chemical (ATC) code system as defined by the World Health Organization (WHO) [45]. The ATC codes classify the drugs into different groups in accordance with the organ or system on which they act and their chemical, pharmacological, and therapeutic properties. We performed our analyses by considering the anatomical subgroup (level 1) and the therapeutic subgroup (level 2) of the ATC codes. We investigated the relationship between the experimental vs. predicted \(logK_{HSA}\) values of the 59 drugs present in our dataset and their grouping in ATC levels 1 and 2 (Additional file 6: Fig. S3). This analysis highlights that the two drugs cefuroxime and amoxicillin, belonging to ATC class J (anti-infectives for systemic use), show the lowest range of experimental and predicted \(logK_{HSA}\). Likewise, a large group of ATC class C compounds (cardiovascular system) are in the mid range of the distribution, while four ATC class N compounds (nervous system) are grouped in the highest range of the experimental/predicted \(logK_{HSA}\). Next, we inspected the larger set of 799 drugs used for the external validation, for which no experimental value of \(logK_{HSA}\) was available. 
In this case, we looked at the distribution of the predicted \(logK_{HSA}\) values in the level 1 and level 2 ATC codes (Additional files 7: Fig. S4 and 8: Fig. S5). Also, this analysis shows that the compounds belonging to ATC class J (anti-infectives) have the lowest levels of predicted \(logK_{HSA}\). In contrast, drugs of the ATC classes A (digestive system), G (genitourinary system) and N (nervous system) have the highest predicted \(logK_{HSA}\). These results confirm our observations on the 59 drugs present in our discovery set. The genes selected in our model are involved in several signalling pathways, especially in cancer and immune signalling. Thus, we investigated their expression values between immunomodulatory and non-immunomodulatory compounds. We identified the level 2 classes L03 and L04 to be immunostimulant and immunosuppressant, respectively. Unfortunately, none of the compounds available in the Connectivity Map dataset belong to the class L03, while four are annotated as L04. To perform the comparison, we selected the compound structurally least similar to each of the L04 drugs in the Connectivity Map dataset, and plotted the respective expression values for each of the three genes included in our final model (Additional file 9: Fig. S6). While MCF7_ENSG00000112115 and PC3_ENSG00000197646 did not show any difference, the gene PC3_ENSG00000276644 showed a trend with higher expression in L04 drugs as compared to their least similar ones. In this study, we proposed a computational strategy to define quantitative models of structural and mechanism of action-activity relationships (QSMARt). Moreover, we investigated the effectiveness of a hybrid QSMARt model comprising both MDs and MOA information to better explain the biological mechanisms underlying endpoints of interest. We applied our methodology to predict human serum albumin (HSA) binding, obtaining a statistically robust and validated model that provides new avenues for the interpretation of the chemical-biological interactions. QSMARt models are promising complementary tools to develop new safe- and useful-by-design compounds. The datasets supporting the conclusions of this article are included within the article as Additional Files. CMap: Connectivity Map HL60: leukemia cell line HSA: human serum albumin Lasso: least absolute shrinkage and selection operator MCF7: breast cancer cell line MDs: molecular descriptors MOA: mechanism of action OECD: Organization for Economic Co-Operation and Development PC3: prostate cancer cell line QSAR: quantitative structure-activity relationship QSMARt: quantitative structure and mechanism of action-activity relationship Cherkasov A, Muratov EN, Fourches D, Varnek A, Baskin II, Cronin M, Dearden J, Gramatica P, Martin YC, Todeschini R, Consonni V, Kuz'min VE, Cramer R, Benigni R, Yang C, Rathman J, Terfloth L, Gasteiger J, Richard A, Tropsha A (2014) QSAR modeling: where have you been? where are you going to? J Med Chem 57(12):4977–5010. https://doi.org/10.1021/jm4004285 OECD (2014) Guidance document on the validation of (quantitative) structure-activity relationship [(Q)SAR] models, p 154. https://www.oecd-ilibrary.org/content/publication/9789264085442-en. Accessed 12 Mar 2018 Todeschini R, Consonni V. (eds.): Molecular descriptors for chemoinformatics: volume I: alphabetical listing/volume II: appendices, references. methods and principles in medicinal chemistry. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany (2009). https://doi.org/10.1002/9783527628766. 
Accessed 12 Mar 2018 García-Jacas CR, Contreras-Torres E, Marrero-Ponce Y, Pupo-Meriño M, Barigye SJ, Cabrera-Leyva L (2016) Examining the predictive accuracy of the novel 3d n-linear algebraic molecular codifications on benchmark datasets. J Cheminform 8(1):10 Valdés-Martiní JR, Marrero-Ponce Y, García-Jacas CR, Martinez-Mayorga K, Barigye SJ, d'Almeida YSV, Pérez-Giménez F, Morell CA, et al. (2017) Qubils-mas, open source multi-platform software for atom-and bond-based topological (2d) and chiral (2.5 d) algebraic molecular descriptors computations. J Cheminform 9(1):35 (2017) Chen Q, Wu L, Liu W, Xing L, Fan X (2013) Enhanced QSAR model performance by integrating structural and gene expression information. Molecules 18(9):10789–10801. https://doi.org/10.3390/molecules180910789 Wang W, Kim MT, Sedykh A, Zhu H (2015) Developing enhanced blood-brain barrier permeability models: Integrating external bio-assay data in QSAR modeling. Pharm Res 32(9):3055–3065. https://doi.org/10.1007/s11095-015-1687-1 Alexander-Dann B, Pruteanu LL, Oerton E, Sharma N, Berindan-Neagoe I, Módos D, Bender A (2018) Developments in toxicogenomics: understanding and predicting compound-induced toxicity from gene expression data. Mol Omics 14(4):218–236. https://doi.org/10.1039/c8mo00042e Li Y, Tu K, Zheng S, Wang J, Li Y, Hao P, Li X (2011) Association of feature gene expression with structural fingerprints of chemical compounds. J Bioinform Comput Biol 9(4):503–519 Low Y, Uehara T, Minowa Y, Yamada H, Ohno Y, Urushidani T, Sedykh A, Muratov E, Kuz'min V, Fourches D, Zhu H, Rusyn I, Tropsha A (2011) Predicting drug-induced hepatotoxicity using QSAR and toxicogenomics approaches. Chem Res Toxicol 24(8):1251–1262. https://doi.org/10.1021/tx200148a Perualila-Tan N, Kasim A, Talloen W, Verbist B, Göhlmann HWH, Consortium Q, Shkedy Z (2016) A joint modeling approach for uncovering associations between gene expression, bioactivity and chemical structure in early drug discovery to guide lead selection and genomic biomarker development. Stat Appl Genet Mol Biol 15(4):291–304. https://doi.org/10.1515/sagmb-2014-0086 Önlü S, Türker Sacan M (2017) Impact of geometry optimization methods on QSAR modelling: a case study for predicting human serum albumin binding affinity. SAR QSAR Environ Res 28(6):491–509. https://doi.org/10.1080/1062936X.2017.13432 Kim S, Thiessen PA, Bolton EE, Chen J, Fu G, Gindulyte A, Han L, He J, He S, Shoemaker BA, Wang J, Yu B, Zhang J, Bryant SH (2016) PubChem substance and compound databases. Nucleic Acids Res 44(D1):1202–13. https://doi.org/10.1093/nar/gkv951 Mauri A, Consonni V, Pavan M, Todeschini R (2006) Dragon software: an easy approach to molecular descriptor calculations. Match 56(2):237–248 Gramatica P, Chirico N, Papa E, Cassani S, Kovarich S (2013) Qsarins: a new software for the development, analysis, and validation of QSAR mlr models. J Comput Chem 34(24):2121–2132 Lamb J, Crawford ED, Peck D, Modell JW, Blat IC, Wrobel MJ, Lerner J, Brunet J-P, Subramanian A, Ross KN, Reich M, Hieronymus H, Wei G, Armstrong SA, Haggarty SJ, Clemons PA, Wei R, Carr SA, Lander ES, Golub TR (2006) The connectivity map: using gene-expression signatures to connect small molecules, genes, and disease. Science 313(5795):1929–1935. https://doi.org/10.1126/science.1132939 Irizarry RA, Bolstad BM, Collin F, Cope LM, Hobbs B, Speed TP (2003) Summaries of affymetrix GeneChip probe level data. Nucleic Acids Res 31(4):15. 
https://doi.org/10.1093/nar/gng015 Leek J, Johnson W, Parker H, Jaffe A, Storey J (2014) SVA: surrogate variable analysis R package version 3.10. 0 Ritchie ME, Phipson B, Wu D, Hu Y, Law CW, Shi W, Smyth GK (2015) limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res 43(7):47. https://doi.org/10.1093/nar/gkv007 Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodol) 58:267–288 Friedman J, Hastie T, Tibshirani R (2010) Regularization paths for generalized linear models via coordinate descent. J Stat Softw 33(1):1–22. https://doi.org/10.18637/jss.v033.i01 Golbraikh A, Tropsha A (2002) Beware of q2!. J Mol Graph Model 20(4):269–276. https://doi.org/10.1016/S1093-3263(01)00123-1 Aptula AO, Jeliazkova NG, Schultz TW, Cronin MT (2005) The better predictive model: high q2 for the training set or low root mean square error of prediction for the test set? QSAR Comb Sci 24(3):385–396 Shi LM, Fang H, Tong W, Wu J, Perkins R, Blair RM, Branham WS, Dial SL, Moland CL, Sheehan DM (2001) QSAR models using a large diverse set of estrogens. J Chem Inf Comput. Sci 41(1):186–195 Chirico N, Gramatica P (2012) Real external predictivity of QSAR models. part 2. new intercomparable thresholds for different validation criteria and the need for scatter plot inspection. J Chem Inf Model 52(8):2044–2058. https://doi.org/10.1021/ci300084j Schüürmann G, Ebert R-U, Chen J, Wang B, Kühne R (2008) External validation and prediction employing the predictive squared correlation coefficient test set activity mean vs training set activity mean. J Chem Inf Model 48(11):2140–2145. https://doi.org/10.1021/ci800253u Consonni V, Ballabio D, Todeschini R (2009) Comments on the definition of the q2 parameter for QSAR validation. J Chem Inf Model 49(7):1669–1678. https://doi.org/10.1021/ci900115y Consonni V, Ballabio D, Todeschini R (2010) Evaluation of model predictive ability by external validation techniques. J Chemom 24(3–4):194–201. https://doi.org/10.1002/cem.1290 García-Jacas CR, Martinez-Mayorga K, Marrero-Ponce Y, Medina-Franco J (2017) Conformation-dependent qsar approach for the prediction of inhibitory activity of bromodomain modulators. SAR QSAR in Environ Res 28(1):41–58 Gramatica P (2007) Principles of qsar models validation: internal and external. Mol Inform 26(5):694–701 Roy K, Kar S, Ambure P (2015) On a simple approach for determining applicability domain of qsar models. Chemom Intell Lab Syst 145:22–29 Jaworska J, Nikolova-Jeliazkova N, Aldenberg T (2004) Review of methods for applicability domain estimation. Report. The European Commission-Joint Research Centre, Ispra, Italy CDATA-Hair J, Anderson R, Tatham R, Black W (1998) Multivariate Data Analysis. Prentice Hall, Englewood Cliffs, NJ Sheridan RP, Feuston BP, Maiorov VN, Kearsley SK (2004) Similarity to molecules in the training set is a good discriminator for prediction accuracy in qsar. J Chem inf Comput Sci 44(6):1912–1928 Maaten LVD, Hinton G (2008) Visualizing data using t-SNE. J Mach Learn Res 9:2579–2605 Devinyak O, Havrylyuk D, Lesyk R (2014) 3d-morse descriptors explained. J Mol Graph Model 54:194–203 Xu J, Xiong Q, Chen B, Wang L, Liu L, Xu W (2009) Modeling the relative fluorescence intensity ratio of eu(III) complex in different solvents based on QSPR method. J Fluoresc 19(2):203–209. 
https://doi.org/10.1007/s10895-008-0403-5 Ghose AK, Crippen GM (1987) Atomic physicochemical parameters for three-dimensional-structure-directed quantitative structure-activity relationships. 2. Modeling dispersive and hydrophobic interactions. J Chem Inf Ccomput Sci 27(1):21–35 Chen L, Chen X (2012) Results of molecular docking as descriptors to predict human serum albumin binding affinity. J Mol Graph Model 33:35–43 Colmenarejo G (2003) In silico prediction of drug-binding strengths to human serum albumin. Med Rese Revi 23(3):275–301 Lambrinidis G, Vallianatou T, Tsantili-Kakoulidou A (2015) In vitro, in silico and integrated strategies for the estimation of plasma protein binding: A review. Adv Drug Deliv Rev 86:27–45 Garbar C, Mascaux C, Giustiniani J, Merrouche Y, Bensussan A (2017) Chemotherapy treatment induces an increase of autophagy in the luminal breast cancer cell MCF7, but not in the triple-negative MDA-MB231. Sci Rep 7(1):7201 Taube JM, Klein AP, Brahmer JR, Xu H, Pan X, Kim JH, Chen L, Pardoll DM, Topalian SL, Anders RA (2014) Association of PD-1, PD-1 ligands, and other features of the tumor immune microenvironment with response to anti-PD-1 therapy. Clin Cancer Res 3271 Wu K, Yuan X, Pestell R (2015) Endogenous dach1 in cancer. Oncoscience 2(10):803 Organization WH et al (2006) Who collaborating centre for drug statistics methodology: atc classification index with ddds and guidelines for atc classification and ddd assignment. Norwegian Institute of Public Health, Oslo, Norway This study was supported by the Academy of Finland (grant agreements 275151 and 292307). Serli Önlü Present address: Corporate Product Safety/Henkel AG & Co. KGaA, Düsseldorf, Germany Angela Serra and Serli Önlü equally contributed to this work Faculty of Medicine and Health Technology, Tampere University, Arvo Ylpön katu 34, Tampere, Finland Angela Serra, Serli Önlü & Dario Greco DISES, STATLAB, University of Salerno, Giovanni Paolo II 132, Fisciano, Italy Pietro Coretto Institute of Biotechnology, University of Helsinki, Finland, Helsinki, Finland Dario Greco BioMediTech institute, Tampere University, Tampere, Finland Angela Serra AS developed and implemented the method, analysed the data, wrote the manuscript; SÖ evaluated and interpreted the results, wrote the manuscript; PC developed the method and wrote the manuscript; DG conceived and supervised the project, interpreted the results, wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Dario Greco. This file contains the following supplementary tables: Table S1: Dataset; Table S2: Parameters and criteria considered for goodness of fit, internal and external validation; Table S3: External dataset with predicted LogKhsa; Table S4: Molecular descriptors appearing in the models; Table S5: Genes appearing in the models; Table S6: Summary of the models parameters and evaluation metrics. This file contains the formal descriptions of the methodology and the RSVA algorithm File containing the R functions and scripts to create the integrative model. Fig. S1. Estimation of the optimal λ; value with the RSVA algorithm. Fig. S2. Comparison of the model validation curves. Fig. S3. Scatterplot of experimental vs. predicted logKHSA values of the 59 drugs, coloured by ATC codes level 1 and 2. Fig. S4. Boxplot of the predicted logKHSA values of the 799 external compounds coming from CMap dataset grouped by ATC codes level 1. Fig. S6. 
Boxplot of the expression values of the three selected genes, grouped by immunosuppressant drugs and their least structurally similar counterparts. Serra, A., Önlü, S., Coretto, P. et al. An integrated quantitative structure and mechanism of action-activity relationship model of human serum albumin binding. J Cheminform 11, 38 (2019). https://doi.org/10.1186/s13321-019-0359-2 Keywords: QSAR; Human serum albumin binding; Integrative analysis; Safe-by-design
Physics Forums Insights, articles for: brachistochrone.
Explaining the General Brachistochrone Problem (wrobel, 2016-07-10): Consider a problem about the curve of fastest descent in the following generalized statement. Suppose that we have a Lagrangian system $$L(x,\dot x)=\frac{1}{2}g_{ij}(x)\dot…
A Brachistochrone Subway Is Not a Cost-effective Idea (rude man, 2015-05-14): It is apparent that a subway tunnel could be built without the need for supplied energy like electricity, assuming zero friction everywhere. The tunnel…
Topics by WorldWideScience.org Sample records for algorithmic algebraic model Algorithms in Algebraic Geometry Dickenstein, Alicia; Sommese, Andrew J In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its Global identifiability of linear compartmental models--a computer algebra algorithm. Science.gov (United States) Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is, thus, a prerequisite for parameter estimation of biological dynamic models. Global identifiability is however difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in nonlinearity degree and number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability) is presented, which combines the topological transfer function method with the Buchberger algorithm, to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general structure compartmental models from general multi input-multi output experiments. Examples of usage of GLOBI to analyze a priori global identifiability of some complex biological compartmental models are provided. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm International Nuclear Information System (INIS) Littlefield, R.J.; Maschhoff, K.J. Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab Algebraic Algorithm Design and Local Search National Research Council Canada - National Science Library Graham, Robert .... 
Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS... Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm Institute of Scientific and Technical Information of China (English) WANG ShunJin; ZHANG Hua Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm. Parallel algorithms for numerical linear algebra van der Vorst, H This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers. All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices. Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p Construction Example for Algebra System Using Harmony Search Algorithm Directory of Open Access Journals (Sweden) FangAn Deng Full Text Available The construction example of an algebra system is used to verify the existence of a complex algebra system, and it is an NP-hard problem. In this paper, to solve this kind of problem, firstly, a mathematical optimization model for the construction example of an algebra system is established. Secondly, an improved harmony search algorithm based on the NGHS algorithm (INGHS) is proposed to find as many solutions as possible for the optimization model; in the proposed INGHS algorithm, to achieve the balance between exploration power and exploitation power in the search process, a global best strategy and a parameter dynamic adjustment method are presented. Finally, nine construction examples of algebra system are used to evaluate the optimization model and performance of INGHS.
The experimental results show that the proposed algorithm has strong performance for solving complex construction example problems of algebra system. Algorithmic algebraic geometry and flux vacua Gray, James; He Yanghui; Lukas, Andre We develop a new and efficient method to systematically analyse four dimensional effective supergravities which descend from flux compactifications. The issue of finding vacua of such systems, both supersymmetric and non-supersymmetric, is mapped into a problem in computational algebraic geometry. Using recent developments in computer algebra, the problem can then be rapidly dealt with in a completely algorithmic fashion. Two main results are (1) a procedure for calculating constraints which the flux parameters must satisfy in these models if any given type of vacuum is to exist; (2) a stepwise process for finding all of the isolated vacua of such systems and their physical properties. We illustrate our discussion with several concrete examples, some of which have eluded conventional methods so far Robust Algebraic Multilevel Methods and Algorithms Kraus, Johannes This book deals with algorithms for the solution of linear systems of algebraic equations with large-scale sparse matrices, with a focus on problems that are obtained after discretization of partial differential equations using finite element methods. Provides a systematic presentation of the recent advances in robust algebraic multilevel methods. Can be used for advanced courses on the topic. An algorithm to construct the basic algebra of a skew group algebra NARCIS (Netherlands) Horobeţ, E. We give an algorithm for the computation of the basic algebra Morita equivalent to a skew group algebra of a path algebra by obtaining formulas for the number of vertices and arrows of the new quiver Qb. We apply this algorithm to compute the basic algebra corresponding to all simple quaternion Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too). Gonzalez-Vega, Laureano Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM) The algebraic collective model Rowe, D.J.; Turner, P.S. A recently proposed computationally tractable version of the Bohr collective model is developed to the extent that we are now justified in describing it as an algebraic collective model. The model has an SU(1,1)xSO(5) algebraic structure and a continuous set of exactly solvable limits. Moreover, it provides bases for mixed symmetry collective model calculations. However, unlike the standard realization of SU(1,1), used for computing beta wave functions and their matrix elements in a spherical basis, the algebraic collective model makes use of an SU(1,1) algebra that generates wave functions appropriate for deformed nuclei with intrinsic quadrupole moments ranging from zero to any large value. A previous paper focused on the SO(5) wave functions, as SO(5) (hyper-)spherical harmonics, and computation of their matrix elements. 
This paper gives analytical expressions for the beta matrix elements needed in applications of the model and illustrative results to show the remarkable gain in efficiency that is achieved by using such a basis in collective model calculations for deformed nuclei Homogeneous Buchberger algorithms and Sullivant's computational commutative algebra challenge DEFF Research Database (Denmark) Lauritzen, Niels We give a variant of the homogeneous Buchberger algorithm for positively graded lattice ideals. Using this algorithm we solve the Sullivant computational commutative algebra challenge. Optical linear algebra processors - Architectures and algorithms Casasent, David Attention is given to the component design and optical configuration features of a generic optical linear algebra processor (OLAP) architecture, as well as the large number of OLAP architectures, number representations, algorithms and applications encountered in current literature. Number-representation issues associated with bipolar and complex-valued data representations, high-accuracy (including floating point) performance, and the base or radix to be employed, are discussed, together with case studies on a space-integrating frequency-multiplexed architecture and a hybrid space-integrating and time-integrating multichannel architecture. Algebraic dynamics solutions and algebraic dynamics algorithm for nonlinear ordinary differential equations WANG; Shunjin; ZHANG; Hua The problem of preserving fidelity in numerical computation of nonlinear ordinary differential equations is studied in terms of preserving local differential structure and approximating global integration structure of the dynamical system. The ordinary differential equations are lifted to the corresponding partial differential equations in the framework of algebraic dynamics, and a new algorithm, the algebraic dynamics algorithm, is proposed based on the exact analytical solutions of the ordinary differential equations by the algebraic dynamics method. In the new algorithm, the time evolution of the ordinary differential system is described locally by the time translation operator and globally by the time evolution operator. The exact analytical piecewise solution of the ordinary differential equations is expressed in terms of a Taylor series with a local convergence radius, and its finite order truncation leads to the new numerical algorithm with a controllable precision better than the Runge-Kutta algorithm and the symplectic geometric algorithm. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(d_FR - 1)/2] + 1, where d_FR is the Feng-Rao distance. An algorithm for analysis of the structure of finitely presented Lie algebras Vladimir P. Gerdt Full Text Available We consider the following problem: what is the most general Lie algebra satisfying a given set of Lie polynomial equations?
The presentation of Lie algebras by a finite set of generators and defining relations is one of the most general mathematical and algorithmic schemes of their analysis. That problem is of great practical importance, covering applications ranging from mathematical physics to combinatorial algebra. Some particular applications are constructionof prolongation algebras in the Wahlquist-Estabrook method for integrability analysis of nonlinear partial differential equations and investigation of Lie algebras arising in different physical models. The finite presentations also indicate a way to q-quantize Lie algebras. To solve this problem, one should perform a large volume of algebraic transformations which is sharply increased with growth of the number of generators and relations. For this reason, in practice one needs to use a computer algebra tool. We describe here an algorithm for constructing the basis of a finitely presented Lie algebra and its commutator table, and its implementation in the C language. Some computer results illustrating our algorithmand its actual implementation are also presented. Applied algebra codes, ciphers and discrete algorithms Hardy, Darel W; Walker, Carol L This book attempts to show the power of algebra in a relatively simple setting.-Mathematical Reviews, 2010… The book supports learning by doing. In each section we can find many examples which clarify the mathematics introduced in the section and each section is followed by a series of exercises of which approximately half are solved in the end of the book. Additional the book comes with a CD-ROM containing an interactive version of the book powered by the computer algebra system Scientific Notebook. … the mathematics in the book are developed as needed and the focus of the book lies clearly o Algebraic dynamics solutions and algebraic dynamics algorithm for nonlinear partial differential evolution equations of dynamical systems Using functional derivative technique in quantum field theory, the algebraic dy-namics approach for solution of ordinary differential evolution equations was gen-eralized to treat partial differential evolution equations. The partial differential evo-lution equations were lifted to the corresponding functional partial differential equations in functional space by introducing the time translation operator. The functional partial differential evolution equations were solved by algebraic dynam-ics. The algebraic dynamics solutions are analytical in Taylor series in terms of both initial functions and time. Based on the exact analytical solutions, a new nu-merical algorithm—algebraic dynamics algorithm was proposed for partial differ-ential evolution equations. The difficulty of and the way out for the algorithm were discussed. The application of the approach to and computer numerical experi-ments on the nonlinear Burgers equation and meteorological advection equation indicate that the algebraic dynamics approach and algebraic dynamics algorithm are effective to the solution of nonlinear partial differential evolution equations both analytically and numerically. High performance linear algebra algorithms: An introduction Gustavson, F.G.; Wasniewski, Jerzy his Mini-Symposium consisted of two back to back sessions, each consisting of five presentations, held on the afternoon of Monday, June 21, 2004. A major theme of both sessions was novel data structures for the matrices of dense linear algebra, DLA. Talks one to four of session one all centered... 
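The algebraic dynamics entries above all rest on the same idea: truncate the Taylor series of the exact flow at order N and use that truncation as a one-step integrator. For a linear test system y' = Ay this reduces to a truncated matrix exponential, which makes the contrast with Runge-Kutta easy to reproduce. The sketch below (Python with NumPy; the harmonic oscillator is a stand-in test model, not one of the 12 models from the papers) is only an illustration of the idea, not the authors' implementation.

import numpy as np

# Truncated-Taylor ("algebraic dynamics"-style) step for a linear system y' = A y:
# the exact solution is y(t+h) = exp(hA) y(t); truncating the exponential series
# at order N gives an N-th order one-step method.
def taylor_step(A, y, h, order=4):
    term = y.copy()
    out = y.copy()
    for k in range(1, order + 1):
        term = (h / k) * (A @ term)   # builds h^k A^k y / k! incrementally
        out = out + term
    return out

def rk4_step(f, y, h):
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Test model: harmonic oscillator q' = p, p' = -q, with energy E = (q^2 + p^2)/2.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda y: A @ y

h, steps = 0.1, 1000
y_taylor = np.array([1.0, 0.0])
y_rk4 = np.array([1.0, 0.0])
for _ in range(steps):
    y_taylor = taylor_step(A, y_taylor, h, order=6)
    y_rk4 = rk4_step(f, y_rk4, h)

energy = lambda y: 0.5 * (y[0] ** 2 + y[1] ** 2)
print("energy drift, 6th-order Taylor:", abs(energy(y_taylor) - 0.5))
print("energy drift, RK4:             ", abs(energy(y_rk4) - 0.5))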
Algorithmic and experimental methods in algebra, geometry, and number theory Decker, Wolfram; Malle, Gunter This book presents state-of-the-art research and survey articles that highlight work done within the Priority Program SPP 1489 "Algorithmic and Experimental Methods in Algebra, Geometry and Number Theory�, which was established and generously supported by the German Research Foundation (DFG) from 2010 to 2016. The goal of the program was to substantially advance algorithmic and experimental methods in the aforementioned disciplines, to combine the different methods where necessary, and to apply them to central questions in theory and practice. Of particular concern was the further development of freely available open source computer algebra systems and their interaction in order to create powerful new computational tools that transcend the boundaries of the individual disciplines involved. The book covers a broad range of topics addressing the design and theoretical foundations, implementation and the successful application of algebraic algorithms in order to solve mathematical research problems. It off... An Improved Algorithm for Generating Database Transactions from Relational Algebra Specifications Daniel J. Dougherty Full Text Available Alloy is a lightweight modeling formalism based on relational algebra. In prior work with Fisler, Giannakopoulos, Krishnamurthi, and Yoo, we have presented a tool, Alchemy, that compiles Alloy specifications into implementations that execute against persistent databases. The foundation of Alchemy is an algorithm for rewriting relational algebra formulas into code for database transactions. In this paper we report on recent progress in improving the robustness and efficiency of this transformation. Algorithmic Algebraic Combinatorics and Gröbner Bases Klin, Mikhail; Jurisic, Aleksandar This collection of tutorial and research papers introduces readers to diverse areas of modern pure and applied algebraic combinatorics and finite geometries with a special emphasis on algorithmic aspects and the use of the theory of Grobner bases. Topics covered include coherent configurations, association schemes, permutation groups, Latin squares, the Jacobian conjecture, mathematical chemistry, extremal combinatorics, coding theory, designs, etc. Special attention is paid to the description of innovative practical algorithms and their implementation in software packages such as GAP and MAGM Algebraic Modeling of Topological and Computational Structures and Applications Theodorou, Doros; Stefaneas, Petros; Kauffman, Louis This interdisciplinary book covers a wide range of subjects, from pure mathematics (knots, braids, homotopy theory, number theory) to more applied mathematics (cryptography, algebraic specification of algorithms, dynamical systems) and concrete applications (modeling of polymers and ionic liquids, video, music and medical imaging). The main mathematical focus throughout the book is on algebraic modeling with particular emphasis on braid groups. The research methods include algebraic modeling using topological structures, such as knots, 3-manifolds, classical homotopy groups, and braid groups. The applications address the simulation of polymer chains and ionic liquids, as well as the modeling of natural phenomena via topological surgery. The treatment of computational structures, including finite fields and cryptography, focuses on the development of novel techniques. 
These techniques can be applied to the design of algebraic specifications for systems modeling and verification. This book is the outcome of a w... Matrix algebra for linear models Gruber, Marvin H J Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f Algebraic aspects of exact models Gaudin, M. Spin chains, 2-D spin lattices, chemical crystals, and particles in delta function interaction share the same underlying structures: the applicability of Bethe's superposition ansatz for wave functions, the commutativity of transfer matrices, and the existence of a ternary operator algebra. The appearance of these structures and interrelations from the eight-vertex model, for delta function interacting particles of general spin, and for spin 1/2, are outlined as follows: I. Eight-Vertex Model. Equivalences to Ising model and the dimer system. Transfer matrix and symmetry of the Self Conjugate model. Relation between the XYZ Hamiltonian and the transfer matrix. One parameter family of commuting transfer matrices. A representation of the symmetric group spin. Diagonalization of the transfer matrix. The Coupled Spectrum equations. II. Identical particles with Delta Function interaction. The Bethe ansatz. Yang's representation. The Ternary Algebra and integrability. III. Identical particles with delta function interaction: general solution for two internal states. The problem of spin 1/2 fermions. The Operator method The Automation of Stochastization Algorithm with Use of SymPy Computer Algebra Library Demidova, Anastasya; Gevorkyan, Migran; Kulyabov, Dmitry; Korolkova, Anna; Sevastianov, Leonid The SymPy computer algebra library is used for automatic generation of ordinary and stochastic systems of differential equations from the schemes of kinetic interaction. Schemes of this type are used not only in chemical kinetics but also in biological, ecological and technical models. This paper describes the automatic generation algorithm with an emphasis on application details. Algorithm for solving polynomial algebraic Riccati equations and its application Czech Academy of Sciences Publication Activity Database Augusta, Petr; Augustová, Petra Roč. 1, č. 4 (2012), s. 237-242 ISSN 2223-7038 R&D Projects: GA ČR GPP103/12/P494 Institutional support: RVO:67985556 Keywords: Numerical algorithms * algebraic Riccati equation * spatially distributed systems * optimal control Subject RIV: BC - Control Systems Theory http://lib.physcon.ru/doc?id=8b4876d6a57d ALGEBRA: ALgorithm for the heterogeneous dosimetry based on GEANT4 for BRAchytherapy. Afsharpour, H; Landry, G; D'Amours, M; Enger, S; Reniers, B; Poon, E; Carrier, J-F; Verhaegen, F; Beaulieu, L Task group 43 (TG43)-based dosimetry algorithms are efficient for brachytherapy dose calculation in water. However, human tissues have chemical compositions and densities different from water. Moreover, the mutual shielding effect of seeds on each other (interseed attenuation) is neglected in the TG43-based dosimetry platforms.
The scientific community has expressed the need for an accurate dosimetry platform in brachytherapy. The purpose of this paper is to present ALGEBRA, a Monte Carlo platform for dosimetry in brachytherapy which is sufficiently fast and accurate for clinical and research purposes. ALGEBRA is based on the GEANT4 Monte Carlo code and is capable of handling the DICOM RT standard to recreate a virtual model of the treated site. Here, the performance of ALGEBRA is presented for the special case of LDR brachytherapy in permanent prostate and breast seed implants. However, the algorithm is also capable of handling other treatments such as HDR brachytherapy. MultiAspect Graphs: Algebraic Representation and Algorithms Klaus Wehmuth Full Text Available We present the algebraic representation and basic algorithms for MultiAspect Graphs (MAGs. A MAG is a structure capable of representing multilayer and time-varying networks, as well as higher-order networks, while also having the property of being isomorphic to a directed graph. In particular, we show that, as a consequence of the properties associated with the MAG structure, a MAG can be represented in matrix form. Moreover, we also show that any possible MAG function (algorithm can be obtained from this matrix-based representation. This is an important theoretical result since it paves the way for adapting well-known graph algorithms for application in MAGs. We present a set of basic MAG algorithms, constructed from well-known graph algorithms, such as degree computing, Breadth First Search (BFS, and Depth First Search (DFS. These algorithms adapted to the MAG context can be used as primitives for building other more sophisticated MAG algorithms. Therefore, such examples can be seen as guidelines on how to properly derive MAG algorithms from basic algorithms on directed graphs. We also make available Python implementations of all the algorithms presented in this paper. Critical analysis of algebraic collective models Moshinsky, M. The author shall understand by algebraic collective models all those based on specific Lie algebras, whether the latter are suggested through simple shell model considerations like in the case of the Interacting Boson Approximation (IBA), or have a detailed microscopic foundation like the symplectic model. To analyze these models critically, it is convenient to take a simple conceptual example of them in which all steps can be implemented analytically or through elementary numerical analysis. In this note he takes as an example the symplectic model in a two dimensional space i.e. based on a sp(4,R) Lie algebra, and show how through its complete discussion we can get a clearer understanding of the structure of algebraic collective models of nuclei. In particular he discusses the association of Hamiltonians, related to maximal subalgebras of our basic Lie algebra, with specific types of spectra, and the connections between spectra and shapes Acoustooptic linear algebra processors - Architectures, algorithms, and applications Casasent, D. Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products and such architectures are described. 
Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography. Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART on the reduction of the number of unknowns of the associated linear system achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, being this reduction specially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to original DART, both in clean and noisy environments. An Algorithm for Isolating the Real Solutions of Piecewise Algebraic Curves Jinming Wu to compute the real solutions of two piecewise algebraic curves. It is primarily based on the Krawczyk-Moore iterative algorithm and good initial iterative interval searching algorithm. The proposed algorithm is relatively easy to implement. Modeling digital switching circuits with linear algebra Thornton, Mitchell A Modeling Digital Switching Circuits with Linear Algebra describes an approach for modeling digital information and circuitry that is an alternative to Boolean algebra. While the Boolean algebraic model has been wildly successful and is responsible for many advances in modern information technology, the approach described in this book offers new insight and different ways of solving problems. Modeling the bit as a vector instead of a scalar value in the set {0, 1} allows digital circuits to be characterized with transfer functions in the form of a linear transformation matrix. The use of transf Towards Model Checking Stochastic Process Algebra Hermanns, H.; Grieskamp, W.; Santen, T.; Katoen, Joost P.; Stoddart, B.; Meyer-Kayser, J.; Siegle, M. Stochastic process algebras have been proven useful because they allow behaviour-oriented performance and reliability modelling. As opposed to traditional performance modelling techniques, the behaviour- oriented style supports composition and abstraction in a natural way. 
However, analysis of Advanced computer algebra algorithms for the expansion of Feynman integrals Ablinger, Jakob; Round, Mark; Schneider, Carsten Two-point Feynman parameter integrals, with at most one mass and containing local operator insertions in 4+ε-dimensional Minkowski space, can be transformed to multi-integrals or multi-sums over hyperexponential and/or hypergeometric functions depending on a discrete parameter n. Given such a specific representation, we utilize an enhanced version of the multivariate Almkvist-Zeilberger algorithm (for multi-integrals) and a common summation framework of the holonomic and difference field approach (for multi-sums) to calculate recurrence relations in n. Finally, solving the recurrence we can decide efficiently if the first coefficients of the Laurent series expansion of a given Feynman integral can be expressed in terms of indefinite nested sums and products; if yes, the all-n solution is returned in compact representations, i.e., no algebraic relations exist among the occurring sums and products. Energy Technology Data Exchange (ETDEWEB) Ablinger, Jakob; Round, Mark; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation]; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)] Computational algebraic geometry of epidemic models Rodríguez Vega, Martín. Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner basis, Hilbert dimension and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed by the changes in the algebraic structure of R0, the changes in Groebner basis, the changes in Hilbert dimension, and the changes in Hilbert polynomials. It is hoped that the results obtained in this paper become of importance for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology is proposed to analyze models for airborne and waterborne diseases. Polynomial algebra of discrete models in systems biology.
Veliz-Cuba, Alan; Jarrah, Abdul Salam; Laubenbacher, Reinhard An increasing number of discrete mathematical models are being published in Systems Biology, ranging from Boolean network models to logical models and Petri nets. They are used to model a variety of biochemical networks, such as metabolic networks, gene regulatory networks and signal transduction networks. There is increasing evidence that such models can capture key dynamic features of biological networks and can be used successfully for hypothesis generation. This article provides a unified framework that can aid the mathematical analysis of Boolean network models, logical models and Petri nets. They can be represented as polynomial dynamical systems, which allows the use of a variety of mathematical tools from computer algebra for their analysis. Algorithms are presented for the translation into polynomial dynamical systems. Examples are given of how polynomial algebra can be used for the model analysis. Supplementary data are available at Bioinformatics online. W algebra in the SU(3) parafermion model Ding, X.; Fan, H.; Shi, K.; Wang, P.; Zhu, C. A construction of the W_3 algebra for the SU(3) parafermion model is proposed, in which a Z algebra technique is used instead of the popular free-field realization. The central charge of the underlying algebra is different from known W algebras Chiral algebras in Landau-Ginzburg models Dedushenko, Mykola Chiral algebras in the cohomology of the \overline{Q}_+ supercharge of two-dimensional N=(0,2) theories on flat spacetime are discussed. Using the supercurrent multiplet, we show that the answer is renormalization group invariant for theories with an R-symmetry. For N=(0,2) Landau-Ginzburg models, the chiral algebra is determined by the operator equations of motion, which preserve their classical form, and quantum renormalization of composite operators. We study these theories and then specialize to the N=(2,2) models and consider some examples. Ideals, varieties, and algorithms: an introduction to computational algebraic geometry and commutative algebra Cox, David A; O'Shea, Donal This text covers topics in algebraic geometry and commutative algebra with a strong perspective toward practical and computational aspects. The first four chapters form the core of the book. A comprehensive chart in the preface illustrates a variety of ways to proceed with the material once these chapters are covered. In addition to the fundamentals of algebraic geometry—the elimination theorem, the extension theorem, the closure theorem, and the Nullstellensatz—this new edition incorporates several substantial changes, all of which are listed in the Preface. The largest revision incorporates a new chapter (ten), which presents some of the essentials of progress made over the last decades in computing Gröbner bases. The book also includes current computer algebra material in Appendix C and updated independent projects (Appendix D). The book may serve as a first or second course in undergraduate abstract algebra and, with some supplementation perhaps, for beginning graduate level courses in algebraic geom... A process algebra model of QED Sulis, William The process algebra approach to quantum mechanics posits a finite, discrete, determinate ontology of primitive events which are generated by processes (in the sense of Whitehead). In this ontology, primitive events serve as elements of an emergent space-time and of emergent fundamental particles and fields.
Each process generates a set of primitive elements, using only local information, causally propagated as a discrete wave, forming a causal space termed a causal tapestry. Each causal tapestry forms a discrete and finite sampling of an emergent causal manifold (space-time) M and emergent wave function. Interactions between processes are described by a process algebra which possesses 8 commutative operations (sums and products) together with a non-commutative concatenation operator (transitions). The process algebra possesses a representation via nondeterministic combinatorial games. The process algebra connects to quantum mechanics through the set valued process and configuration space covering maps, which associate each causal tapestry with sets of wave functions over M. Probabilities emerge from interactions between processes. The process algebra model has been shown to reproduce many features of the theory of non-relativistic scalar particles to a high degree of accuracy, without paradox or divergences. This paper extends the approach to a semi-classical form of quantum electrodynamics. (paper) Algorithms of estimation for nonlinear systems a differential and algebraic viewpoint Martínez-Guerra, Rafael This book acquaints readers with recent developments in dynamical systems theory and its applications, with a strong focus on the control and estimation of nonlinear systems. Several algorithms are proposed and worked out for a set of model systems, in particular so-called input-affine or bilinear systems, which can serve to approximate a wide class of nonlinear control systems. These can either take the form of state space models or be represented by an input-output equation. The approach taken here further highlights the role of modern mathematical and conceptual tools, including differential algebraic theory, observer design for nonlinear systems and generalized canonical forms. Current algebra, statistical mechanics and quantum models Vilela Mendes, R. Results obtained in the past for free boson systems at zero and nonzero temperatures are revisited to clarify the physical meaning of current algebra reducible functionals which are associated to systems with density fluctuations, leading to observable effects on phase transitions. To use current algebra as a tool for the formulation of quantum statistical mechanics amounts to the construction of unitary representations of diffeomorphism groups. Two mathematical equivalent procedures exist for this purpose. One searches for quasi-invariant measures on configuration spaces, the other for a cyclic vector in Hilbert space. Here, one argues that the second approach is closer to the physical intuition when modelling complex systems. An example of application of the current algebra methodology to the pairing phenomenon in two-dimensional fermion systems is discussed. Searching dependency between algebraic equations: An algorithm applied to automated reasoning Yang Lu; Zhang Jingzhong An efficient computer algorithm is given to decide how many branches of the solution to a system of algebraic also solve another equation. As one of the applications, this can be used in practice to verify a conjecture with hypotheses and conclusion expressed by algebraic equations, despite the variety of reducible or irreducible. (author). 
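The Yang-Zhang entry above asks whether the solutions of a system of algebraic equations also satisfy a further equation. A standard computer algebra way to probe such dependencies is ideal membership via a Groebner basis: if the candidate equation reduces to zero modulo a Groebner basis of the hypotheses, it vanishes on all their common solutions. The SymPy sketch below is a toy example of that test, not the algorithm of the paper, and it checks only ideal membership (full generality would need radical membership):

from sympy import symbols, groebner, reduced

x, y = symbols('x y')

# Hypotheses: points on the unit circle intersected with the line y = x.
hypotheses = [x**2 + y**2 - 1, x - y]
# Conjectured consequence: 2*x**2 - 1 = 0 at every such point.
conclusion = 2*x**2 - 1

G = groebner(hypotheses, x, y, order='lex')
_, remainder = reduced(conclusion, list(G.exprs), x, y, order='lex')

# Remainder 0 means the conclusion lies in the ideal generated by the hypotheses,
# so it vanishes on every common solution of them.
print("remainder:", remainder)   # expected: 0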
10 refs Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs Gene Frantz Full Text Available Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce an advanced code optimization and an implementation by DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware. Fusion algebras of logarithmic minimal models Rasmussen, Joergen; Pearce, Paul A We present explicit conjectures for the chiral fusion algebras of the logarithmic minimal models LM(p,p') considering Virasoro representations with no enlarged or extended symmetry algebra. The generators of fusion are countably infinite in number but the ensuing fusion rules are quasi-rational in the sense that the fusion of a finite number of representations decomposes into a finite direct sum of representations. The fusion rules are commutative, associative and exhibit an sl(2) structure but require so-called Kac representations which are typically reducible yet indecomposable representations of rank 1. In particular, the identity of the fundamental fusion algebra p ≠1 is a reducible yet indecomposable Kac representation of rank 1. We make detailed comparisons of our fusion rules with the results of Gaberdiel and Kausch for p = 1 and with Eberle and Flohr for (p, p') = (2, 5) corresponding to the logarithmic Yang-Lee model. In the latter case, we confirm the appearance of indecomposable representations of rank 3. We also find that closure of a fundamental fusion algebra is achieved without the introduction of indecomposable representations of rank higher than 3. The conjectured fusion rules are supported, within our lattice approach, by extensive numerical studies of the associated integrable lattice models. Details of our lattice findings and numerical results will be presented elsewhere. The agreement of our fusion rules with the previous fusion rules lends considerable support for the identification of the logarithmic minimal models LM(p,p') with the augmented c p,p' (minimal) models defined algebraically Map algebra and model algebra for integrated model building Schmitz, O.; Karssenberg, D.J.; Jong, K. de; Kok, J.-L. de; Jong, S.M. de Computer models are important tools for the assessment of environmental systems. A seamless workflow of construction and coupling of model components is essential for environmental scientists. 
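The fixed-point DSP entry above describes converting floating-point linear algebra kernels to fixed-point arithmetic before porting them to a target device. A toy Q15 dot product shows the two ingredients that conversion hinges on: scaling with saturation when quantizing, and a wide accumulator during the inner product. This is a schematic illustration in Python with NumPy, not the code-generation flow of the paper:

import numpy as np

FRAC_BITS = 15
SCALE = 1 << FRAC_BITS

def to_q15(x):
    # Saturate to the representable range [-1, 1) and round to 16-bit values.
    q = np.round(np.clip(x, -1.0, 1.0 - 1.0 / SCALE) * SCALE)
    return q.astype(np.int32)

def q15_dot(a_q, b_q):
    # Accumulate in a wide (64-bit) register, then rescale back to Q15.
    acc = np.sum(a_q.astype(np.int64) * b_q.astype(np.int64))
    return int(acc >> FRAC_BITS)

rng = np.random.default_rng(0)
a = rng.uniform(-0.5, 0.5, size=256)
b = rng.uniform(-0.5, 0.5, size=256)

ref = float(np.dot(a, b))                     # double-precision reference
fx = q15_dot(to_q15(a), to_q15(b)) / SCALE    # fixed-point result, back to float
print("float:", ref, " fixed-point:", fx, " abs error:", abs(ref - fx))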
However, currently available software packages are often tailored either to the construction of model Tabak, John Looking closely at algebra, its historical development, and its many useful applications, Algebra examines in detail the question of why this type of math is so important that it arose in different cultures at different times. The book also discusses the relationship between algebra and geometry, shows the progress of thought throughout the centuries, and offers biographical data on the key figures. Concise and comprehensive text accompanied by many illustrations presents the ideas and historical development of algebra, showcasing the relevance and evolution of this branch of mathematics. Coherent states and classical limit of algebraic quantum models Scutaru, H. The algebraic models for collective motion in nuclear physics belong to a class of theories the basic observables of which generate selfadjoint representations of finite dimensional, real Lie algebras, or of the enveloping algebras of these Lie algebras. The simplest and most used for illustrations model of this kind is the Lipkin model, which is associated with the Lie algebra of the three dimensional rotations group, and which presents all characteristic features of an algebraic model. The Lipkin Hamiltonian is the image, of an element of the enveloping algebra of the algebra SO under a representation. In order to understand the structure of the algebraic models the author remarks that in both classical and quantum mechanics the dynamics is associated to a typical algebraic structure which we shall call a dynamical algebra. In this paper he shows how the constructions can be made in the case of the algebraic quantum systems. The construction of the symplectic manifold M can be made in this case using a quantum analog of the momentum map which he defines Flanders, Harley Algebra presents the essentials of algebra with some applications. The emphasis is on practical skills, problem solving, and computational techniques. Topics covered range from equations and inequalities to functions and graphs, polynomial and rational functions, and exponentials and logarithms. Trigonometric functions and complex numbers are also considered, together with exponentials and logarithms.Comprised of eight chapters, this book begins with a discussion on the fundamentals of algebra, each topic explained, illustrated, and accompanied by an ample set of exercises. The proper use of a Lectures on algebraic model theory Hart, Bradd In recent years, model theory has had remarkable success in solving important problems as well as in shedding new light on our understanding of them. The three lectures collected here present recent developments in three such areas: Anand Pillay on differential fields, Patrick Speissegger on o-minimality and Matthias Clasen and Matthew Valeriote on tame congruence theory. Performance analysis of a decoding algorithm for algebraic-geometry codes Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund The fast decoding algorithm for one point algebraic-geometry codes of Sakata, Elbrond Jensen, and Hoholdt corrects all error patterns of weight less than half the Feng-Rao minimum distance. In this correspondence we analyze the performance of the algorithm for heavier error patterns. It turns out... A polynomial time algorithm for checking regularity of totally normed process algebra Yang, F.; Huang, H. 
A polynomial algorithm for the regularity problem of weak and branching bisimilarity on totally normed process algebra (PA) processes is given. Its time complexity is O(n^3 + mn), where n is the number of transition rules and m is the maximal length of the rules. The algorithm works for An algebraic model for three-cluster giant molecules Hess, P.O.; Bijker, R.; Misicu, S. After an introduction to the algebraic U(7) model for three bodies, we present a relation of a geometrical description of a three-cluster molecule to the algebraic U(7) model. Stiffness parameters of oscillations between each of two clusters are calculated and translated to the model parameter values of the algebraic model. The model is applied to the trinuclear system 132Sn + α + 116Pd which occurs in the ternary cold fission of 252Cf. (Author) Realization of preconditioned Lanczos and conjugate gradient algorithms on optical linear algebra processors. Ghosh, A Lanczos and conjugate gradient algorithms are important in computational linear algebra. In this paper, a parallel pipelined realization of these algorithms on a ring of optical linear algebra processors is described. The flow of data is designed to minimize the idle times of the optical multiprocessor and the redundancy of computations. The effects of optical round-off errors on the solutions obtained by the optical Lanczos and conjugate gradient algorithms are analyzed, and it is shown that optical preconditioning can improve the accuracy of these algorithms substantially. Algorithms for optical preconditioning and results of numerical experiments on solving linear systems of equations arising from partial differential equations are discussed. Since the Lanczos algorithm is used mostly with sparse matrices, a folded storage scheme to represent sparse matrices on spatial light modulators is also described. Exchange algebra and exotic supersymmetry in the Chiral Potts model Bernard, D.; Pasquier, V. We obtain an exchange algebra for the Chiral Potts model, the elements of which are linear in the parameters defining the rapidity curve. This enables us to connect the Chiral Potts model to a U_q(GL(2)) algebra. On the other hand, looking at the model from the S-matrix point of view relates it to a Z_N generalisation of the supersymmetric algebra Application of Symplectic Algebraic Dynamics Algorithm to Circular Restricted Three-Body Problem Wei-Tao, Lu; Hua, Zhang; Shun-Jin, Wang Symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy for both stable motion and chaotic motion. The result is compared with those of the Runge-Kutta algorithm and the symplectic algorithm under the fourth order, which shows that SADA has higher accuracy than the others in the long-term calculations of the CR3BP. (general)
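The Ghosh entry above turns on preconditioned Lanczos and conjugate gradient iterations. Stripped of the optical hardware, the algorithmic core is ordinary preconditioned CG; the NumPy sketch below uses a diagonal (Jacobi) preconditioner on a small symmetric positive definite test system to show where the preconditioner enters each iteration. It is an illustration under those assumptions, not the optical implementation discussed in the abstract:

import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    # Conjugate gradients with a diagonal (Jacobi) preconditioner M = diag(A).
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # preconditioning step: z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Small SPD test system: 1-D Laplacian plus an unevenly scaled diagonal,
# which is the kind of scaling a Jacobi preconditioner handles well.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A += np.diag(np.linspace(1.0, 100.0, n))
b = np.ones(n)

x, iters = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))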
Sepanski, Mark R Mark Sepanski's Algebra is a readable introduction to the delightful world of modern algebra. Beginning with concrete examples from the study of integers and modular arithmetic, the text steadily familiarizes the reader with greater levels of abstraction as it moves through the study of groups, rings, and fields. The book is equipped with over 750 exercises suitable for many levels of student ability. There are standard problems, as well as challenging exercises, that introduce students to topics not normally covered in a first course. Difficult problems are broken into manageable subproblems Current algebra of WZNW models at and away from criticality Abdalla, E.; Forger, M. In this paper, the authors derive the current algebra of principal chiral models with a Wess-Zumino term. At the critical coupling where the model becomes conformally invariant (Wess-Zumino-Novikov-Witten theory), this algebra reduces to two commuting Kac-Moody algebras, while in the limit where the coupling constant is taken to zero (ordinary chiral model), we recover the current algebra of that model. In this way, the latter is explicitly realized as a deformation of the former, with the coupling constant as the deformation parameter Energy footprint of advanced dense numerical linear algebra using tile algorithms on multicore architectures KAUST Repository Dongarra, Jack; Ltaief, Hatem; Luszczek, Piotr R.; Weaver, Vincent M. We propose to study the impact on the energy footprint of two advanced algorithmic strategies in the context of high performance dense linear algebra libraries: (1) mixed precision algorithms with iterative refinement allow to run at the peak performance of single precision floating-point arithmetic while achieving double precision accuracy and (2) tree reduction technique exposes more parallelism when factorizing tall and skinny matrices for solving over determined systems of linear equations or calculating the singular value decomposition. Integrated within the PLASMA library using tile algorithms, which will eventually supersede the block algorithms from LAPACK, both strategies further excel in performance in the presence of a dynamic task scheduler while targeting multicore architecture. Energy consumption measurements are reported along with parallel performance numbers on a dual-socket quad-core Intel Xeon as well as a quad-socket quad-core Intel Sandy Bridge chip, both providing component-based energy monitoring at all levels of the system, through the Power Pack framework and the Running Average Power Limit model, respectively. © 2012 IEEE. Dongarra, Jack Isovectorial pairing in solvable and algebraic models Lerma, Sergio; Vargas, Carlos E; Hirsch, Jorge G Schematic interactions are useful to gain some insight in the behavior of very complicated systems such as the atomic nuclei. Prototypical examples are, in this context, the pairing interaction and the quadrupole interaction of the Elliot model. In this contribution the interplay between isovectorial pairing, spin-orbit, and quadrupole terms in a harmonic oscillator shell (the so-called pairing-plus-quadrupole model) is studied by algebraic methods. The ability of this model to provide a realistic description of N = Z even-even nuclei in the fp-shell is illustrated with 44 Ti. 
Our calculations which derive from schematic and simple terms confirm earlier conclusions obtained by using realistic interactions: the SU(3) symmetry of the quadrupole term is broken mainly by the spin-orbit term, but the energies depends strongly on pairing. Phase Transitions in Algebraic Cluster Models Yepez-Martinez, H.; Cseh, J.; Hess, P.O. Complete text of publication follows. Phase transitions in nuclear systems are of utmost interest. An interesting class of phase transitions can be seen in algebraic models of nuclear structure. They are called shapephase transitions due to the following reason. These models have analytically solvable limiting cases, called dynamical symmetries, which are characterized by a chain of nested subgroups. They correspond to well-defined geometrical shape and behaviour, e.g. to rotation of an ellipsoid, or spherical vibration. The general case of the model, which includes interactions described by more than one groupchain, breaks the symmetry, and changing the relative strengths of these interactions, one can go from one shape to the other. In doing so a phase-transition can be seen. A phase transition is defined as a discontinuity of some quantity as a function of the control parameter, which gives the relative strength of the interactions of different symmetries. Real phase transitions can take place only in infinite systems, like in the classical limits of these algebraic models, when the particle number N is very large: N → ∞. For finite N the discontinuities are smoothed out, nevertheless, some indications of the phase-transitions can still be there. A controlled way of breaking the dynamical symmetries may reveal another very interesting phenomenon, i.e. the appearance of a quasidynamical (or effective) symmetry. This rather general symmetry-concept of quantum mechanics corresponds to a situation, in which the symmetry-breaking interactions are so strong that the energy-eigenfunctions are not symmetric, i.e. are not basis states of an irreducible representation of the symmetry group, rather they are linear combinations of these basis states. However, they are very special linear combinations in the sense that their coefficients are (approximately) identical for states with different spin values. When this is the case, then the underlying intrinsic state is the Toda theories, W-algebras, and minimal models Mansfield, P.; Spence, B. We discuss the classical W-algebra symmetries of Toda field theories in terms of the pseudo-differential Lax operator associated with the Toda Lax pair. We then show how the W-algebra transformations can be understood as the non-abelian gauge transformations which preserve the form of the Lax pair. This provides a new understanding of the W-algebras, and we discuss their closure and co-cycle structure using this approach. The quantum Lax operator is investigated, and we show that this operator, which generates the quantum W-algebra currents, is conserved in the conformally extended Toda theories. The W-algebra minimal model primary fields are shown to arise naturally in these theories, leading to the conjecture that the conformally extended Toda theories provide a lagrangian formulation of the W-algebra minimal models. (orig.) 
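The tile-algorithm entry above (Dongarra et al.) relies on mixed precision algorithms with iterative refinement: run the expensive solve in single precision, then recover double precision accuracy by refining with residuals computed in double. The sketch below is a compact Python/NumPy illustration of that loop only; a dense solve stands in for the LU factorization that a real implementation would compute once and reuse:

import numpy as np

def mixed_precision_solve(A, b, refinements=5):
    # Low-precision solve plus double-precision iterative refinement.
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)   # cheap single-precision solve
    for _ in range(refinements):
        r = b - A @ x                                   # residual in double precision
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction, single precision
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(1)
n = 300
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

x = mixed_precision_solve(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))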
Algebraic computability and enumeration models recursion theory and descriptive complexity Nourani, Cyrus F This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type... N=2 current algebra and coset models Hull, C.M.; Spence, B. The N=2 supersymmetric extension of the Kac-Moody algebra and the corresponding Sugawara construction of the N=2 superconformal algebra are discussed both in components and in N=1 superspace. A formulation of the Kac-Moody algebra and Sugawara construction is given in N=2 superspace in terms of supercurrents satisfying a non-linear chiral constraint. The operator product of two supercurrents includes terms that are non-linear in the supercurrents. The N=2 generalization of the GKO coset construction is then given and the conditions found by Kazama and Suzuki are seen to arise from the non-linearity of the algebra. (orig.) Profiling high performance dense linear algebra algorithms on multicore architectures for power and energy efficiency Ltaief, Hatem; Luszczek, Piotr R.; Dongarra, Jack This paper presents the power profile of two high performance dense linear algebra libraries i.e., LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine Experimental Tests of the Algebraic Cluster Model Gai, Moshe The Algebraic Cluster Model (ACM) of Bijker and Iachello that was proposed already in 2000 has been recently applied to 12C and 16O with much success. We review the current status in 12C with the outstanding observation of the ground state rotational band composed of the spin-parity states of: 0+, 2+, 3-, 4± and 5-. The observation of the 4± parity doublet is a characteristic of (tri-atomic) molecular configuration where the three alpha- particles are arranged in an equilateral triangular configuration of a symmetric spinning top. We discuss future measurement with electron scattering, 12C(e,e') to test the predicted B(Eλ) of the ACM. Current algebra of classical non-linear sigma models Forger, M.; Laartz, J.; Schaeper, U. The current algebra of classical non-linear sigma models on arbitrary Riemannian manifolds is analyzed. It is found that introducing, in addition to the Noether current j μ associated with the global symmetry of the theory, a composite scalar field j, the algebra closes under Poisson brackets. (orig.) Algebraic Factoring algorithm to recognise read-once functions. Naidu, S.R. 
A fast polynomial-time algorithm was recently proposed to determine whether a logic function expressed as a unate DNF (disjunctive normal form) can be expressed as a read-once formula where each variable appears no more than once. The paper uses a combinatorial characterisation of read-once formulas Algebraic models of local period maps and Yukawa algebras Bandiera, Ruggero; Manetti, Marco We describe some L_{∞} model for the local period map of a compact Kähler manifold. Applications include the study of deformations with associated variation of Hodge structure constrained by certain closed strata of the Grassmannian of the de Rham cohomology. As a by-product, we obtain an interpretation in the framework of deformation theory of the Yukawa coupling. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States) An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Input to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real world problems from solid elasticity, plate blending, and shells. Action Algebras and Model Algebras in Denotational Semantics Guedes, Luiz Carlos Castro; Haeusler, Edward Hermann This article describes some results concerning the conceptual separation of model dependent and language inherent aspects in a denotational semantics of a programming language. Before going into the technical explanation, the authors wish to relate a story that illustrates how correctly and precisely posed questions can influence the direction of research. By means of his questions, Professor Mosses aided the PhD research of one of the authors of this article and taught the other, who at the time was a novice supervisor, the real meaning of careful PhD supervision. The student's research had been partially developed towards the implementation of programming languages through denotational semantics specification, and the student had developed a prototype [12] that compared relatively well to some industrial compilers of the PASCAL language. During a visit to the BRICS lab in Aarhus, the student's supervisor gave Professor Mosses a draft of an article describing the prototype and its implementation experiments. The next day, Professor Mosses asked the supervisor, "Why is the generated code so efficient when compared to that generated by an industrial compiler?� and "You claim that the efficiency is simply a consequence of the Object- Orientation mechanisms used by the prototype programming language (C++); this should be better investigated. Pay more attention to the class of programs that might have this good comparison profile.� As a result of these aptly chosen questions and comments, the student and supervisor made great strides in the subsequent research; the advice provided by Professor Mosses made them perceive that the code generated for certain semantic domains was efficient because it mapped to the "right aspect� of the language semantics. 
(Certain functional types, used to represent mappings such as Stores and Environments, were pushed to the level of the object language (as in gcc). This had the side-effect of generating code for arrays in Additional operations in algebra of structural numbers for control algorithm development Morhun A.V. Full Text Available The structural numbers and the algebra of the structural numbers due to the simplicity of representation, flexibility and current algebraic operations are the powerful tool for a wide range of applications. In autonomous power supply systems and systems with distributed generation (Micro Grid mathematical apparatus of structural numbers can be effectively used for the calculation of the parameters of the operating modes of consumption of electric energy. The purpose of the article is the representation of the additional algebra of structural numbers. The standard algebra was proposed to be extended by the additional operations and modification current in order to expand the scope of their use, namely to construct a flexible, adaptive algorithms of control systems. It is achieved due to the possibility to consider each individual component of the system with its parameters and provide easy management of entire system and each individual component. Thus, structural numbers and extended algebra are the perspective line of research and further studying is required. ADAM: analysis of discrete models of biological systems using computer algebra. Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web A note on probabilistic models over strings: the linear algebra approach. Bouchard-Côté, Alexandre Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proving method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low rank matrix approximation methods, to string-valued inference problems. Hyper-lattice algebraic model for data warehousing Sen, Soumya; Chaki, Nabendu This book presents Hyper-lattice, a new algebraic model for partially ordered sets, and an alternative to lattice. The authors analyze some of the shortcomings of conventional lattice structure and propose a novel algebraic structure in the form of Hyper-lattice to overcome problems with lattice. They establish how Hyper-lattice supports dynamic insertion of elements in a partial order set with a partial hierarchy between the set members. The authors present the characteristics and the different properties, showing how propositions and lemmas formalize Hyper-lattice as a new algebraic structure. A differential algebraic integration algorithm for symplectic mappings in systems with three-dimensional magnetic field Chang, P.; Lee, S.Y.; Yan, Y.T. A differential algebraic integration algorithm is developed for symplectic mapping through a three-dimensional (3-D) magnetic field. The self-consistent reference orbit in phase space is obtained by making a canonical transformation to eliminate the linear part of the Hamiltonian. Transfer maps from the entrance to the exit of any 3-D magnetic field are then obtained through slice-by-slice symplectic integration. The particle phase-space coordinates are advanced by using the integrable polynomial procedure. This algorithm is a powerful tool to attain nonlinear maps for insertion devices in synchrotron light source or complicated magnetic field in the interaction region in high energy colliders Chang, P Observable algebras for the rational and trigonometric Euler-Calogero-Moser Models Avan, J.; Billey, E. We construct polynomial Poisson algebras of observables for the classical Euler-Calogero-Moser (ECM) models. 
Their structure connects them to flavour-indexed non-linear W ∞ algebras, albeit with qualitative differences. The conserved Hamiltonians and symmetry algebras derived in a previous work are subsets of these algebras. We define their linear, N →∞ limits, realizing W ∞ type algebras coupled to current algebras. ((orig.)) Thirty-three miniatures: mathematical and algorithmic applications of linear algebra Matousek, Jiří This volume contains a collection of clever mathematical applications of linear algebra, mainly in combinatorics, geometry, and algorithms. Each chapter covers a single main result with motivation and full proof in at most ten pages and can be read independently of all other chapters (with minor exceptions), assuming only a modest background in linear algebra. The topics include a number of well-known mathematical gems, such as Hamming codes, the matrix-tree theorem, the Lovász bound on the Shannon capacity, and a counterexample to Borsuk's conjecture, as well as other, perhaps less popular but similarly beautiful results, e.g., fast associativity testing, a lemma of Steinitz on ordering vectors, a monotonicity result for integer partitions, or a bound for set pairs via exterior products. The simpler results in the first part of the book provide ample material to liven up an undergraduate course of linear algebra. The more advanced parts can be used for a graduate course of linear-algebraic methods or for s... Category-theoretic models of algebraic computer systems Kovalyov, S. P. A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found. Modeling Software Evolution using Algebraic Graph Rewriting Ciraci, Selim; van den Broek, Pim We show how evolution requests can be formalized using algebraic graph rewriting. In particular, we present a way to convert the UML class diagrams to colored graphs. Since changes in software may affect the relation between the methods of classes, our colored graph representation also employs the Model Checking Processes Specified In Join-Calculus Algebra Sławomir Piotr Maludziński Full Text Available This article presents a model checking tool used to verify concurrent systems specified in join-calculus algebra. The temporal properties of systems under verification are expressed in CTL logic. Join-calculus algebra with its operational semantics defined by the chemical abstract machine serves as the basic method for the specification of concurrent systems and their synchronization mechanisms, and allows the examination of more complex systems. Tensor models, Kronecker coefficients and permutation centralizer algebras Geloun, Joseph Ben; Ramgoolam, Sanjaye We show that the counting of observables and correlators for a 3-index tensor model are organized by the structure of a family of permutation centralizer algebras.
These algebras are shown to be semi-simple and their Wedderburn-Artin decompositions into matrix blocks are given in terms of Clebsch-Gordan coefficients of symmetric groups. The matrix basis for the algebras also gives an orthogonal basis for the tensor observables which diagonalizes the Gaussian two-point functions. The centres of the algebras are associated with correlators which are expressible in terms of Kronecker coefficients (Clebsch-Gordan multiplicities of symmetric groups). The color-exchange symmetry present in the Gaussian model, as well as a large class of interacting models, is used to refine the description of the permutation centralizer algebras. This discussion is extended to a general number of colors d: it is used to prove the integrality of an infinite family of number sequences related to color-symmetrizations of colored graphs, and expressible in terms of symmetric group representation theory data. Generalizing a connection between matrix models and Belyi maps, correlators in Gaussian tensor models are interpreted in terms of covers of singular 2-complexes. There is an intriguing difference, between matrix and higher rank tensor models, in the computational complexity of superficially comparable correlators of observables parametrized by Young diagrams. Solving the nuclear shell model with an algebraic method Feng, D.H.; Pan, X.W.; Guidry, M. We illustrate algebraic methods in the nuclear shell model through a concrete example, the fermion dynamical symmetry model (FDSM). We use this model to introduce important concepts such as dynamical symmetry, symmetry breaking, effective symmetry, and diagonalization within a higher-symmetry basis. (orig.) A matrix-algebraic algorithm for the Riemannian logarithm on the Stiefel manifold under the canonical metric Zimmermann, Ralf We derive a numerical algorithm for evaluating the Riemannian logarithm on the Stiefel manifold with respect to the canonical metric. In contrast to the optimization-based approach known from the literature, we work from a purely matrix-algebraic perspective. Moreover, we prove that the algorithm converges locally and exhibits a linear rate of convergence. Logarithmic sℓ-hat (2) CFT models from Nichols algebras: I Semikhatov, A M; Tipunin, I Yu We construct chiral algebras that centralize rank-2 Nichols algebras with at least one fermionic generator. This gives 'logarithmic' W-algebra extensions of a fractional-level sℓ-hat (2) algebra. We discuss crucial aspects of the emerging general relation between Nichols algebras and logarithmic conformal field theory (CFT) models: (i) the extra input, beyond the Nichols algebra proper, needed to uniquely specify a conformal model; (ii) a relation between the CFT counterparts of Nichols algebras connected by Weyl groupoid maps; and (iii) the common double bosonization U(X) of such Nichols algebras. For an extended chiral algebra, candidates for its simple modules that are counterparts of the U(X) simple modules are proposed, as a first step toward a functorial relation between U(X) and W-algebra representation categories.
(paper) Representations of the Virasoro algebra from lattice models Koo, W.M.; Saleur, H. We investigate in detail how the Virasoro algebra appears in the scaling limit of the simplest lattice models of XXZ or RSOS type. Our approach is straightforward but to our knowledge had never been tried so far. We simply formulate a conjecture for the lattice stress-energy tensor motivated by the exact derivation of lattice global Ward identities. We then check that the proper algebraic relations are obeyed in the scaling limit. The latter is under reasonable control thanks to the Bethe-ansatz solution. The results, which are mostly numerical for technical reasons, are remarkably precise. They are also corroborated by exact pieces of information from various sources, in particular Temperley-Lieb algebra representation theory. Most features of the Virasoro algebra (like central term, null vectors, metric properties, etc.) can thus be observed using the lattice models. This seems of general interest for lattice field theory, and also more specifically for finding relations between conformal invariance and lattice integrability, since a basis for the irreducible representations of the Virasoro algebra should now follow (at least in principle) from Bethe-ansatz computations. ((orig.)) Inverse Modelling Problems in Linear Algebra Undergraduate Courses Martinez-Luaces, Victor E. This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different… Optical linear algebra processors - Noise and error-source modeling Casasent, D.; Ghosh, A. The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed. Optical linear algebra processors: noise and error-source modeling. Casasent, D; Ghosh, A The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed. Algebraic Bethe ansatz for 19-vertex models with reflection conditions Utiel, Wagner In this work we solve the 19-vertex models with the use of algebraic Bethe ansatz for diagonal reflection matrices (Sklyanin K-matrices). The eigenvectors, eigenvalues and Bethe equations are given in a general form. Quantum spin chains of spin one derived from the 19-vertex models were also discussed Generalized algebra-valued models of set theory Löwe, B.; Tarafder, S. We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory. Ltaief, Hatem This paper presents the power profile of two high performance dense linear algebra libraries i.e., LAPACK and PLASMA. 
The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine-grained task parallelism that recasts the computation to operate on submatrices called tiles. In this way tile algorithms are formed. We show results from the power profiling of the most common routines, which permits us to clearly identify the different phases of the computations. This allows us to isolate the bottlenecks in terms of energy efficiency. Our results show that PLASMA surpasses LAPACK not only in terms of performance but also in terms of energy efficiency. © 2011 Springer-Verlag. New matrix bounds and iterative algorithms for the discrete coupled algebraic Riccati equation Liu, Jianzhou; Wang, Li; Zhang, Juan The discrete coupled algebraic Riccati equation (DCARE) has wide applications in control theory and linear system. In general, for the DCARE, one discusses every term of the coupled term, respectively. In this paper, we consider the coupled term as a whole, which is different from the recent results. When applying eigenvalue inequalities to discuss the coupled term, our method has less error. In terms of the properties of special matrices and eigenvalue inequalities, we propose several upper and lower matrix bounds for the solution of DCARE. Further, we discuss the iterative algorithms for the solution of the DCARE. In the fixed point iterative algorithms, the scope of Lipschitz factor is wider than the recent results. Finally, we offer corresponding numerical examples to illustrate the effectiveness of the derived results. An algebraic approach to modeling in software engineering Loegel, C.J.; Ravishankar, C.V. Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe ''computer science'' objects like abstract data types, but in practice software errors are often caused because ''real-world'' objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. 
Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form Directed Abelian algebras and their application to stochastic models. Alcaraz, F C; Rittenberg, V With each directed acyclic graph (this includes some D-dimensional lattices) one can associate some Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z=D. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random walker universality class (critical exponent σ_τ = 3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we did extensive Monte-Carlo simulations and found σ_τ = 1.780 ± 0.005. Algebraic approach to small-world network models Rudolph-Lilith, Michelle; Muller, Lyle E. We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs. Steady state analysis of Boolean molecular network models via model reduction and computational algebra. Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states.
The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for Continual Lie algebras and noncommutative counterparts of exactly solvable models Zuevsky, A. Noncommutative counterparts of exactly solvable models are introduced on the basis of a generalization of Saveliev-Vershik continual Lie algebras. Examples of noncommutative Liouville and sin/h-Gordon equations are given. The simplest soliton solution to the noncommutative sine-Gordon equation is found. Model selection for contingency tables with algebraic statistics Krampe, A.; Kuhnt, S.; Gibilisco, P.; Riccimagno, E.; Rogantin, M.P.; Wynn, H.P. Goodness-of-fit tests based on chi-square approximations are commonly used in the analysis of contingency tables. Results from algebraic statistics combined with MCMC methods provide alternatives to the chi-square approximation. However, within a model selection procedure usually a large number of Extensions of Scott's Graph Model and Kleene's Second Algebra van Oosten, J.; Voorneveld, Niels We use a way to extend partial combinatory algebras (pcas) by forcing them to represent certain functions. In the case of Scott's Graph Model, equality is computable relative to the complement function. However, the converse is not true. This creates a hierarchy of pcas which relates to similar Algorithms for a parallel implementation of Hidden Markov Models with a small state space Nielsen, Jesper; Sand, Andreas Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces... 2D sigma models and differential Poisson algebras Arias, Cesar; Boulanger, Nicolas; Sundell, Per; Torres-Gomez, Alexander We construct a two-dimensional topological sigma model whose target space is endowed with a Poisson algebra for differential forms. The model consists of an equal number of bosonic and fermionic fields of worldsheet form degrees zero and one. The action is built using exterior products and derivatives, without any reference to a worldsheet metric, and is of the covariant Hamiltonian form. The equations of motion define a universally Cartan integrable system. In addition to gauge symmetries, the model has one rigid nilpotent supersymmetry corresponding to the target space de Rham operator. The rigid and local symmetries of the action, respectively, are equivalent to the Poisson bracket being compatible with the de Rham operator and obeying graded Jacobi identities. 
We propose that perturbative quantization of the model yields a covariantized differential star product algebra of Kontsevich type. We comment on the resemblance to the topological A model. The Hamiltonian of the quantum trigonometric Calogero-Sutherland model in the exceptional algebra E8 Fernandez Nunez, J; Garcia Fuertes, W; Perelomov, A M We express the Hamiltonian of the quantum trigonometric Calogero-Sutherland model for the Lie algebra E 8 and coupling constant κ by using the fundamental irreducible characters of the algebra as dynamical independent variables Summary of the CSRI Workshop on Combinatorial Algebraic Topology (CAT): Software, Applications, & Algorithms Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Visualization and Scientific Computing Dept.; Day, David Minot [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Applied Mathematics and Applications Dept.; Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computer Science and Informatics Dept. This report summarizes the Combinatorial Algebraic Topology: software, applications & algorithms workshop (CAT Workshop). The workshop was sponsored by the Computer Science Research Institute of Sandia National Laboratories. It was organized by CSRI staff members Scott Mitchell and Shawn Martin. It was held in Santa Fe, New Mexico, August 29-30. The CAT Workshop website has links to some of the talk slides and other information, http://www.cs.sandia.gov/CSRI/Workshops/2009/CAT/index.html. The purpose of the report is to summarize the discussions and recap the sessions. There is a special emphasis on technical areas that are ripe for further exploration, and the plans for follow-up amongst the workshop participants. The intended audiences are the workshop participants, other researchers in the area, and the workshop sponsors. Developing ontological model of computational linear algebra - preliminary considerations Wasielewska, K.; Ganzha, M.; Paprzycki, M.; Lirkov, I. The aim of this paper is to propose a method for application of ontologically represented domain knowledge to support Grid users. The work is presented in the context provided by the Agents in Grid system, which aims at development of an agent-semantic infrastructure for efficient resource management in the Grid. Decision support within the system should provide functionality beyond the existing Grid middleware, specifically, help the user to choose optimal algorithm and/or resource to solve a problem from a given domain. The system assists the user in at least two situations. First, for users without in-depth knowledge about the domain, it should help them to select the method and the resource that (together) would best fit the problem to be solved (and match the available resources). Second, if the user explicitly indicates the method and the resource configuration, it should "verify" if her choice is consistent with the expert recommendations (encapsulated in the knowledge base). Furthermore, one of the goals is to simplify the use of the selected resource to execute the job; i.e., provide a user-friendly method of submitting jobs, without required technical knowledge about the Grid middleware. To achieve the mentioned goals, an adaptable method of expert knowledge representation for the decision support system has to be implemented. The selected approach is to utilize ontologies and semantic data processing, supported by multicriterial decision making. 
As a starting point, an area of computational linear algebra was selected to be modeled, however, the paper presents a general approach that shall be easily extendable to other domains. Soliton surfaces associated with sigma models: differential and algebraic aspects Goldstein, P P; Grundland, A M; Post, S In this paper, we consider both differential and algebraic properties of surfaces associated with sigma models. It is shown that surfaces defined by the generalized Weierstrass formula for immersion for solutions of the CP N-1 sigma model with finite action, defined in the Riemann sphere, are themselves solutions of the Euler–Lagrange equations for sigma models. On the other hand, we show that the Euler–Lagrange equations for surfaces immersed in the Lie algebra su(N), with conformal coordinates, that are extremals of the area functional, subject to a fixed polynomial identity, are exactly the Euler–Lagrange equations for sigma models. In addition to these differential constraints, the algebraic constraints, in the form of eigenvalues of the immersion functions, are systematically treated. The spectrum of the immersion functions, for different dimensions of the model, as well as its symmetry properties and its transformation under the action of the ladder operators are discussed. Another approach to the dynamics is given, i.e. description in terms of the unitary matrix which diagonalizes both the immersion functions and the projectors constituting the model. (paper) Algorithms for finding Chomsky and Greibach normal forms for a fuzzy context-free grammar using an algebraic approach Lee, E.T. Algorithms for the construction of the Chomsky and Greibach normal forms for a fuzzy context-free grammar using the algebraic approach are presented and illustrated by examples. The results obtained in this paper may have useful applications in fuzzy languages, pattern recognition, information storage and retrieval, artificial intelligence, database and pictorial information systems. 16 references. Model Theory in Algebra, Analysis and Arithmetic Dries, Lou; Macpherson, H Dugald; Pillay, Anand; Toffalori, Carlo; Wilkie, Alex J Presenting recent developments and applications, the book focuses on four main topics in current model theory: 1) the model theory of valued fields; 2) undecidability in arithmetic; 3) NIP theories; and 4) the model theory of real and complex exponentiation. Young researchers in model theory will particularly benefit from the book, as will more senior researchers in other branches of mathematics. Geometric model of topological insulators from the Maxwell algebra Palumbo, Giandomenico We propose a novel geometric model of time-reversal-invariant topological insulators in three dimensions in presence of an external electromagnetic field. Their gapped boundary supports relativistic quantum Hall states and is described by a Chern-Simons theory, where the gauge connection takes values in the Maxwell algebra. This represents a non-central extension of the Poincaré algebra and takes into account both the Lorentz and magnetic-translation symmetries of the surface states. In this way, we derive a relativistic version of the Wen-Zee term and we show that the non-minimal coupling between the background geometry and the electromagnetic field in the model is in agreement with the main properties of the relativistic quantum Hall states in the flat space. 
I propose a novel geometric model of time-reversal-invariant topological insulators in three dimensions in presence of an external electromagnetic field. Their gapped boundary supports relativistic quantum Hall states and is described by a Chern-Simons theory, where the gauge connection takes values in the Maxwell algebra. This represents a non-central extension of the Poincare' algebra and takes into account both the Lorentz and magnetic-translation symmetries of the surface states. In this way, I derive a relativistic version of the Wen-Zee term and I show that the non-minimal coupling between the background geometry and the electromagnetic field in the model is in agreement with the main properties of the relativistic quantum Hall states in the flat space. This work is part of the DITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). Algebraic model checking for Boolean gene regulatory networks. Tran, Quoc-Nam We present a computational method in which modular and Groebner bases (GB) computation in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of intermediate polynomials during the calculation of Groebner bases using our method will never grow resulting in a significant improvement in running time and memory space consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present our experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising. Our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems. Kac-Moody algebra is not hidden symmetry of chiral models Devchand, C.; Schiff, J. A detailed examination of the infinite dimensional loop algebra of hidden symmetry transformations of the Principal Chiral Model reveals it to have a structure differing from a standard centreless Kac-Moody algebra. A new infinite dimensional Abelian symmetry algebra is shown to preserve a symplectic form on the space of solutions. (author). 15 refs The algebra of non-local charges in non-linear sigma models Abdalla, E.; Abdalla, M.C.B.; Brunelli, J.C.; Zadra, A. It is derived the complete Dirac algebra satisfied by non-local charges conserved in non-linear sigma models. Some examples of calculation are given for the O(N) symmetry group. The resulting algebra corresponds to a saturated cubic deformation (with only maximum order terms) of the Kac-Moody algebra. The results are generalized for when a Wess-Zumino term be present. In that case the algebra contains a minor order correction (sub-saturation). (author). 1 ref Integrability in three dimensions: Algebraic Bethe ansatz for anyonic models Sh. Khachatryan Full Text Available We extend basic properties of two dimensional integrable models within the Algebraic Bethe Ansatz approach to 2+1 dimensions and formulate the sufficient conditions for the commutativity of transfer matrices of different spectral parameters, in analogy with Yang–Baxter or tetrahedron equations. 
The basic ingredient of our models is the R-matrix, which describes the scattering of a pair of particles over another pair of particles, the quark-anti-quark (meson) scattering on another quark-anti-quark state. We show that the Kitaev model belongs to this class of models and its R-matrix fulfills well-defined equations for integrability. Algebraic fermion models and nuclear structure physics Troltenier, Dirk; Blokhin, Andrey; Draayer, Jerry P.; Rompf, Dirk; Hirsch, Jorge G. Recent experimental and theoretical developments are generating renewed interest in the nuclear SU(3) shell model, and this extends to the symplectic model, with its Sp(6,R) symmetry, which is a natural multi-ħω extension of the SU(3) theory. First and foremost, an understanding of how the dynamics of a quantum rotor is embedded in the shell model has established it as the model of choice for describing strongly deformed systems. Second, the symplectic model extension of the 0-ħω theory can be used to probe additional degrees of freedom, like core polarization and vorticity modes that play a key role in providing a full description of quadrupole collectivity. Third, the discovery and understanding of pseudo-spin has allowed for an extension of the theory from light (A≤40) to heavy (A≥100) nuclei. Fourth, a user-friendly computer code for calculating reduced matrix elements of operators that couple SU(3) representations is now available. And finally, since the theory is designed to cope with deformation in a natural way, microscopic features of deformed systems can be probed; for example, the theory is now being employed to study double beta decay and thereby serves to probe the validity of the standard model of particles and their interactions. A subset of these topics will be considered in this course--examples cited include: a consideration of the origin of pseudo-spin symmetry; a SU(3)-based interpretation of the coupled-rotor model, early results of double beta decay studies; and some recent developments on the pseudo-SU(3) theory. Nothing will be said about other fermion-based theories; students are referred to reviews in the literature for reports on developments in these related areas Methods of mathematical modeling using polynomials of algebra of sets Kazanskiy, Alexandr; Kochetkov, Ivan The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of the building is divided into zones for which it is necessary to observe. Zones can overlap and have different priorities. Such situations can be described using formulas of the algebra of sets. Formulas can be programmed, which makes it possible to work with them using computer models. An algebraic model for quark mass matrices with heavy top Krolikowski, W.; Warsaw Univ. In terms of an intergeneration U(3) algebra, a numerical model is constructed for quark mass matrices, predicting the top-quark mass around 170 GeV and the CP-violating phase around 75 deg. The CKM matrix is nonsymmetric in moduli with |V ub | being very small. All moduli are consistent with their experimental limits. The model is motivated by the author's previous work on three replicas of the Dirac particle, presumably resulting in three generations of leptons and quarks.
The paper may be also viewed as an introduction to a new method of intrinsic dynamical description of lepton and quark mass matrices. (author) MATRIX-VECTOR ALGORITHMS OF LOCAL POSTERIORI INFERENCE IN ALGEBRAIC BAYESIAN NETWORKS ON QUANTA PROPOSITIONS A. A. Zolotin Full Text Available Posteriori inference is one of the three kinds of probabilistic-logic inferences in the probabilistic graphical models theory and the base for processing of knowledge patterns with probabilistic uncertainty using Bayesian networks. The paper deals with a task of local posteriori inference description in algebraic Bayesian networks that represent a class of probabilistic graphical models by means of matrix-vector equations. The latter are essentially based on the use of tensor product of matrices, Kronecker degree and Hadamard product. Matrix equations for calculating posteriori probabilities vectors within posteriori inference in knowledge patterns with quanta propositions are obtained. Similar equations of the same type have already been discussed within the confines of the theory of algebraic Bayesian networks, but they were built only for the case of posteriori inference in the knowledge patterns on the ideals of conjuncts. During synthesis and development of matrix-vector equations on quanta propositions probability vectors, a number of earlier results concerning normalizing factors in posteriori inference and assignment of linear projective operator with a selector vector was adapted. We consider all three types of incoming evidences - deterministic, stochastic and inaccurate - combined with scalar and interval estimation of probability truth of propositional formulas in the knowledge patterns. Linear programming problems are formed. Their solution gives the desired interval values of posterior probabilities in the case of inaccurate evidence or interval estimates in a knowledge pattern. That sort of description of a posteriori inference gives the possibility to extend the set of knowledge pattern types that we can use in the local and global posteriori inference, as well as simplify complex software implementation by use of existing third-party libraries, effectively supporting submission and processing of matrices and vectors when Four-parametric two-layer algebraic model of transition boundary layer at a planar plate Labusov, A.N.; Lapin, Yu.V. Consideration is given to four-parametric two-layer algebraic model of transition boundary layer on a plane plate, based on generalization of one-parametric algebraic Prandtl-Loitsjansky-Klauzer-3 model. The algebraic model uses Prandtl formulas for mixing path with Loitsjansky damping multiplier in the internal region and the relation for turbulent viscosity, based on universal scales of external region and named the Klauzer-3 formula. 12 refs., 10 figs Validation of Simulation Models without Knowledge of Parameters Using Differential Algebra Björn Haffke Full Text Available This study deals with the external validation of simulation models using methods from differential algebra. Without any system identification or iterative numerical methods, this approach provides evidence that the equations of a model can represent measured and simulated sets of data. This is very useful to check if a model is, in general, suitable. In addition, the application of this approach to verification of the similarity between the identifiable parameters of two models with different sets of input and output measurements is demonstrated. 
We present a discussion on how the method can be used to find parameter deviations between any two models. The advantage of this method is its applicability to nonlinear systems as well as its algorithmic nature, which makes it easy to automate. The Schwinger Dyson equations and the algebra of constraints of random tensor models at all orders Gurau, Razvan Random tensor models for a generic complex tensor generalize matrix models in arbitrary dimensions and yield a theory of random geometries. They support a 1/N expansion dominated by graphs of spherical topology. Their Schwinger Dyson equations, generalizing the loop equations of matrix models, translate into constraints satisfied by the partition function. The constraints have been shown, in the large N limit, to close a Lie algebra indexed by colored rooted D-ary trees yielding a first generalization of the Virasoro algebra in arbitrary dimensions. In this paper we complete the Schwinger Dyson equations and the associated algebra at all orders in 1/N. The full algebra of constraints is indexed by D-colored graphs, and the leading order D-ary tree algebra is a Lie subalgebra of the full constraints algebra. A new (in)finite-dimensional algebra for quantum integrable models Baseilhac, Pascal; Koizumi, Kozo A new (in)finite-dimensional algebra which is a fundamental dynamical symmetry of a large class of (continuum or lattice) quantum integrable models is introduced and studied in details. Finite-dimensional representations are constructed and mutually commuting quantities-which ensure the integrability of the system-are written in terms of the fundamental generators of the new algebra. Relation with the deformed Dolan-Grady integrable structure recently discovered by one of the authors and Terwilliger's tridiagonal algebras is described. Remarkably, this (in)finite-dimensional algebra is a 'q-deformed' analogue of the original Onsager's algebra arising in the planar Ising model. Consequently, it provides a new and alternative algebraic framework for studying massive, as well as conformal, quantum integrable models Algebraic formulation of collective models. I. The mass quadrupole collective model Rosensteel, G.; Rowe, D.J. This paper is the first in a series of three which together present a microscopic formulation of the Bohr--Mottelson (BM) collective model of the nucleus. In this article the mass quadrupole collective (MQC) model is defined and shown to be a generalization of the BM model. The MQC model eliminates the small oscillation assumption of BM and also yields the rotational and CM (3) submodels by holonomic constraints on the MQC configuration space. In addition, the MQC model is demonstrated to be an algebraic model, so that the state space of the MQC model carries an irrep of a Lie algebra of microscopic observables, the MQC algebra. An infinite class of new collective models is then given by the various inequivalent irreps of this algebra. A microscopic embedding of the BM model is achieved by decomposing the representation of the MQC algebra on many-particle state space into its irreducible components. In the second paper this decomposition is studied in detail. The third paper presents the symplectic model, which provides the realization of the collective model in the harmonic oscillator shell model A Structural Model of Algebra Achievement: Computational Fluency and Spatial Visualisation as Mediators of the Effect of Working Memory on Algebra Achievement Tolar, Tammy Daun; Lederberg, Amy R.; Fletcher, Jack M. 
The goal of this study was to develop and evaluate a structural model of the relations among cognitive abilities and arithmetic skills and college students' algebra achievement. The model of algebra achievement was compared to a model of performance on the Scholastic Assessment in Mathematics (SAT-M) to determine whether the pattern of relations… Algebraic Traveling Wave Solutions of a Non-local Hydrodynamic-type Model Chen, Aiyong; Zhu, Wenjing; Qiao, Zhijun; Huang, Wentao In this paper we consider the algebraic traveling wave solutions of a non-local hydrodynamic-type model. It is shown that algebraic traveling wave solutions exist if and only if an associated first order ordinary differential system has an invariant algebraic curve. The dynamical behavior of the associated ordinary differential system is analyzed. Phase portraits of the associated ordinary differential system are provided under various parameter conditions. Moreover, we classify algebraic traveling wave solutions of the model. Some explicit formulas of smooth solitary wave and cuspon solutions are obtained Algebra for Enterprise Ontology: towards analysis and synthesis of enterprise models Suga, Tetsuya; Iijima, Junichi Enterprise modeling methodologies have made enterprises more likely to be the object of systems engineering rather than craftsmanship. However, the current state of research in enterprise modeling methodologies lacks investigations of the mathematical background embedded in these methodologies. Abstract algebra, a broad subfield of mathematics, and the study of algebraic structures may provide interesting implications in both theory and practice. Therefore, this research gives an empirical challenge to establish an algebraic structure for one aspect model proposed in Design & Engineering Methodology for Organizations (DEMO), which is a major enterprise modeling methodology in the spotlight as a modeling principle to capture the skeleton of enterprises for developing enterprise information systems. The results show that the aspect model behaves well in the sense of algebraic operations and indeed constructs a Boolean algebra. This article also discusses comparisons with other modeling languages and suggests future work. su(1,2) Algebraic Structure of XYZ Antiferromagnetic Model in Linear Spin-Wave Frame Jin Shuo; Xie Binghao; Yu Zhaoxian; Hou Jingmin The XYZ antiferromagnetic model in linear spin-wave frame is shown explicitly to have an su(1,2) algebraic structure: the Hamiltonian can be written as a linear function of the su(1,2) algebra generators. Based on it, the energy eigenvalues are obtained by making use of the similar transformations, and the algebraic diagonalization method is investigated. Some numerical solutions are given, and the results indicate that only one group solution could be accepted in physics Anisotropic correlated electron model associated with the Temperley-Lieb algebra Foerster, Angela; Links, Jon; Roditi, Itzhak We present an anisotropic correlated electron model on a periodic lattice, constructed from an R-matrix associated with the Temperley-Lieb algebra. By modification of the coupling of the first and last sites we obtain a model with quantum algebra invariance. (author) Automatic generation of Fortran programs for algebraic simulation models Schopf, W.; Rexer, G.; Ruehle, R. This report documents a generator program by which econometric simulation models formulated in an application-orientated language can be transformed automatically into a Fortran program.
Thus the model designer is able to build up, test and modify models without the need of a Fortran programmer. The development of a computer model is therefore simplified and shortened appreciably; in chapters 1-3 of this report all rules are presented for the application of the generator to the model design. Algebraic models including exogenous and endogenous time series variables, lead and lag functions can be generated. In addition to these language elements, Fortran sequences can be applied to the formulation of models in the case of complex model interrelations. Automatically the generated model is a module of the program system RSYST III and is therefore able to exchange input and output data with the central data bank of the system and in connection with the method library modules can be used to handle planning problems. (orig.) Phases and phase transitions in the algebraic microscopic shell model Georgieva A. I. Full Text Available We explore the dynamical symmetries of the shell model number conserving algebra, which define three types of pairing and quadrupole phases, with the aim to obtain the prevailing phase or phase transition for the real nuclear systems in a single shell. This is achieved by establishing a correspondence between each of the pairing bases with the Elliott's SU(3) basis that describes collective rotation of nuclear systems. This allows for a complete classification of the basis states of different number of particles in all the limiting cases. The probability distribution of the SU(3) basis states within their corresponding pairing states is also obtained. The relative strengths of dynamically symmetric quadrupole-quadrupole interaction with respect to the isoscalar, isovector and total pairing interactions define a control parameter, which estimates the importance of each term of the Hamiltonian in the correct reproduction of the experimental data for the considered nuclei. Multiagent scheduling models and algorithms Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms. MODELING IN MAPLE AS THE RESEARCHING MEANS OF FUNDAMENTAL CONCEPTS AND PROCEDURES IN LINEAR ALGEBRA Vasil Kushnir Full Text Available The article is devoted to binary technology and "fundamental training technology." Binary training refers to the simultaneous teaching of mathematics and computer science, for example differential equations and Maple, linear algebra and Maple. Moreover the system of traditional course of Maple is not performed. The use of the opportunities of Maple-technology in teaching mathematics is based on the following fundamental concepts of computer science as an algorithm, program, a linear program, cycle, branching, relative operators, etc. That's why only a certain system of command operators in Maple is considered. They are necessary for fundamental concepts of linear algebra and differential equations studying in Maple-environment. Relative name - "the technology of fundamental training" reflects the study of fundamental mathematical concepts and procedures that express the properties of these concepts in Maple-environment.
This article deals with the study of complex fundamental concepts of linear algebra (determinant of the matrix and algorithm of its calculation, the characteristic polynomial of the matrix and the eigenvalues of matrix, canonical form of characteristic matrix, eigenvectors of matrix, elementary divisors of the characteristic matrix, etc.), which are discussed in the appropriate courses briefly enough, and sometimes are not considered at all, but they are important in linear systems of differential equations, asymptotic methods for solving differential equations, systems of linear equations. Herewith complex and voluminous procedures of finding these linear algebra concepts embedded in Maple can be performed as a result of a simple command-operator. An especially important issue is reducing a matrix to canonical form. In fact matrix functions are effectively reduced to the functions of the diagonal matrix or matrix in Jordan canonical form. These matrices are used to raise a square matrix to a power, to extract the roots of the n A set for relational reasoning: Facilitation of algebraic modeling by a fraction task. DeWolf, Melissa; Bassok, Miriam; Holyoak, Keith J Recent work has identified correlations between early mastery of fractions and later math achievement, especially in algebra. However, causal connections between aspects of reasoning with fractions and improved algebra performance have yet to be established. The current study investigated whether relational reasoning with fractions facilitates subsequent algebraic reasoning using both pre-algebra students and adult college students. Participants were first given either a relational reasoning fractions task or a fraction algebra procedures control task. Then, all participants solved word problems and constructed algebraic equations in either multiplication or division format. The word problems and the equation construction tasks involved simple multiplicative comparison statements such as "There are 4 times as many students as teachers in a classroom." Performance on the algebraic equation construction task was enhanced for participants who had previously completed the relational fractions task compared with those who completed the fraction algebra procedures task. This finding suggests that relational reasoning with fractions can establish a relational set that promotes students' tendency to model relations using algebraic expressions. Copyright © 2016 Elsevier Inc. All rights reserved.
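The MODELING IN MAPLE entry above treats the determinant, characteristic polynomial, eigenvalues, eigenvectors and canonical (Jordan) form as single command-operators. As a hedged, minimal sketch of the same one-command workflow outside Maple, the following uses Python with SymPy; the matrix and symbol names are invented for the example and are not taken from the article.

# Illustrative only: one-command linear algebra computations, assuming SymPy is installed.
import sympy as sp

M = sp.Matrix([[5, 4, 2],
               [4, 5, 2],
               [2, 2, 2]])          # made-up example matrix

lam = sp.symbols("lambda")
print(M.det())                      # determinant
print(M.charpoly(lam).as_expr())    # characteristic polynomial in lambda
print(M.eigenvals())                # eigenvalues with multiplicities
print(M.eigenvects())               # eigenvectors
P, J = M.jordan_form()              # canonical (Jordan) form: M = P*J*P**-1
print(J)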
Edwards, Harold M In his new undergraduate textbook, Harold M Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text while teachers will find stimulating examples and methods of approach to the subject. Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras) Habiballa, Hashim; Jendryscik, Radek The problem of teaching essential Artificial Intelligence (AI) methods is an important task for an educator in the branch of soft-computing. The key focus is often given to proper understanding of the principle of AI methods in two essential points - why we use soft-computing methods at all and how we apply these methods to generate reasonable results in sensible time. We present one interesting problem solved in the non-educational research concerning automated generation of specific algebras in the huge search space. We emphasize above mentioned points as an educational case study of an interesting problem in automated generation of specific algebras. Parallel Algorithms for Model Checking van de Pol, Jaco; Mousavi, Mohammad Reza; Sgall, Jiri Model checking is an automated verification procedure, which checks that a model of a system satisfies certain properties. These properties are typically expressed in some temporal logic, like LTL and CTL. Algorithms for LTL model checking (linear time logic) are based on automata theory and graph Finite automata over algebraic structures: models and some methods of analysis Volodymyr V. Skobelev Full Text Available In this paper some results of research in two new trends of finite automata theory are presented. For understanding the value and the aim of these researches some short retrospective analysis of development of finite automata theory is given. The first trend deals with families of finite automata defined via recurrence relations on algebraic structures over finite rings. The problem of design of some algorithm that simulates with some accuracy any element of given family of automata is investigated. Some general scheme for design of families of hash functions defined by outputless automata is elaborated. Computational security of these families of hash functions is analyzed. Automata defined on varieties with some algebra are presented and their homomorphisms are characterized. Special case of these automata, namely automata on elliptic curves, are investigated in detail. The second trend deals with quantum automata. Languages accepted by some basic models of quantum automata under supposition that unitary operators associated with input alphabet commute each with the others are characterized.
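The last entry above describes finite automata defined via recurrence relations over finite rings and hash-function families built from outputless automata. The following is a minimal toy sketch of that general idea in Python, assuming a simple affine recurrence over the ring Z_m; the constants and class name are invented for illustration and do not reproduce the constructions analyzed in the paper.

# Toy "outputless" automaton whose state lives in Z_m and evolves by
# s_{t+1} = (a*s_t + b*x_t + c) mod m for each input symbol x_t.
# The final state plays the role of a hash value. All parameters are made up.
class RingAutomaton:
    def __init__(self, m=2**16 + 1, a=4099, b=259, c=7, s0=1):
        self.m, self.a, self.b, self.c, self.s0 = m, a, b, c, s0

    def digest(self, data: bytes) -> int:
        s = self.s0
        for x in data:
            s = (self.a * s + self.b * x + self.c) % self.m  # ring recurrence
        return s

if __name__ == "__main__":
    h = RingAutomaton()
    print(h.digest(b"algebraic automata"))
    print(h.digest(b"algebraic automatb"))  # a one-byte change gives a different final state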
Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method
Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu
To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.

We obtain the exact Dirac algebra obeyed by the conserved non-local charges in bosonic non-linear sigma models. Part of the computation is specialized for a symmetry group O(N). As it turns out the algebra corresponds to a cubic deformation of the Kac-Moody algebra. The non-linear terms are computed in closed form. In each Dirac bracket we only find highest order terms (as explained in the paper), defining a saturated algebra. We generalize the results for the presence of a Wess-Zumino term. The algebra is very similar to the previous one, containing now a calculable correction of order one unit lower. (author). 22 refs, 5 figs

An algebraic formulation of level one Wess-Zumino-Witten models
Boeckenhauer, J.
The highest weight modules of the chiral algebra of orthogonal WZW models at level one possess a realization in fermionic representation spaces; the Kac-Moody and Virasoro generators are represented as unbounded limits of even CAR algebras. It is shown that the representation theory of the underlying even CAR algebras reproduces precisely the sectors of the chiral algebra. This fact allows to develop a theory of local von Neumann algebras on the punctured circle, fitting nicely in the Doplicher-Haag-Roberts framework. The relevant localized endomorphisms which generate the charged sectors are explicitly constructed by means of Bogoliubov transformations. Using CAR theory, the fusion rules in terms of sector equivalence classes are proven. (orig.)

Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras
Dobay, M. P. D., E-mail: [email protected]; Alberola, A. Piera; Mendoza, E. R.; Raedler, J. O., E-mail: [email protected] [Ludwig-Maximilians University, Faculty of Physics, Center for NanoScience (Germany)]
Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields.
A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.

Dobay, M. P. D.; Alberola, A. Piera; Mendoza, E. R.; Rädler, J. O.

Analysis of DIRAC's behavior using model checking with process algebra
Remenska, Daniela; Templon, Jeff; Willemse, Tim; Bal, Henri; Verstoep, Kees; Fokkink, Wan; Charpentier, Philippe; Lanciotti, Elisa; Roiser, Stefan; Ciba, Krzysztof; Diaz, Ricardo Graciani
DIRAC is the grid solution developed to support LHCb production activities as well as user data analysis. It consists of distributed services and agents delivering the workload to the grid resources. Services maintain database back-ends to store dynamic state information of entities such as jobs, queues, staging requests, etc. Agents use polling to check and possibly react to changes in the system state. Each agent's logic is relatively simple; the main complexity lies in their cooperation. Agents run concurrently, and collaborate using the databases as shared memory. The databases can be accessed directly by the agents if running locally or through a DIRAC service interface if necessary. This shared-memory model causes entities to occasionally get into inconsistent states. Tracing and fixing such problems becomes formidable due to the inherent parallelism present. We propose more rigorous methods to cope with this. Model checking is one such technique for analysis of an abstract model of a system. Unlike conventional testing, it allows full control over the execution of the parallel processes, and supports exhaustive state-space exploration. We used the mCRL2 language and toolset to model the behavior of two related DIRAC subsystems: the workload and storage management system. Based on process algebra, mCRL2 allows defining custom data types as well as functions over these. This makes it suitable for modeling the data manipulations made by DIRAC's agents. By visualizing the state space and replaying scenarios with the toolkit's simulator, we have detected race conditions and deadlocks in these systems, which, in several cases, were confirmed to occur in reality. Several properties of interest were formulated and verified with the tool. Our future direction is automating the translation from DIRAC to a formal model.
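The DIRAC entry directly above relies on mCRL2 to explore the state space of concurrent agents that cooperate through shared databases, hunting for race conditions and deadlocks. The sketch below is a minimal, hedged illustration of that idea only: a hand-rolled breadth-first exploration of an invented two-agent "read, increment, write back" model with one shared counter, which exhibits the classic lost-update race. It is not the mCRL2 toolset and not the actual DIRAC model.

# Explicit-state exploration of a toy "two agents + shared counter" model.
# The model is invented for illustration; it is unrelated to DIRAC or mCRL2.
from collections import deque

# A state is (program counters, local copies, shared counter).
# Each agent executes: pc 0 = read shared, pc 1 = write back local + 1, pc 2 = done.
def successors(state):
    pcs, locs, shared = state
    for i in (0, 1):
        if pcs[i] == 0:                       # read the shared counter
            pcs2, locs2 = list(pcs), list(locs)
            pcs2[i], locs2[i] = 1, shared
            yield (tuple(pcs2), tuple(locs2), shared)
        elif pcs[i] == 1:                     # write back local copy + 1
            pcs2 = list(pcs)
            pcs2[i] = 2
            yield (tuple(pcs2), locs, locs[i] + 1)

init = ((0, 0), (0, 0), 0)
seen, frontier = {init}, deque([init])
while frontier:                               # exhaustive breadth-first search
    s = frontier.popleft()
    for t in successors(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)

finals = [s for s in seen if s[0] == (2, 2)]
print("reachable states:", len(seen))
print("lost update reachable:", any(s[2] == 1 for s in finals))   # True -> race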
Remenska, Daniela; Templon, Jeff; Willemse, Tim; Bal, Henri; Verstoep, Kees; Fokkink, Wan; Charpentier, Philippe; Graciani Diaz, Ricardo; Lanciotti, Elisa; Roiser, Stefan; Ciba, Krzysztof

Algorithmic Issues in Modeling Motion
Agarwal, P. K; Guibas, L. J; Edelsbrunner, H.
This article is a survey of research areas in which motion plays a pivotal role. The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory of motion.

Off-critical W∞ and Virasoro algebras as dynamical symmetries of the integrable models
Sotkov, G.; Stanishkov, M.
An infinite set of new non-commuting conserved charges in a specific class of perturbed CFT's is found and a criterion for their existence is presented. They appear to be higher momenta of the already known commuting conserved currents. The algebra they close consists of two non-commuting W∞ algebras. Various Virasoro subalgebras of the full symmetry algebra are found. It is shown on the examples of the perturbed Ising and Potts models that one of them plays an essential role in the computation of the correlation functions of the fields of the theory. (author)

Equivalent construction of the infinitesimal time translation operator in algebraic dynamics algorithm for partial differential evolution equation
We give an equivalent construction of the infinitesimal time translation operator for partial differential evolution equations in the algebraic dynamics algorithm proposed by Shun-Jin Wang and his students. Our construction involves only simple partial differentials and avoids the derivative terms of the δ function which appear in the course of computation by means of the Wang-Zhang operator. We prove Wang's equivalent theorem, which says that our construction and Wang-Zhang's are equivalent. We use our construction to deal with several typical equations such as the nonlinear advection equation, Burgers equation, nonlinear Schrodinger equation, KdV equation and sine-Gordon equation, and obtain at least second order approximate solutions to them. These equations include the cases of real and complex field variables and the cases of the first and the second order time derivatives.

Excel Spreadsheets for Algebra: Improving Mental Modeling for Problem Solving
Engerman, Jason; Rusek, Matthew; Clariana, Roy
This experiment investigates the effectiveness of Excel spreadsheets in a high school algebra class. Students in the experiment group convincingly outperformed the control group on a post lesson assessment. The student responses and teacher observations involving the Excel spreadsheet revealed that it operated as a mindtool, which formed the users'…

Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements
Naseem Cassim
Full Text Available Introduction: CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. Objectives: We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites.
Methods: The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidean distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results: Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion: The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.

Complex fluids modeling and algorithms
Saramito, Pierre
This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

Computer algebra and operators
Fateman, Richard; Grossman, Robert
The symbolic computation of operator expansions is discussed. Some of the capabilities that prove useful when performing computer algebra computations involving operators are considered. These capabilities may be broadly divided into three areas: the algebraic manipulation of expressions from the algebra generated by operators; the algebraic manipulation of the actions of the operators upon other mathematical objects; and the development of appropriate normal forms and simplification algorithms for operators and their actions. Brief descriptions are given of the computer algebra computations that arise when working with various operators and their actions.

Mathematical Model for Dengue Epidemics with Differential Susceptibility and Asymptomatic Patients Using Computer Algebra
Saldarriaga Vargas, Clarita
When there are diseases affecting large populations where the social, economic and cultural diversity is significant within the same region, the biological parameters that determine the behavior of the disease dispersion analysis are affected by the selection of different individuals. Therefore, and because of the variety and magnitude of the communities at risk of contracting dengue disease all over the world, it is suggested to define differentiated populations with individual contributions to the results of the disease dispersion analysis. In this paper those conditions were taken into account when several epidemiologic models were analyzed.
Initially a stability analysis was done for a SEIR mathematical model of dengue disease without differential susceptibility. Both the disease-free and endemic equilibrium states were found in terms of the basic reproduction number and were defined in Theorem (3.1). Then a DSEIR model was solved when a new susceptible group was introduced to consider the effects of important biological parameters of non-homogeneous populations in the spreading analysis. The results were compiled in Theorem (3.2). Finally, Theorems (3.3) and (3.4) summarize the basic reproduction numbers for three and n different susceptible groups respectively, giving an idea of how differential susceptibility affects the equilibrium states. The computations were done using an algorithmic method implemented in Maple 11, a general-purpose computer algebra system.

Algebraic Specifications, Higher-order Types and Set-theoretic Models
Kirchner, Hélène; Mosses, Peter David
In most algebraic specification frameworks, the type system is restricted to sorts, subsorts, and first-order function types. This is in marked contrast to the so-called model-oriented frameworks, which provide higher-order types, interpreted set-theoretically as Cartesian products, function spaces, and power-sets. This paper presents a simple framework for algebraic specifications with higher-order types and set-theoretic models. It may be regarded as the basis for a Horn-clause approximation to the Z framework, and has the advantage of being amenable to prototyping and automated reasoning. Standard set-theoretic models are considered, and conditions are given for the existence of initial reducts of such models. Algebraic specifications for various set-theoretic concepts are considered.

Mathematical Modelling and the Learning Trajectory: Tools to Support the Teaching of Linear Algebra
Cárcamo Bahamonde, Andrea Dorila; Fortuny Aymemí, Josep Maria; Gómez i Urgellés, Joan Vicenç
In this article we present a didactic proposal for teaching linear algebra based on two compatible theoretical models: emergent models and mathematical modelling. This proposal begins with a problematic situation related to the creation and use of secure passwords, which leads students toward the construction of the concepts of spanning set and…

Electrical Resistivity Tomography using a finite element based BFGS algorithm with algebraic multigrid preconditioning
Codd, A. L.; Gross, L.
We present a new inversion method for Electrical Resistivity Tomography which, in contrast to established approaches, minimizes the cost function prior to finite element discretization for the unknown electric conductivity and electric potential. Minimization is performed with the Broyden-Fletcher-Goldfarb-Shanno method (BFGS) in an appropriate function space. BFGS is self-preconditioning and avoids construction of the dense Hessian, which is the major obstacle to solving large 3-D problems using parallel computers. In addition to the forward problem predicting the measurement from the injected current, the so-called adjoint problem also needs to be solved. For this problem a virtual current is injected through the measurement electrodes and an adjoint electric potential is obtained. The magnitude of the injected virtual current is equal to the misfit at the measurement electrodes. This new approach has the advantage that the solution process of the optimization problem remains independent of the meshes used for discretization and allows for mesh adaptation during inversion.
Computation time is reduced by using superposition of pole loads for the forward and adjoint problems. A smoothed aggregation algebraic multigrid (AMG) preconditioned conjugate gradient is applied to construct the potentials for a given electric conductivity estimate and for constructing a first level BFGS preconditioner. Through the additional reuse of AMG operators and coarse grid solvers, inversion time for large 3-D problems can be reduced further. We apply our new inversion method to synthetic survey data created by a resistivity profile representing the characteristics of subsurface fluid injection. We further test it on data obtained from a 2-D surface electrode survey on Heron Island, a small tropical island off the east coast of central Queensland, Australia.

AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
Klumpp, A. R.
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPAK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existent software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.

Mathematical modelling in engineering: A proposal to introduce linear algebra concepts
Andrea Dorila Cárcamo
Full Text Available The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasize the development of mathematical abilities primarily associated with modelling and interpreting, which aren't limited only to calculus abilities. Considering this, an instructional design was elaborated based on mathematical modelling and emergent heuristic models for the construction of specific linear algebra concepts: span and spanning set. This was applied to first year engineering students.
Results suggest that this type of instructional design contributes to the construction of these mathematical concepts and can also favour first year engineering students' understanding of key linear algebra concepts and potentiate the development of higher order skills.

Abstract algebra for physicists
Zeman, J.
Certain recent models of composite hadrons involve concepts and theorems from abstract algebra which are unfamiliar to most theoretical physicists. The algebraic apparatus needed for an understanding of these models is summarized here. Particular emphasis is given to algebraic structures which are not assumed to be associative. (2 figures) (auth)

Synthesis of models for order-sorted first-order theories using linear algebra and constraint solving
Salvador Lucas
Full Text Available Recent developments in termination analysis for declarative programs emphasize the use of appropriate models for the logical theory representing the program at stake as a generic approach to prove termination of declarative programs. In this setting, Order-Sorted First-Order Logic provides a powerful framework to represent declarative programs. It also provides a target logic to obtain models for other logics via transformations. We investigate the automatic generation of numerical models for order-sorted first-order logics and its use in program analysis, in particular in termination analysis of declarative programs. We use convex domains to give domains to the different sorts of an order-sorted signature; we interpret the ranked symbols of sorted signatures by means of appropriately adapted convex matrix interpretations. Such numerical interpretations permit the use of existing algorithms and tools from linear algebra and arithmetic constraint solving to synthesize the models.

Constraint Lie algebra and local physical Hamiltonian for a generic 2D dilatonic model
Corichi, Alejandro; Karami, Asieh; Rastgoo, Saeed; Vukašinac, Tatjana
We consider a class of two-dimensional dilatonic models, and revisit them from the perspective of a new set of 'polar type' variables. These are motivated by recently defined variables within the spherically symmetric sector of 4D general relativity. We show that for a large class of dilatonic models, including the case with matter, one can perform a series of canonical transformations in such a way that the Poisson algebra of the constraints becomes a Lie algebra. Furthermore, we construct Dirac observables and a reduced Hamiltonian that accounts for the time evolution of the system. (paper)

Non-freely generated W-algebras and construction of N=2 super W-algebras
Blumenhagen, R.
Firstly, we investigate the origin of the bosonic W-algebras W(2, 3, 4, 5), W(2, 4, 6) and W(2, 4, 6) found earlier by direct construction. We present a coset construction for all three examples leading to a new type of finitely, non-freely generated quantum W-algebras, which we call unifying W-algebras. Secondly, we develop a manifestly covariant formalism to construct N = 2 super W-algebras explicitly on a computer. Applying this algorithm enables us to construct the first four examples of N = 2 super W-algebras with two generators and the N = 2 super W4 algebra involving three generators. The representation theory of the former ones shows that all examples could be divided into four classes, the largest one containing the N = 2 special type of spectral flow algebras.
Besides the W-algebra of the CP(3) Kazama-Suzuki coset model, the latter example with three generators discloses a second solution which could also be explained as a unifying W-algebra for the CP(n) models. (orig.)

Cárcamo Bahamonde, Andrea; Gómez Urgelles, Joan; Fortuny Aymemí, Josep
The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasise the development of mathematical abilities primarily associated with modelling and interpreting, which are not exclusively calculus abilities. Considering this, an instructional design was created based on mathematical modelling and…

The Model Method: Singapore Children's Tool for Representing and Solving Algebraic Word Problems
Ng, Swee Fong; Lee, Kerry
Solving arithmetic and algebraic word problems is a key component of the Singapore elementary mathematics curriculum. One heuristic taught, the model method, involves drawing a diagram to represent key information in the problem. We describe the model method and a three-phase theoretical framework supporting its use. We conducted 2 studies to…

Algebraic approach to q-deformed supersymmetric variants of the Hubbard model with pair hoppings
Arnaudon, D.
Two quantum spin chain Hamiltonians with quantum sl(2/1) invariance are constructed. These spin chains define variants of the Hubbard model and describe electron models with pair hoppings. A cubic algebra that admits the Birman-Wenzl-Murakami algebra as a quotient allows exact solvability of the periodic chain. The two Hamiltonians, respectively built using the distinguished and the fermionic bases of Uq(sl(2/1)), differ only in the boundary terms. They are actually equivalent, but the equivalence is non-local. Reflection equations are solved to get exact solvability on open chains with non-trivial boundary conditions. Two families of diagonal solutions are found. The centre and the s-Casimir of the quantum enveloping algebra of sl(2/1) appear as tools for the construction of exactly solvable Hamiltonians. (author)

Automatic differentiation algorithms in model analysis
Huiskes, M.J.
Title: Automatic differentiation algorithms in model analysis. Author: M.J. Huiskes. Date: 19 March, 2002. In this thesis automatic differentiation algorithms and derivative-based methods…

Algebraic partial Boolean algebras
Smith, Derek
Partial Boolean algebras, first studied by Kochen and Specker in the 1960s, provide the structure for Bell-Kochen-Specker theorems which deny the existence of non-contextual hidden variable theories. In this paper, we study partial Boolean algebras which are 'algebraic' in the sense that their elements have coordinates in an algebraic number field. Several of these algebras have been discussed recently in a debate on the validity of Bell-Kochen-Specker theorems in the context of finite precision measurements. The main result of this paper is that every algebraic finitely-generated partial Boolean algebra B(T) is finite when the underlying space H is three-dimensional, answering a question of Kochen and showing that Conway and Kochen's infinite algebraic partial Boolean algebra has minimum dimension. This result contrasts the existence of an infinite (non-algebraic) B(T) generated by eight elements in an abstract orthomodular lattice of height 3. We then initiate a study of higher-dimensional algebraic partial Boolean algebras. First, we describe a restriction on the determinants of the elements of B(T) that are generated by a given set T.
We then show that when the generating set T consists of the rays spanning the minimal vectors in a real irreducible root lattice, B(T) is infinite just if that root lattice has an A5 sublattice. Finally, we characterize the rays of B(T) when T consists of the rays spanning the minimal vectors of the root lattice E8.

Voxel-based morphometric analysis in hypothyroidism using diffeomorphic anatomic registration via an exponentiated lie algebra algorithm approach.
Singh, S; Modi, S; Bagga, D; Kaur, P; Shankar, L R; Khushu, S
The present study aimed to investigate whether brain morphological differences exist between adult hypothyroid subjects and age-matched controls using voxel-based morphometry (VBM) with diffeomorphic anatomic registration via an exponentiated lie algebra algorithm (DARTEL) approach. High-resolution structural magnetic resonance images were taken in ten healthy controls and ten hypothyroid subjects. The analysis was conducted using statistical parametric mapping. The VBM study revealed a reduction in grey matter volume in the left postcentral gyrus and cerebellum of hypothyroid subjects compared to controls. A significant reduction in white matter volume was also found in the cerebellum, right inferior and middle frontal gyrus, right precentral gyrus, right inferior occipital gyrus and right temporal gyrus of hypothyroid patients compared to healthy controls. Moreover, no meaningful cluster for greater grey or white matter volume was obtained in hypothyroid subjects compared to controls. Our study is the first VBM study of hypothyroidism in an adult population and suggests that, compared to controls, this disorder is associated with differences in brain morphology in areas corresponding to known functional deficits in attention, language, motor speed, visuospatial processing and memory in hypothyroidism. © 2012 British Society for Neuroendocrinology.

Little strings, quasi-topological sigma model on loop group, and toroidal Lie algebras
Ashwinkumar, Meer; Cao, Jingnan; Luo, Yuan; Tan, Meng-Chwan; Zhao, Qin
We study the ground states and left-excited states of the Ak-1 N = (2, 0) little string theory. Via a theorem by Atiyah [1], these sectors can be captured by a supersymmetric nonlinear sigma model on CP1 with target space the based loop group of SU(k). The ground states, described by L2-cohomology classes, form modules over an affine Lie algebra, while the left-excited states, described by chiral differential operators, form modules over a toroidal Lie algebra. We also apply our results to analyze the 1/2 and 1/4 BPS sectors of the M5-brane worldvolume theory.

Algebra of orthofermions and equivalence of their thermodynamics to the infinite U Hubbard model
Kishore, R.; Mishra, A.K.
The equivalence of thermodynamics of independent orthofermions to the infinite U Hubbard model, shown earlier for the one-dimensional infinite lattice, has been extended to a finite system of two lattice sites. Regarding the algebra of orthofermions, the algebraic expressions for the number operator for a given spin and the spin raising (lowering) operators in the form of infinite series are rearranged in such a way that the ith term, having the form of an infinite series, of the number (spin raising (lowering)) operator represents the number (spin raising (lowering)) operator at the ith lattice site.

Meer Ashwinkumar
Full Text Available We study the ground states and left-excited states of the Ak-1 N=(2,0) little string theory.
Via a theorem by Atiyah [1], these sectors can be captured by a supersymmetric nonlinear sigma model on CP1 with target space the based loop group of SU(k). The ground states, described by L2-cohomology classes, form modules over an affine Lie algebra, while the left-excited states, described by chiral differential operators, form modules over a toroidal Lie algebra. We also apply our results to analyze the 1/2 and 1/4 BPS sectors of the M5-brane worldvolume theory.

Forward error correction based on algebraic-geometric theory
A Alzubi, Jafar; M Chen, Thomas
This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over an additive white Gaussian noise channel model. Simulation results of algebraic-geometric codes' bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

Priority in Process Algebras
Cleaveland, Rance; Luettgen, Gerald; Natarajan, V.
This paper surveys the semantic ramifications of extending traditional process algebras with notions of priority that allow for some transitions to be given precedence over others. These enriched formalisms allow one to model system features such as interrupts, prioritized choice, or real-time behavior. Approaches to priority in process algebras can be classified according to whether the induced notion of preemption on transitions is global or local and whether priorities are static or dynamic. Early work in the area concentrated on global pre-emption and static priorities and led to formalisms for modeling interrupts and aspects of real-time, such as maximal progress, in centralized computing environments. More recent research has investigated localized notions of pre-emption in which the distribution of systems is taken into account, as well as dynamic priority approaches, i.e., those where priority values may change as systems evolve. The latter allows one to model behavioral phenomena such as scheduling algorithms and also enables the efficient encoding of real-time semantics. Technically, this paper studies the different models of priorities by presenting extensions of Milner's Calculus of Communicating Systems (CCS) with static and dynamic priority as well as with notions of global and local pre-emption. In each case the operational semantics of CCS is modified appropriately, behavioral theories based on strong and weak bisimulation are given, and related approaches for different process-algebraic settings are discussed.

Computer algebra applications
Calmet, J.
A survey of applications based either on fundamental algorithms in computer algebra or on the use of a computer algebra system is presented. Recent work in biology, chemistry, physics, mathematics and computer science is discussed.
In particular, applications in high energy physics (quantum electrodynamics), celestial mechanics and general relativity are reviewed. (Auth.)

Evaluation of global synchronization for iterative algebra algorithms on many-core
ul Hasan Khan, Ayaz; Al-Mouhamed, Mayez; Firdaus, Lutfi A.
© 2015 IEEE. Massively parallel computing is applied extensively in various scientific and engineering domains. With the growing interest in many-core architectures and due to the lack of explicit support for inter-block synchronization specifically in GPUs, synchronization becomes necessary to minimize inter-block communication time. In this paper, we have proposed two new inter-block synchronization techniques: 1) Relaxed Synchronization, and 2) Block-Query Synchronization. These schemes are used in implementing numerical iterative solvers where computation/communication overlapping is one optimization used to enhance application performance. We have evaluated and analyzed the performance of the proposed synchronization techniques using a Jacobi iterative solver in comparison to the state-of-the-art inter-block lock-free synchronization techniques. We have achieved about 1-8% performance improvement in terms of execution time over lock-free synchronization depending on the problem size and the number of thread blocks. We have also evaluated the proposed algorithm on GPU and MIC architectures and obtained about 8-26% performance improvement over the barrier synchronization available in the OpenMP programming environment depending on the problem size and number of cores used.

ul Hasan Khan, Ayaz

Using computer algebra and SMT-solvers to analyze a mathematical model of cholera propagation
Trujillo Arredondo, Mariana
We analyze a mathematical model for the transmission of cholera. The model is already defined and involves variables such as the pathogen agent, which in this case is the bacterium Vibrio cholerae, and the human population. The human population is divided into three classes: susceptible, infectious and removed. Using computer algebra, specifically Maple, we obtain two equilibrium states: the disease-free state and the endemic state. Using Maple it is possible to prove that the disease-free state is locally asymptotically stable if and only if R0 < 1. Using the package Redlog of the computer algebra system Reduce and the SMT-solver Z3Py it is possible to obtain numerical conditions for the model. The formula for the basic reproductive number makes a synthesis of all epidemic parameters in the model. Also it is possible to make numerical simulations which are very illustrative about the epidemic patterns that are expected to be observed in real situations. We claim that these kinds of software are very useful in the analysis of epidemic models, given that symbolic computation provides algebraic formulas for the basic reproductive number and such algebraic formulas are very useful to derive control measures. On the other hand, computer algebra software is a powerful tool for the stability analysis of epidemic models, given that all steps in the stability analysis can be made automatically: finding the equilibrium points, computing the Jacobian, computing the characteristic polynomial of the Jacobian, and applying the Routh-Hurwitz theorem to the characteristic polynomial. Finally, using SMT-solvers it is possible to automatically check satisfiability, validity and quantifier elimination, and these computations are very useful to analyse complicated epidemic models.
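As a hedged illustration of the workflow in the cholera entry above, which pairs a computer algebra system with an SMT solver, the sketch below derives the basic reproduction number of a plain SIR model symbolically and then poses a satisfiability question about it to Z3. The SIR model and the numeric thresholds are stand-ins invented for the example; they are not the cholera model or the conditions analysed in the paper.

# Hedged sketch: symbolic R0 for a plain SIR model (a stand-in, not the
# paper's cholera model) plus an SMT query on the condition R0 >= 1.
import sympy as sp
from z3 import Real, Solver, sat        # requires the z3-solver package

beta, gamma = sp.symbols('beta gamma', positive=True)
R0 = beta / gamma                       # standard result for the plain SIR model
print("R0 =", R0)

# Ask Z3 whether the endemic condition R0 >= 1 is reachable while the
# (invented) control constraints beta < 0.2 and gamma > 0.3 hold.
b, g = Real('beta'), Real('gamma')
s = Solver()
s.add(b > 0, g > 0, b < 0.2, g > 0.3, b / g >= 1)
print("endemic regime reachable under these controls:", s.check() == sat)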
Mathematical Modelling in Engineering: An Alternative Way to Teach Linear Algebra
Domínguez-García, S.; García-Planas, M. I.; Taberna, J.
Technological advances require that basic science courses for engineering, including Linear Algebra, emphasize the development of mathematical strengths associated with modelling and interpretation of results, which are not limited only to calculus abilities. Based on this consideration, we have proposed a project-based learning, giving a dynamic…

Shape Optimization for Navier-Stokes Equations with Algebraic Turbulence Model: Existence Analysis
Bulíček, M.; Haslinger, J.; Málek, J.; Stebel, Jan
Roč. 60, č. 2 (2009), s. 185-212 ISSN 0095-4616 R&D Projects: GA MŠk LC06052 Institutional research plan: CEZ:AV0Z10190503 Keywords: optimal shape design * paper machine headbox * incompressible non-Newtonian fluid * algebraic turbulence model * outflow boundary condition Subject RIV: BA - General Mathematics Impact factor: 0.757, year: 2009

The Virasoro algebra in integrable hierarchies and the method of matrix models
Semikhatov, A.M.
The action of the Virasoro algebra on hierarchies of nonlinear integrable equations, and also the structure and consequences of Virasoro constraints on these hierarchies, are studied. It is proposed that a broad class of hierarchies, restricted by Virasoro constraints, can be defined in terms of dressing operators hidden in the structure of integrable systems. The Virasoro-algebra representation constructed on the dressing operators displays a number of analogies with structures in conformal field theory. The formulation of the Virasoro constraints that stems from this representation makes it possible to translate into the language of integrable systems a number of concepts from the method of the 'matrix models' that describe nonperturbative quantum gravity, and, in particular, to realize a 'hierarchical' version of the double scaling limit. From the Virasoro constraints written in terms of the dressing operators generalized loop equations are derived, and this makes it possible to do calculations on a reconstruction of the field-theoretical description. The reduction of the Kadomtsev-Petviashvili (KP) hierarchy, subject to Virasoro constraints, to generalized Korteweg-de Vries (KdV) hierarchies is implemented, and the corresponding representation of the Virasoro algebra on these hierarchies is found both in the language of scalar differential operators and in the matrix formalism of Drinfel'd and Sokolov. The string equation in the matrix formalism does not replicate the structure of the scalar string equation. The symmetry algebras of the KP and N-KdV hierarchies restricted by Virasoro constraints are calculated: a relationship is established with algebras from the family W∞(J) of infinite W-algebras.

Algorithms and Methods for High-Performance Model Predictive Control
Frison, Gianluca
…routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. About the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization… is proposed, aiming at improving the computational performance in case of small matrices. About the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity…
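The model predictive control entry directly above is about linear-algebra routines tailored to the structure of linear MPC problems. As a hedged numpy sketch of the kind of structure being exploited, the code below runs a backward Riccati recursion for an unconstrained linear-quadratic subproblem and then simulates the resulting time-varying feedback; the dynamics, weights, and horizon are invented illustrative values, and the code does not reproduce the thesis's algorithms or its embedded dense linear algebra routines.

# Hedged sketch: backward Riccati recursion for an unconstrained LQR/MPC
# subproblem. A, B, Q, R and the horizon N are invented illustrative values.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like dynamics
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])                   # state weight
R = np.array([[0.01]])                    # input weight
N = 20                                    # horizon length

P = Q.copy()                              # terminal cost-to-go
gains = []
for _ in range(N):                        # backward pass over the horizon
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # stage feedback gain
    P = Q + A.T @ P @ (A - B @ K)                        # Riccati update
    gains.append(K)
gains.reverse()                           # gains[k] applies at stage k

x = np.array([[1.0], [0.0]])              # forward simulation from x0
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
print("state after the horizon:", x.ravel())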
The bounded model property via step algebras and step frames
Bezhanishvili, N.; Ghilardi, S.
The paper introduces semantic and algorithmic methods for establishing a variant of the analytic subformula property (called 'the bounded proof property', bpp) for modal propositional logics. The bpp is a much weaker property than full cut-elimination, but it is nevertheless sufficient for…

Matlab linear algebra
Lopez, Cesar
MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Linear Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. In addition to giving an introduction to…

The algebras of higher order currents of the fermionic Gross-Neveu model
Saltini, Luis Eduardo
Results are reported from our studies on the following 2-dimensional field theories: the supersymmetric non-linear sigma model and the fermionic Gross-Neveu model. Concerning the supersymmetric non-linear sigma model, an attempt is made to solve the algebraic problem of finding the non-local conserved charges and the corresponding algebra, extending the methods described in a previous article for the case of the purely bosonic non-linear sigma model. For the fermionic Gross-Neveu model, we intend to construct the conserved currents and the respective charges, related to the abelian U(1) symmetry and non-abelian SU(n) symmetry, at the conformal point, and to calculate the correlation functions between them. From these results at the conformal point, we want to study the effects of perturbation to get a massive but integrable theory.

Applied linear algebra and matrix analysis
Shores, Thomas S
In its second edition, this textbook offers a fresh approach to matrix and linear algebra. Its blend of theory, computational exercises, and analytical writing projects is designed to highlight the interplay between these aspects of an application. This approach places special emphasis on linear algebra as an experimental science that provides tools for solving concrete problems. The second edition's revised text discusses applications of linear algebra like graph theory and network modeling methods used in Google's PageRank algorithm. Other new materials include modeling examples of diffusive processes, linear programming, image processing, digital signal processing, and Fourier analysis. These topics are woven into the core material of Gaussian elimination and other matrix operations; eigenvalues, eigenvectors, and discrete dynamical systems; and the geometrical aspects of vector spaces. Intended for a one-semester undergraduate course without a strict calculus prerequisite, Applied Linear Algebra and M…

Predicting NonInertial Effects with Algebraic Stress Models which Account for Dissipation Rate Anisotropies
Jongen, T.; Machiels, L.; Gatski, T. B.
Three types of turbulence models which account for rotational effects in noninertial frames of reference are evaluated for the case of incompressible, fully developed rotating turbulent channel flow.
The different types of models are a Coriolis-modified eddy-viscosity model, a realizable algebraic stress model, and an algebraic stress model which accounts for dissipation rate anisotropies. A direct numerical simulation of a rotating channel flow is used for the turbulent model validation. This simulation differs from previous studies in that significantly higher rotation numbers are investigated. Flows at these higher rotation numbers are characterized by a relaminarization on the cyclonic or suction side of the channel, and a linear velocity profile on the anticyclonic or pressure side of the channel. The predictive performance of the three types of models is examined in detail, and formulation deficiencies are identified which cause poor predictive performance for some of the models. Criteria are identified which allow for accurate prediction of such flows by algebraic stress models and their corresponding Reynolds stress formulations.

Lectures on algebraic statistics
Drton, Mathias; Sullivant, Seth
How does an algebraic geometer studying secant varieties further the understanding of hypothesis tests in statistics? Why would a statistician working on factor analysis raise open problems about determinantal varieties? Connections of this type are at the heart of the new field of "algebraic statistics". In this field, mathematicians and statisticians come together to solve statistical inference problems using concepts from algebraic geometry as well as related computational and combinatorial techniques. The goal of these lectures is to introduce newcomers from the different camps to algebraic statistics. The introduction will be centered around the following three observations: many important statistical models correspond to algebraic or semi-algebraic sets of parameters; the geometry of these parameter spaces determines the behaviour of widely used statistical inference procedures; computational algebraic geometry can be used to study parameter spaces and other features of statistical models.

Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.
Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S
Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships.
Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

Modeling and Engineering Algorithms for Mobile Data
Blunck, Henrik; Hinrichs, Klaus; Sondern, Joëlle
In this paper, we present an object-oriented approach to modeling mobile data and algorithms operating on such data. Our model is general enough to capture any kind of continuous motion while at the same time allowing for encompassing algorithms optimized for specific types of motion. Such motion…

Lefschetz, Solomon
An introduction to algebraic geometry and a bridge between its analytical-topological and algebraical aspects, this text for advanced undergraduate students is particularly relevant to those more familiar with analysis than algebra. 1953 edition.

Grassmann algebras
Garcia, R.L.
The Grassmann algebra is presented briefly. Exponential and logarithm of matrix functions, whose elements belong to this algebra, are studied with the help of the SCHOONSCHIP and REDUCE 2 algebraic manipulators. (Author) [pt

Virasoro algebra action on integrable hierarchies and Virasoro constraints in matrix models
The action of the Virasoro algebra on integrable hierarchies of non-linear equations and on related objects ('Schroedinger' differential operators) is investigated. The method consists in pushing forward the Virasoro action to the wave function of a hierarchy, and then reconstructing its action on the dressing and Lax operators. This formulation allows one to observe a number of suggestive similarities between the structures involved in the description of the Virasoro algebra on the hierarchies and the structure of conformal field theory on the world-sheet. This includes, in particular, an 'off-shell' hierarchy version of operator products and of the Cauchy kernel. In relation to matrix models, which have been observed to be effectively described by integrable hierarchies subjected to Virasoro constraints, I propose to define general Virasoro-constrained hierarchies also in terms of dressing operators, by certain equations which carry the information of the hierarchy and the Virasoro algebra simultaneously and which suggest an interpretation as operator versions of recursion/loop equations in topological theories. These same equations provide a relation with integrable hierarchies with quantized spectral parameter introduced recently. The formulation in terms of dressing operators allows a scaling (continuum limit) of discrete (i.e. lattice) hierarchies with the Virasoro constraints into 'continuous' Virasoro-constrained hierarchies. In particular, the KP hierarchy subjected to the Virasoro constraints is recovered as a scaling limit of the Virasoro-constrained Toda hierarchy. The dressing operator method also makes it straightforward to identify the full symmetry algebra of Virasoro-constrained hierarchies, which is related to the family of W∞(J) algebras introduced recently. (orig./HS)

Entanglement in a model for Hawking radiation: An application of quadratic algebras
Bambah, Bindu A.; Mukku, C.; Shreecharan, T.; Siva Prasad, K.
Quadratic polynomially deformed su(1,1) and su(2) algebras are utilized in model Hamiltonians to show how the gravitational system consisting of a black hole, infalling radiation and outgoing (Hawking) radiation can be solved exactly. The models allow us to study the long-time behaviour of the black hole and its outgoing modes.
In particular, we calculate the bipartite entanglement entropies of subsystems consisting of (a) infalling plus outgoing modes and (b) black hole modes plus the infalling modes, using the Janus-faced nature of the model. The long-time behaviour also gives us glimpses of modifications in the character of Hawking radiation. Finally, we study the phenomenon of superradiance in our model in analogy with atomic Dicke superradiance.
Highlights:
- We examine a toy model for Hawking radiation with quantized black hole modes.
- We use quadratic polynomially deformed su(1,1) algebras to study its entanglement properties.
- We study the "Dicke Superradiance" in black hole radiation using quadratically deformed su(2) algebras.
- We study the modification of the thermal character of Hawking radiation due to quantized black hole modes.

An algebraic stress/flux model for two-phase turbulent flow
Kumar, R.
An algebraic stress model (ASM) for turbulent Reynolds stress and a flux model for turbulent heat flux are proposed for two-phase bubbly and slug flows. These mathematical models are derived from the two-phase transport equations for Reynolds stress and turbulent heat flux, and provide Cμ, a turbulent constant which defines the level of eddy viscosity, as a function of the interfacial terms. These models also include the effect of heat transfer. When the interfacial drag terms and the interfacial momentum transfer terms are absent, the model reduces to a single-phase model used in the literature.

Lie algebras and applications
Iachello, Francesco
This course-based primer provides an introduction to Lie algebras and some of their applications to the spectroscopy of molecules, atoms, nuclei and hadrons. In the first part, it concisely presents the basic concepts of Lie algebras, their representations and their invariants. The second part includes a description of how Lie algebras are used in practice in the treatment of bosonic and fermionic systems. Physical applications considered include rotations and vibrations of molecules (vibron model), collective modes in nuclei (interacting boson model), the atomic shell model, the nuclear shell model, and the quark model of hadrons. One of the key concepts in the application of Lie algebraic methods in physics, that of spectrum generating algebras and their associated dynamic symmetries, is also discussed. The book highlights a number of examples that help to illustrate the abstract algebraic definitions and includes a summary of many formulas of practical interest, such as the eigenvalues of Casimir operators…

A C*-algebra approach to the Schwinger model
Carey, A.L.; Hurst, C.A.
If cutoffs are introduced then existing results in the literature show that the Schwinger model is dynamically equivalent to a boson model with quadratic Hamiltonian. However, the process of quantising the Schwinger model destroys local gauge invariance. Gauge invariance is restored by the addition of a counterterm, which may be seen as a finite renormalisation, whereupon the Schwinger model becomes dynamically equivalent to a linear boson gauge theory. This linear model is exactly soluble. We find that different treatments of the supplementary (i.e. Lorentz) condition lead to boson models with rather different properties. We choose one model and construct, from the gauge invariant subalgebra, a class of inequivalent charge sectors. We construct sectors which coincide with those found by Lowenstein and Swieca for the Schwinger model.
A reconstruction of the Hilbert space on which the Schwinger model exists is described and fermion operators on this space are defined. (orig.)

Exact boson mappings for nuclear neutron (proton) shell-model algebras having SU(3) subalgebras
Bonatsos, D.; Klein, A.
In this paper the commutation relations of the fermion pair operators of identical nucleons coupled to spin zero are given for the general nuclear major shell in LST coupling. The associated Lie algebras are the unitary symplectic algebras Sp(2M). The corresponding multipole subalgebras are the unitary algebras U(M), which possess SU(3) subalgebras. Number conserving exact boson mappings of both the Dyson and hermitian form are given for the nuclear neutron (proton) s-d, p-f, s-d-g, and p-f-h shells, and their group theoretical structure is emphasized. The results are directly applicable in the case of the s-d shell, while in higher shells the experimentally plausible pseudo-SU(3) symmetry makes them applicable. The final purpose of this work is to provide a link between the shell model and the Interacting Boson Model (IBM) in the deformed limit. As already implied in the work of Draayer and Hecht, it is difficult to associate the boson model developed here with the conventional IBM model. The differences between the two approaches (due mainly to the effects of the Pauli principle) as well as their physical implications are extensively discussed.

A Linear Algebra Measure of Cluster Quality.
Mather, Laura A.
Discussion of models for information retrieval focuses on an application of linear algebra to text clustering, namely, a metric for measuring cluster quality based on the theory that cluster quality is proportional to the number of terms that are disjoint across the clusters. Explains term-document matrices and clustering algorithms. (Author/LRW)

Algebraic models for the hierarchy structure of evolution equations at small x
Rembiesa, P.; Stasto, A.M.
We explore several models of QCD evolution equations simplified by considering only the rapidity dependence of dipole scattering amplitudes, while provisionally neglecting their dependence on transverse coordinates. Our main focus is on the equations that include the processes of pomeron splittings. We examine the algebraic structures of the governing equation hierarchies, as well as the asymptotic behavior of their solutions in the large-rapidity limit.

On the algebraic structure of self-dual gauge fields and sigma models
Bais, F.A.; Sasaki, R.
An extensive and detailed analysis of self-dual gauge fields, in particular with axial symmetry, is presented, culminating in a purely algebraic procedure to generate solutions. The method, which is particularly suited for the construction of multimonopole solutions for a theory with arbitrary G, is also applicable to a wide class of non-linear sigma models. The relevant symmetries as well as the associated linear problems which underly the exact solubility of the problem are constructed and discussed in detail. (orig.)

Applications of the Local Algebras of Vector Fields to the Modelling of Physical Phenomena
Bayak, Igor V.
In this paper we discuss the local algebras of linear vector fields that can be used in the mathematical modelling of physical space by building the dynamical flows of vector fields on eight-dimensional cylindrical or toroidal manifolds.
It is shown that the topological features of the vector fields obey the Dirac equation when moving freely within the surface of a pseudo-sphere in the eight-dimensional pseudo-Euclidean space. Shape optimization for Navier-Stokes equations with algebraic turbulence model: numerical analysis and computation Haslinger, J.; Stebel, Jan Roč. 63, č. 2 (2011), s. 277-308 ISSN 0095-4616 R&D Projects: GA MŠk LC06052 Institutional research plan: CEZ:AV0Z10190503 Keywords: optimal shape design * paper machine headbox * incompressible non-Newtonian fluid * algebraic turbulence model Subject RIV: BA - General Mathematics Impact factor: 0.952, year: 2011 http://link.springer.com/article/10.1007%2Fs00245-010-9121-x Color Algebras Mulligan, Jeffrey B. A color algebra refers to a system for computing sums and products of colors, analogous to additive and subtractive color mixtures. The difficulty addressed here is the fact that, because of metamerism, we cannot know with certainty the spectrum that produced a particular color solely on the basis of sensory data. Knowledge of the spectrum is not required to compute additive mixture of colors, but is critical for subtractive (multiplicative) mixture. Therefore, we cannot predict with certainty the multiplicative interactions between colors based solely on sensory data. There are two potential applications of a color algebra: first, to aid modeling phenomena of human visual perception, such as color constancy and transparency; and, second, to provide better models of the interactions of lights and surfaces for computer graphics rendering. Improved Collaborative Filtering Algorithm using Topic Model Liu Na Full Text Available Collaborative filtering algorithms make use of interaction ratings between users and items for generating recommendations. Similarity among users or items is calculated based on rating mostly, without considering explicit properties of users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We describe the user-item matrix as a document-word matrix: users are represented as random mixtures over items, and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieved better performance compared to the other state-of-the-art algorithms on MovieLens data sets. Prediction of strongly-heated gas flows in a vertical tube using explicit algebraic stress/heat-flux models Baek, Seong Gu; Park, Seung O. This paper provides the assessment of prediction performance of explicit algebraic stress and heat-flux models under conditions of mixed convective gas flows in a strongly-heated vertical tube. Two explicit algebraic stress models and four algebraic heat-flux models are selected for assessment. Eight combinations of explicit algebraic stress and heat-flux models are used in predicting the flows experimentally studied by Shehata and McEligot (IJHMT 41(1998) p.4333) in which property variation was significant. Among the various model combinations, the Wallin and Johansson (JFM 403(2000) p. 89) explicit algebraic stress model-Abe, Kondo, and Nagano (IJHFF 17(1996) p. 228) algebraic heat-flux model combination is found to perform best. We also found that the dimensionless wall distance y + should be calculated based on the local property rather than the property at the wall for property-variation flows.
When the buoyancy or the property variation effects are so strong that the flow may relaminarize, the choice of the basic platform two-equation model is a most important factor in improving the predictions Vertex algebras and algebraic curves Frenkel, Edward Vertex algebras are algebraic objects that encapsulate the concept of operator product expansion from two-dimensional conformal field theory. Vertex algebras are fast becoming ubiquitous in many areas of modern mathematics, with applications to representation theory, algebraic geometry, the theory of finite groups, modular functions, topology, integrable systems, and combinatorics. This book is an introduction to the theory of vertex algebras with a particular emphasis on the relationship with the geometry of algebraic curves. The notion of a vertex algebra is introduced in a coordinate-independent way, so that vertex operators become well defined on arbitrary smooth algebraic curves, possibly equipped with additional data, such as a vector bundle. Vertex algebras then appear as the algebraic objects encoding the geometric structure of various moduli spaces associated with algebraic curves. Therefore they may be used to give a geometric interpretation of various questions of representation theory. The book co... Implicative Algebras Tadesse In this paper we introduce the concept of implicative algebras which is an equivalent definition of lattice implication algebra of Xu (1993) and further we prove that it is a regular Autometrized Algebra. Further we remark that the binary operation → on lattice implicative algebra can never be associative. Key words: Implicative ... A deformation of quantum affine algebra in squashed Wess-Zumino-Novikov-Witten models Kawaguchi, Io; Yoshida, Kentaroh [Department of Physics, Kyoto University, Kyoto 606-8502 (Japan) We proceed to study infinite-dimensional symmetries in two-dimensional squashed Wess-Zumino-Novikov-Witten models at the classical level. The target space is given by squashed S³ and the isometry is SU(2)_L × U(1)_R. It is known that SU(2)_L is enhanced to a couple of Yangians. We reveal here that an infinite-dimensional extension of U(1)_R is a deformation of quantum affine algebra, where a new deformation parameter is provided with the coefficient of the Wess-Zumino term. Then we consider the relation between the deformed quantum affine algebra and the pair of Yangians from the viewpoint of the left-right duality of monodromy matrices. The integrable structure is also discussed by computing the r/s-matrices that satisfy the extended classical Yang-Baxter equation. Finally, two degenerate limits are discussed. Enlarged symmetry algebras of spin chains, loop models, and S-matrices Read, N.; Saleur, H. The symmetry algebras of certain families of quantum spin chains are considered in detail. The simplest examples possess m states per site (m>=2), with nearest-neighbor interactions with U(m) symmetry, under which the sites transform alternately along the chain in the fundamental m and its conjugate representation m-bar. We find that these spin chains, even with arbitrary coefficients of these interactions, have a symmetry algebra A m much larger than U(m), which implies that the energy eigenstates fall into sectors that for open chains (i.e., free boundary conditions) can be labeled by j=0,1,...,L, for the 2L-site chain such that the degeneracies of all eigenvalues in the jth sector are generically the same and increase rapidly with j.
For large j, these degeneracies are much larger than those that would be expected from the U(m) symmetry alone. The enlarged symmetry algebra A m (2L) consists of operators that commute in this space of states with the Temperley-Lieb algebra that is generated by the set of nearest-neighbor interaction terms; A m (2L) is not a Yangian. There are similar results for supersymmetric chains with gl(m+n|n) symmetry of nearest-neighbor interactions, and a richer representation structure for closed chains (i.e., periodic boundary conditions). The symmetries also apply to the loop models that can be obtained from the spin chains in a spacetime or transfer matrix picture. In the loop language, the symmetries arise because the loops cannot cross. We further define tensor products of representations (for the open chains) by joining chains end to end. The fusion rules for decomposing the tensor product of representations labeled j 1 and j 2 take the same form as the Clebsch-Gordan series for SU(2). This and other structures turn the symmetry algebra A m into a ribbon Hopf algebra, and we show that this is 'Morita equivalent' to the quantum group U q (sl 2 ) for m=q+q -1 . The open-chain results are extended to the cases vertical bar m vertical Max plus at work modeling and analysis of synchronized systems a course on max-plus algebra and its applications Heidergott, Bernd; van der Woude, Jacob Trains pull into a railroad station and must wait for each other before leaving again in order to let passengers change trains. How do mathematicians then calculate a railroad timetable that accurately reflects their comings and goings? One approach is to use max-plus algebra, a framework used to model Discrete Event Systems, which are well suited to describe the ordering and timing of events. This is the first textbook on max-plus algebra, providing a concise and self-contained introduction to the topic. Applications of max-plus algebra abound in the world around us. Traffic systems, compu Quantum trigonometric Calogero-Sutherland model, irreducible characters and Clebsch-Gordan series for the exceptional algebra E7 Fernandez Nunez, J.; Garcia Fuertes, W.; Perelomov, A.M. We reexpress the quantum Calogero-Sutherland model for the Lie algebra E 7 and the particular value of the coupling constant κ=1 by using the fundamental irreducible characters of the algebra as dynamical variables. For that, we need to develop a systematic procedure to obtain all the Clebsch-Gordan series required to perform the change of variables. We describe how the resulting quantum Hamiltonian operator can be used to compute more characters and Clebsch-Gordan series for this exceptional algebra Algebraic Structure of tt * Equations for Calabi-Yau Sigma Models Alim, Murad The tt * equations define a flat connection on the moduli spaces of {2d, \\mathcal{N}=2} quantum field theories. For conformal theories with c = 3 d, which can be realized as nonlinear sigma models into Calabi-Yau d-folds, this flat connection is equivalent to special geometry for threefolds and to its analogs in other dimensions. We show that the non-holomorphic content of the tt * equations, restricted to the conformal directions, in the cases d = 1, 2, 3 is captured in terms of finitely many generators of special functions, which close under derivatives. The generators are understood as coordinates on a larger moduli space. This space parameterizes a freedom in choosing representatives of the chiral ring while preserving a constant topological metric. 
Geometrically, the freedom corresponds to a choice of forms on the target space respecting the Hodge filtration and having a constant pairing. Linear combinations of vector fields on that space are identified with the generators of a Lie algebra. This Lie algebra replaces the non-holomorphic derivatives of tt * and provides these with a finer and algebraic meaning. For sigma models into lattice polarized K3 manifolds, the differential ring of special functions on the moduli space is constructed, extending known structures for d = 1 and 3. The generators of the differential rings of special functions are given by quasi-modular forms for d = 1 and their generalizations in d = 2, 3. Some explicit examples are worked out including the case of the mirror of the quartic in {\\mathbbm{P}^3}, where due to further algebraic constraints, the differential ring coincides with quasi modular forms. Remenska, Daniela; Willemse, Tim; Bal, Henri; Verstoep, Kees; Fokkink, Wan; Charpentier, Philippe; Diaz, Ricardo Graciani; Lanciotti, Elisa; Roiser, Stefan; Ciba, Krzysztof DIRAC is the grid solution developed to support LHCb production activities as well as user data analysis. It consists of distributed services and agents delivering the workload to the grid resources. Services maintain database back-ends to store dynamic state information of entities such as jobs, queues, staging requests, etc. Agents use polling to check and possibly react to changes in the system state. Each agent's logic is relatively simple, the main complexity lies in their cooperation. Agents run concurrently, and collaborate using the databases as shared memory. The databases can be accessed directly by the agents if running locally or through a DIRAC service interface if necessary. This shared-memory model causes entities to occasionally get into inconsistent states. Tracing and fixing such problems becomes formidable due to the inherent parallelism present. We propose more rigorous methods to cope with this. Model checking is one such technique for analysis of an abstract model of a system. Unlike con... Monomial algebras Villarreal, Rafael The book stresses the interplay between several areas of pure and applied mathematics, emphasizing the central role of monomial algebras. It unifies the classical results of commutative algebra with central results and notions from graph theory, combinatorics, linear algebra, integer programming, and combinatorial optimization. The book introduces various methods to study monomial algebras and their presentation ideals, including Stanley-Reisner rings, subrings and blowup algebra-emphasizing square free quadratics, hypergraph clutters, and effective computational methods. Optlang: An algebraic modeling language for mathematical optimization Jensen, Kristian; Cardoso, Joao; Sonnenschein, Nikolaus Optlang is a Python package implementing a modeling language for solving mathematical optimization problems, i.e., maximizing or minimizing an objective function over a set of variables subject to a number of constraints. It provides a common native Python interface to a series of optimization... Thin-layer approximation and algebraic model for separated turbulent flows Baldwin, B.; Lomax, H. An algebraic turbulence model for two- and three-dimensional separated flows is specified that avoids the necessity for finding the edge of the boundary layer. 
Properties of the model are determined and comparisons made with experiment for an incident shock on a flat plate, separated flow over a compression corner, and transonic flow over an airfoil. Separation and reattachment points from numerical Navier-Stokes solutions agree with experiment within one boundary-layer thickness. Use of law-of-the-wall boundary conditions does not alter the predictions significantly. Applications of the model to other cases are contained in companion papers. Some results on the eigenfunctions of the quantum trigonometric Calogero-Sutherland model related to the Lie algebra D4 We express the Hamiltonian of the quantum trigonometric Calogero-Sutherland model related to the Lie algebra D 4 in terms of a set of Weyl-invariant variables, namely, the characters of the fundamental representations of the Lie algebra. This parametrization allows us to solve for the energy eigenfunctions of the theory and to study properties of the system of orthogonal polynomials associated with them such as recurrence relations and generating functions Jaynes-Cummings model and the deformed-oscillator algebra Crnugelj, J.; Martinis, M.; Mikuta-Martinis, V. We study the time evolution of the deformed Jaynes-Cummings model (DJCM). It is shown that the standard JCM and its recent non-linear generalizations involving the intensity-dependent coupling and/or the multiphoton coupling are only particular cases of the DJCM. The time evolution of the mean phonon number and the population inversion are evaluated. A special case of the q-deformed JCM is analyzed explicitly. The long time quasi-periodic revival effects of the q-deformed JCM are observed for q∼1 and an initially large mean photon number. For other values of the deformation parameter q we observe chaotic-like behaviour of the population inversion. Photons are assumed to be initially in the deformed coherent state. ((orig.)) Quadratic algebras Polishchuk, Alexander Quadratic algebras, i.e., algebras defined by quadratic relations, often occur in various areas of mathematics. One of the main problems in the study of these (and similarly defined) algebras is how to control their size. A central notion in solving this problem is the notion of a Koszul algebra, which was introduced in 1970 by S. Priddy and then appeared in many areas of mathematics, such as algebraic geometry, representation theory, noncommutative geometry, K-theory, number theory, and noncommutative linear algebra. The book offers a coherent exposition of the theory of quadratic and Koszul algebras, including various definitions of Koszulness, duality theory, Poincaré-Birkhoff-Witt-type theorems for Koszul algebras, and the Koszul deformation principle. In the concluding chapter of the book, they explain a surprising connection between Koszul algebras and one-dependent discrete-time stochastic processes. The relation between quantum W algebras and Lie algebras Boer, J. de; Tjin, T. By quantizing the generalized Drinfeld-Sokolov reduction scheme for arbitrary sl 2 embeddings we show that a large set W of quantum W algebras can be viewed as (BRST) cohomologies of affine Lie algebras. The set W contains many known W algebras such as W N and W 3 (2) . Our formalism yields a completely algorithmic method for calculating the W algebra generators and their operator product expansions, replacing the cumbersome construction of W algebras as commutants of screening operators.
By generalizing and quantizing the Miura transformation we show that any W algebra in W can be embedded into the universal enveloping algebra of a semisimple affine Lie algebra which is, up to shifts in level, isomorphic to a subalgebra of the original affine algebra. Therefore any realization of this semisimple affine Lie algebra leads to a realization of the W algebra. In particular, one obtains in this way a general and explicit method for constructing the free field realizations and Fock resolutions for all algebras in W. Some examples are explicitly worked out. (orig.) Algebraic equations for the exceptional eigenspectrum of the generalized Rabi model Li, Zi-Min; Batchelor, Murray T We obtain the exceptional part of the eigenspectrum of the generalized Rabi model, also known as the driven Rabi model, in terms of the roots of a set of algebraic equations. This approach provides a product form for the wavefunction components and allows an explicit connection with recent results obtained for the wavefunction in terms of truncated confluent Heun functions. Other approaches are also compared. For particular parameter values the exceptional part of the eigenspectrum consists of doubly degenerate crossing points. We give a proof for the number of roots of the constraint polynomials and discuss the number of crossing points. (paper) Application of the algebraic RNG model for transition simulation. [renormalization group theory Lund, Thomas S. The algebraic form of the RNG model of Yakhot and Orszag (1986) is investigated as a transition model for the Reynolds averaged boundary layer equations. It is found that the cubic equation for the eddy viscosity contains both a jump discontinuity and one spurious root. A yet unpublished transformation to a quartic equation is shown to remove the numerical difficulties associated with the discontinuity, but only at the expense of merging both the physical and spurious root of the cubic. Jumps between the branches of the resulting multiple-valued solution are found to lead to oscillations in flat plate transition calculations. Aside from the oscillations, the transition behavior is qualitatively correct. From the topological development of matrix models to the topological string theory: arrangement of surfaces through algebraic geometry Orantin, N. The 2-matrix model has been introduced to study Ising model on random surfaces. Since then, the link between matrix models and arrangement of discrete surfaces has strongly tightened. This manuscript aims to investigate these deep links and extend them beyond the matrix models, following my work's evolution. First, I take care to define properly the hermitian 2 matrix model which gives rise to generating functions of discrete surfaces equipped with a spin structure. Then, I show how to compute all the terms in the topological expansion of any observable by using algebraic geometry tools. They are obtained as differential forms on an algebraic curve associated to the model: the spectral curve. In a second part, I show how to define such differentials on any algebraic curve even if it does not come from a matrix model. I then study their numerous symmetry properties under deformations of the algebraic curve. In particular, I show that these objects coincide with the topological expansion of the observable of a matrix model if the algebraic curve is the spectral curve of this model. 
Finally, I show that the fine tuning of the parameters ensures that these objects can be promoted to modular invariants and satisfy the holomorphic anomaly equation of the Kodaira-Spencer theory. This gives a new hint that the Dijkgraaf-Vafa conjecture is correct. (author) Special set linear algebra and special set fuzzy linear algebra Kandasamy, W. B. Vasantha; Smarandache, Florentin; Ilanthenral, K. The authors in this book introduce the notion of special set linear algebra and special set fuzzy Linear algebra, which is an extension of the notion set linear algebra and set fuzzy linear algebra. These concepts are best suited in the application of multi expert models and cryptology. This book has five chapters. In chapter one the basic concepts about set linear algebra is given in order to make this book a self contained one. The notion of special set linear algebra and their fuzzy analog... The algebra of the energy-momentum tensor and the Noether currents in classical non-linear sigma models Forger, M.; Mannheim Univ.; Laartz, J.; Schaeper, U. The recently derived current algebra of classical non-linear sigma models on arbitrary Riemannian manifolds is extended to include the energy-momentum tensor. It is found that in two dimensions the energy-momentum tensor θ μν, the Noether current j μ associated with the global symmetry of the theory and the composite field j appearing as the coefficient of the Schwinger term in the current algebra, together with the derivatives of j μ and j, generate a closed algebra. The subalgebra generated by the light-cone components of the energy-momentum tensor consists of two commuting copies of the Virasoro algebra, with central charge c=0, reflecting the classical conformal invariance of the theory, but the current algebra part and the semidirect product structure are quite different from the usual Kac-Moody/Sugawara type construction. (orig.) Model Checking Algorithms for CTMDPs Buchholz, Peter; Hahn, Ernst Moritz; Hermanns, Holger Continuous Stochastic Logic (CSL) can be interpreted over continuous-time Markov decision processes (CTMDPs) to specify quantitative properties of stochastic systems that allow some external control. Model checking CSL formulae over CTMDPs requires then the computation of optimal control strategie... Basic notions of algebra Shafarevich, Igor Rostislavovich This book is wholeheartedly recommended to every student or user of mathematics. Although the author modestly describes his book as 'merely an attempt to talk about' algebra, he succeeds in writing an extremely original and highly informative essay on algebra and its place in modern mathematics and science. From the fields, commutative rings and groups studied in every university math course, through Lie groups and algebras to cohomology and category theory, the author shows how the origins of each algebraic concept can be related to attempts to model phenomena in physics or in other branches Using process algebra to develop predator-prey models of within-host parasite dynamics. McCaig, Chris; Fenton, Andy; Graham, Andrea; Shankland, Carron; Norman, Rachel As a first approximation of immune-mediated within-host parasite dynamics we can consider the immune response as a predator, with the parasite as its prey. In the ecological literature of predator-prey interactions there are a number of different functional responses used to describe how a predator reproduces in response to consuming prey.
Until recently most of the models of the immune system that have taken a predator-prey approach have used simple mass action dynamics to capture the interaction between the immune response and the parasite. More recently Fenton and Perkins (2010) employed three of the most commonly used prey-dependent functional response terms from the ecological literature. In this paper we make use of a technique from computing science, process algebra, to develop mathematical models. The novelty of the process algebra approach is to allow stochastic models of the population (parasite and immune cells) to be developed from rules of individual cell behaviour. By using this approach in which individual cellular behaviour is captured we have derived a ratio-dependent response similar to that seen in the previous models of immune-mediated parasite dynamics, confirming that, whilst this type of term is controversial in ecological predator-prey models, it is appropriate for models of the immune system. Copyright © 2013 Elsevier Ltd. All rights reserved. Contractions of quantum algebraic structures Doikou, A.; Sfetsos, K. A general framework for obtaining certain types of contracted and centrally extended algebras is reviewed. The whole process relies on the existence of quadratic algebras, which appear in the context of boundary integrable models. (Abstract Copyright [2010], Wiley Periodicals, Inc.) Identification of control targets in Boolean molecular network models via computational algebra. Murrugarra, David; Veliz-Cuba, Alan; Aguilar, Boris; Laubenbacher, Reinhard Many problems in biomedicine and other areas of the life sciences can be characterized as control problems, with the goal of finding strategies to change a disease or otherwise undesirable state of a biological system into another, more desirable, state through an intervention, such as a drug or other therapeutic treatment. The identification of such strategies is typically based on a mathematical model of the process to be altered through targeted control inputs. This paper focuses on processes at the molecular level that determine the state of an individual cell, involving signaling or gene regulation. The mathematical model type considered is that of Boolean networks. The potential control targets can be represented by a set of nodes and edges that can be manipulated to produce a desired effect on the system. This paper presents a method for the identification of potential intervention targets in Boolean molecular network models using algebraic techniques. The approach exploits an algebraic representation of Boolean networks to encode the control candidates in the network wiring diagram as the solutions of a system of polynomials equations, and then uses computational algebra techniques to find such controllers. The control methods in this paper are validated through the identification of combinatorial interventions in the signaling pathways of previously reported control targets in two well studied systems, a p53-mdm2 network and a blood T cell lymphocyte granular leukemia survival signaling network. Supplementary data is available online and our code in Macaulay2 and Matlab are available via http://www.ms.uky.edu/~dmu228/ControlAlg . This paper presents a novel method for the identification of intervention targets in Boolean network models. The results in this paper show that the proposed methods are useful and efficient for moderately large networks. 
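The Boolean-network control abstract above encodes candidate interventions as solutions of polynomial systems over F2 and finds them with computational algebra (the authors' code is in Macaulay2 and Matlab). The short sketch below only illustrates the underlying idea on a hypothetical three-node network, replacing the algebraic solve with brute-force enumeration of constant single-node interventions; the wiring, the target state and all names are invented for illustration and are not taken from the paper.

from itertools import product

# Hypothetical 3-node Boolean network (synchronous update):
# x0' = x2,  x1' = x0 AND NOT x2,  x2' = x0 OR x1
RULES = {
    0: lambda s: s[2],
    1: lambda s: s[0] and not s[2],
    2: lambda s: s[0] or s[1],
}
N = len(RULES)
DESIRED = (0, 0, 0)          # the "desirable" cell state we want to reach

def step(state, pinned):
    """One synchronous update; 'pinned' maps node index -> forced constant value."""
    return tuple(pinned.get(i, int(RULES[i](state))) for i in range(N))

def drives_to_desired(pinned):
    """True if every initial state ends in DESIRED as a fixed point."""
    for s0 in product((0, 1), repeat=N):
        s, seen = s0, set()
        while s not in seen:          # follow the deterministic trajectory
            seen.add(s)
            s = step(s, pinned)
        if s != DESIRED or step(s, pinned) != s:
            return False              # stuck in a cycle or a wrong fixed point
    return True

# Enumerate single-node constant interventions (knockout = 0, over-expression = 1).
for node in range(N):
    for value in (0, 1):
        if drives_to_desired({node: value}):
            print(f"pinning x{node} = {value} makes {DESIRED} the global attractor")

For networks of realistic size the state space can no longer be enumerated, which is exactly where the algebraic encoding of the control candidates becomes the practical route.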
Fuzzy audit risk modeling algorithm Zohreh Hajihaa Full Text Available Fuzzy logic has created suitable mathematics for making decisions in uncertain environments including professional judgments. One of the situations is to assess auditee risks. During recent years, risk based audit (RBA) has been regarded as one of the main tools to fight against fraud. The main issue in RBA is to determine the overall audit risk an auditor accepts, which impacts the efficiency of an audit. The primary objective of this research is to redesign the audit risk model (ARM) proposed by auditing standards. The proposed model of this paper uses fuzzy inference systems (FIS) based on the judgments of audit experts. The implementation of proposed fuzzy technique uses triangular fuzzy numbers to express the inputs and Mamdani method along with center of gravity are incorporated for defuzzification. The proposed model uses three FISs for audit, inherent and control risks, and there are five levels of linguistic variables for outputs. FISs include 25, 25 and 81 if-then rules, respectively, and officials of Iranian audit experts confirm all the rules. Rethinking exchange market models as optimization algorithms Luquini, Evandro; Omar, Nizam The exchange market model has mainly been used to study the inequality problem. Although the human society inequality problem is very important, the exchange market models dynamics until stationary state and its capability of ranking individuals is interesting in itself. This study considers the hypothesis that the exchange market model could be understood as an optimization procedure. We present herein the implications for algorithmic optimization and also the possibility of a new family of exchange market models Boolean algebra Goodstein, R L This elementary treatment by a distinguished mathematician employs Boolean algebra as a simple medium for introducing important concepts of modern algebra. Numerous examples appear throughout the text, plus full solutions. Algebraic Bethe ansatz for the XXX chain with triangular boundaries and Gaudin model Cirilo António, N., E-mail: [email protected] [Centro de Análise Funcional e Aplicações, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa (Portugal); Manojlović, N., E-mail: [email protected] [Grupo de Física Matemática da Universidade de Lisboa, Av. Prof. Gama Pinto 2, PT-1649-003 Lisboa (Portugal); Departamento de Matemática, F.C.T., Universidade do Algarve, Campus de Gambelas, PT-8005-139 Faro (Portugal); Salom, I., E-mail: [email protected] [Institute of Physics, University of Belgrade, P.O. Box 57, 11080 Belgrade (Serbia) We implement fully the algebraic Bethe ansatz for the XXX Heisenberg spin chain in the case when both boundary matrices can be brought to the upper-triangular form. We define the Bethe vectors which yield the strikingly simple expression for the off shell action of the transfer matrix, deriving the spectrum and the relevant Bethe equations. We explore further these results by obtaining the off shell action of the generating function of the Gaudin Hamiltonians on the corresponding Bethe vectors through the so-called quasi-classical limit. Moreover, this action is as simple as it could possibly be, yielding the spectrum and the Bethe equations of the Gaudin model.
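As an illustration of the Mamdani-type inference with triangular membership functions and centre-of-gravity defuzzification described in the fuzzy audit-risk abstract above, here is a minimal one-input, two-rule sketch. The variable names, membership breakpoints and rules are hypothetical and far simpler than the 25- and 81-rule systems the paper describes.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Discretized universe for the output variable "audit risk" (0 = negligible, 1 = severe).
risk = np.linspace(0.0, 1.0, 501)
risk_low = tri(risk, -0.5, 0.0, 0.5)     # hypothetical output fuzzy sets
risk_high = tri(risk, 0.5, 1.0, 1.5)

def audit_risk(control_weakness):
    """Mamdani inference: min implication, max aggregation, centroid defuzzification."""
    # Fuzzify the single input (degree of weakness of internal controls, 0..1).
    weak = tri(control_weakness, 0.4, 1.0, 1.6)      # feet outside [0, 1] keep the end memberships at 1
    strong = tri(control_weakness, -0.6, 0.0, 0.6)
    # Rule 1: IF controls are strong THEN risk is low.
    # Rule 2: IF controls are weak   THEN risk is high.
    clipped_low = np.minimum(strong, risk_low)       # min implication clips each output set
    clipped_high = np.minimum(weak, risk_high)
    aggregated = np.maximum(clipped_low, clipped_high)
    # Centre of gravity of the aggregated output set.
    return float(np.sum(risk * aggregated) / np.sum(aggregated))

for cw in (0.1, 0.5, 0.9):
    print(f"control weakness {cw:.1f} -> inferred audit risk {audit_risk(cw):.2f}")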
Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models Srinivasan, Seshasai Thermodiffusion in Multicomponent Mixtures presents the computational approaches that are employed in the study of thermodiffusion in various types of mixtures, namely, hydrocarbons, polymers, water-alcohol, molten metals, and so forth. We present a detailed formalism of these methods that are based on non-equilibrium thermodynamics or algebraic correlations or principles of the artificial neural network. The book will serve as single complete reference to understand the theoretical derivations of thermodiffusion models and its application to different types of multi-component mixtures. An exhaustive discussion of these is used to give a complete perspective of the principles and the key factors that govern the thermodiffusion process. Model Checking Process Algebra of Communicating Resources for Real-time Systems Boudjadar, Jalil; Kim, Jin Hyun; Larsen, Kim Guldstrand This paper presents a new process algebra, called PACOR, for real-time systems which deals with resource constrained timed behavior as an improved version of the ACSR algebra. We define PACOR as a Process Algebra of Communicating Resources which allows to express preemptiveness, urgent ness... This paper presents a new process algebra, called PACoR, for real-time systems which deals with resource- constrained timed behavior as an improved version of the ACSR algebra. We define PACoR as a Process Algebra of Communicating Resources which allows to explicitly express preemptiveness... Matrix-algebra-based calculations of the time evolution of the binary spin-bath model for magnetization transfer. Müller, Dirk K; Pampel, André; Möller, Harald E Quantification of magnetization-transfer (MT) experiments are typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimations by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved. Algebraic Bethe ansatz for U(1) invariant integrable models: Compact and non-compact applications Martins, M.J.; Melo, C.S. We apply the algebraic Bethe ansatz developed in our previous paper [C.S. Melo, M.J. Martins, Nucl. Phys. B 806 (2009) 567] to three different families of U(1) integrable vertex models with arbitrary N bond states. These statistical mechanics systems are based on the higher spin representations of the quantum group U q [SU(2)] for both generic and non-generic values of q as well as on the non-compact discrete representation of the SL(2,R) algebra. 
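The magnetization-transfer abstract above solves the Bloch-McConnell equations of the binary spin-bath model with matrix algebra. A minimal sketch of that idea for two longitudinal pools exchanging magnetization, propagated with a matrix exponential during free evolution (no saturation or imaging pulses), is given below; all rate constants and pool sizes are made-up illustrative numbers, not values from the paper.

import numpy as np
from scipy.linalg import expm

# Hypothetical two-pool parameters: free water pool 'a', macromolecular pool 'b'.
R1a, R1b = 1.0, 1.0            # longitudinal relaxation rates [1/s]
M0a, M0b = 1.0, 0.15           # equilibrium magnetizations (relative pool sizes)
kab = 2.0                      # exchange rate a -> b [1/s]
kba = kab * M0a / M0b          # detailed balance: kab*M0a = kba*M0b

# Augmented linear system d/dt [Ma, Mb, 1] = A [Ma, Mb, 1]; the constant recovery
# terms are folded into the last column, so M(t) = expm(A*t) @ M(0).
A = np.array([
    [-(R1a + kab), kba, R1a * M0a],
    [kab, -(R1b + kba), R1b * M0b],
    [0.0, 0.0, 0.0],
])

def evolve(Ma, Mb, t):
    """Longitudinal magnetization of both pools after free evolution for t seconds."""
    return (expm(A * t) @ np.array([Ma, Mb, 1.0]))[:2]

# Example: recovery after inverting the free pool only.
for t in (0.05, 0.2, 1.0, 5.0):
    Ma, Mb = evolve(-M0a, M0b, t)
    print(f"t = {t:5.2f} s   Ma = {Ma:+.3f}   Mb = {Mb:+.3f}")

A pulsed saturation experiment would simply chain such propagators, one matrix exponential per constant-amplitude segment of the sequence, which is the computation the paper accelerates with polynomial interpolation.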
We present for all these models the explicit expressions for both the on-shell and the off-shell properties associated to the respective transfer matrices eigenvalue problems. The amplitudes governing the vectors not parallel to the Bethe states are shown to factorize in terms of elementary building blocks functions. The results for the non-compact SL(2,R) model are argued to be derived from those obtained for the compact systems by taking suitable N→∞ limits. This permits us to study the properties of the non-compact SL(2,R) model starting from systems with finite degrees of freedom. One-particle many-body Green's function theory: Algebraic recursive definitions, linked-diagram theorem, irreducible-diagram theorem, and general-order algorithms. Hirata, So; Doran, Alexander E; Knowles, Peter J; Ortiz, J V A thorough analytical and numerical characterization of the whole perturbation series of one-particle many-body Green's function (MBGF) theory is presented in a pedagogical manner. Three distinct but equivalent algebraic (first-quantized) recursive definitions of the perturbation series of the Green's function are derived, which can be combined with the well-known recursion for the self-energy. Six general-order algorithms of MBGF are developed, each implementing one of the three recursions, the ΔMPn method (where n is the perturbation order) [S. Hirata et al., J. Chem. Theory Comput. 11, 1595 (2015)], the automatic generation and interpretation of diagrams, or the numerical differentiation of the exact Green's function with a perturbation-scaled Hamiltonian. They all display the identical, nondivergent perturbation series except ΔMPn, which agrees with MBGF in the diagonal and frequency-independent approximations at 1≤n≤3 but converges at the full-configuration-interaction (FCI) limit at n=∞ (unless it diverges). Numerical data of the perturbation series are presented for Koopmans and non-Koopmans states to quantify the rate of convergence towards the FCI limit and the impact of the diagonal, frequency-independent, or ΔMPn approximation. The diagrammatic linkedness and thus size-consistency of the one-particle Green's function and self-energy are demonstrated at any perturbation order on the basis of the algebraic recursions in an entirely time-independent (frequency-domain) framework.
The trimming of external lines in a one-particle Green's function to expose a self-energy diagram and the removal of reducible diagrams are also justified mathematically using the factorization theorem of Frantz and Mills. Equivalence of ΔMPn and MBGF in the diagonal and frequency-independent approximations at 1≤n≤3 is algebraically proven, also ascribing the differences at n = 4 to the so-called semi-reducible and linked-disconnected diagrams. Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison Olympia Roeva Full Text Available In this paper the problem of a parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal. Thus, traditional (gradient-based local optimization methods fail to arrive satisfied solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms are proved to be very suitable for the optimization of highly non-linear problems with many variables. Genetic algorithms can guarantee global optimality and robustness. These facts make them advantageous in use for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged very closely to the cost value but the modified algorithm is in times faster than other two. Genetic coding and united-hypercomplex systems in the models of algebraic biology. Petoukhov, Sergey V Structured alphabets of DNA and RNA in their matrix form of representations are connected with Walsh functions and a new type of systems of multidimensional numbers. This type generalizes systems of complex numbers and hypercomplex numbers, which serve as the basis of mathematical natural sciences and many technologies. The new systems of multi-dimensional numbers have interesting mathematical properties and are called in a general case as "systems of united-hypercomplex numbers" (or briefly "U-hypercomplex numbers"). They can be widely used in models of multi-parametrical systems in the field of algebraic biology, artificial life, devices of biological inspired artificial intelligence, etc. In particular, an application of U-hypercomplex numbers reveals hidden properties of genetic alphabets under cyclic permutations in their doublets and triplets. A special attention is devoted to the author's hypothesis about a multi-linguistic in DNA-sequences in a relation with an ensemble of U-numerical sub-alphabets. Genetic multi-linguistic is considered as an important factor to provide noise-immunity properties of the multi-channel genetic coding. Our results attest to the conformity of the algebraic properties of the U-numerical systems with phenomenological properties of the DNA-alphabets and with the complementary device of the double DNA-helix. It seems that in the modeling field of algebraic biology the genetic-informational organization of living bodies can be considered as a set of united-hypercomplex numbers in some association with the famous slogan of Pythagoras "the numbers rule the world". 
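The genetic-algorithm abstract above treats parameter estimation as a stochastic global search over a nonlinear dynamic model. A minimal sketch of that workflow is shown below for a hypothetical two-parameter logistic growth model fitted to synthetic noisy data with a plain generational GA; the actual study estimates six parameters of an E. coli fermentation model and compares simple, modified and multi-population GA variants, none of which are reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def simulate(mu, K, x0=0.05, dt=0.1, steps=100):
    """Explicit-Euler solution of the toy growth model dx/dt = mu*x*(1 - x/K)."""
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        x[i + 1] = x[i] + dt * mu * x[i] * (1.0 - x[i] / K)
    return x

TRUE = (0.6, 4.0)                                        # "unknown" parameters
x_obs = simulate(*TRUE) + rng.normal(0.0, 0.02, 101)     # synthetic noisy measurements

def fitness(params):
    mu, K = params
    if mu <= 0.0 or K <= 0.0:
        return -np.inf
    return -np.sum((simulate(mu, K) - x_obs) ** 2)       # negative sum of squared errors

POP, GENS = 40, 60
pop = rng.uniform([0.05, 0.5], [2.0, 10.0], size=(POP, 2))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmax()].copy()]              # elitism: keep the best individual
    while len(new_pop) < POP:
        i, j = rng.integers(POP, size=2)
        p1 = pop[i] if scores[i] > scores[j] else pop[j]  # tournament selection, size 2
        i, j = rng.integers(POP, size=2)
        p2 = pop[i] if scores[i] > scores[j] else pop[j]
        w = rng.random()
        child = w * p1 + (1.0 - w) * p2                   # blend crossover
        child = child + rng.normal(0.0, 0.05, size=2)     # Gaussian mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.array([fitness(ind) for ind in pop]).argmax()]
print("true (mu, K):", TRUE, "  GA estimate:", np.round(best, 3))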
Copyright © 2017 Elsevier B.V. All rights reserved. Numerical algebraic geometry for model selection and its application to the life sciences Gross, Elizabeth Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology. The quantum Rabi model and Lie algebra representations of sl2 Wakayama, Masato; Yamasaki, Taishi The aim of the present paper is to understand the spectral problem of the quantum Rabi model in terms of Lie algebra representations of sl 2 (R). We define a second order element of the universal enveloping algebra U(sl 2 ) of sl 2 (R), which, through the image of a principal series representation of sl 2 (R), provides a picture equivalent to the quantum Rabi model drawn by confluent Heun differential equations. By this description, in particular, we give a representation theoretic interpretation of the degenerate part of the spectrum (i.e., Judd's eigenstates) of the Rabi Hamiltonian due to Kuś in 1985, which is a part of the exceptional spectrum parameterized by integers. We also discuss the non-degenerate part of the exceptional spectrum of the model, in addition to the Judd eigenstates, from a viewpoint of infinite dimensional irreducible submodules (or subquotients) of the non-unitary principal series such as holomorphic discrete series representations of sl 2 (R). (paper) Algebraic curves and cryptography Murty, V Kumar It is by now a well-known paradigm that public-key cryptosystems can be built using finite Abelian groups and that algebraic geometry provides a supply of such groups through Abelian varieties over finite fields. Of special interest are the Abelian varieties that are Jacobians of algebraic curves. All of the articles in this volume are centered on the theme of point counting and explicit arithmetic on the Jacobians of curves over finite fields. The topics covered include Schoof's ℓ-adic point counting algorithm, the p-adic algorithms of Kedlaya and Denef-Vercauteren, explicit arithmetic on New Parallel Algorithms for Landscape Evolution Model Jin, Y.; Zhang, H.; Shi, Y. Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of drainage area for each node, which needs a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net.
One algorithm handles the partition of grid with traditional methods and applies an efficient global reduction algorithm to do the computation of drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which partitions the nodes in catchments between processes first, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take the advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT. Jordan algebras versus C*- algebras Stormer, E. The axiomatic formulation of quantum mechanics and the problem of whether the observables form self-adjoint operators on a Hilbert space, are discussed. The relation between C*- algebras and Jordan algebras is studied using spectral theory. (P.D.) Process Algebra and Markov Chains Brinksma, Hendrik; Hermanns, H.; Brinksma, Hendrik; Hermanns, H.; Katoen, Joost P. This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study Brinksma, E.; Hermanns, H.; Brinksma, E.; Hermanns, H.; Katoen, J.P. Lorentz invariant noncommutative algebra for cosmological models coupled to a perfect fluid Abreu, Everton M.C.; Marcial, Mateus V.; Mendes, Albert C.R.; Oliveira, Wilson [Universidade Federal Rural do Rio de Janeiro (UFRRJ), Seropedica, RJ (Brazil); Universidade Federal de Juiz de Fora, MG (Brazil) Full text: In current theoretical physics there is a relevant number of theoretical investigations that lead to believe that at the first moments of our Universe, the geometry was not commutative and the dominating physics at that time was ruled by the laws of noncommutative (NC) geometry. Therefore, the idea is that the physics of the early moments can be constructed based on these concepts. The first published work using the idea of a NC spacetime were carried out by Snyder who believed that NC principles could make the quantum field theory infinities disappear. However, it did not occur and Snyder's ideas were put to sleep for a long time. The main modern motivations that rekindle the investigation about NC field theories came from string theory and quantum gravity. In the context of quantum mechanics for example, R. Banerjee discussed how NC structures appear in planar quantum mechanics providing a useful way for obtaining them. The analysis was based on the NC algebra used in planar quantum mechanics that was originated from 't Hooft's analysis on dissipation and quantization. In this work we carry out a NC algebra analysis of the Friedmann-Robert-Walker model, coupled to a perfect fluid and in the presence of a cosmological constant. The classical field equations are modified, by the introduction of a shift operator, in order to introduce noncommutativity in these models. (author) Equations of motion for a spectrum-generating algebra: Lipkin-Meshkov-Glick model Rosensteel, G; Rowe, D J; Ho, S Y For a spectrum-generating Lie algebra, a generalized equations-of-motion scheme determines numerical values of excitation energies and algebra matrix elements. 
In the approach to the infinite particle number limit or, more generally, whenever the dimension of the quantum state space is very large, the equations-of-motion method may achieve results that are impractical to obtain by diagonalization of the Hamiltonian matrix. To test the method's effectiveness, we apply it to the well-known Lipkin-Meshkov-Glick (LMG) model to find its low-energy spectrum and associated generator matrix elements in the eigenenergy basis. When the dimension of the LMG representation space is 10 6 , computation time on a notebook computer is a few minutes. For a large particle number in the LMG model, the low-energy spectrum makes a quantum phase transition from a nondegenerate harmonic vibrator to a twofold degenerate harmonic oscillator. The equations-of-motion method computes critical exponents at the transition point Templates for Linear Algebra Problems Bai, Z.; Day, D.; Demmel, J.; Dongarra, J.; Gu, M.; Ruhe, A.; Vorst, H.A. van der The increasing availability of advanced-architecture computers is having a very significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra, in particular, the solution of linear systems of equations and
The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions. Model based development of engine control algorithms Dekker, H.J.; Sturm, W.L. Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed Algorithms and Models for the Web Graph Gleich, David F.; Komjathy, Julia; Litvak, Nelli This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee MATLAB matrix algebra Pérez López, César MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Matrix Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. Starting with a look at symbolic and numeric variables, with an emphasis on vector and matrix variables, you will go on to examine functions and operations that support vectors and matrices as arguments, including those based on analytic parent functions. Computational methods for finding eigenvalues and eigenvectors of matrices are detailed, leading to various matrix decompositions. Applications such as change of bases, the classification of quadratic forms and ... Dynamic Airspace Management - Models and Algorithms Cheng, Peng; Geng, Rui This chapter investigates the models and algorithms for implementing the concept of Dynamic Airspace Management. Three models are discussed. The first two models are about how to use or adjust air routes dynamically in order to speed up air traffic flow and reduce delay. The third model gives a way to dynamically generate the optimal sector configuration for an air traffic control center to both balance the controller's workload and save control resources. The first model, called the Dynami... Optimization in engineering models and algorithms Sioshansi, Ramteen This textbook covers the fundamentals of optimization, including linear, mixed-integer linear, nonlinear, and dynamic optimization techniques, with a clear engineering focus. It carefully describes classical optimization models and algorithms using an engineering problem-solving perspective, and emphasizes modeling issues using many real-world examples related to a variety of application areas. Providing an appropriate blend of practical applications and optimization theory makes the text useful to both practitioners and students, and gives the reader a good sense of the power of optimization and the potential difficulties in applying optimization to modeling real-world systems.
The book is intended for undergraduate and graduate-level teaching in industrial engineering and other engineering specialties. It is also of use to industry practitioners, due to the inclusion of real-world applications, opening the door to advanced courses on both modeling and algorithm development within the industrial engineering ... The Einstein action for algebras of matrix valued functions - Toy models Hajac, P.M. Two toy models are considered within the framework of noncommutative differential geometry. In the first one, the Einstein action of the Levi-Civita connection is computed for the algebra of matrix valued functions on a torus. It is shown that, assuming some constraints on the metric, this action splits into a classical-like, a quantum-like and a mixed term. In the second model, an analogue of the Palatini method of variation is applied to obtain critical points of the Einstein action functional for M 4 (R). It is pointed out that a solution to the Palatini variational problem is not necessarily a Levi-Civita connection. In this model, no additional assumptions regarding metrics are made. (author). 14 refs Comparing Cognitive Models of Domain Mastery and Task Performance in Algebra: Validity Evidence for a State Assessment Warner, Zachary B. This study compared an expert-based cognitive model of domain mastery with student-based cognitive models of task performance for Integrated Algebra. Interpretations of student test results are limited by experts' hypotheses of how students interact with the items. In reality, the cognitive processes that students use to solve each item may be… An evolutionary algorithm for model selection Bicker, Karl [CERN, Geneva (Switzerland); Chung, Suh-Urk; Friedrich, Jan; Grube, Boris; Haas, Florian; Ketzer, Bernhard; Neubert, Sebastian; Paul, Stephan; Ryabchikov, Dimitry [Technische Univ. Muenchen (Germany) When performing partial-wave analyses of multi-body final states, the choice of the fit model, i.e. the set of waves to be used in the fit, can significantly alter the results of the partial wave fit. Traditionally, the models were chosen based on physical arguments and by observing the changes in log-likelihood of the fits. To reduce possible bias in the model selection process, an evolutionary algorithm was developed based on a Bayesian goodness-of-fit criterion which takes into account the model complexity. Starting from systematically constructed pools of waves which contain significantly more waves than the typical fit model, the algorithm yields a model with an optimal log-likelihood and with a number of partial waves which is appropriate for the number of events in the data.
Partial waves with small contributions to the total intensity are penalized and likely to be dropped during the selection process, as are models were excessive correlations between single waves occur. Due to the automated nature of the model selection, a much larger part of the model space can be explored than would be possible in a manual selection. In addition the method allows to assess the dependence of the fit result on the fit model which is an important contribution to the systematic uncertainty. Conceptual Explanation for the Algebra in the Noncommutative Approach to the Standard Model Chamseddine, Ali H.; Connes, Alain The purpose of this Letter is to remove the arbitrariness of the ad hoc choice of the algebra and its representation in the noncommutative approach to the standard model, which was begging for a conceptual explanation. We assume as before that space-time is the product of a four-dimensional manifold by a finite noncommmutative space F. The spectral action is the pure gravitational action for the product space. To remove the above arbitrariness, we classify the irreducible geometries F consistent with imposing reality and chiral conditions on spinors, to avoid the fermion doubling problem, which amounts to have total dimension 10 (in the K-theoretic sense). It gives, almost uniquely, the standard model with all its details, predicting the number of fermions per generation to be 16, their representations and the Higgs breaking mechanism, with very little input Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach. Sulis, William H Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics.Sadly, mainstream psychology and psychiatry still cling to linear correlation based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems. Study of the 'non-Abelian' current algebra of a non-linear σ-model Ghosh, Subir A particular form of non-linear σ-model, having a global gauge invariance, is studied. The detailed discussion on current algebra structures reveals the non-Abelian nature of the invariance, with field dependent structure functions. Reduction of the field theory to a point particle framework yields a non-linear harmonic oscillator, which is a special case of similar models studied before in [J.F. Carinena et al., Nonlinearity 17 (2004) 1941, math-ph/0406002; J.F. Carinena et al., in: Proceedings of 10th International Conference in Modern Group Analysis, Larnaca, Cyprus, 2004, p. 39, math-ph/0505028; J.F. Carinena et al., Rep. Math. 
Phys. 54 (2004) 285, hep-th/0501106]. The connection with non-commutative geometry is also established Form factors in sinh- and sine-Gordon models, deformed Virasoro algebra, Macdonald polynomials and resonance identities Lashkevich, Michael; Pugai, Yaroslav We continue the study of form factors of descendant operators in the sinh- and sine-Gordon models in the framework of the algebraic construction proposed in [1]. We find the algebraic construction to be related to a particular limit of the tensor product of the deformed Virasoro algebra and a suitably chosen Heisenberg algebra. To analyze the space of local operators in the framework of the form factor formalism we introduce screening operators and construct singular and cosingular vectors in the Fock spaces related to the free field realization of the obtained algebra. We show that the singular vectors are expressed in terms of the degenerate Macdonald polynomials with rectangular partitions. We study the matrix elements that contain a singular vector in one chirality and a cosingular vector in the other chirality and find them to lead to the resonance identities already known in the conformal perturbation theory. Besides, we give a new derivation of the equation of motion in the sinh-Gordon theory, and a new representation for conserved currents Separable algebras Ford, Timothy J This book presents a comprehensive introduction to the theory of separable algebras over commutative rings. After a thorough introduction to the general theory, the fundamental roles played by separable algebras are explored. For example, Azumaya algebras, the henselization of local rings, and Galois theory are rigorously introduced and treated. Interwoven throughout these applications is the important notion of étale algebras. Essential connections are drawn between the theory of separable algebras and Morita theory, the theory of faithfully flat descent, cohomology, derivations, differentials, reflexive lattices, maximal orders, and class groups. The text is accessible to graduate students who have finished a first course in algebra, and it includes necessary foundational material, useful exercises, and many nontrivial examples. Operators and representation theory canonical models for algebras of operators arising in quantum mechanics Jorgensen, Palle E T Historically, operator theory and representation theory both originated with the advent of quantum mechanics. The interplay between the subjects has been and still is active in a variety of areas.This volume focuses on representations of the universal enveloping algebra, covariant representations in general, and infinite-dimensional Lie algebras in particular. It also provides new applications of recent results on integrability of finite-dimensional Lie algebras. As a central theme, it is shown that a number of recent developments in operator algebras may be handled in a particularly e Elements of algebraic coding systems Cardoso da Rocha, Jr, Valdemar Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. 
Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams' identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f... Parameter Estimation and Prediction of a Nonlinear Storage Model: an algebraic approach Doeswijk, T.G.; Keesman, K.J. Generally, parameters that are nonlinear in system models are estimated by nonlinear least-squares optimization algorithms. In this paper, if a nonlinear discrete-time model has a polynomial quotient structure in input, output, and parameters, a method is proposed to re-parameterize the model such Reachability for Finite-State Process Algebras Using Static Analysis Skrypnyuk, Nataliya; Nielson, Flemming In this work we present an algorithm for solving the reachability problem in finite systems that are modelled with process algebras. Our method uses Static Analysis, in particular, Data Flow Analysis, of the syntax of a process algebraic system with multi-way synchronisation. The results of the Data Flow Analysis are used in order to "cut off" some of the branches in the reachability analysis that are not important for determining whether or not a state is reachable. In this way, it is possible for our reachability algorithm to avoid building large parts of the system altogether and still solve the reachability problem in a precise way. Markov chains models, algorithms and applications Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data. This book consists of eight chapters. Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods Genetic Algorithm Based Microscale Vehicle Emissions Modelling Sicong Zhu Full Text Available There is a need to match emission estimation accuracy with the outputs of transport models. The overall error rate in long-term traffic forecasts resulting from strategic transport models is likely to be significant. Microsimulation models, whilst high-resolution in nature, may have similar measurement errors if they use the outputs of strategic models to obtain traffic demand predictions. At the microlevel, this paper discusses the limitations of existing emissions estimation approaches. Emission models for predicting emission pollutants other than CO2 are proposed. A genetic algorithm approach is adopted to select the predicting variables for the black box model. The approach is capable of solving combinatorial optimization problems. Overall, the emission prediction results reveal that the proposed new models outperform conventional equations in terms of accuracy and robustness. Modelling Evolutionary Algorithms with Stochastic Differential Equations.
Heredia, Jorge Pérez There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm. New insights in the standard model of quantum physics in Clifford algebra Daviau, Claude Why Clifford algebra is the true mathematical frame of the standard model of quantum physics. Why the time is everywhere oriented and why the left side shall never become the right side. Why positrons have also a positive proper energy. Why there is a Planck constant. Why a mass is not a charge. Why a system of particles implies the existence of the inverse of the individual wave function. Why a fourth neutrino should be a good candidate for black matter. Why concepts as "parity� and "reverse� are essential. Why the electron of a H atom is in only one bound state. Plus 2 very remarkable identities, and the invariant wave equations that they imply. Plus 3 generations and 4 neutrinos. Plus 5 dimensions in the space and 6 dimensions in space-time… Approach method of the solutions of algebraic models of the N body problem Dufour, M. We have studied a class of algebraic eigenvalue problems that generate tridiagonal matrices. The Lipkin Hamiltonian was chosen as representative. Three methods have been implemented, whose extension to more general many body problems seems possible i) Degenerate Linked Cluster Theory (LCT), which disregards special symmetries of the interaction and defines a hierarchy of approximation based on model spaces at fixed number of particle-hole excitation of the unperturbed Hamiltonian. The method works for small perturbations but does not yield a complete description. ii) A new linearization method that replaces the matrix to be diagonalized by local (tangent) approximations by harmonic matrices. This method generalizes LCT and is a posteriori reminiscent of semi-classical ones. However of is simpler, more precise and yields a complete description of spectra. iii) A global way to characterize spectra based on Gershgorine-Hadamard disks [fr On the algebraic theory of kink sectors: Application to quantum field theory models and collision theory Schlingemann, D. 
Several two dimensional quantum field theory models have more than one vacuum state. An investigation of super selection sectors in two dimensions from an axiomatic point of view suggests that there should be also states, called soliton or kink states, which interpolate different vacua. Familiar quantum field theory models, for which the existence of kink states have been proven, are the Sine-Gordon and the φ 4 2 -model. In order to establish the existence of kink states for a larger class of models, we investigate the following question: Which are sufficient conditions a pair of vacuum states has to fulfill, such that an interpolating kink state can be constructed? We discuss the problem in the framework of algebraic quantum field theory which includes, for example, the P(φ) 2 -models. We identify a large class of vacuum states, including the vacua of the P(φ) 2 -models, the Yukawa 2 -like models and special types of Wess-Zumino models, for which there is a natural way to construct an interpolating kink state. In two space-time dimensions, massive particle states are kink states. We apply the Haag-Ruelle collision theory to kink sectors in order to analyze the asymptotic scattering states. We show that for special configurations of n kinks the scattering states describe n freely moving non interacting particles. (orig.) Killing scalar of non-linear σ-model on G/H realizing the classical exchange algebra Aoyama, Shogo The Poisson brackets for non-linear σ-models on G/H are set up on the light-like plane. A quantity which transforms irreducibly by the Killing vectors, called Killing scalar, is constructed in an arbitrary representation of G. It is shown to satisfy the classical exchange algebra Algebraic multigrid preconditioning within parallel finite-element solvers for 3-D electromagnetic modelling problems in geophysics Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG) that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers and the wave front algorithm to create groups, which are used for a coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with biconjugate gradient stabilized method. The results have shown that, the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, when it comes to cases in which other preconditioners succeed to converge to a desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code-up to an order of magnitude. 
Furthermore, the tests have confirmed that our AMG scheme ensures a grid-independent rate of convergence, as well as improvement in convergence regardless of how big local mesh refinements are. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and combine with different iterative methods. Finally, it has proved to be very practical and efficient in the ... The Moyal momentum algebra applied to θ-deformed 2d conformal models and KdV-hierarchies Boulahoual, A.; Sedra, M.B. The properties of the Das-Popowicz Moyal momentum algebra that we introduce in hep-th/0207242 are reexamined in detail and used to discuss some aspects of integrable models and 2d conformal field theories. Among the results presented we set up some useful notational conventions which lead to extracting some non-trivial properties of the Moyal momentum algebra. We use the particular sub-algebra $sl_n$-$\widetilde{\Sigma}_n^{(0,n)}$ to construct the $sl_2$-Liouville conformal model $\partial\bar{\partial}\Phi = \frac{2}{\theta}\,e^{-\frac{1}{\theta}\Phi}$ and its $sl_3$-Toda extension $\partial\bar{\partial}\Phi_1 = A\,e^{-\frac{1}{2\theta}(\Phi_1 + \frac{1}{2}\Phi_2)}$ and $\partial\bar{\partial}\Phi_2 = B\,e^{-\frac{1}{2\theta}(\Phi_1 + 2\Phi_2)}$. We also show that the central charge, a la Feigin-Fuchs, associated to the spin-2 conformal current of the θ-Liouville model is given by $c_\theta = 1 + 24\theta^2$. Moreover, the results obtained for the Das-Popowicz Mm algebra are applied to study systematically some properties of the Moyal KdV and Boussinesq hierarchies, generalizing some known results. We also discuss the primarity condition of conformal $w_\theta$-currents and interpret this condition as being a dressing gauge symmetry in the Moyal momentum space. Some computations related to the dressing gauge group are explicitly presented. (author) The algebra of the general Markov model on phylogenetic trees and networks. Sumner, J G; Holland, B R; Jarvis, P D It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the "splitting" operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.
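To make the branching ("splitting") construction in the general Markov model abstract above concrete, here is a minimal illustrative sketch, not code from the paper: it computes the joint leaf-pattern distribution of a two-state general Markov model on a two-leaf tree. The root distribution pi and the edge transition matrices M1 and M2 are made-up values chosen only for illustration.

import numpy as np

# Toy two-state general Markov model on a two-leaf tree (a "cherry").
# pi is the root distribution; M1, M2 are arbitrary (non-symmetric) row-stochastic
# transition matrices, one per edge. All numbers are illustrative assumptions.
pi = np.array([0.6, 0.4])
M1 = np.array([[0.9, 0.1],
               [0.3, 0.7]])
M2 = np.array([[0.8, 0.2],
               [0.25, 0.75]])

# Joint probability of observing states (x1, x2) at the two leaves:
# P(x1, x2) = sum_r pi[r] * M1[r, x1] * M2[r, x2]
P = np.einsum('r,ri,rj->ij', pi, M1, M2)
print(P)          # 2x2 pattern distribution
print(P.sum())    # sums to 1.0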
Linear Algebra and Smarandache Linear Algebra Vasantha, Kandasamy The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p... Computational algebraic geometry for statistical modeling FY09Q2 progress. Thompson, David C.; Rojas, Joseph Maurice; Pebay, Philippe Pierre This is a progress report on polynomial system solving for statistical modeling. This is a progress report on polynomial system solving for statistical modeling. This quarter we have developed our first model of shock response data and an algorithm for identifying the chamber cone containing a polynomial system in n variables with n+k terms within polynomial time - a significant improvement over previous algorithms, all having exponential worst-case complexity. We have implemented and verified the chamber cone algorithm for n+3 and are working to extend the implementation to handle arbitrary k. Later sections of this report explain chamber cones in more detail; the next section provides an overview of the project and how the current progress fits into it. DISTING: A web application for fast algorithmic computation of alternative indistinguishable linear compartmental models. Davidson, Natalie R; Godfrey, Keith R; Alquaddoomi, Faisal; Nola, David; DiStefano, Joseph J We describe and illustrate use of DISTING, a novel web application for computing alternative structurally identifiable linear compartmental models that are input-output indistinguishable from a postulated linear compartmental model. Several computer packages are available for analysing the structural identifiability of such models, but DISTING is the first to be made available for assessing indistinguishability. The computational algorithms embedded in DISTING are based on advanced versions of established geometric and algebraic properties of linear compartmental models, embedded in a user-friendly graphic model user interface. Novel computational tools greatly speed up the overall procedure. These include algorithms for Jacobian matrix reduction, submatrix rank reduction, and parallelization of candidate rank computations in symbolic matrix analysis. The application of DISTING to three postulated models with respectively two, three and four compartments is given. The 2-compartment example is used to illustrate the indistinguishability problem; the original (unidentifiable) model is found to have two structurally identifiable models that are indistinguishable from it. The 3-compartment example has three structurally identifiable indistinguishable models. It is found from DISTING that the four-compartment example has five structurally identifiable models indistinguishable from the original postulated model. This example shows that care is needed when dealing with models that have two or more compartments which are neither perturbed nor observed, because the numbering of these compartments may be arbitrary. DISTING is universally and freely available via the Internet. It is easy to use and circumvents tedious and complicated algebraic analysis previously done by hand. Copyright © 2017 Elsevier B.V. All rights reserved. 
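As a rough companion to the DISTING abstract above: input-output indistinguishability of linear compartmental models is commonly checked by comparing transfer functions. The sketch below is an assumed, generic two-compartment example, not the DISTING algorithm or its interface; it derives the transfer function of a postulated model symbolically, and two models are indistinguishable when their transfer functions agree as rational functions of s for all admissible parameter values.

import sympy as sp

s, k01, k12, k21 = sp.symbols('s k01 k12 k21', positive=True)

# Hypothetical 2-compartment model: input and observation in compartment 1,
# exchange 1->2 via k21 and 2->1 via k12, elimination k01 from compartment 1.
A = sp.Matrix([[-(k01 + k21), k12],
               [k21, -k12]])
B = sp.Matrix([1, 0])
C = sp.Matrix([[1, 0]])

# Transfer function H(s) = C (sI - A)^{-1} B; its coefficients are the
# input-output invariants that indistinguishable models must share.
H = sp.cancel(sp.simplify((C * (s * sp.eye(2) - A).inv() * B)[0]))
print(H)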
El desempeño del docente en el proceso de desarrollo de habilidades de trabajo con algoritmos en la disciplina Álgebra Lineal / Teachers' performance and the process of developing skills to work with algorithms in Linear Algebra Ivonne Burguet Lago Full Text Available ABSTRACT The paper describes a proposal of professional pedagogical performance tests to assess teachers' role in the process of developing the skill of working with algorithms in Linear Algebra. It aims at devising a testing tool to assess teachers' performance in the skill-developing process. This tool is a finding of the Cuban theory of Advanced Education, systematically used in recent years. The findings include the test design and the illustration of its use in a sample of 22 Linear Algebra teachers during the first term of the 2017-2018 academic year in the Informatics Sciences Engineering major. Constructing canonical bases of quantized enveloping algebras Graaf, W.A. de An algorithm for computing the elements of a given weight of the canonical basis of a quantized enveloping algebra is described. Subsequently, a similar algorithm is presented for computing the canonical basis of a finite-dimensional module. Pyramid algorithms as models of human cognition Pizlo, Zygmunt; Li, Zheng There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of those mechanisms is hierarchical clustering of information: visual images, spatial relations, and states as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle. Modeling Trees with a Space Colonization Algorithm Morell Higueras, Marc This TFG covers the implementation of a procedural generation algorithm that builds a structure reminiscent of that of a temperate-climate tree, together with the conversion of that structure into a three-dimensional model, accompanied by a tool to visualize the result and export it Genetic Algorithms Principles Towards Hidden Markov Model Nabil M. Hewahi Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMM). The problem appears when experts assign probability values for an HMM: they use only some limited inputs. The assigned probability values might not be accurate to serve in other cases related to the same domain.
We introduce an approach based on GAs to find out the suitable probability values for the HMM to be mostly correct in more cases than what have been used to assign the probability values. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography. Cierniak, Robert; Lorent, Anna The main aim of this paper is to investigate properties of our originally formulated statistical model-based iterative approach applied to the image reconstruction from projections problem which are related to its conditioning, and, in this manner, to prove a superiority of this approach over ones recently used by other authors. The reconstruction algorithm based on this conception uses a maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS V. E. Marley Full Text Available Summary. The concept of algorithmic models appeared from the algorithmic approach in which the simulated object, the phenomenon appears in the form of process, subject to strict rules of the algorithm, which placed the process of operation of the facility. Under the algorithmic model is the formalized description of the scenario subject specialist for the simulated process, the structure of which is comparable with the structure of the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. To represent the structure of algorithmic models used algorithmic network. Normally, they were defined as loaded finite directed graph, the vertices which are mapped to operators and arcs are variables, bound by operators. The language of algorithmic networks has great features, the algorithms that it can display indifference the class of all random algorithms. In existing systems, automation modeling based on algorithmic nets, mainly used by operators working with real numbers. Although this reduces their ability, but enough for modeling a wide class of problems related to economy, environment, transport, technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many counting systems, network graphs, however, the monitoring process based analysis of gaps and terms of graphs, no analysis of prediction execution schedule or schedules. The library is designed to build similar predictive models. Specifying source data to obtain a set of projections from which to choose one and take it for a new plan. Abstract algebra Garrett, Paul B Designed for an advanced undergraduate- or graduate-level course, Abstract Algebra provides an example-oriented, less heavily symbolic approach to abstract algebra. The text emphasizes specifics such as basic number theory, polynomials, finite fields, as well as linear and multilinear algebra. 
This classroom-tested, how-to manual takes a more narrative approach than the stiff formalism of many other textbooks, presenting coherent storylines to convey crucial ideas in a student-friendly, accessible manner. An unusual feature of the text is the systematic characterization of objects by universal Kolman, Bernard College Algebra, Second Edition is a comprehensive presentation of the fundamental concepts and techniques of algebra. The book incorporates some improvements from the previous edition to provide a better learning experience. It provides sufficient materials for use in the study of college algebra. It contains chapters that are devoted to various mathematical concepts, such as the real number system, the theory of polynomial equations, exponential and logarithmic functions, and the geometric definition of each conic section. Progress checks, warnings, and features are inserted. Every chapter c Living on the edge: a toy model for holographic reconstruction of algebras with centers Donnelly, William; Marolf, Donald; Michel, Ben; Wien, Jason [Department of Physics, University of California,Santa Barbara, CA 93106 (United States) We generalize the Pastawski-Yoshida-Harlow-Preskill (HaPPY) holographic quantum error-correcting code to provide a toy model for bulk gauge fields or linearized gravitons. The key new elements are the introduction of degrees of freedom on the links (edges) of the associated tensor network and their connection to further copies of the HaPPY code by an appropriate isometry. The result is a model in which boundary regions allow the reconstruction of bulk algebras with central elements living on the interior edges of the (greedy) entanglement wedge, and where these central elements can also be reconstructed from complementary boundary regions. In addition, the entropy of boundary regions receives both Ryu-Takayanagi-like contributions and further corrections that model the ((δArea)/(4G{sub N})) term of Faulkner, Lewkowycz, and Maldacena. Comparison with Yang-Mills theory then suggests that this ((δArea)/(4G{sub N})) term can be reinterpreted as a part of the bulk entropy of gravitons under an appropriate extension of the physical bulk Hilbert space. Algebraic stress model for axial flow in a bare rod-bundle de Lemos, M.J.S. The problem of predicting transport properties for momentum and heat across the boundaries of interconnected channels has been the subject of many investigations. In the particular case of axial flow through rod-bundles, transport coefficients for channel faces aligned with rod centers are known to be considerably higher than those calculated by simple isotropic theories. And yet, it was been found that secondary flows play only a minor role in this overall transport, being turbulence highly enhanced across that hypothetical surface. In order to numerically predict the correct amount of the quantity being transported, the approach taken by many investigators was then to artificially increase the diffusion coefficient obtained via a simple isopropic theory (usually the standard k-ε model) and numerically match the correct experimentally observed mixing rates. The present paper reports an attempt to describe the turbulent stresses by means of an Algebraic Stress Model for turbulence. Relative turbulent kinetic energy distribution in all three directions are presented and compared with experiments in a square lattice. The strong directional dependence of transport terms are then obtained via a model for the Reynolds stresses. 
The results identify a need for a better representation of the mean-flow field part of the pressure-strain correlation term. Calculus domains modelled using an original bool algebra based on polygons Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T. Analytical and numerical computer based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a bool algebra which uses solid and hollow polygons. The general calculus relations of the geometrical characteristics that are widely used in mechanical engineering are tested using several shapes of the calculus domain in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also tests the results of several commercial CAD software applications which are able to compute the geometrical characteristics, interesting conclusions being drawn. The tests were also targeting the accuracy of the results vs. the number of nodes on the curved boundary of the cross section. The study required the development of an original software consisting of more than 1700 computer code lines. In comparison with other calculus methods, the discretization using convex polygons is a simpler approach. Moreover, this method doesn't lead to large numbers as the spline approximation did, in that case special software packages being required in order to offer multiple, arbitrary precision. The knowledge resulted from this study may be used to develop complex computer based models in engineering. Motion Model Employment using interacting Motion Model Algorithm Hussain, Dil Muhammad Akbar The paper presents a simulation study to track a maneuvering target using a selective approach in choosing Interacting Multiple Models (IMM) algorithm to provide a wider coverage to track such targets. Initially, there are two motion models in the system to track a target. Probability of each m... Models and Algorithms for Tracking Target with Coordinated Turn Motion Xianghui Yuan Full Text Available Tracking target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: coordinated turn (CT) model with known turn rate, augmented coordinated turn (ACT) model with Cartesian velocity, ACT model with polar velocity, CT model using a kinematic constraint, and maneuver centered circular motion model.
Then, in the single model tracking framework, the tracking algorithms for the last four models are compared and the suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple models (MM framework, the algorithm based on expectation maximization (EM algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM algorithm, the EM algorithm shows its effectiveness. Invariants of triangular Lie algebras Boyko, Vyacheslav; Patera, Jiri; Popovych, Roman Triangular Lie algebras are the Lie algebras which can be faithfully represented by triangular matrices of any finite size over the real/complex number field. In the paper invariants ('generalized Casimir operators') are found for three classes of Lie algebras, namely those which are either strictly or non-strictly triangular, and for so-called special upper triangular Lie algebras. Algebraic algorithm of Boyko et al (2006 J. Phys. A: Math. Gen.39 5749 (Preprint math-ph/0602046)), developed further in Boyko et al (2007 J. Phys. A: Math. Theor.40 113 (Preprint math-ph/0606045)), is used to determine the invariants. A conjecture of Tremblay and Winternitz (2001 J. Phys. A: Math. Gen.34 9085), concerning the number of independent invariants and their form, is corroborated Waterloo Workshop on Computer Algebra Zima, Eugene; WWCA-2016; Advances in computer algebra : in honour of Sergei Abramov's' 70th birthday This book discusses the latest advances in algorithms for symbolic summation, factorization, symbolic-numeric linear algebra and linear functional equations. It presents a collection of papers on original research topics from the Waterloo Workshop on Computer Algebra (WWCA-2016), a satellite workshop of the International Symposium on Symbolic and Algebraic Computation (ISSAC'2016), which was held at Wilfrid Laurier University (Waterloo, Ontario, Canada) on July 23–24, 2016.  This workshop and the resulting book celebrate the 70th birthday of Sergei Abramov (Dorodnicyn Computing Centre of the Russian Academy of Sciences, Moscow), whose highly regarded and inspirational contributions to symbolic methods have become a crucial benchmark of computer algebra and have been broadly adopted by many Computer Algebra systems. Nonlinear model predictive control theory and algorithms Grüne, Lars This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T... 
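The nonlinear model predictive control (NMPC) text described above is built around receding-horizon optimal control. As a hedged, self-contained illustration of the basic loop only, not taken from the book or its accompanying MATLAB/C++ software, one can sketch it as follows; the toy dynamics, cost weights and horizon length are all assumptions made for the example.

import numpy as np
from scipy.optimize import minimize

# Toy discrete-time nonlinear system x_{k+1} = f(x_k, u_k); purely illustrative.
def f(x, u):
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (u - np.sin(x[0]))])

def horizon_cost(u_seq, x0, N):
    x, J = np.array(x0, dtype=float), 0.0
    for k in range(N):
        x = f(x, u_seq[k])
        J += x @ x + 0.01 * u_seq[k] ** 2      # quadratic stage cost
    return J

x, N = np.array([1.0, 0.0]), 10
for step in range(30):                          # closed loop (receding horizon)
    res = minimize(horizon_cost, np.zeros(N), args=(x, N),
                   method='SLSQP', bounds=[(-2.0, 2.0)] * N)
    x = f(x, res.x[0])                          # apply only the first input
print(x)                                        # state ideally ends up near the origin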
A review of ocean chlorophyll algorithms and primary production models Li, Jingwen; Zhou, Song; Lv, Nan This paper mainly introduces the five ocean chlorophyll concentration inversion algorithm and 3 main models for computing ocean primary production based on ocean chlorophyll concentration. Through the comparison of five ocean chlorophyll inversion algorithm, sums up the advantages and disadvantages of these algorithm,and briefly analyzes the trend of ocean primary production model. Adaptive Numerical Algorithms in Space Weather Modeling Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising of several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical A new algebraic turbulence model for accurate description of airfoil flows Xiao, Meng-Juan; She, Zhen-Su We report a new algebraic turbulence model (SED-SL) based on the SED theory, a symmetry-based approach to quantifying wall turbulence. The model specifies a multi-layer profile of a stress length (SL) function in both the streamwise and wall-normal directions, which thus define the eddy viscosity in the RANS equation (e.g. a zero-equation model). After a successful simulation of flat plate flow (APS meeting, 2016), we report here further applications of the model to the flow around airfoil, with significant improvement of the prediction accuracy of the lift (CL) and drag (CD) coefficients compared to other popular models (e.g. BL, SA, etc.). Two airfoils, namely RAE2822 airfoil and NACA0012 airfoil, are computed for over 50 cases. 
The results are compared to experimental data from AGARD report, which shows deviations of CL bounded within 2%, and CD within 2 counts (10-4) for RAE2822 and 6 counts for NACA0012 respectively (under a systematic adjustment of the flow conditions). In all these calculations, only one parameter (proportional to the Karmen constant) shows slight variation with Mach number. The most remarkable outcome is, for the first time, the accurate prediction of the drag coefficient. The other interesting outcome is the physical interpretation of the multi-layer parameters: they specify the corresponding multi-layer structure of turbulent boundary layer; when used together with simulation data, the SED-SL enables one to extract physical information from empirical data, and to understand the variation of the turbulent boundary layer. A genetic algorithm for solving supply chain network design model Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A. Network design is by nature costly and optimization models play significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that in addition to producing feasible solutions, it also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm performance. Rate-control algorithms testing by using video source model Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna In this paper the method of rate control algorithms testing by the use of video source model is suggested. The proposed method allows to significantly improve algorithms testing over the big test set.......In this paper the method of rate control algorithms testing by the use of video source model is suggested. The proposed method allows to significantly improve algorithms testing over the big test set.... Entropy correlation and entanglement for mixed states in an algebraic model Hou Xiwen; Chen Jinghua; Wan Mingfang; Ma Zhongqi As an alternative with potential connections to actual experiments, other than the systems more usually used in the field of entanglement, the dynamics of entropy correlation and entanglement between two anharmonic vibrations in a well-established algebraic model, with parameters extracted from fitting to highly excited spectral experimental results for molecules H 2 O and SO 2 , is studied in terms of the linear entropy and two negativities for various initial states that are respectively taken to be the mixed density matrices of thermal states and squeezed states on each mode. For a suitable parameter in initial states the entropies in two stretches can show positive correlation or anti-correlation. And the linear entropy of each mode is positively correlated with the negativities just for the mixed-squeezed states with small parameters in H 2 O while they do not display any correlation in other cases. For the mixed-squeezed states the negativities exhibit dominantly positive correlations with an effective mutual entropy. The differences in the linear entropy and the negativities between H 2 O and SO 2 are discussed as well. 
Those are useful for molecular quantum computing and quantum information processing. Mathematical Modeling and Algebraic Technique for Resolving a Single-Producer Multi-Retailer Integrated Inventory System with Scrap Yuan-Shyi Peter Chiu; Chien-Hua Lee; Nong Pan; Singa Wang Chiu This study uses mathematical modeling along with an algebraic technique to resolve the production-distribution policy for a single-producer multi-retailer integrated inventory system with scrap in production. We assume that a product is manufactured through an imperfect production process where all nonconforming items will be picked up and scrapped in each production cycle. After the entire lot is quality assured, multiple shipments will be delivered synchronously to m different retailers in ... Algebraic entropy for algebraic maps Hone, A N W; Ragnisco, Orlando; Zullo, Federico We propose an extension of the concept of algebraic entropy, as introduced by Bellon and Viallet for rational maps, to algebraic maps (or correspondences) of a certain kind. The corresponding entropy is an index of the complexity of the map. The definition inherits the basic properties from the definition of entropy for rational maps. We give an example with positive entropy, as well as two examples taken from the theory of Bäcklund transformations. (letter) Galois Connections for Flow Algebras Filipiuk, Piotr; Terepeta, Michal Tomasz; Nielson, Hanne Riis We present a generic framework for static analysis based on flow algebras and program graphs; flow algebras are close to the approach taken by Monotone Frameworks and other classical analyses. Program graphs are often used in Model Checking to model concurrent and distributed systems. The framework allows one to induce new flow algebras... Computations in finite-dimensional Lie algebras A. M. Cohen Full Text Available This paper describes progress made in context with the construction of a general library of Lie algebra algorithms, called ELIAS (Eindhoven Lie Algebra System), within the computer algebra package GAP. A first sketch of the package can be found in Cohen and de Graaf [1]. Since then, in a collaborative effort with G. Ivanyos, the authors have continued to develop algorithms which were implemented in ELIAS by the second author. These activities are part of a bigger project, called ACELA and financed by STW, the Dutch Technology Foundation, which aims at an interactive book on Lie algebras (cf. Cohen and Meertens [2]). This paper gives a global description of the main ways in which to present Lie algebras on a computer. We focus on the transition from a Lie algebra abstractly given by an array of structure constants to a Lie algebra presented as a subalgebra of the Lie algebra of n×n matrices. We describe an algorithm typical of the structure analysis of a finite-dimensional Lie algebra: finding a Levi subalgebra of a Lie algebra. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E. In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform Fusion rules of chiral algebras Gaberdiel, M. Recently we showed that for the case of the WZW and the minimal models fusion can be understood as a certain ring-like tensor product of the symmetry algebra.
In this paper we generalize this analysis to arbitrary chiral algebras. We define the tensor product of conformal field theory in the general case and prove that it is associative and symmetric up to equivalence. We also determine explicitly the action of the chiral algebra on this tensor product. In the second part of the paper we demonstrate that this framework provides a powerful tool for calculating restrictions for the fusion rules of chiral algebras. We exhibit this for the case of the W 3 algebra and the N=1 and N=2 NS superconformal algebras. (orig.) Generalized Jaynes-Cummings model as a quantum search algorithm Romanelli, A. We propose a continuous time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time. In this frame, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm. Aeon: Synthesizing Scheduling Algorithms from High-Level Models Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal This paper describes the aeon system whose aim is to synthesize scheduling algorithms from high-level models. A eon, which is entirely written in comet, receives as input a high-level model for a scheduling application which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. A eon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms. Modelling and performance analysis of clinical pathways using the stochastic process algebra PEPA. Yang, Xian; Han, Rui; Guo, Yike; Bradley, Jeremy; Cox, Benita; Dickinson, Robert; Kitney, Richard Hospitals nowadays have to serve numerous patients with limited medical staff and equipment while maintaining healthcare quality. Clinical pathway informatics is regarded as an efficient way to solve a series of hospital challenges. To date, conventional research lacks a mathematical model to describe clinical pathways. Existing vague descriptions cannot fully capture the complexities accurately in clinical pathways and hinders the effective management and further optimization of clinical pathways. Given this motivation, this paper presents a clinical pathway management platform, the Imperial Clinical Pathway Analyzer (ICPA). By extending the stochastic model performance evaluation process algebra (PEPA), ICPA introduces a clinical-pathway-specific model: clinical pathway PEPA (CPP). ICPA can simulate stochastic behaviours of a clinical pathway by extracting information from public clinical databases and other related documents using CPP. Thus, the performance of this clinical pathway, including its throughput, resource utilisation and passage time can be quantitatively analysed. A typical clinical pathway on stroke extracted from a UK hospital is used to illustrate the effectiveness of ICPA. 
Three application scenarios are tested using ICPA: 1) redundant resources are identified and removed, thus the number of patients being served is maintained with less cost; 2) the patient passage time is estimated, providing the likelihood that patients can leave hospital within a specific period; 3) the maximum number of input patients are found, helping hospitals to decide whether they can serve more patients with the existing resource allocation. ICPA is an effective platform for clinical pathway management: 1) ICPA can describe a variety of components (state, activity, resource and constraints) in a clinical pathway, thus facilitating the proper understanding of complexities involved in it; 2) ICPA supports the performance analysis of clinical pathway, thereby assisting Visualization of logistic algorithm in Wilson model Glushchenko, A. S.; Rodin, V. A.; Sinegubov, S. V. Economic order quantity (EOQ), defined by the Wilson's model, is widely used at different stages of production and distribution of different products. It is useful for making decisions in the management of inventories, providing a more efficient business operation and thus bringing more economic benefits. There is a large amount of reference material and extensive computer shells that help solving various logistics problems. However, the use of large computer environments is not always justified and requires special user training. A tense supply schedule in a logistics model is optimal, if, and only if, the planning horizon coincides with the beginning of the next possible delivery. For all other possible planning horizons, this plan is not optimal. It is significant that when the planning horizon changes, the plan changes immediately throughout the entire supply chain. In this paper, an algorithm and a program for visualizing models of the optimal value of supplies and their number, depending on the magnitude of the planned horizon, have been obtained. The program allows one to trace (visually and quickly) all main parameters of the optimal plan on the charts. The results of the paper represent a part of the authors' research work in the field of optimization of protection and support services of ports in the Russian North. Algebraic Reconstruction of Current Dipoles and Quadrupoles in Three-Dimensional Space Takaaki Nara Full Text Available This paper presents an algebraic method for an inverse source problem for the Poisson equation where the source consists of dipoles and quadrupoles. This source model is significant in the magnetoencephalography inverse problem. The proposed method identifies the source parameters directly and algebraically using data without requiring an initial parameter estimate or iterative computation of the forward solution. The obtained parameters could be used for the initial solution in an optimization-based algorithm for further refinement. Applications of Computer Algebra Conference Martínez-Moro, Edgar The Applications of Computer Algebra (ACA) conference covers a wide range of topics from Coding Theory to Differential Algebra to Quantam Computing, focusing on the interactions of these and other areas with the discipline of Computer Algebra. This volume provides the latest developments in the field as well as its applications in various domains, including communications, modelling, and theoretical physics. 
The book will appeal to researchers and professors of computer algebra, applied mathematics, and computer science, as well as to engineers and computer scientists engaged in research and development. Applied matrix algebra in the statistical sciences Basilevsky, Alexander This comprehensive text offers teachings relevant to both applied and theoretical branches of matrix algebra and provides a bridge between linear algebra and statistical models. Appropriate for advanced undergraduate and graduate students. 1983 edition. Exactly solvable model of transitional nuclei based on dual algebraic structure for the three level pairing model in the framework of sdg interacting boson model Jafarizadeh, M. A.; Ranjbar, Z.; Fouladi, N.; Ghapanvari, M. In this paper, a successful algebraic method based on the dual algebraic structure for the three-level pairing model in the framework of the sdg IBM is proposed for transitional nuclei which show transitional behavior from the spherical to the gamma-unstable quantum shape phase transition. In this method the complicated sdg Hamiltonian, which is a three-level pairing Hamiltonian, is determined easily via the exactly solvable method. This description provides a better interpretation of some observables, such as B(E4) in nuclei, which exhibits the necessity of including the g boson in the sd IBM, while B(E4) cannot be explained in the sd boson model. Some observables such as energy levels, B(E2), B(E4), the two-neutron separation energies, the signature splitting of the γ-vibrational band and the expectation values of the g-boson number operator are calculated and examined for the 104-110Pd isotopes. Continuous Time Dynamic Contraflow Models and Algorithms Urmila Pyakurel Full Text Available Research on the evacuation planning problem is promoted by the very challenging emergency issues due to large-scale natural or man-made disasters. It is the process of shifting the maximum number of evacuees from the disastrous areas to the safe destinations as quickly and efficiently as possible. Contraflow is a widely accepted model for a good solution of the evacuation planning problem. It increases the outbound road capacity by reversing the direction of roads towards the safe destination. The continuous dynamic contraflow problem sends the maximum amount of flow, as a flow rate, from the source to the sink at every moment of time. We propose a mathematical model for the continuous dynamic contraflow problem. We present efficient algorithms to solve the maximum continuous dynamic contraflow and quickest continuous contraflow problems on single-source single-sink arbitrary networks and the continuous earliest arrival contraflow problem on single-source single-sink series-parallel networks with undefined supply and demand. We also introduce an approximation solution for the continuous earliest arrival contraflow problem on two-terminal arbitrary networks. Bouc–Wen hysteresis model identification using Modified Firefly Algorithm Zaman, Mohammad Asif; Sikder, Urmita The parameters of the Bouc–Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least amount of error between a set of given data points and points obtained from the Bouc–Wen model.
The performance of the algorithm is compared with the performance of the conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc–Wen model parameters. Finally, the proposed method is used to find the Bouc–Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data. - Highlights: • We describe a new method to find the Bouc–Wen hysteresis model parameters. • We propose a Modified Firefly Algorithm. • We compare our method with existing methods to find that the proposed method performs better. • We use our model to fit experimental results. Good agreement is found. Algebraic computing MacCallum, M.A.H. The implementation of a new computer algebra system is time consuming: designers of general purpose algebra systems usually say it takes about 50 man-years to create a mature and fully functional system. Hence the range of available systems and their capabilities changes little between one general relativity meeting and the next, despite which there have been significant changes in the period since the last report. The introductory remarks aim to give a brief survey of capabilities of the principal available systems and highlight one or two trends. The reference to the most recent full survey of computer algebra in relativity and brief descriptions of the Maple, REDUCE and SHEEP and other applications are given. (author) Liesen, Jörg This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted.
Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several 'MATLAB-Minutes' students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc... Stoll, R R Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand Lie algebras Jacobson, Nathan Lie group theory, developed by M. Sophus Lie in the 19th century, ranks among the more important developments in modern mathematics. Lie algebras comprise a significant part of Lie group theory and are being actively studied today. This book, by Professor Nathan Jacobson of Yale, is the definitive treatment of the subject and can be used as a textbook for graduate courses.Chapter I introduces basic concepts that are necessary for an understanding of structure theory, while the following three chapters present the theory itself: solvable and nilpotent Lie algebras, Carlan's criterion and its A classic text and standard reference for a generation, this volume and its companion are the work of an expert algebraist who taught at Yale for two decades. Nathan Jacobson's books possess a conceptual and theoretical orientation, and in addition to their value as classroom texts, they serve as valuable references.Volume I explores all of the topics typically covered in undergraduate courses, including the rudiments of set theory, group theory, rings, modules, Galois theory, polynomials, linear algebra, and associative algebra. Its comprehensive treatment extends to such rigorous topics as L Solving multi-customer FPR model with quality assurance and discontinuous deliveries using a two-phase algebraic approach. Chiu, Yuan-Shyi Peter; Chou, Chung-Li; Chang, Huei-Hsin; Chiu, Singa Wang A multi-customer finite production rate (FPR) model with quality assurance and discontinuous delivery policy was investigated in a recent paper (Chiu et al. in J Appl Res Technol 12(1):5-13, 2014) using differential calculus approach. This study employs mathematical modeling along with a two-phase algebraic method to resolve such a specific multi-customer FPR model. As a result, the optimal replenishment lot size and number of shipments can be derived without using the differential calculus. Such a straightforward method may assist practitioners who with insufficient knowledge of calculus in learning and managing the real multi-customer FPR systems more effectively. 
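As a concrete illustration of the calculus-free, algebraic route to a lot size that the preceding abstract describes, the sketch below works out the classic single-product finite production rate (EPQ) special case, where the optimum follows from completing the square rather than from differentiation. This is only an illustrative simplification, not the paper's multi-customer model with quality assurance and discontinuous deliveries, and the demand rate, production rate, setup cost and holding cost are invented example values.

import math

def epq_cost(Q, D, P, K, h):
    """Annual cost of the classic finite-production-rate (EPQ) model:
    setup cost D*K/Q plus holding cost h*Q*(1 - D/P)/2."""
    return D * K / Q + h * Q * (1.0 - D / P) / 2.0

def epq_optimal_lot(D, P, K, h):
    """Algebraic optimum obtained by completing the square
    (no differentiation needed): Q* = sqrt(2*D*K / (h*(1 - D/P)))."""
    return math.sqrt(2.0 * D * K / (h * (1.0 - D / P)))

if __name__ == "__main__":
    D, P, K, h = 4000.0, 10000.0, 450.0, 2.0   # assumed example data
    q_star = epq_optimal_lot(D, P, K, h)
    print(f"optimal lot size Q* = {q_star:.1f}")
    # the cost at Q* is no larger than at nearby lot sizes
    for q in (0.8 * q_star, q_star, 1.2 * q_star):
        print(f"Q = {q:8.1f}  cost = {epq_cost(q, D, P, K, h):8.2f}")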
Efficient Multi-Valued Bounded Model Checking for LTL over Quasi-Boolean Algebras Andrade, Jefferson O.; Kameyama, Yukiyoshi Multi-valued Model Checking extends classical, two-valued model checking to multi-valued logic such as Quasi-Boolean logic. The added expressivity is useful in dealing with such concepts as incompleteness and uncertainty in target systems, while it comes with the cost of time and space. Chechik and others proposed an efficient reduction from multi-valued model checking problems to two-valued ones, but to the authors' knowledge, no study was done for multi-valued bounded model checking. In this paper, we propose a novel, efficient algorithm for multi-valued bounded model checking. A notable feature of our algorithm is that it is not based on reduction of multi-values into two-values; instead, it generates a single formula which represents multi-valuedness by a suitable encoding, and asks a standard SAT solver to check its satisfiability. Our experimental results show a significant improvement in the number of variables and clauses and also in execution time compared with the reduction-based one. Current and Future Tests of the Algebraic Cluster Model of 12C A new theoretical approach to clustering in the framework of the Algebraic Cluster Model (ACM) has been developed. It predicts, in 12C, a rotation-vibration structure with rotational bands of an oblate equilateral triangular symmetric spinning top with a D3h symmetry, characterized by the sequence of states 0+, 2+, 3-, 4±, 5- with degenerate 4+ and 4- (parity doublet) states. Our newly measured second 2+ state in 12C allows the first study of rotation-vibration structure in 12C. The newly measured 5- state and 4- states fit very well the predicted ground-state rotational band structure with the predicted sequence of states 0+, 2+, 3-, 4±, 5- with almost degenerate 4+ and 4- (parity doublet) states. Such a D3h symmetry is characteristic of triatomic molecules, but it is observed in the ground-state rotational band of 12C for the first time in a nucleus. We discuss predictions of the ACM for other rotation-vibration bands in 12C, such as the (0+) Hoyle band and the (1-) bending mode, with the prediction of "missing" 3- and 4- states that may shed new light on clustering in 12C and light nuclei. In particular, the observation (or non-observation) of the predicted "missing" states in the Hoyle band will allow us to conclude the geometrical arrangement of the three alpha particles composing the Hoyle state at 7.6542 MeV in 12C. We discuss proposed research programs at the Darmstadt S-DALINAC and at the newly constructed ELI-NP facility near Bucharest to test the predictions of the ACM in isotopes of carbon. Developing CORE model-based worksheet with recitation task to facilitate students' mathematical communication skills in linear algebra course Risnawati; Khairinnisa, S.; Darwis, A. H. The purpose of this study was to develop a CORE model-based worksheet with recitation tasks that was valid and practical and could facilitate students' communication skills in a Linear Algebra course. This study was conducted in the mathematics education department of one public university in Riau, Indonesia. Participants of the study were media and subject matter experts as validators as well as students from the mathematics education department. The objects of this study are the students' worksheet and students' mathematical communication skills.
The results of study showed that: (1) based on validation of the experts, the developed students' worksheet was valid and could be applied for students in Linear Algebra courses; (2) based on the group trial, the practicality percentage was 92.14% in small group and 90.19% in large group, so the worksheet was very practical and could attract students to learn; and (3) based on the post test, the average percentage of ideals was 87.83%. In addition, the results showed that the students' worksheet was able to facilitate students' mathematical communication skills in linear algebra course. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models. Djurfeldt, Mikael The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C+ + version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released. Proposing and Testing a Model to Explain Traits of Algebra Preparedness Venenciano, Linda; Heck, Ronald Early experiences with theoretical thinking and generalization in measurement are hypothesized to develop constructs we name here as logical reasoning and preparedness for algebra. Based on work of V. V. Davydov (1975), the Measure Up (MU) elementary grades experimental mathematics curriculum uses quantities of area, length, volume, and mass to… Developing Pre-Algebraic Thinking in Generalizing Repeating Pattern Using SOLO Model Lian, Lim Hooi; Yew, Wun Thiam In this paper, researchers discussed the application of the generalization perspective in helping the primary school pupils to develop their pre-algebraic thinking in generalizing repeating pattern. There are two main stages of the generalization perspective had been adapted, namely investigating and generalizing the pattern. Since the Biggs and… Indian Academy of Sciences (India) polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language Is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming. Fast algorithms for transport models. Final report Manteuffel, T.A. This project has developed a multigrid in space algorithm for the solution of the S N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. 
It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial-value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)). Tracing a planar algebraic curve Chen Falai; Kozak, J. In this paper, an algorithm that determines a real algebraic curve is outlined. Its basic step is to divide the plane into subdomains that include only simple branches of the algebraic curve without singular points. Each of the branches is then stably and efficiently traced in the particular subdomain. Except for the tracing, the algorithm requires only a couple of simple operations on polynomials that can be carried out exactly if the coefficients are rational, and the determination of zeros of several polynomials of one variable. (author). 5 refs, 4 figs. Algebraic stacks Deligne, Mumford and Artin [DM, Ar2]) and consider algebraic stacks, then we can construct the 'moduli ... the moduli scheme and the moduli stack of vector bundles. First I will give ... A numerical study of scalar dispersion downstream of a wall-mounted cube using direct simulations and algebraic flux models Rossi, R., E-mail: [email protected] [Laboratorio di Termofluidodinamica Computazionale Seconda Facolta di Ingegneria di Forli, Universita di Bologna Via Fontanelle 40, 47100 Forli (Italy); Center for Turbulence Research Department of Mechanical Engineering Stanford University, CA 94305 (United States); Philips, D.A.; Iaccarino, G. [Center for Turbulence Research Department of Mechanical Engineering Stanford University, CA 94305 (United States) Research highlights: {yields} The computed DNS statistics indicate that a gradient-transport scheme can be applied to the vertical and spanwise scalar flux components.
{yields} The streamwise scalar flux is characterized by a counter-gradient transport mechanism in the wake region close to the obstacle. {yields} The wake profiles of scalar fluctuations and the shape of probability density functions do not suggest a significant flapping movement of the scalar plume. {yields} The evaluation of scalar dispersion models must include a careful assessment of the computed mean velocity field and Reynolds stress tensor. {yields} Algebraic models provide an improved prediction of the mean concentration field as compared to the standard eddy-diffusivity model. -- Abstract: The dispersion of a passive scalar downstream of a wall-mounted cube is examined using direct numerical simulations and turbulence models applied to the Reynolds equations. The scalar is released from a circular source located on top of the obstacle, which is immersed in a developing boundary-layer flow. Direct simulations are performed to give insight into the mixing process and to provide a reference database for turbulence closures. Algebraic flux models are evaluated against the standard eddy-diffusivity representation. Coherent structures periodically released from the cube top are responsible for a counter-diffusion mechanism appearing in the streamwise scalar flux. Alternating vortex pairs form from the lateral edges of the cube, but the intensity profiles and probability density functions of scalar fluctuations suggest that they do not cause a significant flapping movement of the scalar plume. The gradient-transport scheme is consistent with the vertical and spanwise scalar flux components. From the comparative study with our direct simulations, we further stress that Reynolds stress predictions must be carefully evaluated along with scalar flux closures in order to establish the reliability of Rossi, R.; Philips, D.A.; Iaccarino, G. Research highlights: → The computed DNS statistics indicate that a gradient-transport scheme can be applied to the vertical and spanwise scalar flux components. → The streamwise scalar flux is characterized by a counter-gradient transport mechanism in the wake region close to the obstacle. → The wake profiles of scalar fluctuations and the shape of probability density functions do not suggest a significant flapping movement of the scalar plume. → The evaluation of scalar dispersion models must include a careful assessment of the computed mean velocity field and Reynolds stress tensor. → Algebraic models provide an improved prediction of the mean concentration field as compared to the standard eddy-diffusivity model. -- Abstract: The dispersion of a passive scalar downstream of a wall-mounted cube is examined using direct numerical simulations and turbulence models applied to the Reynolds equations. The scalar is released from a circular source located on top of the obstacle, which is immersed in a developing boundary-layer flow. Direct simulations are performed to give insight into the mixing process and to provide a reference database for turbulence closures. Algebraic flux models are evaluated against the standard eddy-diffusivity representation. Coherent structures periodically released from the cube top are responsible for a counter-diffusion mechanism appearing in the streamwise scalar flux. Alternating vortex pairs form from the lateral edges of the cube, but the intensity profiles and probability density functions of scalar fluctuations suggest that they do not cause a significant flapping movement of the scalar plume. 
The gradient-transport scheme is consistent with the vertical and spanwise scalar flux components. From the comparative study with our direct simulations, we further stress that Reynolds stress predictions must be carefully evaluated along with scalar flux closures in order to establish the reliability of Reynolds Efficient Implementation Algorithms for Homogenized Energy Models Braun, Thomas R; Smith, Ralph C ... for real-time control implementation. In this paper, we develop algorithms employing lookup tables which permit the high speed implementation of formulations which incorporate relaxation mechanisms and electromechanical coupling... Computational linear and commutative algebra Kreuzer, Martin This book combines, in a novel and general way, an extensive development of the theory of families of commuting matrices with applications to zero-dimensional commutative rings, primary decompositions and polynomial system solving. It integrates the Linear Algebra of the Third Millennium, developed exclusively here, with classical algorithmic and algebraic techniques. Even the experienced reader will be pleasantly surprised to discover new and unexpected aspects in a variety of subjects including eigenvalues and eigenspaces of linear maps, joint eigenspaces of commuting families of endomorphisms, multiplication maps of zero-dimensional affine algebras, computation of primary decompositions and maximal ideals, and solution of polynomial systems. This book completes a trilogy initiated by the uncharacteristically witty books Computational Commutative Algebra 1 and 2 by the same authors. The material treated here is not available in book form, and much of it is not available at all. The authors continue to prese... Characteristic Dynkin diagrams and W algebras Ragoucy, E. We present a classification of characteristic Dynkin diagrams for the A N , B N , C N and D N algebras. This classification is related to the classification of W(G, K) algebras arising from non-abelian Toda models, and we argue that it can give new insight on the structure of W algebras. (orig.) The Das-Popowicz Moyal momentum algebra We introduce in this short note some aspects of the Moyal momentum algebra that we call the Das-Popowicz Mm algebra. Our interest on this algebra is motivated by the central role that it can play in the formulation of integrable models and in higher conformal spin theories. (author) On Elementary and Algebraic Cellular Automata Gulak, Yuriy In this paper we study elementary cellular automata from an algebraic viewpoint. The goal is to relate the emergent complex behavior observed in such systems with the properties of corresponding algebraic structures. We introduce algebraic cellular automata as a natural generalization of elementary ones and discuss their applications as generic models of complex systems. Complex algebraic geometry Kollár, János This volume contains the lectures presented at the third Regional Geometry Institute at Park City in 1993. The lectures provide an introduction to the subject, complex algebraic geometry, making the book suitable as a text for second- and third-year graduate students. The book deals with topics in algebraic geometry where one can reach the level of current research while starting with the basics. Topics covered include the theory of surfaces from the viewpoint of recent higher-dimensional developments, providing an excellent introduction to more advanced topics such as the minimal model program. 
Also included is an introduction to Hodge theory and intersection homology based on the simple topological ideas of Lefschetz and an overview of the recent interactions between algebraic geometry and theoretical physics, which involve mirror symmetry and string theory. Helmholtz algebraic solitons Christian, J M; McDonald, G S [Joule Physics Laboratory, School of Computing, Science and Engineering, Materials and Physics Research Centre, University of Salford, Salford M5 4WT (United Kingdom); Chamorro-Posada, P, E-mail: [email protected] [Departamento de Teoria de la Senal y Comunicaciones e Ingenieria Telematica, Universidad de Valladolid, ETSI Telecomunicacion, Campus Miguel Delibes s/n, 47011 Valladolid (Spain) We report, to the best of our knowledge, the first exact analytical algebraic solitons of a generalized cubic-quintic Helmholtz equation. This class of governing equation plays a key role in photonics modelling, allowing a full description of the propagation and interaction of broad scalar beams. New conservation laws are presented, and the recovery of paraxial results is discussed in detail. The stability properties of the new solitons are investigated by combining semi-analytical methods and computer simulations. In particular, new general stability regimes are reported for algebraic bright solitons. Christian, J M; McDonald, G S; Chamorro-Posada, P A faithful functor among algebras and graphs Falcón Ganfornina, Óscar Jesús; Falcón Ganfornina, Raúl Manuel; Núñez Valdés, Juan; Pacheco Martínez, Ana María; Villar Liñán, María Trinidad; Vigo Aguiar, Jesús (Coordinador) The problem of identifying a functor between the categories of algebras and graphs is currently open. Based on a known algorithm that identifies isomorphisms of Latin squares with isomorphism of vertex-colored graphs, we describe here a pair of graphs that enable us to find a faithful functor between finite-dimensional algebras over finite fields and these graphs. An algebraic approach to the scattering equations Huang, Rijun; Rao, Junjie [Zhejiang Institute of Modern Physics, Zhejiang University,Hangzhou, 310027 (China); Feng, Bo [Zhejiang Institute of Modern Physics, Zhejiang University,Hangzhou, 310027 (China); Center of Mathematical Science, Zhejiang University,Hangzhou, 310027 (China); He, Yang-Hui [School of Physics, NanKai University,Tianjin, 300071 (China); Department of Mathematics, City University,London, EC1V 0HB (United Kingdom); Merton College, University of Oxford,Oxford, OX14JD (United Kingdom) We employ the so-called companion matrix method from computational algebraic geometry, tailored for zero-dimensional ideals, to study the scattering equations. The method renders the CHY-integrand of scattering amplitudes computable using simple linear algebra and is amenable to an algorithmic approach. Certain identities in the amplitudes as well as rationality of the final integrand become immediate in this formalism. Huang, Rijun; Rao, Junjie; Feng, Bo; He, Yang-Hui Loop algorithms for quantum simulations of fermion models on lattices Kawashima, N.; Gubernatis, J.E.; Evertz, H.G. Two cluster algorithms, based on constructing and flipping loops, are presented for world-line quantum Monte Carlo simulations of fermions and are tested on the one-dimensional repulsive Hubbard model. We call these algorithms the loop-flip and loop-exchange algorithms. 
For these two algorithms and the standard world-line algorithm, we calculated the autocorrelation times for various physical quantities and found that the ordinary world-line algorithm, which uses only local moves, suffers from very long correlation times that makes not only the estimate of the error difficult but also the estimate of the average values themselves difficult. These difficulties are especially severe in the low-temperature, large-U regime. In contrast, we find that new algorithms, when used alone or in combinations with themselves and the standard algorithm, can have significantly smaller autocorrelation times, in some cases being smaller by three orders of magnitude. The new algorithms, which use nonlocal moves, are discussed from the point of view of a general prescription for developing cluster algorithms. The loop-flip algorithm is also shown to be ergodic and to belong to the grand canonical ensemble. Extensions to other models and higher dimensions are briefly discussed Eigenvectors determination of the ribosome dynamics model during mRNA translation using the Kleene Star algorithm Ernawati; Carnia, E.; Supriatna, A. K. Eigenvalues and eigenvectors in max-plus algebra have the same important role as eigenvalues and eigenvectors in conventional algebra. In max-plus algebra, eigenvalues and eigenvectors are useful for knowing dynamics of the system such as in train system scheduling, scheduling production systems and scheduling learning activities in moving classes. In the translation of proteins in which the ribosome move uni-directionally along the mRNA strand to recruit the amino acids that make up the protein, eigenvalues and eigenvectors are used to calculate protein production rates and density of ribosomes on the mRNA. Based on this, it is important to examine the eigenvalues and eigenvectors in the process of protein translation. In this paper an eigenvector formula is given for a ribosome dynamics during mRNA translation by using the Kleene star algorithm in which the resulting eigenvector formula is simpler and easier to apply to the system than that introduced elsewhere. This paper also discusses the properties of the matrix {B}λ \\otimes n of model. Among the important properties, it always has the same elements in the first column for n = 1, 2,… if the eigenvalue is the time of initiation, λ = τin , and the column is the eigenvector of the model corresponding to λ. Algebraic characterizations of measure algebras Jech, Thomas Ro�. 136, �. 4 (2008), s. 1285-1294 ISSN 0002-9939 R&D Projects: GA AV ČR IAA100190509 Institutional research plan: CEZ:AV0Z10190503 Keywords : Von - Neumann * sequential topology * Boolean-algebras * Souslins problem * Submeasures Subject RIV: BA - General Mathematics Impact factor: 0.584, year: 2008 Comments on two-loop Kac-Moody algebras Ferreira, L A; Gomes, J F; Zimerman, A H [Instituto de Fisica Teorica (IFT), Sao Paulo, SP (Brazil); Schwimmer, A [Istituto Nazionale di Fisica Nucleare, Trieste (Italy) It is shown that the two-loop Kac-Moody algebra is equivalent to a two variable loop algebra and a decouple {beta}-{gamma} system. Similarly WZNW and CSW models having as algebraic structure the Kac-Moody algebra are equivalent to an infinity to versions of the corresponding ordinary models and decoupled Abelian fields. (author). 15 refs. Quantum W-algebras and elliptic algebras Feigin, B.; Kyoto Univ.; Frenkel, E. We define a quantum W-algebra associated to sl N as an associative algebra depending on two parameters. 
For special values of the parameters, this algebra becomes the ordinary W-algebra of sl N , or the q-deformed classical W-algebra of sl N . We construct free field realizations of the quantum W-algebras and the screening currents. We also point out some interesting elliptic structures arising in these algebras. In particular, we show that the screening currents satisfy elliptic analogues of the Drinfeld relations in U q (n). (orig.) Thinking Visually about Algebra Baroudi, Ziad Many introductions to algebra in high school begin with teaching students to generalise linear numerical patterns. This article argues that this approach needs to be changed so that students encounter variables in the context of modelling visual patterns so that the variables have a meaning. The article presents sample classroom activities,… On 2-Banach algebras Mohammad, N.; Siddiqui, A.H. The notion of a 2-Banach algebra is introduced and its structure is studied. After a short discussion of some fundamental properties of bivectors and tensor product, several classical results of Banach algebras are extended to the 2-Banach algebra case. A condition under which a 2-Banach algebra becomes a Banach algebra is obtained and the relation between algebra of bivectors and 2-normed algebra is discussed. 11 refs Fireworks algorithm for mean-VaR/CVaR models Zhang, Tingting; Liu, Zhifeng Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, named fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, fireworks algorithm not only improves the optimization accuracy and the optimization speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. It suggests that fireworks algorithm has more advantages than genetic algorithm in solving the portfolio optimization problem, and it is feasible and promising to apply it into this field. Engineering of Algorithms for Hidden Markov models and Tree Distances Sand, Andreas Bioinformatics is an interdisciplinary scientific field that combines biology with mathematics, statistics and computer science in an effort to develop computational methods for handling, analyzing and learning from biological data. In the recent decades, the amount of available biological data has...... speed up all the classical algorithms for analyses and training of hidden Markov models. And I show how two particularly important algorithms, the forward algorithm and the Viterbi algorithm, can be accelerated through a reformulation of the algorithms and a somewhat more complicated parallelization...... contribution to the theoretically fastest set of algorithms presently available to compute two closely related measures of tree distance, the triplet distance and the quartet distance. And I further demonstrate that they are also the fastest algorithms in almost all cases when tested in practice.... 
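To make the forward algorithm mentioned in the preceding thesis abstract concrete, here is a minimal textbook version for a discrete hidden Markov model. It is a plain, unaccelerated baseline, not the reformulated or parallelized implementation the thesis describes, and the two-state model below uses invented toy probabilities.

import numpy as np

def hmm_forward(obs, pi, A, B):
    """Textbook forward algorithm: returns the likelihood P(obs | model).
    pi[i]   - initial probability of hidden state i
    A[i, j] - transition probability from state i to state j
    B[i, k] - probability of emitting symbol k from state i
    obs     - sequence of observed symbol indices"""
    alpha = pi * B[:, obs[0]]                 # initialisation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]         # recursion over time steps
    return alpha.sum()                        # termination

# Assumed toy 2-state, 2-symbol model (illustrative numbers only).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
print(hmm_forward([0, 1, 0], pi, A, B))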
Muzzle Flash Onset: An Algebraic Criterion and Further Validation of the Muzzle Exhaust Flow Field Model Tic, equals to (NI/ Nic ) where Nic , defined as the net chemical production rate of i-th species, is in general the algebraic sum of terms which are...detailed analysis has shown that in preignition regions the chemical rates which make a significant contribution to any of the Nic are such that at least...Elkton Division Lab., Inc. ATTN. R. Biddle ATTN: M. Summeitield Tech Lib. 1041 US Hlighway One North P. 0. Box 241 Princeton, NJ 08540 Elkton, MD Computers in nonassociative rings and algebras Beck, Robert E Computers in Nonassociative Rings and Algebras provides information pertinent to the computational aspects of nonassociative rings and algebras. This book describes the algorithmic approaches for solving problems using a computer.Organized into 10 chapters, this book begins with an overview of the concept of a symmetrized power of a group representation. This text then presents data structures and other computational methods that may be useful in the field of computational algebra. Other chapters consider several mathematical ideas, including identity processing in nonassociative algebras, str Classical Exchange Algebra of the Nonlinear Sigma Model on a Supercoset Target with Z2n Grading Ke San-Min; Li Xin-Ying; Wang Chun; Yue Rui-Hong The classical exchange algebra satisfied by the monodromy matrix of the nonlinear sigma model on a supercoset target with Z 2n grading is derived using a first-order Hamiltonian formulation and by adding to the Lax connection terms proportional to constraints. This enables us to show that the conserved charges of the theory are in involution. When n = 2, our results coincide with the results given by Magro for the pure spinor description of AdS 5 × S 5 string theory (when the ghost terms are omitted). (the physics of elementary particles and fields) Concurrence of Quantum States: Algebraic Dynamical Method Study XXX Models in a Time-Depending Random External Field Fu Chuanji; Zhu Qinsheng; Wu Shaoyi Based on algebraic dynamics and the concept of the concurrence of the entanglement, we investigate the evolutive properties of the two-qubit entanglement that formed by Heisenberg XXX models under a time-depending external held. For this system, the property of the concurrence that is only dependent on the coupling constant J and total values of the external field is proved. Furthermore, we found that the thermal concurrence of the system under a static random external field is a function of the coupling constant J, temperature T, and the magnitude of external held. (general) Algebraic modeling and thermodynamic design of fan-supplied tube-fin evaporators running under frosting conditions Ribeiro, Rafael S.; Hermes, Christian J.L. In this study, the method of entropy generation minimization (i.e., design aimed at facilitating both heat, mass and fluid flows) is used to assess the evaporator design (aspect ratio and fin density) considering the thermodynamic losses due to heat and mass transfer, and viscous flow processes. A fully algebraic model was put forward to simulate the thermal-hydraulic behavior of tube-fin evaporator coils running under frosting conditions. The model predictions were validated against experimental data, showing a good agreement between calculated and measured counterparts. 
The optimization exercise has pointed out that high aspect ratio heat exchanger designs lead to lower entropy generation in cases of fixed cooling capacity and air flow rate constrained by the characteristic curve of the fan. - Highlights: • An algebraic model for frost accumulation on tube-fin heat exchangers was advanced. • Model predictions for cooling capacity and air flow rate were compared with experimental data, with errors within ±5% band. • Minimum entropy generation criterion was used to optimize the evaporator geometry. • Thermodynamic analysis led to slender designs for fixed cooling capacity and fan characteristics On an Algebraic Property of the Disordered Phase of the Ising Model with Competing Interactions on a Cayley Tree Mukhamedov, Farrukh, E-mail: [email protected], E-mail: [email protected] [International Islamic University Malaysia, Department of Computational and Theoretical Sciences, Faculty of Science (Malaysia); Barhoumi, Abdessatar, E-mail: [email protected] [Carthage University, Department of Mathematics, Nabeul Preparatory Engineering Institute (Tunisia); Souissi, Abdessatar, E-mail: [email protected] [Carthage University, Department of Mathematics, Marsa Preparatory Institute for Scientific and Technical Studies (Tunisia) It is known that the disordered phase of the classical Ising model on the Caley tree is extreme in some region of the temperature. If one considers the Ising model with competing interactions on the same tree, then about the extremity of the disordered phase there is no any information. In the present paper, we first aiming to analyze the correspondence between Gibbs measures and QMC's on trees. Namely, we establish that states associated with translation invariant Gibbs measures of the model can be seen as diagonal quantum Markov chains on some quasi local algebra. Then as an application of the established correspondence, we study some algebraic property of the disordered phase of the Ising model with competing interactions on the Cayley tree of order two. More exactly, we prove that a state corresponding to the disordered phase is not quasi-equivalent to other states associated with translation invariant Gibbs measures. This result shows how the translation invariant states relate to each other, which is even a new phenomena in the classical setting. To establish the main result we basically employ methods of quantum Markov chains. Availability Allocation of Networked Systems Using Markov Model and Heuristics Algorithm Ruiying Li Full Text Available It is a common practice to allocate the system availability goal to reliability and maintainability goals of components in the early design phase. However, the networked system availability is difficult to be allocated due to its complex topology and multiple down states. To solve these problems, a practical availability allocation method is proposed. Network reliability algebraic methods are used to derive the availability expression of the networked topology on the system level, and Markov model is introduced to determine that on the component level. A heuristic algorithm is proposed to obtain the reliability and maintainability allocation values of components. The principles applied in the AGREE reliability allocation method, proposed by the Advisory Group on Reliability of Electronic Equipment, and failure rate-based maintainability allocation method persist in our allocation method. 
A series system is used to verify the new algorithm, and the result shows that the allocation based on the heuristic algorithm is quite accurate compared to the traditional one. Moreover, our case study of a signaling system number 7 shows that the proposed allocation method is quite efficient for networked systems. Current algebra Jacob, M. The first three chapters of these lecture notes are devoted to generalities concerning current algebra. The weak currents are defined, and their main properties given (V-A hypothesis, conserved vector current, selection rules, partially conserved axial current,...). The SU (3) x SU (3) algebra of Gell-Mann is introduced, and the general properties of the non-leptonic weak Hamiltonian are discussed. Chapters 4 to 9 are devoted to some important applications of the algebra. First one proves the Adler- Weisberger formula, in two different ways, by either the infinite momentum frame, or the near-by singularities method. In the others chapters, the latter method is the only one used. The following topics are successively dealt with: semi leptonic decays of K mesons and hyperons, Kroll- Ruderman theorem, non leptonic decays of K mesons and hyperons ( ΔI = 1/2 rule), low energy theorems concerning processes with emission (or absorption) of a pion or a photon, super-convergence sum rules, and finally, neutrino reactions. (author) [fr Algebraic classification of the conformal tensor Ares de Parga, Gonzalo; Chavoya, O.; Lopez B, J.L.; Ovando Z, Gerardo Starting from the Petrov matrix method, we deduce a new algorithm (adaptable to computers), within the Newman-Penrose formalism, to obtain the algebraic type of the Weyl tensor in general relativity. (author) Genetic hotels for the standard genetic code: evolutionary analysis based upon novel three-dimensional algebraic models. José, Marco V; Morgado, Eberto R; Govezensky, Tzipe Herein, we rigorously develop novel 3-dimensional algebraic models called Genetic Hotels of the Standard Genetic Code (SGC). We start by considering the primeval RNA genetic code which consists of the 16 codons of type RNY (purine-any base-pyrimidine). Using simple algebraic operations, we show how the RNA code could have evolved toward the current SGC via two different intermediate evolutionary stages called Extended RNA code type I and II. By rotations or translations of the subset RNY, we arrive at the SGC via the former (type I) or via the latter (type II), respectively. Biologically, the Extended RNA code type I, consists of all codons of the type RNY plus codons obtained by considering the RNA code but in the second (NYR type) and third (YRN type) reading frames. The Extended RNA code type II, comprises all codons of the type RNY plus codons that arise from transversions of the RNA code in the first (YNY type) and third (RNR) nucleotide bases. Since the dimensions of remarkable subsets of the Genetic Hotels are not necessarily integer numbers, we also introduce the concept of algebraic fractal dimension. A general decoding function which maps each codon to its corresponding amino acid or the stop signals is also derived. The Phenotypic Hotel of amino acids is also illustrated. The proposed evolutionary paths are discussed in terms of the existing theories of the evolution of the SGC. The adoption of 3-dimensional models of the Genetic and Phenotypic Hotels will facilitate the understanding of the biological properties of the SGC. 
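The primeval RNY (purine-any base-pyrimidine) code referred to in the genetic hotels abstract can be enumerated directly. The short sketch below only lists the 16 RNY codons; it does not reproduce the paper's 3-dimensional hotel construction, its algebraic operations, or its decoding function.

from itertools import product

PURINES = "AG"          # R
ANY_BASE = "ACGU"       # N
PYRIMIDINES = "CU"      # Y

# The primeval RNA code discussed above consists of codons of type
# RNY (purine - any base - pyrimidine): 2 * 4 * 2 = 16 codons.
rny_codons = ["".join(c) for c in product(PURINES, ANY_BASE, PYRIMIDINES)]
print(len(rny_codons))   # 16
print(rny_codons)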
Computationally efficient model predictive control algorithms a neural network approach �awryńczuk, Maciej This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: ·        A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction. ·        Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models. ·        The MPC algorithms based on neural multi-models (inspired by the idea of predictive control). ·        The MPC algorithms with neural approximation with no on-line linearization. ·        The MPC algorithms with guaranteed stability and robustness. ·        Cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d... AMPTRACT: an algebraic model for computing pressure tube circumferential and steam temperature transients under stratified channel coolant conditions Gulshani, P.; So, C.B. In a number of postulated accident scenarios in a CANDU reactor, some of the horizontal fuel channels are predicted to experience periods of stratified channel coolant condition which can lead to a circumferential temperature gradient around the pressure tube. To study pressure tube strain and integrity under stratified flow channel conditions, it is, necessary to determine the pressure tube circumferential temperature distribution. This paper presents an algebraic model, called AMPTRACT (Algebraic Model for Pressure Tube TRAnsient Circumferential Temperature), developed to give the transient temperature distribution in a closed form. AMPTRACT models the following modes of heat transfer: radiation from the outermost elements to the pressure tube and from the pressure to calandria tube, convection between the fuel elements and the pressure tube and superheated steam, and circumferential conduction from the exposed to submerged part of the pressure tube. An iterative procedure is used to solve the mass and energy equations in closed form for axial steam and fuel-sheath transient temperature distributions. The one-dimensional conduction equation is then solved to obtain the pressure tube circumferential transient temperature distribution in a cosine series expansion. In the limit of large times and in the absence of convection and radiation to the calandria tube, the predicted pressure tube temperature distribution reduces identically to a parabolic profile. In this limit, however, radiation cannot be ignored because the temperatures are generally high. Convection and radiation tend to flatten the parabolic distribution Fibered F-Algebra Kleyn, Aleks The concept of F-algebra and its representation can be extended to an arbitrary bundle. We define operations of fibered F-algebra in fiber. The paper presents the representation theory of of fibered F-algebra as well as a comparison of representation of F-algebra and of representation of fibered F-algebra. to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. 
In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ... DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM K. Srinivasan Full Text Available Monitoring the behavior and activities of people in Video surveillance has gained more applications in Computer vision. This paper proposes a new approach to model the human body in 2D view for the activity analysis using Thinning algorithm. The first step of this work is Background subtraction which is achieved by the frame differencing algorithm. Thinning algorithm has been used to find the skeleton of the human body. After thinning, the thirteen feature points like terminating points, intersecting points, shoulder, elbow, and knee points have been extracted. Here, this research work attempts to represent the body model in three different ways such as Stick figure model, Patch model and Rectangle body model. The activities of humans have been analyzed with the help of 2D model for the pre-defined poses from the monocular video data. Finally, the time consumption and efficiency of our proposed algorithm have been evaluated. Model-Free Adaptive Control Algorithm with Data Dropout Compensation Xuhui Bu Full Text Available The convergence of model-free adaptive control (MFAC algorithm can be guaranteed when the system is subject to measurement data dropout. The system output convergent speed gets slower as dropout rate increases. This paper proposes a MFAC algorithm with data compensation. The missing data is first estimated using the dynamical linearization method, and then the estimated value is introduced to update control input. The convergence analysis of the proposed MFAC algorithm is given, and the effectiveness is also validated by simulations. It is shown that the proposed algorithm can compensate the effect of the data dropout, and the better output performance can be obtained. Evaluation of models generated via hybrid evolutionary algorithms ... Apr 2, 2016 ... Evaluation of models generated via hybrid evolutionary algorithms for the prediction of Microcystis ... evolutionary algorithms (HEA) proved to be highly applica- ble to the hypertrophic reservoirs of South Africa. .... discovered and optimised using a large-scale parallel computational device and relevant soft-. Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images Tzimiropoulos, Georgios; Pantic, Maja Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple "project-out‿ Models and algorithms for biomolecules and molecular networks DasGupta, Bhaskar By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms. 
* Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms * Sampling techniques for estimating evolutionary rates and generating molecular structures * Accurate computation of probability landscape of stochastic networks, solving discrete chemical master equations * End-of-chapter exercises Optimization algorithms intended for self-tuning feedwater heater model Czop, P; Barszcz, T; Bednarz, J This work presents a self-tuning feedwater heater model. This work continues the work on first-principle gray-box methodology applied to diagnostics and condition assessment of power plant components. The objective of this work is to review and benchmark the optimization algorithms regarding the time required to achieve the best model fit to operational power plant data. The paper recommends the most effective algorithm to be used in the model adjustment process. Geometric Algebra Computing Corrochano, Eduardo Bayro This book presents contributions from a global selection of experts in the field. This useful text offers new insights and solutions for the development of theorems, algorithms and advanced methods for real-time applications across a range of disciplines. Written in an accessible style, the discussion of all applications is enhanced by the inclusion of numerous examples, figures and experimental analysis. Features: provides a thorough discussion of several tasks for image processing, pattern recognition, computer vision, robotics and computer graphics using the geometric algebra framework; int Model theory and algebraic geometry an introduction to E. Hrushovski's proof of the geometric Mordell-Lang conjecture This introduction to the recent exciting developments in the applications of model theory to algebraic geometry, illustrated by E. Hrushovski's model-theoretic proof of the geometric Mordell-Lang Conjecture starts from very basic background and works up to the detailed exposition of Hrushovski's proof, explaining the necessary tools and results from stability theory on the way. The first chapter is an informal introduction to model theory itself, making the book accessible (with a little effort) to readers with no previous knowledge of model theory. The authors have collaborated closely to achieve a coherent and self- contained presentation, whereby the completeness of exposition of the chapters varies according to the existence of other good references, but comments and examples are always provided to give the reader some intuitive understanding of the subject. Insertion algorithms for network model database management systems Mamadolimov, Abdurashid; Khikmat, Saburov The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms partial order. When a database is large and a query comparison is expensive then the efficiency requirement of managing algorithms is minimizing the number of query comparisons. We consider updating operation for network model database management systems. We develop a new sequantial algorithm for updating operation. Also we suggest a distributed version of the algorithm. 
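The last abstract above hinges on the schema of a network model database forming a partial order. As a rough illustration of that prerequisite (and not of the paper's insertion algorithm itself), the sketch below checks that a schema graph is acyclic, so that its reachability relation is a partial order; the record type names are hypothetical.

from collections import defaultdict, deque

def is_partial_order(arcs):
    """Checks that the schema graph (object types as nodes, relationship
    types as arcs) is acyclic, i.e. its reachability relation is a partial
    order, using Kahn's topological sort."""
    out_edges = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for parent, child in arcs:
        out_edges[parent].append(child)
        indeg[child] += 1
        nodes.update((parent, child))
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        n = queue.popleft()
        seen += 1
        for m in out_edges[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return seen == len(nodes)

# Hypothetical schema: owner record types point to member record types.
print(is_partial_order([("Dept", "Employee"), ("Dept", "Project"),
                        ("Employee", "Assignment"), ("Project", "Assignment")]))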
Numerical analysis of three-dimensional turbulent flow in a 90deg bent tube by algebraic Reynolds stress model Sugiyama, Hitoshi; Akiyama, Mitsunobu; Shinohara, Yasunori; Hitomi, Daisuke A numerical analysis has been performed for three dimensional developing turbulent flow in a 90deg bent tube with straight inlet and outlet sections by an algebraic Reynolds stress model. To our knowledge, very little has been reported about detailed comparison between calculated results and experimental data containing Reynolds stresses. In calculation, an algebraic Reynolds stress model together with a boundary-fitted coordinate system is applied to a 90deg bent tube in order to solve anisotropic turbulent flow precisely. The calculated results display comparatively good agreement with the experimental data of time averaged velocity and secondary vectors. In addition, the present method predicts as a characteristic feature that the intensity of secondary flow near the inner wall is increased immediately downstream from the bend outlet by the pressure gradient. With regard to comparison of Reynolds stresses, the present method is able to reproduce well the distributions of streamwise normal stress and shear stress defined streamwise and radial velocity fluctuation except for the shear stress defined streamwise and circumferential velocity fluctuation. The present calculation has been found to simulate many features of the developing flow in bent tube satisfactorily, but it has a tendency to underpredict the Reynolds stresses. (author) Heisenberg XXX Model with General Boundaries: Eigenvectors from Algebraic Bethe Ansatz Samuel Belliard Full Text Available We propose a generalization of the algebraic Bethe ansatz to obtain the eigenvectors of the Heisenberg spin chain with general boundaries associated to the eigenvalues and the Bethe equations found recently by Cao et al. The ansatz takes the usual form of a product of operators acting on a particular vector except that the number of operators is equal to the length of the chain. We prove this result for the chains with small length. We obtain also an off-shell equation (i.e. satisfied without the Bethe equations formally similar to the ones obtained in the periodic case or with diagonal boundaries. An Ada Linear-Algebra Software Package Modeled After HAL/S Klumpp, Allan R.; Lawson, Charles L. New avionics software written more easily. Software package extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as Space Station flight software. In addition to built-in functions of HAL/S, package incorporates quaternion functions used in Space Shuttle and Galileo projects and routines from LINPAK solving systems of equations involving general square matrices. Contains two generic programs: one for floating-point computations and one for integer computations. Written on IBM/AT personal computer running under PC DOS, v.3.1. Lie algebra in quantum physics by means of computer algebra Kikuchi, Ichio; Kikuchi, Akihito This article explains how to apply the computer algebra package GAP (www.gap-system.org) in the computation of the problems in quantum physics, in which the application of Lie algebra is necessary. 
The article contains several exemplary computations which readers would follow in the desktop PC: such as, the brief review of elementary ideas of Lie algebra, the angular momentum in quantum mechanics, the quark eight-fold way model, and the usage of Weyl character formula (in order to construct w... Algorithmic detectability threshold of the stochastic block model Kawamoto, Tatsuro The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition. Vertex algebras and mirror symmetry Borisov, L.A. Mirror Symmetry for Calabi-Yau hypersurfaces in toric varieties is by now well established. However, previous approaches to it did not uncover the underlying reason for mirror varieties to be mirror. We are able to calculate explicitly vertex algebras that correspond to holomorphic parts of A and B models of Calabi-Yau hypersurfaces and complete intersections in toric varieties. We establish the relation between these vertex algebras for mirror Calabi-Yau manifolds. This should eventually allow us to rewrite the whole story of toric mirror symmetry in the language of sheaves of vertex algebras. Our approach is purely algebraic and involves simple techniques from toric geometry and homological algebra, as well as some basic results of the theory of vertex algebras. Ideas of this paper may also be useful in other problems related to maps from curves to algebraic varieties.This paper could also be of interest to physicists, because it contains explicit description of holomorphic parts of A and B models of Calabi-Yau hypersurfaces and complete intersections in terms of free bosons and fermions. (orig.) Generalized symmetry algebras Dragon, N. The possible use of trilinear algebras as symmetry algebras for para-Fermi fields is investigated. The shortcomings of the examples are argued to be a general feature of such generalized algebras. (author) Hom-Novikov algebras Yau, Donald We study a twisted generalization of Novikov algebras, called Hom-Novikov algebras, in which the two defining identities are twisted by a linear map. It is shown that Hom-Novikov algebras can be obtained from Novikov algebras by twisting along any algebra endomorphism. All algebra endomorphisms on complex Novikov algebras of dimensions 2 or 3 are computed, and their associated Hom-Novikov algebras are described explicitly. Another class of Hom-Novikov algebras is constructed from Hom-commutative algebras together with a derivation, generalizing a construction due to Dorfman and Gel'fand. Two other classes of Hom-Novikov algebras are constructed from Hom-Lie algebras together with a suitable linear endomorphism, generalizing a construction due to Bai and Meng. 
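The stochastic block model entry above studies expectation-maximization combined with belief propagation and the resulting algorithmic detectability threshold. As a rough illustration of the same EM loop, the sketch below fits a two-group Bernoulli SBM using a naive mean-field E-step in place of full belief propagation; the planted test graph and every parameter value here are assumptions made for the example, not material from the paper.

```python
import numpy as np

def em_sbm(A, K=2, iters=60, seed=0):
    """EM for a Bernoulli stochastic block model without self-loops.
    E-step: naive mean-field update of the soft memberships q (a
    simplification of the belief-propagation E-step).  M-step:
    closed-form update of the group fractions n and affinities C."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    q = rng.dirichlet(np.ones(K), size=N)                 # soft labels, shape (N, K)
    n = np.full(K, 1.0 / K)
    C = np.clip(A.mean() + 0.05 * rng.standard_normal((K, K)), 1e-3, 0.5)
    C = (C + C.T) / 2
    for _ in range(iters):
        logC, log1mC = np.log(C), np.log1p(-C)
        # E-step: field_i(r) = sum_j sum_s q_j(s) [A_ij log C_rs + (1 - A_ij) log(1 - C_rs)]
        field = A @ (q @ logC) + (1.0 - A) @ (q @ log1mC) - q @ log1mC  # drop the j = i term
        logq = np.log(n) + field
        logq -= logq.max(axis=1, keepdims=True)
        q = np.exp(logq)
        q /= q.sum(axis=1, keepdims=True)
        # M-step
        n = q.mean(axis=0)
        qs = q.sum(axis=0)
        num = q.T @ A @ q                                  # expected edge counts between groups
        den = np.outer(qs, qs) - q.T @ q                   # expected pair counts, i != j
        C = np.clip(num / np.maximum(den, 1e-9), 1e-6, 1 - 1e-6)
    return q.argmax(axis=1), C

# usage on a planted assortative two-group graph
rng = np.random.default_rng(1)
N, p_in, p_out = 200, 0.10, 0.02
labels = np.repeat([0, 1], N // 2)
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((N, N)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                               # symmetric adjacency, no self-loops
est_labels, C_hat = em_sbm(A)
```

Belief propagation handles sparse graphs near the detectability threshold considerably better than this naive mean-field step, which is one reason the entry above works with the message-passing version.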
Algebraic functions Bliss, Gilbert Ames This book, immediately striking for its conciseness, is one of the most remarkable works ever produced on the subject of algebraic functions and their integrals. The distinguishing feature of the book is its third chapter, on rational functions, which gives an extremely brief and clear account of the theory of divisors.... A very readable account is given of the topology of Riemann surfaces and of the general properties of abelian integrals. Abel's theorem is presented, with some simple applications. The inversion problem is studied for the cases of genus zero and genus unity. The chapter on t Concurrent algorithms for nuclear shell model calculations Mackenzie, L.M.; Macleod, A.M.; Berry, D.J.; Whitehead, R.R. The calculation of nuclear properties has proved very successful for light nuclei, but is limited by the power of the present generation of computers. Starting with an analysis of current techniques, this paper discusses how these can be modified to map parallelism inherent in the mathematics onto appropriate parallel machines. A prototype dedicated multiprocessor for nuclear structure calculations, designed and constructed by the authors, is described and evaluated. The approach adopted is discussed in the context of a number of generically similar algorithms. (orig.) Iterated Leavitt Path Algebras Hazrat, R. Leavitt path algebras associate to directed graphs a Z-graded algebra and in their simplest form recover the Leavitt algebras L(1,k). In this note, we introduce iterated Leavitt path algebras associated to directed weighted graphs which have natural ± Z grading and in their simplest form recover the Leavitt algebras L(n,k). We also characterize Leavitt path algebras which are strongly graded. (author) A Developed Artificial Bee Colony Algorithm Based on Cloud Model Ye Jin Full Text Available The Artificial Bee Colony (ABC algorithm is a bionic intelligent optimization method. The cloud model is a kind of uncertainty conversion model between a qualitative concept T ˜ that is presented by nature language and its quantitative expression, which integrates probability theory and the fuzzy mathematics. A developed ABC algorithm based on cloud model is proposed to enhance accuracy of the basic ABC algorithm and avoid getting trapped into local optima by introducing a new select mechanism, replacing the onlooker bees' search formula and changing the scout bees' updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud model based ABC variants. PM Synchronous Motor Dynamic Modeling with Genetic Algorithm ... This paper proposes dynamic modeling simulation for ac Surface Permanent Magnet Synchronous ... Simulations are implemented using MATLAB with its genetic algorithm toolbox. .... selection, the process that drives biological evolution. Quantum affine algebras and deformations of the virasoro and W-algebras Frenkel, E.; Reshetikhin, N. Using the Wakimoto realization of quantum affine algebras we define new Poisson algebras, which are q-deformations of the classical W-algebras. We also define their free field realizations, i.e. homomorphisms into some Heisenberg-Poisson algebras. The formulas for these homomorphisms coincide with formulas for spectra of transfer-matrices in the corresponding quantum integrable models derived by the Bethe-Ansatz method. (orig.) 
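The artificial bee colony entry above changes the onlooker selection mechanism, the search formula and the scout update of the basic ABC by means of a cloud model. For orientation, here is a hedged sketch of the plain ABC loop (employed, onlooker and scout phases) that such variants start from; the test function, colony size and trial limit are arbitrary choices for the example.

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=30, iters=200, seed=0):
    """Basic artificial bee colony: employed bees perturb each food source,
    onlookers re-sample sources in proportion to fitness, and scouts
    re-initialise sources that have not improved for `limit` trials."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    X = rng.uniform(lo, hi, size=(n_food, dim))          # food sources
    fX = np.apply_along_axis(f, 1, X)                    # objective values
    trials = np.zeros(n_food, dtype=int)

    def neighbour(i):
        # v_ij = x_ij + phi * (x_ij - x_kj) for one random dimension j, partner k != i
        k = rng.choice([j for j in range(n_food) if j != i])
        j = rng.integers(dim)
        v = X[i].copy()
        v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        return np.clip(v, lo, hi)

    def greedy(i, v):
        fv = f(v)
        if fv < fX[i]:
            X[i], fX[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                          # employed-bee phase
            greedy(i, neighbour(i))
        fit = 1.0 / (1.0 + fX - fX.min())                # simple fitness for minimisation
        prob = fit / fit.sum()
        for _ in range(n_food):                          # onlooker-bee phase
            i = rng.choice(n_food, p=prob)
            greedy(i, neighbour(i))
        for i in np.flatnonzero(trials > limit):         # scout-bee phase
            X[i] = rng.uniform(lo, hi)
            fX[i] = f(X[i])
            trials[i] = 0
    best = int(fX.argmin())
    return X[best], fX[best]

# usage on a toy 2-D sphere function
best_x, best_f = abc_minimize(lambda x: float(np.sum(x ** 2)),
                              bounds=[(-5, 5), (-5, 5)])
```

The cloud-model variant in the entry above keeps this overall skeleton and replaces the onlooker and scout formulas, which is where the reported gains in convergence speed and accuracy come from.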
A Mining Algorithm for Extracting Decision Process Data Models Cristina-Claudia DOLEAN Full Text Available The paper introduces an algorithm that mines logs of user interaction with simulation software. It outputs a model that explicitly shows the data perspective of the decision process, namely the Decision Data Model (DDM. In the first part of the paper we focus on how the DDM is extracted by our mining algorithm. We introduce it as pseudo-code and, then, provide explanations and examples of how it actually works. In the second part of the paper, we use a series of small case studies to prove the robustness of the mining algorithm and how it deals with the most common patterns we found in real logs. Seismotectonic models and CN algorithm: The case of Italy Costa, G.; Orozova Stanishkova, I.; Panza, G.F.; Rotwain, I.M. The CN algorithm is here utilized both for the intermediate term earthquake prediction and to validate the seismotectonic model of the Italian territory. Using the results of the analysis, made through the CN algorithm and taking into account the seismotectonic model, three areas, one for Northern Italy, one for Central Italy and one for Southern Italy, are defined. Two transition areas, between the three main areas are delineated. The earthquakes which occurred in these two areas contribute to the precursor phenomena identified by the CN algorithm in each main area. (author). 26 refs, 6 figs, 2 tabs Universal algebra Grätzer, George Universal Algebra, heralded as ". . . the standard reference in a field notorious for the lack of standardization . . .," has become the most authoritative, consistently relied on text in a field with applications in other branches of algebra and other fields such as combinatorics, geometry, and computer science. Each chapter is followed by an extensive list of exercises and problems. The "state of the art" account also includes new appendices (with contributions from B. Jónsson, R. Quackenbush, W. Taylor, and G. Wenzel) and a well-selected additional bibliography of over 1250 papers and books which makes this a fine work for students, instructors, and researchers in the field. "This book will certainly be, in the years to come, the basic reference to the subject." --- The American Mathematical Monthly (First Edition) "In this reviewer's opinion [the author] has more than succeeded in his aim. The problems at the end of each chapter are well-chosen; there are more than 650 of them. The book is especially sui... Filiform Lie algebras of order 3 Navarro, R. M. The aim of this work is to generalize a very important type of Lie algebras and superalgebras, i.e., filiform Lie (super)algebras, into the theory of Lie algebras of order F. Thus, the concept of filiform Lie algebras of order F is obtained. In particular, for F = 3 it has been proved that by using infinitesimal deformations of the associated model elementary Lie algebra it can be obtained families of filiform elementary lie algebras of order 3, analogously as that occurs into the theory of Lie algebras [M. Vergne, "Cohomologie des algèbres de Lie nilpotentes. Application à l'étude de la variété des algèbres de Lie nilpotentes,� Bull. Soc. Math. France 98, 81–116 (1970)]. Also we give the dimension, using an adaptation of the sl(2,C)-module Method, and a basis of such infinitesimal deformations in some generic cases The aim of this work is to generalize a very important type of Lie algebras and superalgebras, i.e., filiform Lie (super)algebras, into the theory of Lie algebras of order F. 
Thus, the concept of filiform Lie algebras of order F is obtained. In particular, for F = 3 it has been proved that by using infinitesimal deformations of the associated model elementary Lie algebra it can be obtained families of filiform elementary lie algebras of order 3, analogously as that occurs into the theory of Lie algebras [M. Vergne, "Cohomologie des algèbres de Lie nilpotentes. Application à l'étude de la variété des algèbres de Lie nilpotentes," Bull. Soc. Math. France 98, 81-116 (1970)]. Also we give the dimension, using an adaptation of the {sl}(2,{C})-module Method, and a basis of such infinitesimal deformations in some generic cases. Yoneda algebras of almost Koszul algebras Abstract. Let k be an algebraically closed field, A a finite dimensional connected. (p,q)-Koszul self-injective algebra with p, q ≥ 2. In this paper, we prove that the. Yoneda algebra of A is isomorphic to a twisted polynomial algebra A![t; β] in one inde- terminate t of degree q +1 in which A! is the quadratic dual of A, β is an ... The bubble algebra: structure of a two-colour Temperley-Lieb Algebra Grimm, Uwe; Martin, Paul P We define new diagram algebras providing a sequence of multiparameter generalizations of the Temperley-Lieb algebra, suitable for the modelling of dilute lattice systems of two-dimensional statistical mechanics. These algebras give a rigorous foundation to the various 'multi-colour algebras' of Grimm, Pearce and others. We determine the generic representation theory of the simplest of these algebras, and locate the nongeneric cases (at roots of unity of the corresponding parameters). We show by this example how the method used (Martin's general procedure for diagram algebras) may be applied to a wide variety of such algebras occurring in statistical mechanics. We demonstrate how these algebras may be used to solve the Yang-Baxter equations Quantitative Methods in Supply Chain Management Models and Algorithms Christou, Ioannis T Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, "solving problems� usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev... Open algebraic surfaces Miyanishi, Masayoshi Open algebraic surfaces are a synonym for algebraic surfaces that are not necessarily complete. An open algebraic surface is understood as a Zariski open set of a projective algebraic surface. There is a long history of research on projective algebraic surfaces, and there exists a beautiful Enriques-Kodaira classification of such surfaces. 
The research accumulated by Ramanujan, Abhyankar, Moh, and Nagata and others has established a classification theory of open algebraic surfaces comparable to the Enriques-Kodaira theory. This research provides powerful methods to study the geometry and topology of open algebraic surfaces. The theory of open algebraic surfaces is applicable not only to algebraic geometry, but also to other fields, such as commutative algebra, invariant theory, and singularities. This book contains a comprehensive account of the theory of open algebraic surfaces, as well as several applications, in particular to the study of affine surfaces. Prerequisite to understanding the text is a basic b... A linear time layout algorithm for business process models Gschwind, T.; Pinggera, J.; Zugal, S.; Reijers, H.A.; Weber, B. The layout of a business process model influences how easily it can beunderstood. Existing layout features in process modeling tools often rely on graph representations, but do not take the specific properties of business process models into account. In this paper, we propose an algorithm that is DiamondTorre Algorithm for High-Performance Wave Modeling Vadim Levchenko Full Text Available Effective algorithms of physical media numerical modeling problems' solution are discussed. The computation rate of such problems is limited by memory bandwidth if implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed, with regard to the specifics of the GPGPU's (general purpose graphical processing unit memory hierarchy and parallelism. The advantages of these algorithms are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than the one for the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, the calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five. Collaborative filtering recommendation model based on fuzzy clustering algorithm Yang, Ye; Zhang, Yunhua As one of the most widely used algorithms in recommender systems, collaborative filtering algorithm faces two serious problems, which are the sparsity of data and poor recommendation effect in big data environment. In traditional clustering analysis, the object is strictly divided into several classes and the boundary of this division is very clear. However, for most objects in real life, there is no strict definition of their forms and attributes of their class. Concerning the problems above, this paper proposes to improve the traditional collaborative filtering model through the hybrid optimization of implicit semantic algorithm and fuzzy clustering algorithm, meanwhile, cooperating with collaborative filtering algorithm. 
In this paper, the fuzzy clustering algorithm is introduced to perform fuzzy clustering on the project attribute information, which makes each project belong to different project categories with different membership degrees, increases the density of the data, effectively reduces the sparsity of the data, and solves the problem of low accuracy that results from inaccurate similarity calculation. Finally, this paper carries out an empirical analysis on the MovieLens dataset, and compares it with the traditional user-based collaborative filtering algorithm. The proposed algorithm has greatly improved the recommendation accuracy. Applicability of genetic algorithms to parameter estimation of economic models Marcel Ševela Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. In the paper we test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods and simultaneously search for the parameters of the genetic algorithm that lead to maximum effectiveness of the computation. The genetic algorithms connect deterministic iterative computation methods with stochastic methods. In the genetic algorithm approach each possible solution is represented by one individual, and the lives of all generations of individuals run under a few parameters of the genetic algorithm. Our simulations resulted in an optimal mutation rate of 15% of all bits in chromosomes and an optimal elitism rate of 20%. We could not set an optimal generation size, because it shows a positive correlation with the effectiveness of the genetic algorithm over the whole range under study, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, and less so to the generation size. The sensitivity to the elitism rate is not so strong. Hopf algebras and topological recursion Esteves, João N We consider a model for topological recursion based on the Hopf algebra of planar binary trees defined by Loday and Ronco (1998 Adv. Math. 139 293–309). We show that extending this Hopf algebra by identifying pairs of nearest neighbor leaves, and thus producing graphs with loops, we obtain the full recursion formula discovered by Eynard and Orantin (2007 Commun. Number Theory Phys. 1 347–452). (paper) Said-Houari, Belkacem This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t... Approximation Algorithms for Model-Based Diagnosis Feldman, A.B.
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation Basic Research on Adaptive Model Algorithmic Control. Control Conference. Referenced work: Richalet, J., A. Rault, J. L. Testud and J. Papon (1978), Model predictive heuristic control: applications to industrial processes, pp. 977-982. Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling A. Alexandre Trindade Full Text Available The large number of parameters in subset vector autoregressive models often leads one to procure fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm, and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm written in Fortran 90 are included in the appendix, and their use illustrated. Stochastic cluster algorithms for discrete Gaussian (SOS) models Evertz, H.G.; Hamburg Univ.; Hasenbusch, M.; Marcu, M.; Tel Aviv Univ.; Pinn, K.; Muenster Univ.; Solomon, S. We present new Monte Carlo cluster algorithms which eliminate critical slowing down in the simulation of solid-on-solid models. In this letter we focus on the two-dimensional discrete Gaussian model. The algorithms are based on reflecting the integer valued spin variables with respect to appropriately chosen reflection planes. The proper choice of the reflection plane turns out to be crucial in order to obtain a small dynamical exponent z. Actually, the successful versions of our algorithm are a mixture of two different procedures for choosing the reflection plane, one of them ergodic but slow, the other one non-ergodic and also slow when combined with a Metropolis algorithm. (orig.) Algorithms and procedures in the model based control of accelerators Bozoki, E. The overall design of a Model Based Control system was presented. The system consists of PLUG-IN MODULES, governed by a SUPERVISORY PROGRAM and communicating via SHARED DATA FILES. Models can be added or replaced without affecting the overall system. There can be more than one module (algorithm) to perform the same task. The user can choose the most appropriate algorithm or can compare the results using different algorithms. Calculations, algorithms, file read and write, etc. which are used in more than one module, will be in a subroutine library. This feature will simplify the maintenance of the system. A partial list of modules is presented, specifying the task they perform. 19 refs., 1 fig The Yoneda algebra of a K2 algebra need not be another K2 algebra Cassidy, T.; Phan, C.; Shelton, B. The Yoneda algebra of a Koszul algebra or a D-Koszul algebra is Koszul. K2 algebras are a natural generalization of Koszul algebras, and one would hope that the Yoneda algebra of a K2 algebra would be another K2 algebra.
We show that this is not necessarily the case by constructing a monomial K2 algebra for which the corresponding Yoneda algebra is not K2. An Improved Nested Sampling Algorithm for Model Selection and Assessment Zeng, X.; Ye, M.; Wu, J.; WANG, D. The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models. Each alternative conceptual model is attached with a weight which represents the possibility of this model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from the low likelihood area to the high likelihood area, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it could be feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, in order to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model. Novikov-Jordan algebras Dzhumadil'daev, A. S. Algebras with identity $(a\star b)\star (c\star d)-(a\star d)\star(c\star b)=(a,b,c)\star d-(a,d,c)\star b$ are studied. Novikov algebras under Jordan multiplication and Leibniz dual algebras satisfy this identity. If an algebra with such an identity has a unit, then it is associative and commutative. The classical exchange algebra of a Green-Schwarz sigma model on supercoset target space with Z4m grading Ke Sanmin; Yang Wenli; Shi Kangjie; Wang Chun; Jiang Kexia We investigate the classical exchange algebra of the monodromy matrix for a Green-Schwarz sigma model on supercoset target space with Z4m grading by using a first-order Hamiltonian formulation and by adding to the Lax connection terms proportional to constraints. This enables us to show that the conserved charges of the theory are in involution in the Poisson bracket sense. Our calculation is based on a general world-sheet metric. Taking the particular case of m = 1 (and a particular choice of supergroup), our results coincide with those of the Green-Schwarz superstring theory in the AdS5 x S5 background obtained by Magro [J. High Energy Phys. 0901, 021 (2009)]. An algebraic method to develop well-posed PML models Absorbing layers, perfectly matched layers, linearized Euler equations Rahmouni, Adib N.
In 1994, Berenger [Journal of Computational Physics 114 (1994) 185] proposed a new layer method: perfectly matched layer, PML, for electromagnetism. This new method is based on the truncation of the computational domain by a layer which absorbs waves regardless of their frequency and angle of incidence. Unfortunately, the technique proposed by Berenger (loc. cit.) leads to a system which has lost the most important properties of the original one: strong hyperbolicity and symmetry. We present in this paper an algebraic technique leading to well-known PML model [IEEE Transactions on Antennas and Propagation 44 (1996) 1630] for the linearized Euler equations, strongly well-posed, preserving the advantages of the initial method, and retaining symmetry. The technique proposed in this paper can be extended to various hyperbolic problems Study of phase transition of even and odd nuclei based on q-deforme SU(1,1) algebraic model Jafarizadeh, M. A.; Amiri, N.; Fouladi, N.; Ghapanvari, M.; Ranjbar, Z. The q-deformed Hamiltonian for the SO (6) ↔ U (5) transitional case in s, d interaction boson model (IBM) can be constructed by using affine SUq (1 , 1) Lie algebra in the both IBM-1 and 2 versions and IBFM. In this research paper, we have studied the energy spectra of 120-128Xe isotopes and 123-131Xe isotopes and B(E2) transition probabilities of 120-128Xe isotopes in the shape phase transition region between the spherical and gamma unstable deformed shapes of the theory of quantum deformation. The theoretical results agree with the experimental data fairly well. It is shown that the q-deformed SO (6) ↔ U (5) transitional dynamical symmetry remains after deformation. Algorithmic cryptanalysis Joux, Antoine Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic Introduction to relation algebras relation algebras Givant, Steven The first volume of a pair that charts relation algebras from novice to expert level, this text offers a comprehensive grounding for readers new to the topic. Upon completing this introduction, mathematics students may delve into areas of active research by progressing to the second volume, Advanced Topics in Relation Algebras; computer scientists, philosophers, and beyond will be equipped to apply these tools in their own field. The careful presentation establishes first the arithmetic of relation algebras, providing ample motivation and examples, then proceeds primarily on the basis of algebraic constructions: subalgebras, homomorphisms, quotient algebras, and direct products. Each chapter ends with a historical section and a substantial number of exercises. The only formal prerequisite is a background in abstract algebra and some mathematical maturity, though the reader will also benefit from familiarity with Boolean algebra and naïve set theory. The measured pace and outstanding clarity are particularly ... Algebraic structure of chiral anomalies Stora, R. 
I will describe first the algebraic aspects of chiral anomalies, exercising however due care about the topological delicacies. I will illustrate the structure and methods in the context of gauge anomalies and will eventually make contact with results obtained from index theory. I will go into two sorts of generalizations: on the one hand, generalizing the algebraic set up yields e.g. gravitational and mixed gauge anomalies, supersymmetric gauge anomalies, anomalies in supergravity theories; on the other hand most constructions applied to the cohomologies which characterize anomalies easily extend to higher cohomologies. Section II is devoted to a description of the general set up as it applies to gauge anomalies. Section III deals with a number of algebraic set ups which characterize more general types of anomalies: gravitational and mixed gauge anomalies, supersymmetric gauge anomalies, anomalies in supergravity theories. It also includes brief remarks on σ models and a reminder on the full BRST algebra of quantized gauge theories A Parallel Algebraic Multigrid Solver on Graphics Processing Units Haase, Gundolf The paper presents a multi-GPU implementation of the preconditioned conjugate gradient algorithm with an algebraic multigrid preconditioner (PCG-AMG) for an elliptic model problem on a 3D unstructured grid. An efficient parallel sparse matrix-vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a singe Nvidia Tesla C1060 GPU board delivers the performance of a sixteen node Infiniband cluster and a multi-GPU configuration with eight GPUs is about 100 times faster than a typical server CPU core. © 2010 Springer-Verlag. Wavelets and quantum algebras Ludu, A.; Greiner, M. A non-linear associative algebra is realized in terms of translation and dilation operators, and a wavelet structure generating algebra is obtained. We show that this algebra is a q-deformation of the Fourier series generating algebra, and reduces to this for certain value of the deformation parameter. This algebra is also homeomorphic with the q-deformed su q (2) algebra and some of its extensions. Through this algebraic approach new methods for obtaining the wavelets are introduced. (author). 20 refs Banach Synaptic Algebras Foulis, David J.; Pulmannov, Sylvia Using a representation theorem of Erik Alfsen, Frederic Schultz, and Erling Størmer for special JB-algebras, we prove that a synaptic algebra is norm complete (i.e., Banach) if and only if it is isomorphic to the self-adjoint part of a Rickart C∗-algebra. Also, we give conditions on a Banach synaptic algebra that are equivalent to the condition that it is isomorphic to the self-adjoint part of an AW∗-algebra. Moreover, we study some relationships between synaptic algebras and so-called generalized Hermitian algebras. Topological expansion of mixed correlations in the Hermitian 2-matrix model and x-y symmetry of the Fg algebraic invariants Eynard, B; Orantin, N We compute expectation values of mixed traces containing both matrices in a two matrix model, i.e. a generating function for counting bicolored discrete surfaces with non-uniform boundary conditions. 
As an application, we prove the x-y symmetry of Eynard and Orantin (2007 Invariants of algebraic curves and topological expansion Preprint math-ph/0702045) The spin-1/2 XXZ Heisenberg chain, the quantum algebra Uq[sl(2)], and duality transformations for minimal models Grimm, Uwe; Schuetz, Gunter The finite-size spectra of the spin-1/2 XXZ Heisenberg chain with toroidal boundary conditions and an even number of sites provide a projection mechanism yielding the spectra of models with central charge c q [sl(2)] quantum algebra transformations. (author) Quantum complexity of graph and algebraic problems Doern, Sebastian This thesis is organized as follows: In Chapter 2 we give some basic notations, definitions and facts from linear algebra, graph theory, group theory and quantum computation. In Chapter 3 we describe three important methods for the construction of quantum algorithms. We present the quantum search algorithm by Grover, the quantum amplitude amplification and the quantum walk search technique by Magniez et al. These three tools are the basis for the development of our new quantum algorithms for graph and algebra problems. In Chapter 4 we present two tools for proving quantum query lower bounds. We present the quantum adversary method by Ambainis and the polynomial method introduced by Beals et al. The quantum adversary tool is very useful to prove good lower bounds for many graph and algebra problems. The part of the thesis containing the original results is organized in two parts. In the first part we consider the graph problems. In Chapter 5 we give a short summary of known quantum graph algorithms. In Chapter 6 to 8 we study the complexity of our new algorithms for matching problems, graph traversal and independent set problems on quantum computers. In the second part of our thesis we present new quantum algorithms for algebraic problems. In Chapter 9 to 10 we consider group testing problems and prove quantum complexity bounds for important problems from linear algebra. (orig.) Co-clustering models, algorithms and applications Govaert, Gérard Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture Comparison of parameter estimation algorithms in hydrological modelling Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well......-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attractions. The global SCE procedure is, in general, more effective...... and provides a better coverage of the Pareto optimal solutions at a lower computational cost.... 
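The hydrological calibration entry above contrasts a local gradient-based search (the Levenberg-Marquardt algorithm in PEST) with the global SCE procedure, noting that local methods can be trapped in local regions of attraction on complex response surfaces. The snippet below reproduces that contrast on a synthetic multimodal objective; SciPy's differential_evolution is used only as a stand-in for a global searcher such as SCE-UA, and the objective function is invented for the illustration.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def bumpy_response(theta):
    """Synthetic 'calibration' surface: a quadratic bowl centred at 1.5
    with superimposed oscillations, so single-start local searches can
    stall in one of the many side minima."""
    theta = np.asarray(theta, dtype=float)
    return float(np.sum((theta - 1.5) ** 2)
                 + 2.0 * np.sum(1.0 - np.cos(5.0 * (theta - 1.5))))

bounds = [(-4.0, 4.0), (-4.0, 4.0)]

# local, gradient-based search from a single (poor) starting point
local = minimize(bumpy_response, x0=[-3.0, 3.0], method="L-BFGS-B", bounds=bounds)

# global, population-based search over the same box
glob = differential_evolution(bumpy_response, bounds, seed=1, tol=1e-8)

print("local :", local.x, local.fun)   # typically stalls in a nearby local minimum
print("global:", glob.x, glob.fun)     # typically reaches the global optimum at (1.5, 1.5)
```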
Applied economic model development algorithm for electronics company Mikhailov I. Full Text Available The purpose of this paper is to report about received experience in the field of creating the actual methods and algorithms that help to simplify development of applied decision support systems. It reports about an algorithm, which is a result of two years research and have more than one-year practical verification. In a case of testing electronic components, the time of the contract conclusion is crucial point to make the greatest managerial mistake. At this stage, it is difficult to achieve a realistic assessment of time-limit and of wage-fund for future work. The creation of estimating model is possible way to solve this problem. In the article is represented an algorithm for creation of those models. The algorithm is based on example of the analytical model development that serves for amount of work estimation. The paper lists the algorithm's stages and explains their meanings with participants' goals. The implementation of the algorithm have made possible twofold acceleration of these models development and fulfilment of management's requirements. The resulting models have made a significant economic effect. A new set of tasks was identified to be further theoretical study. Real algebraic geometry Bochnak, Jacek; Roy, Marie-Françoise This book is a systematic treatment of real algebraic geometry, a subject that has strong interrelation with other areas of mathematics: singularity theory, differential topology, quadratic forms, commutative algebra, model theory, complexity theory etc. The careful and clearly written account covers both basic concepts and up-to-date research topics. It may be used as text for a graduate course. The present edition is a substantially revised and expanded English version of the book "Géometrie algébrique réelle" originally published in French, in 1987, as Volume 12 of ERGEBNISSE. Since the publication of the French version the theory has made advances in several directions. Many of these are included in this English version. Thus the English book may be regarded as a completely new treatment of the subject. Economic Models and Algorithms for Distributed Systems Neumann, Dirk; Altmann, Jorn; Rana, Omer F Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems Robust Return Algorithm for Anisotropic Plasticity Models Tidemann, L.; Krenk, Steen Plasticity models can be defined by an energy potential, a plastic flow potential and a yield surface. The energy potential defines the relation between the observable elastic strains ϒe and the energy conjugate stresses Τe and between the non-observable internal strains i and the energy conjugat... A tractable algorithm for the wellfounded model Jonker, C.M.; Renardel de Lavalette, G.R. In the area of general logic programming (negated atoms allowed in the bodies of rules) and reason maintenance systems, the wellfounded model (first defined by Van Gelder, Ross and Schlipf in 1988) is generally considered to be the declarative semantics of the program. In this paper we present Rationality problem for algebraic tori Hoshi, Akinari The authors give the complete stably rational classification of algebraic tori of dimensions 4 and 5 over a field k. 
In particular, the stably rational classification of norm one tori whose Chevalley modules are of rank 4 and 5 is given. The authors show that there exist exactly 487 (resp. 7, resp. 216) stably rational (resp. not stably but retract rational, resp. not retract rational) algebraic tori of dimension 4, and there exist exactly 3051 (resp. 25, resp. 3003) stably rational (resp. not stably but retract rational, resp. not retract rational) algebraic tori of dimension 5. The authors make a procedure to compute a flabby resolution of a G-lattice effectively by using the computer algebra system GAP. Some algorithms may determine whether the flabby class of a G-lattice is invertible (resp. zero) or not. Using the algorithms, the suthors determine all the flabby and coflabby G-lattices of rank up to 6 and verify that they are stably permutation. The authors also show that the Krull-Schmidt theorem for G-... A new subalgebra of the Lie algebra A2 and two types of integrable Hamiltonian hierarchies, expanding integrable models Yan Qingyou; Zhang Yufeng; Wei Xiaopeng A new subalgebra G of the Lie algebra A 2 is first constructed. Then two loop algebra G-bar 1 , G-bar 2 are presented in terms of different definitions of gradations. Using G-bar 1 , G-bar 2 designs two isospectral problems, respectively. Again utilizing Tu-pattern obtains two types of various integrable Hamiltonian hierarchies of evolution equations. As reduction cases, the well-known Schroedinger equation and MKdV equation are obtained. At last, we turn the subalgebras G-bar 1 , G-bar 2 of the loop algebra A-bar 2 into equivalent subalgebras of the loop algebra A-bar 1 by making a suitable linear transformation so that the two types of 5-dimensional loop algebras are constructed. Two kinds of integrable couplings of the obtained hierarchies are showed. Specially, the integrable couplings of Schroedinger equation and MKdV equation are obtained, respectively Quantum cluster algebras and quantum nilpotent algebras Goodearl, Kenneth R.; Yakimov, Milen T. A major direction in the theory of cluster algebras is to construct (quantum) cluster algebra structures on the (quantized) coordinate rings of various families of varieties arising in Lie theory. We prove that all algebras in a very large axiomatically defined class of noncommutative algebras possess canonical quantum cluster algebra structures. Furthermore, they coincide with the corresponding upper quantum cluster algebras. We also establish analogs of these results for a large class of Poisson nilpotent algebras. Many important families of coordinate rings are subsumed in the class we are covering, which leads to a broad range of applications of the general results to the above-mentioned types of problems. As a consequence, we prove the Berenstein–Zelevinsky conjecture [Berenstein A, Zelevinsky A (2005) Adv Math 195:405–455] for the quantized coordinate rings of double Bruhat cells and construct quantum cluster algebra structures on all quantum unipotent groups, extending the theorem of Geiß et al. [Geiß C, et al. (2013) Selecta Math 19:337–397] for the case of symmetric Kac–Moody groups. Moreover, we prove that the upper cluster algebras of Berenstein et al. [Berenstein A, et al. (2005) Duke Math J 126:1–52] associated with double Bruhat cells coincide with the corresponding cluster algebras. PMID:24982197 Differential Evolution algorithm applied to FSW model calibration Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J. 
Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel. Numerical linear algebra theory and applications Beilina, Larisa; Karchevskii, Mikhail This book combines a solid theoretical background in linear algebra with practical algorithms for numerical solution of linear algebra problems. Developed from a number of courses taught repeatedly by the authors, the material covers topics like matrix algebra, theory for linear systems of equations, spectral theory, vector and matrix norms combined with main direct and iterative numerical methods, least squares problems, and eigen problems. Numerical algorithms illustrated by computer programs written in MATLAB® are also provided as supplementary material on SpringerLink to give the reader a better understanding of professional numerical software for the solution of real-life problems. Perfect for a one- or two-semester course on numerical linear algebra, matrix computation, and large sparse matrices, this text will interest students at the advanced undergraduate or graduate level. Commutative algebra constructive methods finite projective modules Lombardi, Henri Translated from the popular French edition, this book offers a detailed introduction to various basic concepts, methods, principles, and results of commutative algebra. It takes a constructive viewpoint in commutative algebra and studies algorithmic approaches alongside several abstract classical theories. Indeed, it revisits these traditional topics with a new and simplifying manner, making the subject both accessible and innovative. The algorithmic aspects of such naturally abstract topics as Galois theory, Dedekind rings, Prüfer rings, finitely generated projective modules, dimension theory of commutative rings, and others in the current treatise, are all analysed in the spirit of the great developers of constructive algebra in the nineteenth century. This updated and revised edition contains over 350 well-arranged exercises, together with their helpful hints for solution. A basic knowledge of linear algebra, group theory, elementary number theory as well as the fundamentals of ring and module theory is r... Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using Coset realization of unifying W-algebras Blumenhagen, R.; Huebel, R. We construct several quantum coset W-algebras, e.g. 
sl(2,R)/U(1) and sl(2,R)+sl(2,R)/sl(2,R), and argue that they are finitely nonfreely generated. Furthermore, we discuss in detail their role as unifying W-algebras of Casimir W-algebras. We show that it is possible to give coset realizations of various types of unifying W-algebras, e.g. the diagonal cosets based on the symplectic Lie algebras sp(2n) realize the unifying W-algebras which have previously been introduced as 'WD -n '. In addition, minimal models of WD -n are studied. The coset realizations provide a generalization of level-rank-duality of dual coset pairs. As further examples of finitely nonfreely generated quantum W-algebras we discuss orbifolding of W-algebras which on the quantum level has different properties than in the classical case. We demonstrate in some examples that the classical limit according to Bowcock and Watts of these nonfreely finitely generated quantum W-algebras probably yields infinitely nonfreely generated classical W-algebras. (orig.) The algebras of large N matrix mechanics Halpern, M.B.; Schwartz, C. Extending early work, we formulate the large N matrix mechanics of general bosonic, fermionic and supersymmetric matrix models, including Matrix theory: The Hamiltonian framework of large N matrix mechanics provides a natural setting in which to study the algebras of the large N limit, including (reduced) Lie algebras, (reduced) supersymmetry algebras and free algebras. We find in particular a broad array of new free algebras which we call symmetric Cuntz algebras, interacting symmetric Cuntz algebras, symmetric Bose/Fermi/Cuntz algebras and symmetric Cuntz superalgebras, and we discuss the role of these algebras in solving the large N theory. Most important, the interacting Cuntz algebras are associated to a set of new (hidden!) local quantities which are generically conserved only at large N. A number of other new large N phenomena are also observed, including the intrinsic nonlocality of the (reduced) trace class operators of the theory and a closely related large N field identification phenomenon which is associated to another set (this time nonlocal) of new conserved quantities at large N. A Differential-Algebraic Model for the Once-Through Steam Generator of MHTGR-Based Multimodular Nuclear Plants Zhe Dong Full Text Available Small modular reactors (SMRs are those fission reactors whose electrical output power is no more than 300 MWe. SMRs usually have the inherent safety feature that can be applicable to power plants of any desired power rating by applying the multimodular operation scheme. Due to its strong inherent safety feature, the modular high temperature gas-cooled reactor (MHTGR, which uses helium as coolant and graphite as moderator and structural material, is a typical SMR for building the next generation of nuclear plants (NGNPs. The once-through steam generator (OTSG is the basis of realizing the multimodular scheme, and modeling of the OTSG is meaningful to study the dynamic behavior of the multimodular plants and to design the operation and control strategy. In this paper, based upon the conservation laws of mass, energy, and momentum, a new differential-algebraic model for the OTSGs of the MHTGR-based multimodular nuclear plants is given. This newly-built model can describe the dynamic behavior of the OTSG in both the cases of providing superheated steam and generating saturated steam. Numerical simulation results show the feasibility and satisfactory performance of this model. 
Moreover, this model has been applied to develop the real-time simulation software for the operation and regulation features of the world first underconstructed MHTGR-based commercial nuclear plant—HTR-PM. Cluster algebras in mathematical physics Francesco, Philippe Di; Gekhtman, Michael; Kuniba, Atsuo; Yamazaki, Masahito identities in conformal field theory and so forth. It is remarkable that the key ingredients in such a variety of theories and models are captured and described universally in the common language of cluster algebras. This special issue provides a bird's-eye view of the known and latest results in various topics in mathematical physics where cluster algebras have played an essential role. The contributed articles are themselves an eloquent illustration of the breadth and depth of the subject of cluster algebras. We are confident that the issue will stimulate both newcomers and experts, since the applications to physics still seem to be growing Methodology, models and algorithms in thermographic diagnostics Živ�ák, Jozef; Madarász, Ladislav; Rudas, Imre J This book presents the methodology and techniques of  thermographic applications with focus primarily on medical thermography implemented for parametrizing the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half includes tools of intelligent engineering applied for the solving of selected applications and projects. Thermographic diagnostics was applied to problematics of paraplegia and tetraplegia and carpal tunnel syndrome (CTS). The results of the research activities were created with the cooperation of the four projects within the Ministry of Education, Science, Research and Sport of the Slovak Republic entitled Digital control of complex systems with two degrees of freedom, Progressive methods of education in the area of control and modeling of complex object oriented systems on aircraft turbocompressor engines, Center for research of control of te... Classification and identification of Lie algebras Snobl, Libor The purpose of this book is to serve as a tool for researchers and practitioners who apply Lie algebras and Lie groups to solve problems arising in science and engineering. The authors address the problem of expressing a Lie algebra obtained in some arbitrary basis in a more suitable basis in which all essential features of the Lie algebra are directly visible. This includes algorithms accomplishing decomposition into a direct sum, identification of the radical and the Levi decomposition, and the computation of the nilradical and of the Casimir invariants. Examples are given for each algorithm. For low-dimensional Lie algebras this makes it possible to identify the given Lie algebra completely. The authors provide a representative list of all Lie algebras of dimension less or equal to 6 together with their important properties, including their Casimir invariants. The list is ordered in a way to make identification easy, using only basis independent properties of the Lie algebras. They also describe certain cl... A division algebra classification of generalized supersymmetries Toppan, Francesco Generalized supersymmetries admitting bosonic tensor central charges are classified in accordance with their division algebra properties. 
Division algebra consistent constraints lead (in the complex and quaternionic cases) to the classes of hermitian and holomorphic generalized supersymmetries. Applications to the analytic continuation of the M-algebra to the Euclidean and the systematic investigation of certain classes of models in generic space-times are briefly mentioned. (author) Solutions of the Yang-Baxter equation: Descendants of the six-vertex model from the Drinfeld doubles of dihedral group algebras Finch, P.E.; Dancer, K.A.; Isaac, P.S.; Links, J. The representation theory of the Drinfeld doubles of dihedral groups is used to solve the Yang-Baxter equation. Use of the two-dimensional representations recovers the six-vertex model solution. Solutions in arbitrary dimensions, which are viewed as descendants of the six-vertex model case, are then obtained using tensor product graph methods which were originally formulated for quantum algebras. Connections with the Fateev-Zamolodchikov model are discussed. Modeling Algorithms in SystemC and ACL2 John W. O'Leary Full Text Available We describe the formal language MASC, based on a subset of SystemC and intended for modeling algorithms to be implemented in hardware. By means of a special-purpose parser, an algorithm coded in SystemC is converted to a MASC model for the purpose of documentation, which in turn is translated to ACL2 for formal verification. The parser also generates a SystemC variant that is suitable as input to a high-level synthesis tool. As an illustration of this methodology, we describe a proof of correctness of a simple 32-bit radix-4 multiplier. Algorithmic fault tree construction by component-based system modeling Majdara, Aref; Wakabayashi, Toshio Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than the conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure an a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, if an inclusive component database is developed. (author) Algorithm of Dynamic Model Structural Identification of the Multivariable Plant Л.М. Блохін Full Text Available  The new algorithm of dynamic model structural identification of the multivariable stabilized plant with observable and unobservable disturbances in the regular operating modes is offered in this paper. With the help of the offered algorithm it is possible to define the "perturbed� models of dynamics not only of the plant, but also the dynamics characteristics of observable and unobservable casual disturbances taking into account the absence of correlation between themselves and control inputs with the unobservable perturbations. Numerical linear algebra with applications using Matlab Ford, William Designed for those who want to gain a practical knowledge of modern computational techniques for the numerical solution of linear algebra problems, Numerical Linear Algebra with Applications contains all the material necessary for a first year graduate or advanced undergraduate course on numerical linear algebra with numerous applications to engineering and science. 
With a unified presentation of computation, basic algorithm analysis, and numerical methods to compute solutions, this book is ideal for solving real-world problems. It provides necessary mathematical background information for Circle Maps and C*-algebras Schmidt, Thomas Lundsgaard; Thomsen, Klaus We consider a construction of $C^*$-algebras from continuous piecewise monotone maps on the circle which generalizes the crossed product construction for homeomorphisms and more generally the construction of Renault, Deaconu and Anantharaman-Delaroche for local homeomorphisms. Assuming that the map...... is surjective and not locally injective we give necessary and sufficient conditions for the simplicity of the $C^*$-algebra and show that it is then a Kirchberg algebra. We provide tools for the calculation of the K-theory groups and turn them into an algorithmic method for Markov maps.... Introduction to genetic algorithms as a modeling tool Wildberger, A.M.; Hickok, K.A. Genetic algorithms are search and classification techniques modeled on natural adaptive systems. This is an introduction to their use as a modeling tool with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in genetic algorithms and to recognize those which might impact on electric power engineering. Beginning with a discussion of genetic algorithms and their origin as a model of biological adaptation, their advantages and disadvantages are described in comparison with other modeling tools such as simulation and neural networks in order to provide guidance in selecting appropriate applications. In particular, their use is described for improving expert systems from actual data and they are suggested as an aid in building mathematical models. Using the Thermal Performance Advisor as an example, it is suggested how genetic algorithms might be used to make a conventional expert system and mathematical model of a power plant adapt automatically to changes in the plant's characteristics Gauss Elimination: Workhorse of Linear Algebra. linear algebra computation for solving systems, computing determinants and determining the rank of matrix. All of these are discussed in varying contexts. These include different arithmetic or algebraic setting such as integer arithmetic or polynomial rings as well as conventional real (floating-point) arithmetic. These have effects on both accuracy and complexity analyses of the algorithm. These, too, are covered here. The impact of modern parallel computer architecture on GE is also algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *). algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ... ABC Algorithm based Fuzzy Modeling of Optical Glucose Detection SARACOGLU, O. G. Full Text Available This paper presents a modeling approach based on the use of fuzzy reasoning mechanism to define a measured data set obtained from an optical sensing circuit. For this purpose, we implemented a simple but effective an in vitro optical sensor to measure glucose content of an aqueous solution. 
Measured data contain analog voltages representing the absorbance values of three wavelengths measured from an RGB LED in different glucose concentrations. To achieve a desired model performance, the parameters of the fuzzy models are optimized by using the artificial bee colony (ABC algorithm. The modeling results presented in this paper indicate that the fuzzy model optimized by the algorithm provide a successful modeling performance having the minimum mean squared error (MSE of 0.0013 which are in clearly good agreement with the measurements. Leavitt path algebras Abrams, Gene; Siles Molina, Mercedes This book offers a comprehensive introduction by three of the leading experts in the field, collecting fundamental results and open problems in a single volume. Since Leavitt path algebras were first defined in 2005, interest in these algebras has grown substantially, with ring theorists as well as researchers working in graph C*-algebras, group theory and symbolic dynamics attracted to the topic. Providing a historical perspective on the subject, the authors review existing arguments, establish new results, and outline the major themes and ring-theoretic concepts, such as the ideal structure, Z-grading and the close link between Leavitt path algebras and graph C*-algebras. The book also presents key lines of current research, including the Algebraic Kirchberg Phillips Question, various additional classification questions, and connections to noncommutative algebraic geometry. Leavitt Path Algebras will appeal to graduate students and researchers working in the field and related areas, such as C*-algebras and... Quantum algebras in nuclear structure Bonatsos, D.; Daskaloyannis, C. Quantum algebras is a mathematical tool which provides us with a class of symmetries wider than that of Lie algebras, which are contained in the former as a special case. After a self-contained introduction through the necessary mathematical tools (q-numbers, q-analysis, q-oscillators, q-algebras), the su q (2) rotator model and its extensions, the construction of deformed exactly soluble models (Interacting Boson Model, Moszkowski model), the use of deformed bosons in the description of pairing correlations, and the symmetries of the anisotropic quantum harmonic oscillator with rational ratios of frequencies, which underline the structure of superdeformed and hyperdeformed nuclei are discussed in some details. A brief description of similar applications to molecular structure and an outlook are also given. (author) 2 Tabs., 324 Refs Algebraic theory of numbers Samuel, Pierre Algebraic number theory introduces students not only to new algebraic notions but also to related concepts: groups, rings, fields, ideals, quotient rings and quotient fields, homomorphisms and isomorphisms, modules, and vector spaces. Author Pierre Samuel notes that students benefit from their studies of algebraic number theory by encountering many concepts fundamental to other branches of mathematics - algebraic geometry, in particular.This book assumes a knowledge of basic algebra but supplements its teachings with brief, clear explanations of integrality, algebraic extensions of fields, Gal Lukasiewicz-Moisil algebras Boicescu, V; Georgescu, G; Rudeanu, S The Lukasiewicz-Moisil algebras were created by Moisil as an algebraic counterpart for the many-valued logics of Lukasiewicz. 
The theory of LM-algebras has developed to a considerable extent both as an algebraic theory of intrinsic interest and in view of its applications to logic and switching theory.This book gives an overview of the theory, comprising both classical results and recent contributions, including those of the authors. N-valued and &THgr;-valued algebras are presented, as well as &THgr;-algebras with negation.Mathematicians interested in lattice theory or symbolic logic, and computer scientists, will find in this monograph stimulating material for further research. Introduction to quantum algebras Kibler, M.R. The concept of a quantum algebra is made easy through the investigation of the prototype algebras u qp (2), su q (2) and u qp (1,1). The latter quantum algebras are introduced as deformations of the corresponding Lie algebras; this is achieved in a simple way by means of qp-bosons. The Hopf algebraic structure of u qp (2) is also discussed. The basic ingredients for the representation theory of u qp (2) are given. Finally, in connection with the quantum algebra u qp (2), the qp-analogues of the harmonic oscillator are discussed and of the (spherical and hyperbolical) angular momenta. (author) 50 refs An Interactive Personalized Recommendation System Using the Hybrid Algorithm Model Yan Guo Full Text Available With the rapid development of e-commerce, the contradiction between the disorder of business information and customer demand is increasingly prominent. This study aims to make e-commerce shopping more convenient, and avoid information overload, by an interactive personalized recommendation system using the hybrid algorithm model. The proposed model first uses various recommendation algorithms to get a list of original recommendation results. Combined with the customer's feedback in an interactive manner, it then establishes the weights of corresponding recommendation algorithms. Finally, the synthetic formula of evidence theory is used to fuse the original results to obtain the final recommendation products. The recommendation performance of the proposed method is compared with that of traditional methods. The results of the experimental study through a Taobao online dress shop clearly show that the proposed method increases the efficiency of data mining in the consumer coverage, the consumer discovery accuracy and the recommendation recall. The hybrid recommendation algorithm complements the advantages of the existing recommendation algorithms in data mining. The interactive assigned-weight method meets consumer demand better and solves the problem of information overload. Meanwhile, our study offers important implications for e-commerce platform providers regarding the design of product recommendation systems. Homotopy Theory of C*-Algebras Ostvaer, Paul Arne Homotopy theory and C* algebras are central topics in contemporary mathematics. This book introduces a modern homotopy theory for C*-algebras. One basic idea of the setup is to merge C*-algebras and spaces studied in algebraic topology into one category comprising C*-spaces. These objects are suitable fodder for standard homotopy theoretic moves, leading to unstable and stable model structures. With the foundations in place one is led to natural definitions of invariants for C*-spaces such as homology and cohomology theories, K-theory and zeta-functions. The text is largely self-contained. It will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. 
The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ... Prediction of turbulent heat transfer with surface blowing using a non-linear algebraic heat flux model Bataille, F.; Younis, B.A.; Bellettre, J.; Lallemand, A. The paper reports on the prediction of the effects of blowing on the evolution of the thermal and velocity fields in a flat-plate turbulent boundary layer developing over a porous surface. Closure of the time-averaged equations governing the transport of momentum and thermal energy is achieved using a complete Reynolds-stress transport model for the turbulent stresses and a non-linear, algebraic and explicit model for the turbulent heat fluxes. The latter model accounts explicitly for the dependence of the turbulent heat fluxes on the gradients of mean velocity. Results are reported for the case of a heated boundary layer which is first developed into equilibrium over a smooth impervious wall before encountering a porous section through which cooler fluid is continuously injected. Comparisons are made with LDA measurements for an injection rate of 1%. The reduction of the wall shear stress with increase in injection rate is obtained in the calculations, and the computed rates of heat transfer between the hot flow and the wall are found to agree well with the published data Efficient simulation of gas-liquid pipe flows using a generalized population balance equation coupled with the algebraic slip model Icardi, Matteo; Ronco, Gianni; Marchisio, Daniele Luca; Labois, Mathieu The inhomogeneous generalized population balance equation, which is discretized with the direct quadrature method of moment (DQMOM), is solved to predict the bubble size distribution (BSD) in a vertical pipe flow. The proposed model is compared with a more classical approach where bubbles are characterized with a constant mean size. The turbulent two-phase flow field, which is modeled using a Reynolds-Averaged Navier-Stokes equation approach, is assumed to be in local equilibrium, thus the relative gas and liquid (slip) velocities can be calculated with the algebraic slip model, thereby accounting for the drag, lift, and lubrication forces. The complex relationship between the bubble size distribution and the resulting forces is described accurately by the DQMOM. Each quadrature node and weight represents a class of bubbles with characteristic size and number density, which change dynamically in time and space to preserve the first moments of the BSD. The predictions obtained are validated against previously published experimental data, thereby demonstrating the advantages of this approach for large-scale systems as well as suggesting future extensions to long piping systems and more complex geometries. © 2014 Elsevier Inc. Icardi, Matteo Epidemic Processes on Complex Networks : Modelling, Simulation and Algorithms Van de Bovenkamp, R. Local interactions on a graph will lead to global dynamic behaviour. In this thesis we focus on two types of dynamic processes on graphs: the Susceptible-Infected-Susceptilbe (SIS) virus spreading model, and gossip style epidemic algorithms. The largest part of this thesis is devoted to the SIS Worm Algorithm for CP(N-1) Model Rindlisbacher, Tobias The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. 
Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N-1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP(N-1) model has been tested so far, which also works at finite density. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP(N-1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N-1) l... Optimisation of Hidden Markov Model using Baum–Welch algorithm Home; Journals; Journal of Earth System Science; Volume 126; Issue 1. Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi Tankeshwar Kumar Sunita Srivastava Divya Sachdeva. Volume 126 Issue 1 February 2017 ... Heterogenous Agents Model with the Worst Out Algorithm Vácha, Lukáš; Vošvrda, Miloslav -, �. 8 (2006), s. 3-19 ISSN 1801-5999 Institutional research plan: CEZ:AV0Z10750506 Keywords : efficient market hypothesis * fractal market hypothesis * agents' investment horizons * agents' trading strategies * technical trading rules * heterogeneous agent model with stochastic memory * Worst out algorithm Subject RIV: AH - Economics INPUT-OUTPUT STRUCTURE OF LINEAR-DIFFERENTIAL ALGEBRAIC SYSTEMS KUIJPER, M; SCHUMACHER, JM Systems of linear differential and algebraic equations occur in various ways, for instance, as a result of automated modeling procedures and in problems involving algebraic constraints, such as zero dynamics and exact model matching. Differential/algebraic systems may represent an input-output Boundary Lax pairs from non-ultra-local Poisson algebras Avan, Jean; Doikou, Anastasia We consider non-ultra-local linear Poisson algebras on a continuous line. Suitable combinations of representations of these algebras yield representations of novel generalized linear Poisson algebras or 'boundary' extensions. They are parametrized by a boundary scalar matrix and depend, in addition, on the choice of an antiautomorphism. The new algebras are the classical-linear counterparts of the known quadratic quantum boundary algebras. For any choice of parameters, the non-ultra-local contribution of the original Poisson algebra disappears. We also systematically construct the associated classical Lax pair. The classical boundary principal chiral model is examined as a physical example. Application of genetic algorithm in radio ecological models parameter determination Pantelic, G. [Institute of Occupatioanl Health and Radiological Protection ' Dr Dragomir Karajovic' , Belgrade (Serbia) The method of genetic algorithms was used to determine the biological half-life of 137 Cs in cow milk after the accident in Chernobyl. Methodologically genetic algorithms are based on the fact that natural processes tend to optimize themselves and therefore this method should be more efficient in providing optimal solutions in the modeling of radio ecological and environmental events. 
The calculated biological half-life of 137 Cs in milk is (32 ± 3) days and the transfer coefficient from grass to milk is (0.019 ± 0.005). (authors)
May 2015, 35(5): 2227-2272. doi: 10.3934/dcds.2015.35.2227
Minimal period problems for brake orbits of nonlinear autonomous reversible semipositive Hamiltonian systems
Duanzhi Zhang, School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071
Received June 2012 Revised January 2013 Published December 2014
In this paper, for any positive integer $n$, we study the Maslov-type index theory of $i_{L_0}$, $i_{L_1}$ and $i_{\sqrt{-1}}^{L_0}$ with $L_0 = \{0\}\times \mathbf{R}^n\subset \mathbf{R}^{2n}$ and $L_1=\mathbf{R}^n\times \{0\} \subset \mathbf{R}^{2n}$. As applications we study the minimal period problems for brake orbits of nonlinear autonomous reversible Hamiltonian systems. For first order nonlinear autonomous reversible Hamiltonian systems in $\mathbf{R}^{2n}$ which are semipositive and superquadratic at zero and infinity, we prove that for any $T>0$ the considered Hamiltonian system possesses a nonconstant $T$-periodic brake orbit $x_T$ with minimal period no less than $\frac{T}{2n+2}$. Furthermore, if $\int_0^T H''_{22}(x_T(t))dt$ is positive definite, then the minimal period of $x_T$ belongs to $\{T,\;\frac{T}{2}\}$. Moreover, if the Hamiltonian system is even, we prove that for any $T>0$ the considered even semipositive Hamiltonian system possesses a nonconstant symmetric brake orbit with minimal period belonging to $\{T,\;\frac{T}{3}\}$.
Keywords: Symmetric, semipositive and reversible, Maslov-type index, minimal period, Hamiltonian systems, brake orbit.
Mathematics Subject Classification: 58E05, 70H05, 34C2.
Citation: Duanzhi Zhang. Minimal period problems for brake orbits of nonlinear autonomous reversible semipositive Hamiltonian systems. Discrete & Continuous Dynamical Systems - A, 2015, 35 (5) : 2227-2272. doi: 10.3934/dcds.2015.35.2227
A trivial proof of Bertrand's postulate Write the integers from any $n$ through $0$ descending in a column, where $n \geq 2$, and begin a second column with the value $2n$. For each entry after that, if the two numbers on that line share a factor, copy the the entry unchanged, but if they're coprime, subtract $1$. We'll refer to the first column as $a$, where each value is the same as its index, and the second column as $b$, where the $a$th row's entry is $b_a$. The $0$-index refers to the bottom row. Equivalently, $$ b_a = \begin{cases} 2n & \textrm{if } a = n \\ b_{a+1} - 1 & \textrm{if }\gcd(a+1,b_{a+1})=1 \\ b_{a+1} & \textrm{otherwise}\end{cases}$$ Consider the following example where $n=8$. I've also included a column showing $\gcd(a,b_a)$, and colored those $b_a$ that share a factor with $a$ and thus don't change. $$ \begin{array}{|c|c|c|} \hline a & b_a & (a,b_a) \\ \hline 8 & \color{red}{16} & 8 \\ \hline 7 & 16 & 1 \\ \hline 6 & \color{red}{15} & 3 \\ \hline 5 & \color{red}{15} & 5 \\ \hline 4 & 15 & 1 \\ \hline 3 & 14 & 1 \\ \hline 2 & 13 & 1 \\ \hline 1 & 12 & 1 \\ \hline 0 & 11 & 11 \\ \hline \end{array} $$ Assertion: $b_0$ will always be prime. Why? Well, suppose not, that some smaller prime $p<b_0$ divides it. In particular, let $p$ be the smallest prime factor that divides $b_0$. Since $b_0 \neq b_n$, and $p\geq 2$, we have $p<n$, so if a prime does divide $b_0$, it must be in our column of $a$ values. $p \mid b_0 \implies p \mid b_p$. This is because $p$ can only divide $b_0$ if it has already been established by dividing $b_{kp}$ for some $k\geq 1$. A factor cannot have its first appearance at $b_0$ unless it is prime. That said, $p \mid b_p \implies b_p = b_{p-1}$. However, that means $b_{p-1} \not\equiv b_0 \pmod {p}$, regardless of which $b_a$ decrement or not; there are one too few to make it back to our asserted divisibility, and we're left with $b_0 \not\equiv 0 \pmod {p}$, i.e. $p \nmid b_0$, a contradiction. (Recall that $b_1 - b_0 = 1$ always, preventing a constant $0 \pmod p$ all the way down.) $$ \begin{array}{|l|l|} \hline n & 2n \\ \hline \dots & \dots \\ \hline p & b_p \equiv 0 \pmod{p} \\ \hline p-1 & b_{p-1} \equiv 0 \pmod {p} \\ \hline p-2 & b_{p-2} \equiv \{0\text{ or } p-1\} \pmod{p} \\ \hline p-3 & b_{p-3} \equiv \{0\text{ or } p-1 \text{ or }p-2\} \pmod{p} \\ \hline \dots & \dots \\ \hline 0 & b_0 \not\equiv 0 \pmod{p} \\ \hline \end{array} $$ Conclusion: As we've established there can be no smallest prime factor dividing $b_0$, it must be prime. Now that we have prime $b_0$, we can apply the same process arbitrarily with any $n$, and immediately we've shown there exists a prime in any $(n,2n)$ interval. It's pretty clear I got the logic wrong for an important chunk of the proof, and I'm working on a clever way to solve that, but in the meantime, I have an idea for a less elegant fix. If you look at the actual mechanism of what's going on, it's basically this. The subtracting one only when coprime essentially maintains a number (the difference $b_a - a$ for any $a$) which it's trying to rule out as a prime. This starts off as $n$, which is automatically bumped up to $n+1$ on the next line since $n \mid 2n$. Thereafter, whenever any factor in $a$ is shared by a factor in $b_a -a$, it's marking $b_a-a$ as composite and moving on. 
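To make the construction easy to experiment with, here is a minimal sketch (my own, not part of the original post) that builds the $b$ column from the recurrence above and returns $b_0$; the function name is just for illustration.

```python
from math import gcd

def b0(n):
    """Run the table from a = n down to a = 0, starting with b_n = 2n."""
    b = 2 * n                      # b_n = 2n
    for a in range(n - 1, -1, -1):
        if gcd(a + 1, b) == 1:     # previous row's pair (a+1, b_{a+1}) coprime
            b -= 1                 # b_a = b_{a+1} - 1
        # otherwise b_a = b_{a+1}, i.e. the entry is copied down unchanged
    return b

print(b0(8))    # 11, matching the n = 8 table above
print(b0(113))  # 127, matching the n = 113 chart discussed below
```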
You can see that in this partial chart for $n=113$, where the right hand column is just the difference of the first two: $$ \begin{array}{|l|l|l|} \hline 113 & 226 = 2 \cdot 113 & 113 \\ \hline 112 = 2^4\cdot 17 & 226 = 2 \cdot 113 & 114=2\cdot 3 \cdot 19 \\ \hline 111 = 3\cdot 37 & 226 = 2 \cdot 113 & 115 = 5 \cdot 23 \\ \hline 110 = 2\cdot 5\cdot 11 & 225 = 3^2 \cdot 5^2 & 115 = 5 \cdot 23 \\ \hline 109 & 225 = 3^2 \cdot 5^2 & 116 = 2^2 \cdot 29 \\ \hline 108 = 2^2 \cdot 3^3 & 224=2^5 \cdot 7 & 116 = 2^2 \cdot 29 \\ \hline 107 & 224=2^5 \cdot 7 & 117 = 3^2 \cdot 13 \\ \hline 106 = 2 \cdot 53 & 223 & 117 = 3^2 \cdot 13 \\ \hline 105 = 3 \cdot 5 \cdot 7 & 222 = 2\cdot 3 \cdot 37 & 117 = 3^2 \cdot 13 \\ \hline 104 = 2^3 \cdot 13 & 222 = 2\cdot 3 \cdot 37 & 118 = 2\cdot 59 \\ \hline 103 & 222 = 2\cdot 3 \cdot 37 & 119 = 7 \cdot 17 \\ \hline 102 = 2 \cdot 3 \cdot 17 & 221=13 \cdot 17 & 119 = 7 \cdot 17 \\ \hline 101 & 221=13 \cdot 17 & 120 = 2^3 \cdot 3 \cdot 5 \\ \hline 100 = 2^2 \cdot 5^2 & 220 = 2^2 \cdot 5 \cdot 11 & 120 = 2^3 \cdot 3 \cdot 5 \\ \hline \dots & \dots & \dots \\ \hline 88 = 2^3 \cdot 11 & 214 = 2 \cdot 107 & 126 = 2 \cdot 3^2 \cdot 7 \\ \hline 87 = 3 \cdot 29 & 214 = 2 \cdot 107 & 127 \\ \hline 86 = 2 \cdot 43 & 213 = 3 \cdot 71 & 127 \\ \hline \dots & \dots & \dots \\ \hline \end{array} $$ It takes $14$ non-decrements, which is exactly the amount needed to get from $113$ through the big gap there up to the next prime $127$, and thereafter there are no more shared factors and it stays $127$ the whole way down, and it does indeed always work like this. So the size of the prime gap is one determiner of how long that "trial division" section lasts, and the other is the size of the factors themselves. As I said, any factor present will do, and I can't discern much rhyme or reason to it, so that leaves us with making a worst-case upper bound estimate of the sum of the least prime factors comprising every number in the prime gap. In this example, I think that adds up to $60$ or so, but it's one of if not the worst case around. To make this rigorous, we can use the current best upper bound established on prime gap size for sufficiently large $x$ of $x^{0.525}$. If we consider some large $x$ as having a gap of that size, we can immediately mark half of those entries as being even, which means in the worst case, it would require two $a$-decrements to move past each of those entries within the gap. So that half of the gap is just $$x^{0.525} / 2 \times 2 = x^{0.525},$$ and leaves us half left to deal with. Here, we could undoubtedly continue to whittle down our estimate by taking out other small factors, but I'm not sure that really helps anyway. Ignoring removing small factors, our bottom line is that we need $$x^{0.525} x^k < x,$$ where $x^k$ represents an upper bound for the sum of the least prime factors in that gap, and it looks like we need $k<0.475$. I would expect that $x^k$ to work out to something more like $\log{x}$, but I'm not aware of any bounds that immediately say that. So no, this isn't a completed proof either, but I thought I would share some of my thinking. I'm still hoping for a nice elegant solution to pop out. That said, if this approach can be made to work, that should instantly prove my approach valid for large $n$... but of course, using something more powerful than Bertrand's postulate to help do it sort of defeats the purpose. More updates later. One other thing worth a mention. There's an easy workaround for scenarios where this fails. 
If $b_0=cd$, some composite, restart the process using $c, (c+1)d$, and repeat as necessary. This lets you do fun stuff like hit the prime values in $p(p+1)$. For example, starting with $\{29, 29\cdot 30\}$ will yield $b_0=851=23\cdot 37$. Restart with $\{23, 23\cdot 37 + 23\}$, and you'll get a valid $b_0=853$. This seems to work fine empirically, but I have to doubt there's any way to justify it rigorously. Update: Just a quicky. I got to thinking about Arnaud's note about reverse engineering, and I've got an idea to float. I tried doing some mapping of the origin possibilities for various $b_0$, and while the primes are nice and robust, the composites are not. The very best they have to offer in the first 500 or so is probably: which makes sense, what with $209$ being a larger semiprime and $233$ up top being one half of a problem semiprime that shows up a bit. I had hoped that that possibility graphs for the primes could be infinite, but if my code is right, it turns out they're merely far larger than the non-primes. Here's a sample: \begin{array}{|l|l|l|l|} \hline \mathbf{b_0} & & \textbf{nodes} & \textbf{max length} \\ \hline 101 & 101 & 6206 & 818 \\ \hline 102 & 2\cdot 3\cdot 17 & 1 & 0 \\ \hline 103 & 103 & 9779 & 918 \\ \hline 104 & 2^3\cdot 13 & 1 & 0 \\ \hline 105 & 3\cdot 5\cdot 7 & 4 & 2 \\ \hline 106 & 2\cdot 53 & 1 & 0 \\ \hline 107 & 107 & 11059 & 1074 \\ \hline 108 & 2^2\cdot 3^3 & 1 & 0 \\ \hline 109 & 109 & 6293 & 1094 \\ \hline 110 & 2\cdot 5\cdot 11 & 1 & 0 \\ \hline 111 & 3\cdot 37 & 4 & 2 \\ \hline 112 & 2^4\cdot 7 & 1 & 0 \\ \hline 113 & 113 & 8886 & 1184 \\ \hline 114 & 2\cdot 3\cdot 19 & 1 & 0 \\ \hline 115 & 5\cdot 23 & 8 & 4 \\ \hline 116 & 2^2\cdot 29 & 1 & 0 \\ \hline 117 & 3^2\cdot 13 & 4 & 2 \\ \hline 118 & 2\cdot 59 & 1 & 0 \\ \hline 119 & 7\cdot 17 & 44 & 14 \\ \hline 120 & 2^3\cdot 3\cdot 5 & 1 & 0 \\ \hline 121 & 11^2 & 70 & 22 \\ \hline 122 & 2\cdot 61 & 1 & 0 \\ \hline 123 & 3\cdot 41 & 4 & 2 \\ \hline 124 & 2^2\cdot 31 & 1 & 0 \\ \hline 125 & 5^3 & 20 & 8 \\ \hline 126 & 2\cdot 3^2\cdot 7 & 1 & 0 \\ \hline 127 & 127 & 12230 & 1268 \\ \hline \end{array} I also analyzed some parameters from the first $15000$ non-prime graphs. There are a few strong correlations, particularly that between large semiprimes and larger graphs, but the most promising find is what looks like a strong bound on the ratio of total nodes in the graph to $b_0$. It was $<1$ always, and looked to be decreasing, suggesting a strong bound might be possible. (This same ratio was $>1$ for all primes, and scaled very close to linearly.) Since the maximum length (or height if you like) of the graph is the critical piece that determines whether or not this whole conjecture works, and since that length is a subset of the total graph, a hard limit on the number of nodes would effectively be a proof that the conjecture holds up. To be clear, "nodes" correspond to starting pairs of numbers which would lead to a given $b_0$. The pair of numbers in question are the ones we previously called $n$ and $2n$, but we're being more flexible now. So, if it turned out there were some compelling reason why any given composite $m$ must have less than $m$ different starting pairs that led to its being $b_0$, that would be sufficient for a proof. Latest attempt All right. I'm going to try justifying the original $(n,2n)$ approach again. First, however, I think it serves to look at $(n,n+2)$ as the seed pair. $n=16$ looks good for illustrative purposes. 
Here's a chart for it; as someone else pointed out, the $b$ column is unnecessary in this case. We could replace it with $c=b-a$, which is more clear and will share all of $b$'s relevant factors, since we're only interested in where $a$ and $b$ overlap. That said, we'll leave it in for this one. $$ \begin{array}{|ll|ll|ll|} \hline \textbf{a} & & \textbf{b} & & \textbf{c} & \\ \hline 16 &= 2^4 & 18 &= 2 \cdot 3^2 & 2 & \\ \hline 15 &= 3\cdot 5 & 18 &= 2 \cdot 3^2 & 3 \\ \hline 14 &= 2 \cdot 7 & 18 &= 2 \cdot 3^2 & 4 &= 2^2 \\ \hline 13 & & 18 &= 2 \cdot 3^2 & 5 & \\ \hline 12 &= 2^2 \cdot 3 & 17 & & 5 & \\ \hline 11 & & 16 &= 2^4 & 5 & \\ \hline 10 &= 2 \cdot 5 & 15 &= 3\cdot 5 & 5 & \\ \hline 9 &= 3^2 & 15 &= 3\cdot 5 & 6 &= 2 \cdot 3 \\ \hline 8 &= 2^3 & 15 &= 3\cdot 5 & 7 \\ \hline 7 & & 14 &= 2 \cdot 7 & 7 & \\ \hline 6 &= 2 \cdot 3 & 14 &= 2 \cdot 7 & 8 &= 2^3 \\ \hline 5 & & 14 &= 2 \cdot 7 & 9 &=3^2 \\ \hline 4 &= 2^2 & 13 & & 9 &=3^2 \\ \hline 3 & & 12 &= 2^2 \cdot 3 & 9 &=3^2 \\ \hline 2 & & 12 &= 2^2 \cdot 3 & 10 &=2\cdot 5 \\ \hline 1 & & 12 &= 2^2 \cdot 3 & 11 &\\ \hline 0 & & 11 & & 11 &\\ \hline \end{array} $$ We're using the same system here for determining the successive values in $b$ as we did earlier: subtract $1$ when coprime with $a$, otherwise move it down unchanged. I'd mostly like to use this table to point out there's nothing magical or inexplicable happening here. It's probably most clear in $c$: we're simply counting up from $2$, and keeping each value until it matches with a factor in $a$, and then we increment by one. Any factor at all will do, so long as it's shared with $a$. A few things to notice. First, since $a$ is ascending with no pauses and $c$ is descending but waiting for a match before incrementing, it's natural that $c$ will grow more slowly, but given the large number of small factors available as $a$ flies by, it'll still grow a respectable amount. Second thing, and this is really the important one, is to note the $11$ at the bottom of the column. Our whole system is predicated on the notion that this number will always be a prime, provided you plug in reasonable seed values. And this table shows why. To state the obvious first off, we had to end on something. We didn't know it had to be prime, perhaps, but it's obvious $c$ was counting up and going to end somewhere. More to the point, note that we're not claiming that it's going to reach any specific prime yet, just that it's slowly growing. So the question is, why should we expect that bottom value to be prime necessarily? Look at the penultimate prime, the $7$. It won't always be $7$, but there will always be a next-to-last prime, and after we hit it, there's often a spatter of small factor annihilation just like we see below. Whether this happened at $7$ or at $737$, the space and factors needed to bridge the gap to the next prime will always be available. The upshot is that a prime will always be waiting there, since obviously no big factors are showing up between $1$ and $0$. In particular, only smaller factors come after the penultimate prime. Usually there's plenty of room; this example shows as close as it ever gets to having the prime superseded by small factors. I realize this isn't proof-level justification that that can never happen. That said, I think I could explicitly point out a sufficiently covering bijective mapping of factors from one column to the other that always takes place, but at the moment I'm satisfied if that was persuasive. And that's the bulk of it. 
I think taking $(n,n+2)$ better illustrates the underlying mechanism, but if you look closely, you'll notice that line $7$ with $14$ next to it. That means that from there down, this chart is identical to if we had used $(7,14)$ as our seed pair from the outset. The same applies to any $(p,2p)$; there are arbitrarily many $(n,n+2)$ charts that can be cut off to get whatever pair you like. Presumably this is true for $(n,2n)$ as well, although we'll avoid that just to play it safe. And of course there is no need of actually finding such charts; if you subscribe to the validity of the example process, that should suffice to show the validity of using any $(p,2p)$ as a seed pair. A couple of closing notes, then. When we do use $(p,2p)$, it has the additional handy feature of providing not only a prime in that range, but the very next prime larger than $p$. This should make sense after having seen our example. And finally, do note that this gives us what we were after all along: a proof of primes in every $2n$ interval. We can of course apply this as much as we like using any arguments we want for $p$. Some of my additional data also suggest that after five or ten early exceptions have passed, we should be able to use $(4p,5p)$, and sometime after getting up to $1000$ or so, $(9p,10p)$ and even $(19p,20p)$, giving us far tighter bounds on those intervals. I think that covers it. So what crucial element did I miss this time? Specifically, is the factor matchup stuff a critical tricky part which defeats the whole purpose if I omit it, or is it as straightforward to actually prove as I hope? (...actually since writing that, I went and ran a battery of tests against that general principle of factor matching. It is ROBUST. This is the least of what it can do reliably. Still not a proof, but I'm much more convinced one would be easy to come up with now.) elementary-number-theory alternative-proof solution-verification prime-gaps TrevorTrevor $\begingroup$ Why would it have to be $p+q$ in the $b$ column where $a=p$? This would be assuming that the number in the $b$ column keeps decreasing $1$ by $1$ (never stabilises even once) between this place and the bottom, I can't see why this would need to hold. $\endgroup$ – Arnaud Mortier Dec 13 '19 at 13:38 $\begingroup$ Seems clear to me there is a good idea here. I agree with the comments that suggest the phrasing could be improved, but it works far too well to be a simple blunder. $\endgroup$ – lulu Dec 13 '19 at 13:50 $\begingroup$ Is $p+q$ even in the table? I mean, does it have to be less than $2n$? $\endgroup$ – Arnaud Mortier Dec 13 '19 at 13:50 $\begingroup$ I don't understand why $p|b_0$ should imply $p|b_p$ - what does it mean to "establish" a factor and why should this occur exactly at $p$? The rest is clear now, but I'm unconvinced by this implication. (Also: it might help if you really fully illustrated a particular case, like $p=2$ to show that $b_0$ is odd. It's often helpful for others to include worked examples like that, since then you can avoid putting loads of indeterminates in tables and things like that) $\endgroup$ – Milo Brandt Dec 13 '19 at 22:11 $\begingroup$ @Trevor The justification that $p|b_0\implies p|b_p$ is still totally unclear to me. $\endgroup$ – Arnaud Mortier Dec 14 '19 at 10:11 Partial answer. Conjecture 1: $b_0$ is the smallest prime larger than $n$. Conjecture 2: $b_0$ is always a prime number as soon as $b_n$ is greater than $n+1$ and lower than some increasing bound. 
For a fixed $n$, all those prime values of $b_0$ make up a set of consecutive primes. What is proved so far: Regarding Conjecture 1 If the bottom-right value is a prime, then it's the smallest prime larger than $n$. The conjecture is true when the gap between $n$ and the next prime is $|p-n|\leq 4$ The table below shows the range of $b_n$ values for which $b_0$ is a prime. Proof of Conjecture 1 in the case where $n=p-1$ with $p$ prime. The second row is $(p-2, p+(p-2))$, which are coprime numbers, and therefore by an immediate induction since $p$ is prime you can see that every subsequent row is of the form $$(a,p+a)$$ down to the last row $(0,p)$ as promised.$\,\,\square$ Proof in the case where $n=p-2$ with $p$ prime ($p>2$). The second row is $(p-3, 2(p-2))$ and these two are not coprime: since $p>2$ is prime, $p-3$ is even. Therefore the third row is $(p-4, (p-4)+p)$ and from here we conclude the same way as before. $\,\,\square$ Proof in the case where $n=p-3$ with $p$ prime. There you start to see some new arguments, where the proof is not constructive. The second row is $(p-4, (p-4)+(p-2))$. They're coprime since $p$ is odd. You go down to $(p-5, (p-5)+(p-2))$. As long as you keep coprime pairs, you go down as $(p-k, (p-k)+(p-2))$. But the trick is that $p-2$ can't be prime, otherwise you wouldn't be in the case $n=p-3$, $p$ prime but rather $n=q-1$, $q$ prime (first case treated above) with $q=p-2$. So at the very least, when $a$ becomes a factor of $p-2$, you will get $(a,a+(p-2))$ and from there get down to $(a-1,(a-1)+(p-1))$. From then on you can't stay at a difference $b-a=p-1$ for long, since $p-1$ is even. As soon as $a$ becomes even you will get up to $b-a=p$ and win.$\,\,\square$ Proof (sketch) in the case where $n=p-4$ with $p$ prime ($p>2$). The proof for $n=p-3$ can be repeated: you're going to get rid of the difference $b-a=p-3$ very fast since $p$ is odd, you're getting rid of $b-a=p-2$ sooner or later since $p-2$ can't be a prime, and then you're getting rid of $b-a=p-1$ in at most two moves since $p$ is odd.$\,\,\square$ One problem in the general case is that you can't reverse-engineer the table, e.g. $(1,8)$ could come from $(2,8)$ or it could come from $(2,9)$. If you add a column $b-a$, it starts at $n$, and goes non-decreasing. If it ever reaches a prime number, then it will stay at that prime number, since from then down you will have $(a=k, b=p+k)$ down to $(0,p)$ and the output will therefore be the smallest prime greater than $n$. So all you've got to do is prove that you do reach a prime at some point. You could try to do that assuming Bertrand's postulate, it would already be some achievement. Arnaud MortierArnaud Mortier $\begingroup$ FYI - in general it seemed like it was possible to reverse engineer this sort of thing to the nearest prime if I remember right. $\endgroup$ – Trevor Dec 13 '19 at 14:25 The argument given makes no sense to me (and, judging from the comments, I'm not alone). To try to fix it, I suggest that you Use some notation which lets you talk unambiguously about different rows in the table. It's fairly standard to use subscripts for states in a process, so define $$b_a = \begin{cases} 2n & \textrm{if } a = n \\ b_{a+1} - [a+1, b_{a+1} \textrm{ coprime}] & \textrm{otherwise} \end{cases}$$ Fix $2 \le p < q$ to be the smallest non-trivial factor of $q$ (assumed composite). Work up from $a=0$ to $a=p$ rather than beginning the argument at $a=p$. 
But it's not going to be an easy task, because there are unstated assumptions which don't seem to be justified. In particular, the line And if $p \mid q$, then $p \mid q+p$. But if it did, then because the right side would be unchanged on the next line $p-1$ seems to assume that if $b_0$ is composite with prime factor $p$ then $b_p = b_0 + p$. It's easy to derive a contradiction from "$b_0$ is composite with prime factor $p$ and $b_p = b_0 + p$". It's easy to show that if $p$ is the smallest prime factor of $b_p$ then $b_0 = b_p - p$. But neither of those is anywhere near sufficient: the goal is to derive a contradiction from the much simpler statement that $b_0$ is composite. Edit: it's now claimed explicitly that $p | b_0$ implies $p | b_p$, but to me it looks like a proof by assertion. This needs much more detail to show that there's a justified argument. Another issue which I think should be addressed is the strength of the argument. In particular, why should the same argument not hold when we change the definition to $b_n = n^2$? It's still the case that if $b_0$ is composite then it has a prime factor $p$ which has appeared in the first column, but under these starting conditions e.g. $n=10$ yields $b_0 = 95$. Peter TaylorPeter Taylor $\begingroup$ Thanks for the notation suggestions, that's more or less what I intend to rewrite as when I get a chance. However, $b_a$ need not always be built around $2n$, and actually should only be when determining your $b_0$ value initially. And the fundamental contradiction is that $p \mid b_0 \implies b_p = b_{p-1}$ and then that $b_p \equiv b_{p-1} \not\equiv b_0 \pmod {p}$ implies $p \nmid b_0$ after all. $\endgroup$ – Trevor Dec 13 '19 at 19:04 $\begingroup$ I think your definition of $b_a$ needs some fixing, there are $3$ cases. $\endgroup$ – Arnaud Mortier Dec 14 '19 at 16:00 $\begingroup$ @ArnaudMortier, the square brackets in the second line are Iverson brackets. $\endgroup$ – Peter Taylor Dec 14 '19 at 19:48 $\begingroup$ @PeterTaylor Oh I see! Thanks. When I discovered the Kronecker delta as a student, we were making fun of the fact that such a trivial thing was named after someone - now I discover that it was actually named after two different people! $\endgroup$ – Arnaud Mortier Dec 14 '19 at 19:57 Let me start by saying, this is awesome! Here is a partial answer. Let me call the number next to $i$ on the table $a_i$. Also, I would rather work with $b_i=a_i-i$. Notice that $$ \operatorname{gcd}(i, a_i) = \operatorname{gcd}(i, a_i -i) = \operatorname{gcd}(i, b_i). $$ As we go down the table, we follow the rules: $a_n = 2n$, so $b_n = n$. $a_{n-1} = 2n$, so $b_{n-1} = n+1$. If $(a_i, i) = 1$, then $a_{i-1} = a_i - 1$, so $b_{i - 1} = b_i$ If $(a_i, i) \neq 1$, then $a_{i-1} = a_i$, so $b_{i - 1} = b_i + 1$ At the end, $a_0 = b_0 = q$. Now, if we look at the sequence $b_i$ as $i$ decreases, it will increase until it hits a prime and then it won't ever increase. I have no clue why it would reach this prime before $n$ steps. I am like 85% confident on my coding skills and I think this works for all $n$'s up to $80000$. Also, if you look at the number of steps before you reach a prime, the numbers look half as long (as in it looks like the square root), so I am going to guess that the sequence reaches a prime pretty fast. MoisésMoisés Not the answer you're looking for? Browse other questions tagged elementary-number-theory alternative-proof solution-verification prime-gaps or ask your own question. 
CommonCrawl
Money-Weighted Rate of Return: Definition, Formula, and Example

By Caroline Banton. Caroline Banton has 6+ years of experience as a freelance writer of business and finance articles. She also writes biographies for Story Terrace. Reviewed by Gordon Scott. Gordon Scott has been an active investor and technical analyst for 20+ years. He is a Chartered Market Technician (CMT).

What Is the Money-Weighted Rate of Return?

The money-weighted rate of return (MWRR) is a measure of the performance of an investment. The MWRR is calculated by finding the rate of return that will set the present values (PV) of all cash flows equal to the value of the initial investment. The MWRR is equivalent to the internal rate of return (IRR). MWRR can be compared with the time-weighted return (TWR), which removes the effects of cash in- and outflows.

The money-weighted rate of return (MWRR) calculates the performance of an investment that accounts for the size and timing of deposits or withdrawals. The MWRR sets the initial value of an investment to equal future cash flows, such as dividends added, withdrawals, deposits, and sale proceeds.

Understanding the Money-Weighted Rate of Return

The formula for the MWRR is as follows:

$$PV_O = PV_I = CF_0 + \frac{CF_1}{(1+IRR)} + \frac{CF_2}{(1+IRR)^2} + \frac{CF_3}{(1+IRR)^3} + \dots + \frac{CF_n}{(1+IRR)^n}$$

where: $PV_O$ = present value of outflows, $PV_I$ = present value of inflows, $CF_0$ = initial cash outlay or investment, $CF_1, CF_2, CF_3, \dots, CF_n$ = cash flows, $n$ = each period, and $IRR$ = internal rate of return.

How to Calculate the Money-Weighted Rate of Return

To calculate the IRR using the formula, set the net present value (NPV) equal to zero and solve for the discount rate (r), which is the IRR. However, because of the nature of the formula, the IRR cannot be calculated analytically and instead must be calculated either through trial and error or by using software programmed to calculate the IRR.

What Does the Money-Weighted Rate of Return Tell You?

There are many ways to measure asset returns, and it is important to know which method is being used when reviewing asset performance. The MWRR incorporates the size and timing of cash flows, so it is an effective measure of portfolio returns. The MWRR sets the initial value of an investment to equal future cash flows, such as dividends added, withdrawals, deposits, and sale proceeds. In other words, the MWRR helps to determine the rate of return needed to start with the initial investment amount, factoring in all of the changes to cash flows during the investment period, including the sale proceeds.
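Since the article notes that the rate generally cannot be solved for analytically, here is a minimal Python sketch of the trial-and-error search it alludes to, using bisection on the rate. The function names and the cash-flow numbers below are made up for illustration; they are not taken from the example later in the article.

def npv(rate, cash_flows):
    # Net present value; cash_flows[t] is the flow at the end of period t,
    # with cash_flows[0] the (negative) initial outlay.
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def money_weighted_return(cash_flows, lo=-0.99, hi=10.0, tol=1e-10):
    # Bisection: find the rate that sets the NPV to zero.  Assumes the NPV
    # changes sign exactly once between lo and hi.
    f_lo = npv(lo, cash_flows)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = npv(mid, cash_flows)
        if abs(f_mid) < tol:
            return mid
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Hypothetical example: invest 1,000 now and receive 300, 400 and 500
    # at the ends of the next three periods.
    flows = [-1000.0, 300.0, 400.0, 500.0]
    print(f"MWRR / IRR is approximately {money_weighted_return(flows):.2%}")

A spreadsheet IRR function or a financial calculator performs essentially this kind of search internally.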
Cash Flows and the Money-Weighted Rate of Return

As stated above, the MWRR for an investment is identical in concept to the IRR. In other words, it is the discount rate at which the net present value (NPV) = 0, or at which the present value of inflows equals the present value of outflows. It's important to identify the cash flows in and out of a portfolio, including the sale of the asset or investment. Some of the cash flows that an investor might have in a portfolio include:

Outflows: the cost of any investment purchased; reinvested dividends or interest.
Inflows: the proceeds from any investment sold; dividends or interest received.

Example of the Money-Weighted Rate of Return

Each inflow or outflow must be discounted back to the present by using a rate (r) that will make PV (inflows) = PV (outflows). Let's say an investor buys one share of a stock for $50 that pays an annual $2 dividend and sells it after two years for $65. You would therefore discount the first dividend after year one, and for year two discount both the dividend and the selling price. The MWRR will be a rate that satisfies the following equation:

$$\begin{aligned} PV \text{ Outflows} &= PV \text{ Inflows} \\ &= \frac{ \$2 }{ 1 + r } + \frac{ \$2 }{ (1 + r)^2 } + \frac{ \$65 }{ (1 + r)^2 } \\ &= \$50 \end{aligned}$$

Solving for r using a spreadsheet or financial calculator, we have an MWRR of approximately 17.8%.

The Difference Between Money-Weighted Rate of Return and Time-Weighted Rate of Return

The MWRR is often compared to the time-weighted rate of return (TWRR), but the two calculations have distinct differences. The TWRR is a measure of the compound rate of growth in a portfolio. The TWRR measure is often used to compare the returns of investment managers because it eliminates the distorting effects on growth rates created by inflows and outflows of money.

It can be difficult to determine how much money was earned on a portfolio because deposits and withdrawals distort the value of the return on the portfolio. Investors can't simply subtract the beginning balance, after the initial deposit, from the ending balance, since the ending balance reflects both the rate of return on the investments and any deposits or withdrawals during the time invested in the fund. The TWRR breaks up the return on an investment portfolio into separate intervals based on whether money was added to or withdrawn from the fund.

The MWRR differs in that it takes into account investor behavior via the impact of fund inflows and outflows on performance, but it doesn't separate the intervals where cash flows occurred, as the TWRR does. Therefore, cash outflows or inflows can impact the MWRR. If there are no cash flows, then both methods should deliver the same or similar results.

Limitations of Using Money-Weighted Rate of Return

The MWRR considers all the cash flows from the fund or contribution, including withdrawals. Should an investment extend over several quarters, for example, the MWRR lends more weight to the performance of the fund when it is at its largest, hence the description "money-weighted." The weighting can penalize fund managers because of cash flows over which they have no control. In other words, if an investor adds a large sum of money to a portfolio just before its performance rises, then it equates to positive action.
This is because the larger portfolio benefits more (in dollar terms) from the growth of the portfolio than if the contribution had not been made. On the other hand, if an investor withdraws funds from a portfolio just before a surge in performance, then it equates to a negative action. The now-smaller fund sees less benefit (in dollar terms) from the growth of the portfolio than if the withdrawal had not occurred.
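To make the contrast with the time-weighted rate of return described above concrete, here is a small hypothetical Python sketch of the TWRR calculation: the holding period is cut at each external cash flow and the sub-period returns are chained geometrically. The numbers are invented for illustration only.

def time_weighted_return(period_returns):
    # Chain the sub-period returns geometrically; the sub-periods are the
    # intervals between external deposits or withdrawals, as described above.
    growth = 1.0
    for r in period_returns:
        growth *= 1.0 + r
    return growth - 1.0

if __name__ == "__main__":
    # Hypothetical sub-period returns of +10%, -5% and +7%:
    print(f"TWRR is approximately {time_weighted_return([0.10, -0.05, 0.07]):.2%}")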
CommonCrawl
April 2018, 12(2): 261-280. doi: 10.3934/ipi.2018011

Reconstruction of cloud geometry from high-resolution multi-angle images

Guillaume Bal (Departments of Statistics and Mathematics and CCAM, University of Chicago, Chicago, IL 60637, USA), Jiaming Chen (Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180, USA), and Anthony B. Davis (Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA)

Received November 2015. Revised October 2016. Published February 2018.

We consider the reconstruction of the interface of compact, connected "clouds" from satellite or airborne light intensity measurements. In a two-dimensional setting, the cloud is modeled by an interface, locally represented as a graph, and an outgoing radiation intensity that is consistent with a diffusion model for light propagation in the cloud. Light scattering inside the cloud and the internal optical parameters of the cloud are not modeled explicitly. The main objective is to understand what can or cannot be reconstructed in such a setting from intensity measurements in a finite (on the order of 10) number of directions along the path of a satellite or an aircraft. Numerical simulations illustrate the theoretical predictions. Finally, we explore a kinematic extension of the algorithm for retrieving cloud motion (wind) along with its geometry.

Keywords: Cloud reconstructions, integral geometry, inverse linear transport, boundary reconstructions, satellite measurements.

Mathematics Subject Classification: 35R30, 35Q86, 85A25.

Citation: Guillaume Bal, Jiaming Chen, Anthony B. Davis. Reconstruction of cloud geometry from high-resolution multi-angle images. Inverse Problems & Imaging, 2018, 12 (2) : 261-280. doi: 10.3934/ipi.2018011

Figure 1. Geometry of cloud interface

Figure 2. Left: A cloud model. Right: Simulated radiances $u_j(X) : = u(X,Z,\theta_j)$ for that cloud using (7) with $j = 1,\dots,J = 5$ (specifically, $\theta \in \{90,90\pm26.1,90\pm45.6\}$ in degrees clockwise from the positive $x$ axis) for a uniform $\alpha$ and $\beta = \sin\phi$.

Figure 3. True angular radiation function $\beta(\phi) = \sin\phi$ (in green), reconstructed function (in blue), and initial guess (in red).
CommonCrawl
Tribology Letters March 2019 , 67:15 | Cite as On the Non-trivial Origin of Atomic-Scale Patterns in Friction Force Microscopy Dirk W. van Baarle Sergey Yu. Krylov M. E. Stefan Beck Joost W. M. Frenken Friction between two surfaces is due to nano- and micro-asperities at the interface that establish true contact and are responsible for the energy dissipation. To understand the friction mechanism, often single-asperity model experiments are conducted in atomic-force microscopes. Here, we show that the conventional interpretation of the typical results of such experiments, based on a simple mass-spring model, hides a fundamental contradiction. Via an estimate of the order of magnitude of the dissipative forces required to produce atomic-scale patterns in the stick-slip motion of a frictional nano-contact, we find that the energy dissipation must be dominated by a very small, highly dynamic mass at the very end of the asperity. Our conclusion casts new light on the behavior of sliding surfaces and invites us to speculate about new ways to control friction by manipulation of the contact geometry. Friction force microscopy Atomic-scale friction Energy dissipation Stick-slip motion Friction, the familiar force experienced both on the macroscopic scale in everyday life and on the scale of atoms and molecules, is directly related to the dissipation of energy. One may hope that a full understanding of the phenomenon will ultimately result in complete control of friction, which can be employed, e.g., to eliminate undesired forms of friction and associated wear in engines and other mechanical applications. In this context, special attention is given to the origin of friction [1, 2, 3 and refs. therein]. One of the key instruments in the experimental investigation of friction at the nano-scale is the friction force microscope (FFM) [2], in which the tip–surface contact serves as a model for a single-asperity contact. Two-dimensional maps of the lateral force experienced by the tip in an FFM-experiment often display a clear, atomic-scale periodicity, as is illustrated in Fig. 1a and b. Such patterns reflect the stick-slip (SS) motion that the tip executes between nearby minima in the corrugated tip–substrate interaction potential. a Typical lateral force map 3 nm × 0.8 nm, obtained by friction force microscopy. In each line, the silicon tip moves from left to right over a graphite substrate and the gray scale indicates the strength of the opposing lateral force. b Lateral force variation during an individual sweep of the tip from left to right (blue curve; center line in panel a) and from right to left (green). c Traditional schematic view of the FFM-experiment; a rigid support (white rectangle) is moving at constant velocity and drags the tip (blue triangle) via a spring over the atomically corrugated substrate (red spheres). The extension of the spring is a direct measure of the lateral force experienced by the tip. Data in panels (a) and (b) with courtesy of Prof. A. Schirmeisen (Justus-Liebig University, Gießen, Germany); further details on these measurements can be found in [4] Here we use the mere observation of such SS-behavior to draw conclusions about both the kinetic energy and the energy dissipation rate that are in play during a typical atomic slip event in an FFM. We demonstrate that the amount of energy one should reasonably expect to be dissipated by the atomic-size contact is some six orders of magnitude lower than what is typically assumed. 
This completely invalidates the traditional interpretation of the observed slip dynamics. Instead, we propose a more sophisticated dissipation mechanism that involves the dynamics of a tiny mass at the very end of the tip apex. Our mechanism is consistent with the experimental observations. In addition, it provides more general insight in sliding and friction and may open the possibility to predict the friction behavior of both typical and non-typical contact geometries. 2 Basic Considerations We start by inspecting typical SS-motion (Fig. 1b) in more detail. It is characterized by an initial 'stick' part, in which a mechanical force is gradually building up between the stationary tip and a rigid support that moves at constant velocity, and a rapid 'slip' event, during which the invested potential energy is transferred into kinetic energy and the tip translates over one lattice spacing. This motion is interpreted in the context of the well-known Prandtl–Tomlinson model [5, 6] as the natural behavior of a mass-spring system that is pulled over a corrugated surface, as shown schematically in Fig. 1c. Mathematically, it is described by a Langevin equation for the x-direction $${m_{{\text{eff}}}}{\ddot {x}_{{\text{tip}}}}= - {\left. {\frac{{\partial {V_{{\text{int}}}}\left( {x,y} \right)}}{{\partial x}}} \right|_{\left( {x,y} \right)=\left( {{x_{{\text{tip}}}},{y_{{\text{tip}}}}} \right)}} - {k_{{\text{cant}}}}\left( {{x_{{\text{tip}}}} - {x_{{\text{supp}}}}} \right)+\xi - \gamma {\dot {x}_{{\text{tip}}}},$$ and an equivalent equation for the y-direction, where \(\left( {{x_{{\text{tip}}}},{y_{{\text{tip}}}}} \right)\) is the position of the tip in the surface plane. \({m_{{\text{eff}}}}\) represents the effective mass of the tip and the atomically corrugated tip–substrate interaction potential is given by \({V_{{\text{int}}}}\). The spring force resulting from the difference between the positions of support and tip is governed by an effective spring constant \({k_{{\text{cant}}}}\), which is a combination of the stiffness of the cantilever of the FFM and the stiffness of the tip–surface contact and can be deduced directly from the slope of the measured lateral force trace in Fig. 1b [7]. In addition to the two static forces, one due to the interaction with the substrate and the other due to the spring, the tip also experiences two dynamic forces. The first, \(\xi\), is due to thermal noise, while the second, \(\gamma {\dot {x}_{tip}}\), is the velocity-dependent dissipation. For the present study, the dissipation term establishes the most important term in the equation; it is through this term that friction is introduced. Note that the values of the dissipation rate \(\gamma\) and the amplitude of the noise term \(\xi\) cannot be 'chosen' independently, as they are intimately connected via the fluctuation–dissipation theorem. Most numerical calculations that are used to reproduce FFM-experiments apply the 'recipe' of Eq. 1. Typically, the effective 'tip' mass is assumed to be in the order of 10−11 kg, which corresponds to the combination of the tip and a sizable fraction of the cantilever that moves together with the tip. The dissipation rate \(\gamma\) is almost always used as the fitting parameter to reproduce the experimental data, which results in a typical value of 10−6 kg s−1, corresponding to tip motion that is in the order of critically damped [3, 4, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. 
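As a purely illustrative companion to the "recipe" of Eq. 1, the following Python sketch integrates a one-dimensional version of it in reduced (dimensionless) units. The parameter values are toy numbers chosen only so that the script runs in seconds; they are not the physical values discussed in the text, and the thermal-noise term is switched off (zero temperature) for brevity. Its sole purpose is to show how strongly the choice of the damping rate affects the slip pattern.

import math

# Reduced-units toy version of Eq. 1: a single effective mass dragged by a spring
# through a sinusoidal corrugation, with viscous damping.  All numbers below are
# illustrative only (lattice period, mass and pulling spring set to 1).
a, m, k_spring = 1.0, 1.0, 1.0
U0 = 0.5                 # peak-to-peak corrugation, strong enough for stick-slip
v_support = 0.01         # support velocity, slow compared to the slip dynamics
dt = 0.002
n_spacings = 8           # total sliding distance in lattice periods

k_lattice = 0.5 * U0 * (2.0 * math.pi / a) ** 2      # maximum curvature of the corrugation
gamma_critical = 2.0 * math.sqrt(m * (k_spring + k_lattice))

def lattice_force(x):
    # F = -dV/dx for V(x) = -(U0/2) * cos(2*pi*x/a)
    return -0.5 * U0 * (2.0 * math.pi / a) * math.sin(2.0 * math.pi * x / a)

def slip_sizes(gamma):
    # Integrate the damped equation of motion (semi-implicit Euler) and group
    # well-to-well hops that happen in quick succession into single slip events.
    x = vx = t = 0.0
    well, last_hop = 0, None
    events = []
    for _ in range(int(n_spacings * a / v_support / dt)):
        force = lattice_force(x) - k_spring * (x - v_support * t) - gamma * vx
        vx += force / m * dt
        x += vx * dt
        t += dt
        w = math.floor(x / a + 0.5)          # index of the occupied potential well
        if w != well:
            if last_hop is None or t - last_hop > 10.0:
                events.append([well, w])     # a new slip event starts here
            else:
                events[-1][1] = w            # still the same slip event
            last_hop, well = t, w
    return [end - start for start, end in events]

for label, gamma in [("close to critical damping", gamma_critical),
                     ("fifty-fold underdamped   ", gamma_critical / 50.0)]:
    print(label, "-> slip sizes:", slip_sizes(gamma))

With the damping reduced well below the critical value, the weakly damped run typically shows events spanning several lattice periods, which is the qualitative loss of regular stick-slip discussed below.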
Although this approach produces atomic patterns, it does not lead to a physical understanding of the resulting \(\gamma\)-values, for example in terms of the microscopic dissipation mechanism. 3 Dissipation as a Sum of Atomic Contributions In this study, we approach the subject from the other side, by first considering the dissipation rates that are familiar on the scale of single atoms. There is ample evidence from the fields of the vibrations and diffusion of atoms and molecules on surfaces, e.g., through a variety of calculations and through the observation of a modest percentage of long jumps of diffusing atoms, that the motion on this small scale is close to critically damped [19, 20, 21, 22, 23, 24, 25]. In other words, the typical timescale on which an atom, intimately adsorbed on a surface, can get rid of its excess momentum is in the order of half its own natural vibrational period on that surface, which we express as $$\frac{{{\gamma _{{\text{at}}}}}}{{{m_{{\text{at}}}}}} \cong 2\sqrt {\frac{{{k_{{\text{at}}}}}}{{{m_{{\text{at}}}}}}} .$$ Here, \({m_{{\text{at}}}}\) is the mass of an atom, \({k_{{\text{at}}}}\) is the typical spring coefficient, binding each atom to its equilibrium position, and \({\gamma _{{\text{at}}}}\) is the atomic dissipation rate. In order to turn this into a simple, order-of-magnitude estimate for the friction that we should expect between a sharp tip and a surface, we assume that an atomically sharp tip would experience the same friction force (momentum dissipation rate) as the final tip apex atom would feel on that surface, without the rest of the tip connected to it. We further assume that the friction force on a blunter tip is simply proportional to the number of atoms \({N_{{\text{cont}}}}\) by which the tip is in contact with the surface. $${F_{{\text{diss}}}}= - \gamma {\dot {x}_{{\text{tip}}}}= - {N_{{\text{cont}}}}{\gamma _{{\text{at}}}}{\dot {x}_{{\text{tip}}}}.$$ Note that Eq. 3 does not imply that dissipation would be restricted to the contact atoms. It does, however, pay tribute to the fact that the mechanical excitations in the sliding interface that lead to friction certainly originate from the contact, even when dissipation of these excitations occurs further away in the substrate and the tip body. Thereby, it is the contact area, i.e., the number of atoms in contact \({N_{{\text{cont}}}}\), that determines how much mechanical energy is temporarily stored in the contact and subsequently dissipated in a slip event. In Appendix A, we provide a further justification for the assumed proportionality between the friction force and \({N_{{\text{cont}}}}\). By relating the sliding friction force directly to the microscopic, true area of contact [26], to the actual, interfacial velocity, and to the atomic-scale dissipation rate, we propose to put the phenomenological Amontons–Coulomb law on a truly fundamental footing for the case of wearless friction. A typical value for \({N_{{\text{cont}}}}\) for FFM and contact-AFM experiments is 10. For a typical atomic mass number of 74 amu (tungsten) and a typical atomic or short-wavelength lattice vibration frequency of 1012 Hz, this leads to a dissipation rate of the contact of 10−11 kg s−1, which is some five orders of magnitude lower than the values required to reproduce atomic patterns in FFM-calculations. The two assumptions that have gone into Eq. 3 may be crude, but a more refined description will not remove the blatant discrepancy. 
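The order-of-magnitude estimate just quoted is easy to reproduce. In the short Python sketch below, the quoted \(10^{12}\) Hz vibration frequency is interpreted as an ordinary frequency, so the angular frequency entering Eq. 2 is \(2\pi \times 10^{12}\) rad/s; that interpretation, and the script itself, are assumptions made here rather than part of the paper.

import math

amu = 1.66054e-27                 # kg
m_at = 74 * amu                   # tungsten-like contact atom
omega = 2.0 * math.pi * 1e12      # rad/s, assumed angular vibration frequency
N_cont = 10                       # atoms in contact

gamma_atom = 2.0 * m_at * omega       # Eq. 2, with omega = sqrt(k_at / m_at)
gamma_contact = N_cont * gamma_atom   # Eq. 3

print(f"per-atom dissipation rate ~ {gamma_atom:.1e} kg/s")
print(f"contact dissipation rate  ~ {gamma_contact:.1e} kg/s "
      f"(compare the ~1e-6 kg/s typically assumed)")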
At this point, we draw the conclusion that the values adopted for the dissipation parameter in most FFM-calculations are unphysically high. Correspondingly, calculations within the same model for realistic \(\gamma\)-values would result in highly underdamped motion and could never reproduce a regular SS-pattern with lattice periodicity. 4 Strong Dissipation Implies Ultralow Mass In order to remedy the phenomenal discrepancy encountered above, we return to Eq. 1. The only remaining element that can be completely incorrect in the employed, typical estimates is the assumed value of the effective mass \({m_{{\text{eff}}}}\) [18]. What does this mass stand for and how does its value affect the friction behavior? The effective mass represents the number of dynamic atoms \({N_{{\text{dyn}}}}\) that effectively move at the same velocity as the contact and it is usually associated with the entire tip plus a part of the FFM-cantilever, hence its large, typical value of 10− 11 kg. Since the maximum velocity \({\dot {x}_{{\text{tip}}}}\) that the tip apex can acquire during a slip event in the absence of damping (cf. Equation 1) is inversely proportional to the square root of the effective mass, this mass also influences the maximum friction force \({F_{{\text{diss}}}}\) that the tip apex can experience in this motion (cf. Equation 3). We should thus expect a simple scaling between the maximum dissipative force and the effective mass of the type \(F_{{{\text{diss}}}}^{{{\text{max}}}} \propto 1/\sqrt {{m_{{\text{eff}}}}}\). So, the required increase in the friction force by five orders of magnitude translates directly into a reduction in the effective mass by at least ten orders of magnitude with respect to the value typically assumed. Obviously, we can no longer associate this with the entire tip and part of the FFM-cantilever, but see that this small mass can only stand for a very small portion of the tip, namely its very apex. We will refer to this tiny mass as the dynamic mass \({m_{{\text{dyn}}}}\) and will be forced to describe its dynamics and that of the rest of the tip and cantilever with separate equations of motion (see below). It is instructive to cast the above argument about the dynamic mass \({m_{{\text{dyn}}}}\) in terms of the two typical timescales involved. One is introduced by the timescale of a slip event, which we equate, here, to half the vibrational period of the slipping tip apex, \({t_{{\text{slip}}}}=\frac{1}{2}\sqrt {{m_{{\text{dyn}}}}/{k_{{\text{dyn}}}}}\). Here, \({k_{{\text{dyn}}}}\) stands for the effective spring coefficient by which the dynamic mass is connected to the rest of the system, which is the stiffness of the tip apex itself. The other is the time required to dissipate all kinetic energy via the \({N_{{\text{cont}}}}\) contact atoms. It is directly related to the dissipation rate \(\gamma\) in Eq. 3, through \({t_{{\text{diss}}}}\,=\,~{m_{{\text{dyn}}}}/\gamma \,=\,{m_{{\text{dyn}}}}/{N_{{\text{cont}}}}{\gamma _{{\text{at}}}}\), which we can rewrite to \({t_{{\text{diss}}}}=\frac{1}{2}{m_{{\text{dyn}}}}/{N_{{\text{cont}}}}\sqrt {{k_{{\text{at}}}}{m_{{\text{at}}}}}\) using Eq. 2. In order to obtain patterns with atomic periodicity, we require the dynamic part of the system to be at least critically damped, i.e., \({t_{{\text{diss}}}} \leqslant {t_{{\text{slip}}}}\). 
We can now write this critical-damping condition in terms of the spring coefficients and the numbers of atoms involved: $${N_{{\text{dyn}}}} \equiv \frac{{{m_{{\text{dyn}}}}}}{{{m_{{\text{at}}}}}} \leqslant \frac{{{k_{{\text{at}}}}}}{{{k_{{\text{dyn}}}}}}N_{{{\text{cont}}}}^{2}.$$ In Eq. 4, the total number of dynamic atoms \({N_{{\text{dyn}}}}\) is expressed in terms of the dynamic mass and the atomic mass. It should, of course, be larger than or equal to the number of contacting atoms \({N_{{\text{cont}}}}\). Equation 4 provides us with a straightforward condition that needs to be satisfied in order to 'see atoms' in FFM-measurements. It is cast in the form of a relation between the number of tip atoms that are in direct contact with the substrate and the total number of atoms that can be considered to effectively move rigidly with these contact atoms, including the contact atoms themselves. In practice, both spring coefficients, \({k_{{\text{at}}}}\) and \({k_{{\text{dyn}}}}\), are in the order of 1 N/m, so that the ratio \({k_{{\text{at}}}}/{k_{{\text{dyn}}}}\) is approximately unity. This provides us with a unique way to estimate the typical dynamic mass in FFM-experiments, based on the mere fact that atomic patterns are often observed in these experiments, which implies that the inequality of Eq. 4 is often satisfied. For a typical, 10-atom contact (\({N_{{\text{cont}}}}=10\)), \({N_{{\text{dyn}}}}\) has to be below 100 atoms, corresponding to a maximum dynamic mass of approximately 10−23 kg. This is about 12 orders of magnitude lower than what is typically assumed, obviously satisfying the coarse, 10-orders-of-magnitude estimate that we arrived at above. The qualitative picture that goes with these small numbers is illustrated in Fig. 2, which shows the situation in which a significant fraction of the elastic deformation is concentrated in the very apex of the FFM-tip. Schematic view of the deformation at the very tip apex, prior to a slip event. The green tip apex atoms in (a) are the \({N_{{\text{cont}}}}\) atoms that make contact with the red substrate. The blue atoms share most of the lateral displacement of the green atoms. Together with the green atoms, they establish the \({N_{{\text{dyn}}}}\) 'dynamic' atoms with mass \({m_{{\text{dyn}}}}\) that will be accelerated most strongly in the upcoming slip event. The lateral displacement of the yellow atoms is so modest, that these atoms are associated with the tip body, i.e., the rigid part of the tip. Panel (b) shows how this translates into a two-mass-two-spring model, illustrated here for the x-direction (see [3] and references therein) 5 Model Calculations The essential feature of our new description is formed by the high velocities to which the ultralow dynamic mass is accelerated, which enable efficient energy dissipation. We used numerical calculations for an extensive test of this scenario. To this end, we describe the combined system of the support, cantilever, flexible tip and small dynamic tip apex as a classical 2-mass-2-spring configuration [3, 27, 28, 29, 30]. One mass is the low, dynamic mass \({m_{{\text{dyn}}}}\), associated with the very tip apex. It is connected via the effective spring \({k_{{\text{dyn}}}}\) to a large mass that corresponds to the rest of the tip and part of the cantilever. This large mass is, in turn, connected via the typical cantilever spring coefficient to the rigid support. 
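Reading Eq. 4 with the numbers used in the text gives the quoted bound directly; a minimal sketch follows, in which the two equal spring constants of 1 N/m stand in for the "order of 1 N/m" values mentioned above.

amu = 1.66054e-27        # kg
m_at = 74 * amu          # kg, per contact atom
k_at = k_dyn = 1.0       # N/m, both of order 1 N/m as stated in the text
N_cont = 10              # atoms in contact

N_dyn_max = (k_at / k_dyn) * N_cont ** 2     # Eq. 4
m_dyn_max = N_dyn_max * m_at

print(f"N_dyn must stay below ~{N_dyn_max:.0f} atoms")
print(f"maximum dynamic mass  ~{m_dyn_max:.1e} kg")    # ~1e-23 kg, as in the text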
As described in Appendix B, we employ a fully two-dimensional version of this model in the form of 2 × 2 coupled Langevin equations, each similar to Eq. 1. These are solved numerically, with special attention given to the enormous difference in the characteristic frequencies of the two pairs of coupled oscillators in this model. As in the real FFM-experiment, the displacement of the cantilever mass is monitored to construct the two-dimensional, lateral force maps. 5.2 Numerical Results Typical results of these model calculations are shown in Fig. 3. In order to keep calculation times manageable, we used a relatively large contact size of \({N_{{\text{cont}}}}\;=\,100\) atoms, corresponding to a damping rate of \(\gamma \,=\,{10^{ - 10}}\) kg/s. Different values were chosen for \({N_{{\text{dyn}}}}\), in order to explore overdamped, critically damped and underdamped situations. The numerical results are in complete agreement with our qualitative description. Close to critical damping, more or less regular SS-motion is observed, which is reflected in patterns with clear atomic-scale periodicity. Numerically calculated lateral force maps, obtained by numerical integration of four coupled Langevin equations (see Appendix B), with the parameters chosen to match those of the FFM-experiment of Fig. 1 [4], namely \({k_{{\text{cant}}}}=30\) N/m, \({k_{{\text{dyn}}}}\,=\,2\) N/m, \({v_{{\text{supp}}}}=30\) µm/s, \({m_{{\text{cant}}}}={10^{ - 11}}\) kg, \({m_{{\text{at}}}}=74\) amu, \({U_0}=0.8\) eV (peak–peak), and \(T=298\) K. The number of contacting atoms was set to \({N_{{\text{cont}}}}=100\) (see text). The number of dynamic atoms \({N_{{\text{dyn}}}}\) was chosen differently for each panel, in order to vary the damping regime. a \({N_{{\text{dyn}}}}={10^8},\) corresponding to one-hundredfold underdamped motion. b \({N_{{\text{dyn}}}}\,=\,{10^6},\) tenfold underdamped. c \({N_{{\text{dyn}}}}=4 \times {10^4},\) twofold underdamped. d \({N_{{\text{dyn}}}}=2.5\, \times \,{10^3},\) twofold overdamped. Note that the two underdamped cases in (a) and (b) show many multiple slip events and that the overdamped case shows an increase in the force fluctuations. For lateral force patterns with a well-defined lattice signature, the damping should be close to critical Underdamping of the tip apex motion by only a few orders of magnitude (Fig. 3a) leads to a complete loss of atomic periodicity. As a result of the lack of damping, slip events frequently extend over multiple lattice spacings, which ruins the regularity. Actually, the calculations show that also overdamping destroys the regular SS-signature in the FFM-patterns. This is caused by the fact that high damping goes hand in hand with a high level of thermal fluctuations, as a direct consequence of the fluctuation–dissipation theorem. Still, the loss of regular SS-patterns at overdamping may seem surprising. After all, at high damping all fluctuations are damped in a very short time. As a consequence, thermal activation of 'early slips' becomes ineffective, so that all slip events start very close to points of mechanical instability. In combination with the suppression of multiple slips, this might make one expect the SS-patterns to become very regular. However, such an expectation would be wrong. After all, the points of mechanical instability are determined by the force balance of Eq. 1 (or better, Eqs. 
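Combining Eqs. 2-4 (with \(k_{\text{at}} \approx k_{\text{dyn}}\)) suggests that the ratio of the contact damping to the critical damping of the tip apex is simply \(N_{\text{cont}}/\sqrt{N_{\text{dyn}}}\), the quantity called \(D\) in the phase diagram discussed below. The short Python check below (an addition of ours, not part of the paper) reproduces the damping labels given in the caption of Fig. 3.

import math

N_cont = 100
for panel, N_dyn in [("a", 1e8), ("b", 1e6), ("c", 4e4), ("d", 2.5e3)]:
    D = N_cont / math.sqrt(N_dyn)     # damping relative to critical (cf. Fig. 4)
    regime = "underdamped" if D < 1.0 else ("overdamped" if D > 1.0 else "critical")
    print(f"panel {panel}: N_dyn = {N_dyn:.1e} -> D = {D:.2f} ({regime})")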
5 and 6) and the strong force fluctuations that accompany the strong damping introduce strong variations in the precise locations at which the slip events commence. This establishes the second mechanism by which the SS-patterns become more chaotic. This trend can be recognized in Fig. 3d. Only in a limited window around critical damping, the dissipation rate is high enough to avoid frequent multiple slip events and the statistical force fluctuations are low enough to leave the pattern of slip positions well defined. As Eq. 4 indicates, calculations for other values of \({N_{{\text{cont}}}}\) give similar results, when \({N_{{\text{dyn}}}}\) is changed accordingly. Figure 4 summarizes our understanding in the form of a friction 'phase' diagram. Depending on the number of atoms in the contact \({N_{{\text{cont}}}}\) and the total number of atoms in the dynamic part of the tip \({N_{{\text{dyn}}}}\), the diagram goes from underdamped motion (upper left corner) with multiple slips, to overdamped behavior (lower right) with strong fluctuations. Only the intermediate regime around critical damping, \(D\,=\,1\) in Fig. 4, is characterized by FFM-patterns with clear atomic periodicities. Note that \({N_{{\text{dyn}}}}\) cannot be lower than \({N_{{\text{cont}}}}\), so that the far lower right of the diagram is unphysical. For further details and trajectories of the tip and the cantilever, we refer to [31]. Friction 'phase' diagram as a function of the number of atoms \({N_{{\text{cont}}}}\) in the contact and the number of dynamic atoms \({N_{{\text{dyn}}}}\). Note that the accessible region in the diagram is that above the gray dashed line, \({N_{{\text{dyn}}}}\, \geqslant \,{N_{{\text{cont}}}}\). The blue line indicates the situation for critical damping of the tip apex motion, \(~D \equiv {N_{{\text{cont}}}}/\sqrt {{N_{{\text{dyn}}}}} =1\). The colors indicate the quality of the stick-slip patterns. Green corresponds to a clearly recognizable atomic lattice, and red to strongly washed out patterns. Both underdamping and overdamping destroy the lattice signature, due to multiple slips and due to strong thermal fluctuations, respectively 6 Summary and Discussion Stick-slip motion in friction force microscopy with clearly visible atomic periodicity is only possible by virtue of the high-speed motion that is carried out by a small and highly dynamic mass at the very end of the tip apex. The speed needs to be high enough to make the motion close to critically damped. Damping that would be either much higher or much lower would destroy the periodic patterns via excessively strong fluctuations or frequent slips of the tip over multiple lattice spacings. In this study, we arrived at this insight through estimates of the order of magnitude of the dissipative forces, contributed by the atoms in the contact and we formulated a condition on the effective shape of the tip apex, in Eq. 4. By integrating a set of four coupled Langevin equations, two for the x- and y-coordinates of the 'regular' tip-plus-cantilever combination and two for the x- and y-coordinates of the tiny dynamic mass at the end of the tip, we numerically explored the conditions that the tip needs to satisfy, which is summarized in the 'phase' diagram of Fig. 4. The conclusion drawn here highlights a hitherto unrecognized signature of the flexibility and ultralow mass of the tip apex and the resulting, highly dynamic character of its motion. 
In several earlier publications [e.g., 3,27–30], we concentrated on cases where the atomic corrugation of the interaction potential between tip and substrate is relatively modest, so that the high rate, with which the rapidly moving tip is 'attempting' to overcome the energy barriers, can lead to a delocalization of the contact and a corresponding, extreme lowering of the friction force—an effect that we have nick-named "thermolubricity." In the present manuscript, we focused on the increase in dissipation, associated with the high velocity that the tip acquires during a slip event, which is relevant for turning the sliding into nearly ideal atom-scale stick-slip motion, in case of a more pronounced corrugation of the interaction potential. In view of the strict conditions found here for \({N_{{\text{cont}}}}\) and \({N_{{\text{dyn}}}}\), it may seem miraculous that many FFM-experiments yield force maps with clear, atomic-scale SS-motion. However, the tips used in AFM- and FFM-experiments have geometries that make them naturally fall in the central, green part of the diagram. In other words, even though we now understand that it is far from trivial that an FFM-experiment would be sensitive at all to the lattice periodicity, it is the typical shape of the employed tips that leads to a nearly critically damped motion of the tip apex, once it is brought in contact with the surface. We stress that the naïve, ideal FFM setup (Fig. 1c), equipped with a fully rigid tip apex, should always perform highly underdamped motion (upper left, far outside the diagram of Fig. 4) and should never produce atomic-scale stick-slip patterns. What we have found here for a single asperity, moving over atomic distances, may also apply to frictional energy dissipation in practical contacts, consisting of large numbers of micrometer-scale contact regions. Both the geometry of each contact apex and the elasticity with which it is connected to the rest of the moving body determine the amount of energy each contact can dissipate. We speculate that when the connection between apex and body is made more rigid, e.g., by making the contacting surfaces nearly perfectly flat, the motion of the apices should become less strongly damped and the average friction force should reduce. The other extreme would be to pattern surfaces deliberately, for example in the form of nano-pillar arrays, to pre-define \({N_{{\text{dyn}}}}\) and \({k_{{\text{dyn}}}}\) in order to make the apex motion highly overdamped, which would maximize the friction force. We note that more advanced algorithms exist for the numerical integration of Langevin equations [32]; our application of the traditional Verlet scheme may have introduced numerical inaccuracies in the calculated trajectories and the precise value of the simulated temperature. The authors gratefully acknowledge support from the European Research Council through the ERC Advanced Grant project Science F(r)iction and from the Netherlands Organisation for Scientific Research (NWO), through the FOM-Program on Fundamental Aspects of Friction. Prof. André Schirmeisen (Justus-Liebig University, Gießen, Germany) is gratefully acknowledged for providing us with the data of panels (a) and (b) in Fig. 1. Appendix A: Mechanism and Location of Dissipation: Arguments for Linear Scaling of Dissipation Rate with Contact Size The main, but not sole mechanism (see below) of mechanical energy dissipation in friction force microscopy is related to the rapid motion of the tip apex with respect to the surface. 
It is accompanied by the creation of phonons in the substrate and in the tip. These phonons lead to the dissipation (irretrievable loss) of energy [19], and result (at a later stage) in thermalization. As mentioned in the main text, damping of the motion of an adsorbed atom on a surface is always close to critical. In the case of a sharp tip that is in contact with the surface via one single atom, phonons are produced not only in the substrate but also in the tip. As a result, the dissipation rate will then be higher than in the adsorbed-atom case by at most a factor two. In this study, we have assumed the total dissipation rate, experienced when the tip apex is blunt, simply to be proportional to number of atoms in contact \({N_{{\text{cont}}}}\) (see Eq. 3). A justification for this can be given at more than one level. Obviously, if we consider each atom in the contact as a separate, independent (Einstein) oscillator that is vibrationally excited in the slip process and dissipates that vibrational energy independently, our model automatically exhibits the linear scaling of Eq. 3. In a more realistic description, the \({N_{{\text{cont}}}}\) contacting atoms are treated as coupled oscillators, rather than independent ones. The total number of vibrational modes introduced by these atoms amounts to \(3{N_{{\text{cont}}}} - 6\), which is nearly proportional to the contact size. If, on average, each of these modes contributes equally strongly to the energy dissipation, this description still results in near-linear scaling, very close to Eq. 3. Of course, the vibrations that are relevant here are not limited to the \({N_{{\text{cont}}}}\) contact atoms, but also involve nearby tip atoms that are not in contact with the counter-surface themselves. As a result, more than \(3{N_{{\text{cont}}}} - 6\) modes will be involved and the effective number of modes contributing to the dissipation should scale super-linearly with \({N_{{\text{cont}}}},~\)for example approximately proportional to \(N_{{{\text{cont}}}}^{{{\raise0.7ex\hbox{$3$} \!\mathord{\left/ {\vphantom {3 2}}\right.\kern-0pt}\!\lower0.7ex\hbox{$2$}}}}\) (under the assumption that the aspect ratios of the tip dimensions are all constant). Such an alternative scaling will lead to modifications in the powers of \({N_{{\text{cont}}}}\) in Eqs. 3 and 4 (3/2 and 3, respectively, instead of 1 and 2) and a change in the horizontal axis of Fig. 4. We should also mention the phonon discrimination mechanism that was proposed in [23] to explain surface diffusion of atomic clusters. Surface motion of a large (rigid) object can couple only to phonons in the substrate that have wavelengths that are comparable to or larger than the lateral size of the object. This discrimination effect actually leads to sublinear scaling with the size as \(\sqrt {{N_{{\text{cont}}}}}\). Combining this sublinear tendency with the super-linear tendency mentioned in the previous paragraph, we may expect the naïve, linear scaling to provide a useful order-of-magnitude estimate. We close this part by emphasizing that, in view of the low values of \({N_{{\text{cont}}}}\), considered here, the numerical effects of any non-linearity will necessarily be modest and the qualitative conclusions reached in this article should remain valid. For completeness, also more 'remote' channels of mechanical energy dissipation may be active in friction force microscopy. For example, the cantilever may exhibit internal damping. 
Since the quality factors measured for free cantilevers are typically in the order of thousands [14], this internal damping must be negligibly weak. Similarly, the internal damping of the bending motion of the tip or its apex must be insignificant, since the quality factors measured for cantilevers with the tip in contact with a substrate are still relatively high, in the order of hundreds [14]. These observations justify the choice in our calculations not to introduce explicit damping in Eqs. 7 and 8 (see Appendix B), and not to include terms depending on the relative velocity of the tip apex and the cantilever in Eqs. 5 and 6. Note that Eqs. 5–8 do imply modest, indirect damping of the cantilever via the slow, forced motion of the tip with respect to the substrate; this effect is smaller than the explicit damping of the rapid mode by a factor \({m_{{\text{dyn}}}}/{m_{{\text{cant}}}}\). Appendix B: Details of the Numerical Calculations Equations of Motion In our calculations, we numerically integrated the equations of motion for the dynamic tip mass, \({m_{{\text{dyn}}}},\) and of the combination of the rest of the tip and the moving part of the cantilever, \({m_{{\text{cant}}}}.\) Both were followed in two dimensions, leading to the following four coupled Langevin equations: $${m_{{\text{dyn}}}}{\ddot {x}_{{\text{tip}}}}= - {\left. {\frac{{\partial {V_{{\text{int}}}}\left( {x,y} \right)}}{{\partial x}}} \right|_{\left( {x,y} \right)=\left( {{x_{{\text{tip}}}},{y_{{\text{tip}}}}} \right)}} - {k_{{\text{dyn}}}}\left( {{x_{{\text{tip}}}} - {x_{{\text{cant}}}}} \right)+{\xi _x} - \gamma {\dot {x}_{{\text{tip}}}},$$ $${m_{{\text{dyn}}}}{\ddot {y}_{{\text{tip}}}}= - {\left. {\frac{{\partial {V_{{\text{int}}}}\left( {x,y} \right)}}{{\partial y}}} \right|_{\left( {x,y} \right)=\left( {{x_{{\text{tip}}}},{y_{{\text{tip}}}}} \right)}} - {k_{dyn}}\left( {{y_{{\text{tip}}}} - {y_{{\text{cant}}}}} \right)+{\xi _y} - \gamma {\dot {y}_{{\text{tip}}}},$$ $${m_{{\text{cant}}}}{\ddot {x}_{{\text{cant}}}}= - {k_{{\text{cant}}}}\left( {{x_{{\text{cant}}}} - {x_{{\text{supp}}}}} \right) - {k_{{\text{dyn}}}}\left( {{x_{{\text{cant}}}} - {x_{{\text{tip}}}}} \right),$$ $${m_{{\text{cant}}}}{\ddot {y}_{{\text{cant}}}}= - {k_{{\text{cant}}}}\left( {{y_{{\text{cant}}}} - {y_{{\text{supp}}}}} \right) - {k_{{\text{dyn}}}}\left( {{y_{{\text{cant}}}} - {y_{{\text{tip}}}}} \right).$$ Note that Eqs. 7 and 8 have no damping and no noise term, which reduces them to straightforward Newton equations of motion. In our calculation, damping and noise exclusively derive from the tip–surface contact and are therefore only explicitly present in Eqs. 5 and 6. Here, the spring coefficients have been chosen equal for the x- and y-directions, but they can be replaced easily by an anisotropic choice. In our computational scheme, we numerically integrated these equations, using the Verlet algorithm to periodically update the four velocities, \({\dot {x}_{{\text{tip}}}},\) \({\dot {y}_{{\text{tip}}}}\), \({\dot {x}_{{\text{cant}}}},\) and \({\dot {y}_{{\text{cant}}}},\) and the four positions, \({x_{{\text{tip}}}},\) \({y_{{\text{tip}}}},\) \({x_{{\text{cant}}}},\) and \({y_{{\text{cant}}}},~\)iterating with a time step \({\text{d}}t\) [32].1 Interaction Potential The two-dimensional periodic potential \({V_{{\text{int}}}}\) describes the interaction between the tip apex and the crystal surface over which the tip is forced to slide. 
Here, we used a potential with the symmetry and lattice period of graphite, based on the potential that was used before by Verhoeven et al. [33 and references therein]. $${V_{{\text{int}}}}\left( {x,y} \right)= - \frac{2}{9}{U_0}\left[ {2{\text{cos}}\left( {{a_1}x} \right){\text{cos}}\left( {{a_2}y} \right)+{\text{cos}}\left( {2{a_2}y} \right)} \right]$$ with \({a_1}=2\pi /\left( {0.246~{\text{nm}}} \right)\) and \({a_2}=2\pi /\left( {0.426~{\text{nm}}} \right)\), determined by the periodicity of the graphite surface. The prefactor of \(2/9\) serves to make the peak-to-peak variation of \({V_{{\text{int}}}}\) equal to \({U_0}.\) Time Step For the time step \({\text{d}}t\) between subsequent iterations, we introduce two upper limits. On the one hand, \({\text{d}}t\) should be short enough to capture the behavior of the most dynamic element in the system, namely the dynamic tip. Therefore, we respect one upper limit for \({\text{d}}t\) as 10% of the vibrational period of the tip apex, $${\text{d}}t_{1}^{{max}}=0.1 \cdot 2\pi \sqrt {\frac{{{m_{{\text{dyn}}}}}}{{{k_{{\text{dyn}}}}+{k_{{\text{latt}}}}}}},$$ in which \({k_{{\text{latt}}}}\) is the maximum force derivative by which the interaction potential contributes to the total spring constant, experienced by the tip apex. The second constraint on the time step derives from the random force, generated via the noise terms \({\xi _x}\) and \({\xi _y}\) in Eqs. 5 and 6. In case of strong dissipation, the noise will be high too, as they are related via the fluctuation–dissipation theorem. Hence, the influence of the noise on the acceleration will be high and thus we need to choose an accordingly small time step. The characteristic timescale of the noise is set by the dissipation rate and the mass, and we verified that our numerical results are sufficiently reliable and stable when we keep \({\text{d}}t\) below 20% of this time, which defines our second upper limit as $${\text{d}}t_{2}^{{{\text{ma}}x}}=0.2 \cdot \frac{{{m_{{\text{dyn}}}}}}{\gamma }.$$ We combine these two conditions into \(dt={\text{min}}\left( {dt_{1}^{{{\text{max}}}},dt_{2}^{{{\text{max}}}}} \right).\) Noise Term The noise term \(\xi\) in Eqs. 5 and 6 is provided in the calculations as a Gaussian-distributed, uncorrelated random number with a standard deviation in accordance with the fluctuation–dissipation theorem \(\left< {\xi \left( t \right)\xi \left( {t'} \right)}\right>=2\gamma {k_B}T\delta \left( {t - t'} \right)\), where \({k_B}\) is the Boltzmann constant. Actual \(\xi\)-values are generated in our calculations by means of regular, uniformly distributed random numbers and a Box–Muller transformation [34], in order to obtain a Gaussian distribution. We verified that this procedure indeed generates a Gaussian distribution of random forces, with an average of zero and a mean square value satisfying the fluctuation–dissipation theorem for the specific combination of temperature \(T\) and dissipation rate \(\gamma\) of the calculation. Damping of the Cantilever In the numerical calculations reported, no damping was applied to the cantilever. As typical cantilevers have very high-quality factors (thousands), even in contact (hundreds) [14], both the damping and the noise on the cantilever motion are relatively small (see also Appendix A). Calculations were performed to check this in case of a close-to-critically-damped tip apex and almost no effect on the dynamic tip apex behavior was observed at typical cantilever damping values. 
Damping of the Cantilever

In the numerical calculations reported, no damping was applied to the cantilever. As typical cantilevers have very high quality factors (thousands), even in contact (hundreds) [14], both the damping and the noise on the cantilever motion are relatively small (see also Appendix A). Calculations were performed to check this in the case of a close-to-critically-damped tip apex, and almost no effect on the dynamic tip apex behavior was observed at typical cantilever damping values. Taking confidence from these observations, we set the damping of the cantilever to zero in all calculations reported here, which resulted in a significant reduction in calculation time.

Scan Trajectory of the Support

In FFM experiments, as in most scanning probe experiments, the support is scanned over the surface in a line-by-line fashion. In our calculations, we oriented our coordinate system such that the scan lines ran along the x-axis. Every combination of a forward and subsequent backward line was performed at a fixed y-coordinate of the support. Prior to the next forward–backward line pair, the support made a small step along the y-direction. We have faithfully followed this scan sequence in our calculations, because it leads to a characteristic hysteresis in the lateral force maps, both along the x-direction and, after a few initial scan lines, also along the y-direction. For example, during the initial part of each forward line, the x-force first has to reduce to zero, change sign, and build up in the opposite direction to a level high enough to induce slip events. Such hysteresis is commonly observed in experiments [2] and is reproduced well by our calculations, indicating that not only the typical slip-induced force variations in the calculations are similar to those in experiments, but also the typical force level at which slip events take place.
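The forward/backward raster pattern described above is straightforward to reproduce. The sketch below generates the sequence of support positions for such a scan; it is only an illustration of the scan geometry (the step sizes and ranges in the example are arbitrary placeholder values), not the driving routine used in the calculations.

```python
def support_trajectory(x_range, y_range, dx, dy):
    """Yield (x_supp, y_supp) positions for a line-by-line forward/backward scan.

    Each forward line along +x is followed by a backward line along -x at the
    same y; only then does the support advance by dy along the y-direction.
    """
    n_x = int(x_range / dx) + 1
    n_y = int(y_range / dy) + 1
    for iy in range(n_y):
        y = iy * dy
        forward = [ix * dx for ix in range(n_x)]
        for x in forward:              # forward line
            yield (x, y)
        for x in reversed(forward):    # backward line at the same y-coordinate
            yield (x, y)

# Example (hypothetical values): a 2.5 nm x 2.5 nm scan, 0.005 nm steps along x
# and 0.05 nm row spacing along y.
# positions = list(support_trajectory(2.5, 2.5, 0.005, 0.05))
```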
References
1. Persson, B.N.J.: Sliding Friction, Physical Principles and Applications, 2nd edn. Springer, Berlin (2000)
2. Gnecco, E., Meyer, E. (eds.): Fundamentals of Friction and Wear on the Nanoscale, 2nd edn. Springer, Cham (2015)
3. Krylov, S.Yu., Frenken, J.W.M.: The physics of atomic-scale friction: basic considerations and open questions. Phys. Status Solidi B 251(4), 711 (2014)
4. Schirmeisen, A., Jansen, L., Fuchs, H.: Tip-jump statistics of stick-slip friction. Phys. Rev. B 71(24), 245403 (2005)
5. Prandtl, L.: Ein Gedankenmodell zur kinetischen Theorie der festen Körper. Z. Angew. Math. Mech. 8, 85 (1928)
6. Tomlinson, G.A.: A molecular theory of friction. Philos. Mag. 7, 905 (1929)
7. Carpick, R.W., Ogletree, D.F., Salmeron, M.: Lateral stiffness: a new nanomechanical measurement for the determination of shear strengths with friction force microscopy. Appl. Phys. Lett. 70(12), 1548 (1997)
8. Wieferink, C., Krüger, P., Pollmann, J.: Simulations of friction force microscopy on the KBr(001) surface based on ab initio calculated tip-sample forces. Phys. Rev. B 83(23), 235328 (2011)
9. Roth, R., Glatzel, T., Steiner, P., Gnecco, E., Baratoff, A., Meyer, E.: Multiple slips in atomic-scale friction: an indicator for the lateral contact damping. Tribol. Lett. 39(1), 63 (2010)
10. Steiner, P., Roth, R., Gnecco, E., Baratoff, A., Maier, S., Glatzel, T., Meyer, E.: Two-dimensional simulation of superlubricity on NaCl and highly oriented pyrolytic graphite. Phys. Rev. B 79(4), 045414 (2009)
11. Tshiprut, Z., Filippov, A.E., Urbakh, M.: Effect of tip flexibility on stick-slip motion in friction force microscopy experiments. J. Phys.: Condens. Matter 20(35), 354002 (2008)
12. Reimann, P., Evstigneev, M.: Description of atomic friction as forced Brownian motion. New J. Phys. 7(1), 25 (2005)
13. Nakamura, J., Wakunami, S., Natori, A.: Double-slip mechanism in atomic-scale friction: Tomlinson model at finite temperatures. Phys. Rev. B 72(23), 235415 (2005)
14. Maier, S., Sang, Y., Filleter, T., Grant, M., Bennewitz, R., Gnecco, E., Meyer, E.: Fluctuations and jump dynamics in atomic friction experiments. Phys. Rev. B 72(24), 245418 (2005)
15. Conley, W.G., Raman, A., Krousgrill, C.M.: Nonlinear dynamics in Tomlinson's model for atomic-scale friction and friction force microscopy. J. Appl. Phys. 98(5), 053519 (2005)
16. Dudko, O.K., Filippov, A.E., Klafter, J., Urbakh, M.: Dynamic force spectroscopy: a Fokker-Planck approach. Chem. Phys. Lett. 352, 499 (2002)
17. Sang, Y., Dubé, M., Grant, M.: Thermal effects on atomic friction. Phys. Rev. Lett. 87(17), 174301 (2001)
18. Johnson, K.L., Woodhouse, J.: Stick-slip motion in the atomic force microscope. Tribol. Lett. 5(2–3), 155 (1998)
19. Hu, R., Krylov, S.Yu., Frenken, J.W.M.: On the origin of frictional dissipation (to be published)
20. Meyer, J.: Ab Initio Modeling of Energy Dissipation during Chemical Reactions at Transition Metal Surfaces. PhD dissertation, Freie Universität Berlin, Berlin (2011)
21. Lorente, N., Ueba, H.: CO dynamics induced by tunneling electrons: differences on Cu(110) and Ag(110). Eur. Phys. J. D 35(2), 341 (2005)
22. Linderoth, T.R., Horch, S., Lægsgaard, E., Stensgaard, I., Besenbacher, F.: Surface diffusion of Pt on Pt(110): Arrhenius behavior of long jumps. Phys. Rev. Lett. 78(26), 4978 (1997)
23. Krylov, S.Yu.: Surface gliding of large low-dimensional clusters. Phys. Rev. Lett. 83(22), 4602 (1999)
24. Senft, D.C., Ehrlich, G.: Long jumps in surface diffusion: one-dimensional migration of isolated adatoms. Phys. Rev. Lett. 74(2), 294 (1995)
25. Ala-Nissila, T., Ying, S.C.: Universality in diffusion of classical adatoms on surfaces. Mod. Phys. Lett. B 4(22), 1369 (1990)
26. Weber, B., Suhina, T., Junge, T., Pastewka, L., Brouwer, A.M., Bonn, D.: Molecular probes reveal deviations from Amontons' law in multi-asperity frictional contacts. Nat. Commun. 9, 888 (2018)
27. Krylov, S.Yu., Frenken, J.W.M.: The crucial role of temperature in atomic scale friction. J. Phys.: Condens. Matter 20(35), 354003 (2008)
28. Krylov, S.Yu., Frenken, J.W.M.: Thermal contact delocalization in atomic scale friction: a multitude of friction regimes. New J. Phys. 9, 398 (2007)
29. Abel, D.G., Krylov, S.Yu., Frenken, J.W.M.: Evidence for contact delocalization in atomic scale friction. Phys. Rev. Lett. 99(16), 166102 (2007)
30. Krylov, S.Yu., Dijksman, J.A., Van Loo, W.A., Frenken, J.W.M.: Stick-slip motion in spite of a slippery contact: do we get what we see in atomic friction? Phys. Rev. Lett. 97(16), 166103 (2006)
31. Van Baarle, D.W.: The origins of friction and the growth of graphene. PhD thesis, Leiden University, The Netherlands (2016). ISBN 978-90-8593-277-2. Available electronically at https://openaccess.leidenuniv.nl
32. Grønbech-Jensen, N., Farago, O.: A simple and effective Verlet-type algorithm for simulating Langevin dynamics. Mol. Phys. 111(8), 983 (2013)
33. Verhoeven, G.S., Dienwiebel, M., Frenken, J.W.M.: Model calculations of superlubricity of graphite. Phys. Rev. B 70(16), 165418 (2004)
34. Box, G.E.P., Muller, M.E.: A note on the generation of random normal deviates. Ann. Math. Statist. 29(2), 610 (1958)
Author affiliations:
1. Advanced Research Center for Nanolithography, Amsterdam, The Netherlands
2. Huygens–Kamerlingh Onnes Laboratory, Leiden University, Leiden, The Netherlands
3. Institute of Physical Chemistry and Electrochemistry, Russian Academy of Sciences, Moscow, Russia
4. Institute of Physics, University of Amsterdam, Amsterdam, The Netherlands
van Baarle, D.W., Krylov, S.Yu., Beck, M.E.S. et al. Tribol. Lett. (2019) 67: 15. https://doi.org/10.1007/s11249-018-1127-6. Received 21 July 2018.
Bacterial protein meta-interactomes predict cross-species interactions and protein function
J. Harry Caufield1, Christopher Wimble1, Semarjit Shary1, Stefan Wuchty2,3,4 & Peter Uetz1
Protein-protein interactions (PPIs) can offer compelling evidence for protein function, especially when viewed in the context of proteome-wide interactomes. Bacteria have been popular subjects of interactome studies: more than six different bacterial species have been the subjects of comprehensive interactome screens, while several more have had substantial segments of their proteomes screened for interactions. The protein interactomes of several bacterial species have been completed, including several from prominent human pathogens. The availability of interactome data has brought challenges, as these large data sets are difficult to compare across species, limiting their usefulness for broad studies of microbial genetics and evolution. In this study, we use more than 52,000 unique protein-protein interactions (PPIs) across 349 different bacterial species and strains to determine their conservation across data sets and taxonomic groups. When proteins are collapsed into orthologous groups (OGs), the resulting meta-interactome still includes more than 43,000 interactions, about 14,000 of which involve proteins of unknown function. While conserved interactions provide support for protein function in their respective species data, we found only 429 PPIs (~1% of the available data) conserved in two or more species, a scarcity that limits the immediate usefulness of cross-species interactome comparisons. The meta-interactome serves as a model for predicting interactions, protein functions, and even full interactome sizes for species with limited to no experimentally observed PPIs, including Bacillus subtilis and Salmonella enterica, which are predicted to have up to 18,000 and 31,000 PPIs, respectively. In the course of this work, we have assembled cross-species interactome comparisons that will allow interactomics researchers to anticipate the structures of yet-unexplored microbial interactomes and to focus on well-conserved yet uncharacterized interactors for further study. Such conserved interactions should provide evidence for important but yet-uncharacterized aspects of bacterial physiology and may provide targets for anti-microbial therapies.
Our understanding of a protein's role in a biological system strongly depends on its placement in a network of protein-protein interactions (PPIs), or interactome. Recently, interactome data sets involving proteins from various microbial species have been constructed using experimental [1, 2] and inferred data (Table 1), while numerous databases have been created to store and disseminate this information [3–5]. Bacterial proteomes are particularly attractive subjects for interactome analysis due to their manageable size. The proteomes of many bacterial species include only a few thousand proteins, suggesting that they are about an order of magnitude smaller than their counterparts in many animals and plants. Therefore, most bacterial species provide more tractable interactomes compared to the human genome, which has more than 20,000 protein-coding genes [6] and more than 650,000 predicted PPIs [7].
Table 1 Comprehensive experimental microbial interactome sizes
Nearly all published bacterial interactomes have been created using either the yeast two-hybrid (Y2H) system or affinity purification followed by mass spectrometry analysis (AP/MS). Although E.
coli is the only bacterial species with a comprehensive interactome that has been studied by both Y2H [8] and AP/MS [9] methodologies a comparison of both methods surprisingly showed largely non-overlapping interaction data sets. In the Y2H data set of 2234 E. coli PPIs roughly 1800 were found outside of known protein complexes [8]. Similarly, roughly a third of ~1500 interactions that are thought to occur in protein complexes were detected by the Y2H approach, indicating that existing methodologies in isolation produce incomplete datasets [8]. A way to overcome such problems is to combine not only different datasets from the same species but also data from different species. Although cross-species interactome approaches have been recently presented for human and yeast protein sets [10] no comprehensive comparison of bacterial interactomes currently exists. While the majority of reports focus on one interactome (Fig. 1), far fewer include data from more than one set of interactions, and just two recent reports [11, 12] have investigated 6 or more out of 11 available large-scale bacterial interactome datasets. One of these studies provides an analysis of bacterial genomes in terms of their predicted functional complexity rather than the exact interactions in their interactomes [11]. Other studies dealt with four or five published interactomes (see Additional file 1 for a guide to all additional files and a complete list of interactome publications discussed here in Additional file 2), presenting only a general discussion of the evolution of protein networks [13] or a review of ways to mine high-throughput experimental data to link gene and function [14]. Citation analysis of the bacterial interactome literature. Publication counts include all papers that cite at least one of a total of 11 published bacterial interactome studies (as of August 2015) One of the most promising applications of interactomics is in the analysis of protein function. In a "guilt by association" approach [15, 16], PPIs provide context to proteins by considering functional roles of their known interaction partners. For example, a protein that interacts predominantly with metabolic proteins probably has a role in metabolism as well. In particular, such a method has been applied as part of the analysis of interactomics data [9, 17, 18]. As part of a guilt-by-association approach, proteins and their interaction networks may be compared through their participation in orthologous groups (OGs). Specifically, OGs are defined through non-supervised, taxonomy-limited methods [19] and reduce the complexity of interaction networks by joining proteins of similar sequence and potentially similar function. An orthology-based approach may be species-independent and can allow interaction networks of different species to be used to predict uncharacterized, conserved interactions as well as to provide an evolutionary basis for the reasons an interaction may not be present. Although analyses of conserved networks have been performed [20], in some cases alongside interactome studies [21], they have generally been limited by low proteome coverage in the underlying interactomes. Here, we combine experimentally-derived, previously published PPIs from 349 bacterial species and strains to form a consensus meta-interactome, using orthologous groups (OG) of proteins to combine all known interactions into a single network. Notably, we observe that such a network shares characteristics of single species interactomes. 
Furthermore, the augmentation of single species interaction networks with a bacterial meta-interactome boosts our ability to predict functions of the underlying proteins, given its dramatically increased information content. Finally, we utilize such a bacterial meta-interactome to predict interactome sizes of species for which incomplete interaction data is available. The bacterial meta-interactome resembles individual interactomes in structure To compare interactions across multiple species, we first mapped proteins to orthologous groups (OGs; for details see Methods). As a source of information about OGs, we utilized the EggNOG database [19], expanding the idea of clusters of orthologous groups [22] constructed from numerous organisms. As a source of PPIs in bacteria we utilized the IntAct database [3]. Furthermore, we accounted for the protein interactome of Mesorhizobium loti [23], a PPI data set that was not available in the IntAct database. In particular, we accounted for all experimental sources of PPIs, suggesting that the majority of interactions (>60%) have been found in E. coli and C. jejuni (Fig. 2a). Based on the total set of roughly 52,000 interactions between proteins in the underlying organisms, we merged their OGs, resulting in a meta-interactome with nodes and edges of differing weights (Fig. 2b). In total, we obtained a consensus meta-interactome of 8475 orthologous groups that are embedded in web of 43,545 weighted links, covering 349 distinct bacterial species and strains (Fig. 2c, see Additional file 3 for details). Such a network consisted of 205 connected components that included 1352 self-connected nodes. Moreover, the largest component pooled 88.9% of all nodes. In Fig. 2d, we observed that the majority of OGs in the meta-interactome corresponds to a single protein while the majority of links is composed of one interaction (Fig. 2e, f). Since the average weight of links is 1.0 ± 0.1, we can consider our network as largely unweighted. As a consequence, we found that the average path length in the unweighted network is 3.7 ± 0.9 while the diameter of the network is roughly 15, indicating small world network characteristics [24]. The average number of neighbors is 10.2 ± 23.9, an average that is likely influenced by the presence of several broadly-defined OGs. Since these large OGs contain thousands of members across hundreds of genomes in some cases, we treat them as groups of paralogs [22]. Demonstrating the scale-free tendency of many similar networks [25], we found that the distribution of the number of neighbors has a fat tail. Consensus meta-interactome. a A breakdown of source species of the meta-interactome shows that PPIs in E. coli or C. jejuni contributed to more than half of the total set of interactions in the meta-interactome. b We defined the pool of interactions between proteins in different bacteria as the meta-interactome. To account for homologous proteins, we considered groups of orthologous proteins (OG) as nodes in a consensus meta-interactome. In particular, we weighted links between OGs by the underlying number of observed interactions between proteins in such groups. c The main component of the consensus meta-interactome pools 88.9% of all OGs. Our graphical depiction suggests that the majority of OGs consist of one protein, while such groups are mostly linked by one underlying PPI. 
d More quantitatively, we found that the majority of OGs in the consensus meta-interactome indeed have only one protein while a minority of groups includes many proteins. e The distributions of the number of PPI that connect proteins in different OGs (f) as well as the number of neighboring OGs decay as a power-laws Functional annotation of orthologous groups Single interactomes are known to have many gaps, indicating PPIs that went undetected in experimental studies [26, 27]. Since a missed interaction in one study may be found in an independent study through evolutionary conservation of the corresponding interacting proteins, a meta-interactome network potentially reveals such gaps. As such, we assume that links between orthologous groups in the consensus meta-interactome may be indicative of undetermined PPIs between orthologs in the corresponding organisms. Counting the number of bacteria a given PPI was observed in we found that relatively few interactions appear in multiple bacterial species (Fig. 3a). In particular, we found 43,116 interactions that occurred only in a single species, 361 appeared in two species while only 68 interactions occurred in three or more species. Conserved and cross-functional interactions in the consensus meta-interactome. a Counts of PPIs in the consensus meta-interactome network. Nspecies indicates the number of distinct bacterial species contributing the interaction; a value of 1 denotes an interaction observed for a single species only. For each count, subsets denote how many PPIs involve two, one, or zero interactors of known function (as both, one, and none, respectively). b Significant connections between functional classes are mediated by the underlying PPIs in the consensus meta-interactome. For each class combination we calculated a Z-score that reflects the significance of the interaction density between classes and class coverage. While interactions mostly appear between the same classes, we also observe that most functional cross-talk emanates from OGs with translational as well as posttranslational functions Any single bacterial proteome may contain hundreds or even thousands of proteins of unknown or unclear function. Out of more than 43,000 interactions, less than 10,000 involve two interactors of unknown or unclear function (Fig. 3a). Due to limited cross-species overlap, just a small subset of fewer than 100 interactions is observed in more than one species and involves one or more interactors of unknown function. Certain functional groups contribute more extensively to the meta-interactome than others, potentially reflecting the occurrence of more common types of PPIs across bacteria in general. In Fig. 3b, we determined the overrepresentation of functional crosstalk between orthologous groups based on the underlying interactions between different proteins in the consensus meta-interactome. In particular, we determined a log-odds ratio of the observed and expected frequencies of interactions between OGs of the corresponding functional classes, allowing us to calculate a Z-score (see Methods). While most interactions appeared between the same classes, significant cross-talk mostly emanated from OGs with translational as well as posttranslational functions (Fig. 3b). To determine the impact of the consensus meta-interactome on our ability to predict functions we generated a network of functionally annotated orthologous groups that were composed of PPIs between proteins in E. coli. 
In particular, we randomly sampled 80% of all functionally annotated OGs 1000 times to predict the functions of the remaining 20%. Using a stochastic model [28] (see Methods) every OG is represented by a profile, reflecting the probability of having a certain function. Applying different probability thresholds for the presence of a functional annotation, we determined ROC curves, and measured the area under the curve as a measure of the prediction quality (Fig. 4a). In comparison, we considered all remaining interactions in the consensus meta-interactome, demanding that each OG was functionally annotated. Analogously, we randomly sampled 20% of annotated OGs that appeared in the original network of OGs based on interactions in E. coli. Notably, we observed a shift toward increased values of the area under the ROC curve. Such a difference was statistically significant (P < 10-50, Student's t-test), suggesting that the augmentation of the underlying network with interactions from other bacteria significantly improved the quality of functional predictions (Fig. 4a). Analogously, we found similar results when we considered PPIs in C. jejuni (Fig. 4b, P < 10-50). Based on our random samples, we calculated the fraction of correctly predicted functions of OGs as a function of the degree in the underlying OG networks of E. coli and C. jejuni (inset, Fig. 4c). Specifically, we observed that increased number of links corresponds to elevated levels of prediction accuracy of a given OG. In the main plot of Fig. 4c, we assessed the impact of the consensus meta-interactome on the accuracy of predicted functions of OGs. Comparing frequencies of correctly predicted OGs, we found that the prediction of OGs with low degree was especially improved. The consensus meta-interactome improves functional predictions. a Predicting the functions of sampled OGs we observed that the addition of the consensus meta-interactome allowed for better functional prediction (P < 10-50, Student's t-test). Connecting functionally annotated orthologous groups (OG) if they harbored interacting proteins of E. coli we randomly sampled 20% of all OGs 1000 times and utilized the remainder to predict the functions of the sampled OGs. As a measure of the prediction quality we calculated the area under the ROC curve. In comparison, we augmented the underlying E. coli specific network of OGs with remaining links in the underlying consensus meta-interactome. b We obtained similar results when we considered OGs that were initially connected by interactions between proteins of C. jejuni. In the inset of (c) we calculated the fraction of correctly predicted functions of OGs as a function of the degree in the underlying OG networks of E. coli and C. jejuni, suggesting that increased number of links corresponds to elevated levels of prediction accuracy. Assessing the impact of the consensus meta-interactome on the accuracy of predicted functions of OGs, we observed that the functional prediction for OGs with low degree was improved To harness the power of the consensus meta-interactome, we used the network of OGs to predict the functions of otherwise functionally unknown orthologous groups. In Fig. 5, we observed that most OGs were clearly involved in translational functions and posttranslational modifications. Notably, such results corresponded well to the observations that most functional crosstalk emanated from these functional classes (Fig. 3b). Functional prediction of uncharacterized orthologous groups. 
Functional similarity of interacting orthologous groups in the network. Each orthologous group (specifically, a node in the network) occupies a single row in the heatmap. A node's degree in the consensus meta-interactome is shown on the right. Each column is a single functional category The meta-interactome predicts interactomes and their size The construction of a meta-interactome as described above can be utilized to predict the interactome of any species with or without interaction data. We used the consensus meta-interactome as a model to predict any potential interactions in a given proteome independently of the availability of protein interactions in the underlying organism. In particular, we considered all interactions between OGs that contain proteins of a given proteome of an underlying organism. As such, we consider all proteins of the given proteome interacting, when we find their corresponding OGs interacting. As a consequence, the interactome of a well-studied species such as E. coli can be improved by predicting yet undetected PPIs using data from a related but distinct species. This simple prediction method was used with all protein-coding genes from each of several representative bacterial species of varied genome and proteome size (Table 2). Out of all 11 bacterial species shown, six have had comprehensive protein interactomes published, and the data is reflected in the total number of proteins participating in PPIs with experimental evidence. To obtain a starting point for our predictions, we used the interactome size estimation methods developed by Stumpf et al. [7]. These methods primarily depend on the number of interactors and interactions in an experimental interactome to predict the true interactome size and therefore account for interactions not detected in the interactome. Here, we used the Stumpf et al. methods with three different counts of interactors and interactions: those from each of the six published interactomes, the larger counts found in the meta-interactome, and the fraction of the interactome derived from experimental data. In cases where a given species has been the subject of just one comprehensive interactome study (e.g., with Synechocystis), the counts provided by the first option are very similar to the third. Considering a set of reference proteomes (Table 2), we found that interactome sizes thus obtained appear to increase linearly with the proteome size of the underlying bacterial species (Fig. 6). This is based on the assumption that the average number of interactions (or "functions") per protein remains roughly the same, except in cases of genomes that increased by additional paralogs (which may be involved in additional interactions). For example, the E. coli genome codes for more than 4000 unique proteins, and more than 3000 of which have been found to participate in at least one PPI in one or more studies. The B. subtilis genome codes for roughly the same number of unique proteins but fewer than 1000 of these proteins have been found to participate in PPIs. However, B. subtilis has also been studied much less extensively, hence these numbers do not reflect the true number of interactions in a cell. Table 2 Predicted bacterial interactome sizes Predictions of maximal interactome size. Based on the consensus meta-interactome we show the upper bounds of predicted interactome size (in number of PPI) as a function of proteome size. 
Each point corresponds to the Uniprot reference proteome of a single species (see Methods for strain identities and text for details) The more interactions are detected, the fewer are left to be predicted. As a result, unstudied or incomplete interactomes have the largest potential for prediction. For instance, there are very few PPIs known from Streptococcus pneumoniae: just 63 of the 2030 proteins coded for in the S. pneumoniae R6 genome have experimental interactions in IntAct. Our predicted interactome for this protein set increases that total to 850 proteins (Fig. 6). Similar results are seen for B. subtilis and for Mycobacterium tuberculosis. Biological differences vs. technical differences in interactomes Published interactomes vary in size and composition across different studies and species, rendering them difficult to compare. In the case of Campylobacter jejuni, a genome of 1654 ORFs yielded an interactome of more than 11,000 distinct PPIs from yeast two hybrid (Y2H) screens using ~90% of the ORFs, or 1477 in total [29]. By contrast, the interactome of M. loti as reported by Shimoda et al. [30] includes just over 3100 PPI though its proteome contains 7281 predicted proteins. These discrepancies are clearly determined by different coverage: in the case of the M. loti interactome, the full genome was used as yeast two hybrid preys but only 1542 of 7281 genes were used as baits. This subset was curated as per the goals of the study and therefore represents a conscious technical difference between interactomes. The comparison of interactomes also reveals unavoidable methodological discrepancies. More than half of the PPIs contributing to the meta-interactome were observed using two hybrid methods, offering some methodological consistency, yet these methods may vary in technical implementation details such as protein expression conditions, growth conditions, or even the exact yeast or bacterial strains used. As we have shown previously, even when exactly the same protein pairs are tested by Y2H assays, small differences in the experimental protocol can yield dramatically different results [31]. Inclusion of affinity purification and mass spectrometry (AP/MS) approaches introduces another concern: AP/MS methods typically infer interactions from co-purification through a spoke model approach (that is, that a single bait is assumed to interact with all of its co-purified proteins) while two hybrid methods generally screen for binary interactions only. We have previously estimated that the spoke-model approach over-estimates the number of PPIs by about 3-fold [8]. In this study, we have attempted to reduce the impact of technical differences between interaction studies by focusing on the subset of interactions that we pooled from multiple species. This approach is especially effective for minimizing the influence of potentially erroneous spoke model interactions, as the bulk of these interactions in the meta-interactome are from just two species (E. coli and M. pneumoniae, both of which have been subjects of full protein complex surveys). In the meantime, we believe a cross-species approach is helpful for identifying expected PPI in interactomes. As seen in Fig. 3a, fewer than one thousand OG vs. OG interactions in the meta-interactome have been observed in more than one bacterial species, yet more interactions should be conserved across any two pairs of bacterial species. Finally, some differences among interactomes may be due to real distinctions in genetics and physiology. 
Many processes show considerable genetic variation in bacteria, even when they are traditionally considered to be highly conserved. For instance, ribosomes are surprisingly malleable [32, 33], as are flagella [17], cell division proteins [34] and protein complexes in general [35]. A more complete meta-interactome should therefore shed light on the biological differences between species.

Meta-interactomes reveal broadly-conserved interactions involving proteins of unknown function

Of all OG-OG interactions involving OGs of unknown or unclear function (UF OGs), fewer than 10 are seen in more than 2 different species (Fig. 3a). Highly conserved PPIs are thought to serve more fundamental processes in a cell (e.g. [8, 12]), hence we identified well-conserved interactions for function prediction. The most frequently observed PPIs (specifically, OG-OG interactions) across species are interactions among enzyme subunits, e.g. the alpha and beta subunits of tryptophan synthase, which is a well-studied interaction. A selection of interactions involving interactors of less clear function is shown in Table 3.

Table 3 Conserved interactions involving OGs of unclear function

This list omits broadly-conserved self-interactions, such as those among histidine kinases (ENOG4105BZU). An orthology-based approach is more informative when used with interactions among proteins in different groups (in this case, different OGs) than with interactions among proteins of the same OG, as individual protein identities are ignored in the consensus meta-interactome. We have made the assumption that cross-OG interactions are more likely to indicate cross-function interactions and are therefore of great relevance to functional context. This idea is illustrated by the MdaB and NDH-1 complexes: MdaB (ENOG4105NF4) proteins figure prominently in the meta-interactome. MdaB was first identified as a modulator of drug activity [36] and is still annotated as such in most databases. Later, Wang et al. (2004) characterized it as a novel antioxidant protein similar to NADPH nitroreductases, which play an important role in managing oxidative stress, a capability essential for successful colonization by H. pylori of its host [37]. Its mutants are unable to colonize human host cells [37]. However, the MdaB interaction network indicates another, unrelated function, as it interacts with three motility-related proteins in three different species: a chemotaxis protein (UniprotKB: O25152) from H. pylori, flagellin C (UniprotKB: P96747) from C. jejuni, and the chemotaxis protein CheW (UniprotKB: P0A964) from E. coli K-12. We suggest that the colonization phenotype is related to its motility rather than to oxidative stress. In fact, motility is critical for initial colonization of H. pylori in its host cells [38]. FlaC in particular is well characterized as an important factor for host cell invasion in C. jejuni [39].

NDH-1 complexes

Interactions between components of a protein complex can be reconstructed from the meta-interactome interactions. The cyanobacterial NDH-1 membrane protein complexes provide a good example: these proteins belong to a widely-conserved family of energy-converting NAD(P)H:quinone oxidoreductases which are unique to organisms capable of photosynthesis. Many distinct NDH-1 complexes may coexist in cyanobacteria to carry out different functions like respiration, cyclic electron transfer and CO2 uptake [40–42]. At least four NDH-1 complexes are predicted in cyanobacteria in Synechocystis 6803, usually called L, L', MS, and MS'.
Each complex is composed of a basal complex (NdhA-C, NdhE,G-K, NdhL-O) associated with variable subcomplexes of Ndh and Cup subunits (Fig. 7). Each complex has a different function: for example, NDH-1 L and L' are responsible for respiration and cyclic electron flow and NDH-1MS/MS' for CO2 uptake. The multitude of functionality of cyanobacteria is possible due to the presence of a great diversity of ndhD (D1-D6) and ndhF (F1, F3 and F4) gene families. It is possible that with sudden changes in CO2 levels, cyanobacteria can flexibly use the NDH-1 M basal subcomplex and change contents of its variable subcomplex to form MS and L complexes [42]. The NDH-1 complex as an example of conserved interactions. a An NDH component interaction network from multiple species. Each node in this network corresponds to a single orthologous group and is labeled with the corresponding group member in cyanobacteria (i.e., the sources of most of the PPI observed for NDH complex members). Groups are colored as in Part B; groups in gray have predicted accessory functions. Interactions between any proteins in two groups are shown as edges. Edges are colored as noted in the Key. b A model of the NDH-1MS complex in cyanobacteria. Figure after [54]. Each box corresponds to a protein or group of proteins; those labeled with a single letter are Ndh proteins. Dotted lines indicate alternate complex forms. See [54] for further details An example of the NDH-1MS (NDH-1 M, NdhD/F/CupA/CupS) network in Synechocystis 6803 and T. elongatus BP-1 is shown in Fig. 7a and a corresponding model is provided in Fig. 7b. Only one similar interaction (NuoD and NuoB) is observed in E. coli. CupA (ENOG4107YAI) has been found to interact with NdhF (ENOG4106TXZ), NdhD1-D4 (ENOG4105C8S), and an unknown protein (ENOG410906A) to form the NDH-IS (NdhD/F/CupA/CupS) sub-complex. Members of ENOG410906A, though coding for protein of unknown function, have sequence similarity to Fasciclin superfamily proteins associated with cell adhesion in plants and algae species. Korste et al. (2015) [42] found a similar protein (UniprotKB: P73392) in Synechocystis 6803 and Q8DMA1 in T. elongatus BP-1 and designated it as CupS, a small subunit of the NDH-1MS complex [42]. The NMR studies showed that though the protein was structurally similar to the Fasciclin superfamily, but was not associated with adhesion, contrary to Fasciclin superfamily proteins, given its intracellular location. Though CupS has been shown to interact with NdhD/NdhF/CupA, its function is still unknown. This network data not only provides clarity about the interaction of NDH-1 complex proteins but also predicts a probable respiratory function for the members of ENOG410906A. Cyanobacterial meta-interactome networks (Fig. 7a) clearly show that the NdhH subunit interacts directly with all associated subunits, a point which had been missing in all predicted structures of NDH-I. Meta-interactomes predict interactome sizes If we assume that the average degree of a protein remains the same, independent of the proteome, then interactomes should grow linearly with proteome size and thus with genome size (Fig. 6). However, bias in the available data is likely creating distorted predictions: the E. coli data point (at the top of the figure) nearly does not fit the trend and most of the PPI predictions we can make originate with E. coli data. Additionally, predicted interactome sizes are limited by the number of unannotated or highly unusual genes in a genome. The largest genomes in this set, from P. 
aeruginosa and M. loti, contain ~310 and 737 genes without orthology predictions, respectively. Further annotation of these genes or interactions among their products may allow for interaction predictions more like those for other species. Total counts of OG vs. OG interactions corresponding to each taxon in the meta-interactome are provided in Additional file 6; in many cases, the meta-interactome contains just one interaction for a given species and hence suggests candidates for further exploration (here, taxon IDs are used instead of species to avoid counting interactions multiple times for closely related species or strains). Proteome size is likely just one trait contributing to the overall complexity of a species [43] and we may consider the interactome of that species to represent one facet of its complexity. Some methods used to estimate interactome size were intended for use with human or yeast proteins rather than those from bacteria [44, 45]. Another confounding factor is that false positives are likely to grow exponentially with increasing proteome size, e.g. because a fraction of proteins interact non-specifically with hydrophobic surfaces. The meta-interactome approach is an intentional abstraction. It is intended to underscore the bacterial cross-species commonality and conservation of protein interactions among currently available interaction data. As a result, this approach is limited by at least three main factors: limitations of protein-protein interaction screens, limitations of publicly-available data, and constraints on orthology prediction. All experimental interactomes are inherently incomplete and may include numerous false positives and otherwise erroneous results. The authors of these studies employ different filtering approaches and likely interpret their results based on expectations (e.g., some interactome studies eliminate frequently-interacting proteins like chaperones from their screens). Most of the available interaction data for bacterial proteins has focused on just a handful of species. Additional screens of proteins from more diverse sources across the bacterial tree of life will reveal a universe of yet unknown functions, just as gene sequences did for genetic diversity. In this work, we have assembled a set of more than 52,000 unique PPIs between bacterial proteins to perform cross-species interactome comparisons. The combined set, or meta-interactome, allows us to define a set of interactions observed across multiple species. Though this set is much smaller than expected, this result highlights the ongoing challenge of duplicating results of interactome screens. In an effort to address this challenge, we use the meta-interactome as a model for bacterial species without comprehensive interactome results, such as Bacillus subtilis and Salmonella enterica. We also employ the meta-interactome as a predictive tool to assign functions to uncharacterized proteins. These efforts and the methods presented here will allow researchers pursuing new interactome studies to easily predict the potential scope of their own results. As more bacterial interactomes reach completion, the interactions occupying prominent locations in a meta-interactome will likely reveal novel, broadly-conserved biological phenomena and appealing anti-microbial targets. Literature mining for citation analysis The initial stages of this project required assessment of whether comparisons of bacterial interactomes were common in the interactome literature. 
A list of 11 publications, each describing a single bacterial protein-protein interactome, was assembled as a representative set of the bacterial protein-protein interactome literature, namely those of H. pylori [12, 46], C. jejuni [29], Synechocystis [47], M. loti [23], T. pallidum [17], E. coli [8, 9], M. pneumoniae [48], M. tuberculosis [21], and S. aureus [49]. The full list of citations from each paper was retrieved from PubMed Central in XML format in August 2015. All citation lists were combined to determine citations shared by multiple publications in the set. Publications citing multiple representative interactome publications are those with potential for cross-interactome comparisons (see Additional file 2 for the list of publications in the set and their corresponding citations). PPI data sets The full set of interactions was obtained from the IntAct database [3]. To produce the data set used in this study, the full set of IntAct interactions was filtered by Uniprot taxonomy to include only protein-protein interactions (PPI) from bacterial sources (species:"taxid:2"). Prior to further filtering, this interaction set includes 63,421 PPI across all interaction types. All interactions without Uniprot identifiers (i.e., interactions involving ChEBI chemicals) were removed, as were interactions with erroneous annotation (i.e., interactions involving bacterial proteins vs. eukaryote proteins). The set of IntAct interactions was augmented with the protein interactome of Mesorhizobium loti [23]. Where possible, proteins were assigned membership in orthologous groups (OGs) using eggNOG v.4 NOGs [19]; proteins without OG annotation are treated as single-member OGs and referred to using their UniprotAC identifiers. All PPIs are retained in the data set regardless of experimental observation method; interactions derived from spoke-expansion models are treated identically to those defined as "direct" interactions. Construction of meta-interactome networks PPI sets were obtained and filtered using a set of scripts developed for the purpose, Network_umbra (available at https://github.com/caufieldjh/network-umbra). This program parses interaction data files in PSI-MI TAB 2.7 format (MITAB27; a format used by protein-protein interaction databases; developed by the HUPO Proteomics Standards Initiative and described in detail at https://code.google.com/p/psimi/wiki/PsimiTab27Format) and facilitates all further methods described in this study. The full set of PPIs sourced from IntAct constitutes the starting data set for meta-interactome construction [3]. We define a meta-interactome as a set of PPIs where similar proteins and the interactions among those proteins are merged into single interactor groups and interactions. Interactions among proteins of the same group are considered a self-interaction, though all interactions retain properties of the source interaction network, including the count of PPIs and count of unique source species contributing to the interaction. Meta-interactome groups are defined by eggNOG v.4 NOGs [19]. Because annotations for interactions involving similar proteins from closely-related species may differ, the species and strains corresponding to each interaction were labeled using NCBI Taxonomy identifiers and identifiers sharing a parent or a child were merged. 
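A minimal sketch of these first two steps, restricting a MITAB interaction table to bacterial taxa and attaching orthologous-group labels, might look as follows. This is only an illustration of the general approach, not the Network_umbra code itself: the published filtering used IntAct's own taxonomy query rather than an explicit taxon set, the column positions follow the PSI-MI TAB 2.7 layout, and `og_map` stands in for a lookup table derived from eggNOG annotations.

```python
import csv

def taxon(field):
    """Extract the first numeric taxon ID from a MITAB taxid field, e.g. 'taxid:83333(ecoli)'."""
    return field.split(":", 1)[1].split("(", 1)[0] if ":" in field else ""

def load_bacterial_ppis(path, bacterial_taxids, og_map):
    """Read a PSI-MI TAB file and return OG-labeled protein-protein interactions.

    bacterial_taxids: set of NCBI taxon IDs (as strings) accepted as bacterial
    og_map:           dict mapping UniProt accessions to orthologous group identifiers
    """
    ppis = []
    with open(path, newline="") as handle:
        for row in csv.reader(handle, delimiter="\t"):
            id_a, id_b = row[0], row[1]            # interactor A and B identifiers
            tax_a, tax_b = row[9], row[10]         # taxid columns for A and B
            if not (id_a.startswith("uniprotkb:") and id_b.startswith("uniprotkb:")):
                continue                           # drop ChEBI and other non-protein entries
            if taxon(tax_a) not in bacterial_taxids or taxon(tax_b) not in bacterial_taxids:
                continue                           # keep bacteria-bacteria pairs only
            acc_a = id_a.split(":", 1)[1]
            acc_b = id_b.split(":", 1)[1]
            # Proteins without OG annotation are treated as single-member OGs
            og_a = og_map.get(acc_a, acc_a)
            og_b = og_map.get(acc_b, acc_b)
            ppis.append((acc_a, acc_b, og_a, og_b, taxon(tax_a)))
    return ppis
```

The per-species taxon label retained in the last field is what allows interactions from closely related strains to be merged in the next step.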
All interactions were compressed using OG-annotated proteins such that each OG-OG interaction appears in each data set only once per species, though a protein may belong to multiple OGs (in these cases, the resulting OG name includes both identifiers separated by a comma, e.g. "COG1100,COG4886"). The full meta-interactome is provided in Additional file 3 in PSI-MI TAB 2.7 format, with the addition of orthologous groups in the final two columns (corresponding to interactors A and B, respectively). This interactome contains 52,734 interactions among 12,706 unique proteins, 1805 (3.4%) of which fail to map to an orthologous group. Treated as a network of OGs, this network contains 8521 unique interactors. A further subset of the meta-interactome was prepared by merging all interactions on the basis of shared interactors (see Additional file 4). For example, two different interactions between proteins in OG1 and proteins in OG2 are considered a single interaction. Furthermore, each OG-OG interaction is counted as a single interaction across any number of species. We refer to this set as the consensus meta-interactome. This network contains 8475 unique interactors and 43,545 interactions.

Interactome size prediction

We utilized the consensus meta-interactome of OG-OG interactions to generate predicted interactomes for a given bacterial species. Given a list of UniprotAC identifiers, we assigned each to an OG and constructed a set of interactions among those OGs based on their presence in the consensus network. In most cases, predictions are general and unverified: if a pair of OGs is present in the consensus network, they are predicted to interact in any context. Reference proteomes for the following species and strains were used, with NCBI taxonomy IDs in parentheses: Bacillus subtilis str. 168 (224308), Caulobacter crescentus CB15 (190650), Escherichia coli K-12 (83333), Helicobacter pylori 26695 (85962), Mesorhizobium loti MAFF303099 (266835), Mycoplasma genitalium G37 (243273), Pseudomonas aeruginosa PAO1 (208964), Salmonella enterica subsp. enterica serovar Typhi (90370), Staphylococcus aureus subsp. aureus NCTC 8325 (93061), Synechocystis sp. PCC 6803 substr. Kazusa (1111708), and Treponema pallidum subsp. pallidum str. Nichols (243276).
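The prediction step itself amounts to a lookup against the consensus network. The sketch below shows one way to implement it; it is a simplified illustration of the procedure described above rather than the published Network_umbra scripts, and `og_map` and `consensus_edges` are assumed to have been prepared beforehand from the eggNOG annotations and the consensus meta-interactome.

```python
from itertools import combinations_with_replacement

def predict_interactome(proteome, og_map, consensus_edges):
    """Predict PPIs for a proteome from consensus OG-OG interactions.

    proteome:        iterable of UniProt accessions for the target species
    og_map:          dict mapping accessions to orthologous group identifiers
    consensus_edges: set of frozenset({og_a, og_b}) pairs from the consensus network
                     (self-interactions stored as single-element frozensets)
    """
    # Group the proteome by orthologous group; unmapped proteins form their own group
    by_og = {}
    for acc in proteome:
        by_og.setdefault(og_map.get(acc, acc), []).append(acc)

    predicted = set()
    for og_a, og_b in combinations_with_replacement(by_og, 2):
        if frozenset((og_a, og_b)) in consensus_edges:
            # Every protein pair spanning the two OGs inherits the predicted interaction
            for p in by_og[og_a]:
                for q in by_og[og_b]:
                    if p != q:
                        predicted.add(frozenset((p, q)))
    return predicted
```

A count of the resulting pairs can then feed into interactome size estimators such as that of Stumpf et al. [7], as described in the Results.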
Functional prediction of unknown proteins in S. pneumoniae

We modeled the prediction of a functional class σ of a protein i as a Potts model [28]. In particular, we considered functional annotation of proteins in S. pneumoniae using COG classes. All proteins without a functional annotation, as well as proteins that were either classified as 'unknown' or had a 'general function', were randomly assigned a function out of the remaining 23 classes. In particular, we minimized the following global energy function,
$$E = -\sum_{i,j} J_{ij}\,\delta(\sigma_i, \sigma_j) - \sum_i h_i(\sigma_i),$$
where \(J_{ij}\) is the adjacency matrix of the interaction network for the unclassified proteins; in particular, \(J_{ij} = 1\) if unclassified proteins i and j interact and 0 otherwise. \(\delta(\sigma_i, \sigma_j)\) is the discrete delta function, and \(h_i(\sigma_i)\) is the number of classified interaction partners of protein i with function \(\sigma_i\). To minimize E we applied a simulated annealing approach that features an effective temperature T. After initially assigning random functions to all unclassified proteins, we randomly selected a protein, changed its function to a different class and determined the energy of the new configuration. If the difference of energies ΔE ≤ 0, the new configuration was accepted. If ΔE > 0, the new configuration was accepted with probability \(p = e^{-\Delta E/T}\). To obtain stabilized functional configurations we repeated such a Monte-Carlo step 10,000 times. Subsequently, we increased the inverse of T by 0.01 in each step and repeated such Monte-Carlo steps. Since minimum energy solutions are not unique, we repeated such runs of simulated annealing 100 times, and considered the fraction of times an unclassified protein i was observed in a certain functional state σ as an estimate of the probability that protein i belongs to class σ.

Interactions between functional classes

Focusing on a set of PPIs that connect proteins in orthologous groups (OG), we counted the occurrence of different class combinations. For each combination of classes i, j we determined its observed probability,
$$p_o(i,j) = \frac{n_{ij}}{N},$$
where N is the total number of interactions between classes. As a null model, we determined an expected probability of interactions between classes i, j as
$$p_e(i,j) = \frac{v_i v_j - \frac{J_{ij}^2}{2}}{\frac{N(N-1)}{2}}.$$
Specifically, \(v_i\) is the number of viable proteins in class i (i.e. proteins of class i that are involved in at least one interaction in the underlying set), and \(J_{ij}\) is the number of genes that are involved in both classes. Combining these probabilities, we determined a log-odds ratio,
$$r = \frac{p_o\left(1-p_o\right)^{-1}}{p_e\left(1-p_e\right)^{-1}}.$$
For large samples, we estimated the variance of the odds distribution as
$$\sigma^2 = n_{ij}^{-1} + \left(N - n_{ij}\right)^{-1} + a^{-1} + \left(b - a\right)^{-1},$$
with
$$a = v_i v_j - \frac{J_{ij}^2}{2} \quad \text{and} \quad b = \frac{N(N-1)}{2}.$$
In particular, we calculated a Z-score representing the significance of a link between two classes by [18]
$$Z = \frac{r}{\sigma}.$$

Enrichment of accuracy as a function of degree

To compare the prediction results obtained with the original networks, based on interactions in E. coli and C. jejuni, against the complete network of orthologous groups (OG), we calculated the fraction of correctly predicted functions in bins of OGs with a given number of interaction partners in the underlying networks obtained with the mentioned bacterial species. Since each OG was assigned to a functional class with a certain probability, we labeled each group with the most probable function. We defined the enrichment of accuracy in a given bin of degree k as
$$E_k = \log_2\left(\frac{f_{k,m}}{f_k}\right),$$
where \(f_k\) is the fraction of correctly predicted functions of OGs with degree k in the original networks. In turn, \(f_{k,m}\) reflects the rate of correctly predicted functions using the consensus meta-interactome.

Abbreviations
AP/MS: Affinity purification and mass spectrometry; OG: Orthologous group; PPI: Protein-protein interaction; Y2H: Yeast two-hybrid

References
Young KH. Yeast two-hybrid: so many interactions, (in) so little time. Biol Reprod. 1998;58(2):302–11.
Abu-Farha M, Elisma F, Figeys D. Identification of protein-protein interactions by mass spectrometry coupled techniques. Adv Biochem Eng Biotechnol. 2008;110:67–80. doi:10.1007/10_2007_091.
Kerrien S, Aranda B, Breuza L, Bridge A, Broackes-Carter F, Chen C, et al. The IntAct molecular interaction database in 2012. Nucleic Acids Res. 2012;40(Database issue):D841–6. doi: 10.1093/nar/gkr1088. PubMed PMID: 22121220; PubMed Central PMCID: PMCPMC3245075.
Mapping the Protein Interaction Network in Methicillin-Resistant Staphylococcus aureus. J Proteome Res. 2011;10:1139–50. doi:10.1021/pr100918u. Ito T, Chiba T, Ozawa R, Yoshida M, Hattori M, Sakaki Y. A comprehensive two-hybrid analysis to explore the yeast protein interactome. Proc Natl Acad Sci U S A. 2001;98(8):4569–74. doi: 10.1073/pnas.061034498. PubMed PMID: 11283351; PubMed Central PMCID: PMCPMC31875. Uetz P, Giot L, Cagney G, Mansfield TA, Judson RS, Knight JR, et al. A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae. Nature. 2000;403:623–7. doi:10.1038/35001009. Yu H, Braun P, Yildirim MA, Lemmens I, Venkatesan K, Sahalie J, et al. High-quality binary protein interaction map of the yeast interactome network. Science. 2008;322(5898):104–10. doi: 10.1126/science.1158684. PubMed PMID: 18719252; PubMed Central PMCID: PMCPMC2746753. Tarassov K, Messier V, Landry CR, Radinovic S, Molina MMS, Shames I, et al. An in vivo map of the yeast protein interactome. Science. 2008;320(5882):1465–70. PubMed PMID: ISI:000256676400037. He Z, Mi H. Functional characterization of the subunits N, H, J, and O of the NAD(P)H dehydrogenase complexes in Synechocystis sp. strain PCC 6803. Plant Physiology. 2016:pp.00458.2016. doi: 10.1104/pp.16.00458. Marco Abreu assisted with an early stage of this study. This project was supported by the US National Institutes of Health, grant R01GM109895. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. All data generated or analyzed during this study are included in this published article and its Additional files. JHC and PU designed the study. JHC, CW, and SW performed analyses. JHC, SS, SW, and PU wrote and edited the manuscript. All authors read and approved the final manuscript. Center for the Study of Biological Complexity, Virginia Commonwealth University, Richmond, Virginia, USA J. Harry Caufield, Christopher Wimble, Semarjit Shary & Peter Uetz Department of Computer Science, University of Miami, Coral Gables, Florida, USA Stefan Wuchty Center for Computational Science, University of Miami, Coral Gables, Florida, USA Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, Florida, USA J. Harry Caufield Christopher Wimble Semarjit Shary Peter Uetz Correspondence to Peter Uetz. Guide to content provided in the supporting tables. (DOCX 20 kb) Review of literature citing multiple bacterial interactomes. (XLS 359 kb) All interactions in the meta-interactome network. Interactions are provided in PSI-MI TAB 2.7 format, with the addition of orthologous group identifiers for interactor A and B in the 43rd and 44th columns (AQ, AR), respectively. (XLS 47313 kb) All interactions in the consensus meta-interactome network. (XLS 6135 kb) Conserved interactors of unclear function. (XLS 29 kb) Contributions of individual bacterial taxons to the consensus meta-interactome. (XLS 42 kb) Caufield, J.H., Wimble, C., Shary, S. et al. Bacterial protein meta-interactomes predict cross-species interactions and protein function. BMC Bioinformatics 18, 171 (2017). https://doi.org/10.1186/s12859-017-1585-0 Protein interactions Interactome Genome evolution
CommonCrawl
Why can't jet engines operate with supersonic air and how do they slow it down?

Typically jets cannot operate when intake airflow is supersonic relative to the engine. Why is this so? Also, why are scramjets able to use supersonic air? To slow down the air to subsonic speeds, the air passes through a shockwave (if I understand correctly). How does this slow down the air?

– Dylan Cleaver

To avoid shock waves on the compressor blades, which would make the engine unusable both because of the very large pressure fluctuations that would cause fatigue and failure of the blades and because of the high level of drag that develops in supersonic flow, which would slow the blades down as they rotated. In fact, the engine just wouldn't run with a supersonic flow going into it. Also, the flow needs to be slowed down as much as possible to allow enough time in the combustion chamber for the fuel to burn completely. So... a cone or ramp shape at the inlet is used to create a little shockwave in front of the engine, slowing down the incoming air to subsonic speeds and allowing the jet engine to operate efficiently.

A ramjet is able to use the compressed air because it is designed to do so. An excellent case study is the SR-71 Blackbird, which had engine cones that moved forward and backward based on speed/altitude, to transition from a turbine to a ramjet mission profile. (Fun fact: that plane is so darned fast that the limit to its speed comes not from engine power, but from MELTING THE PLANE because it is going so fast.) The SR-71 had "bypass doors" to close off the main turbine of the engine when operating on a ramjet profile.

A ramjet, sometimes referred to as a flying stovepipe or an athodyd, is a form of airbreathing jet engine that uses the engine's forward motion to compress incoming air without a rotary compressor. Ramjets cannot produce thrust at zero airspeed; they cannot move an aircraft from a standstill. A ramjet-powered vehicle therefore requires an assisted take-off, like a JATO, to accelerate it to a speed where it begins to produce thrust. Ramjets work most efficiently at supersonic speeds around Mach 3. This type of engine can operate up to speeds of Mach 6.

A scramjet is a variant of a ramjet airbreathing jet engine in which combustion takes place in supersonic airflow. As in ramjets, a scramjet relies on high vehicle speed to forcefully compress the incoming air before combustion, but a ramjet decelerates the air to subsonic velocities before combustion, while airflow in a scramjet is supersonic throughout the entire engine. This allows the scramjet to operate efficiently at extremely high speeds: theoretical projections place the top speed of a scramjet between Mach 12 and Mach 24.

– Bassinator

Comments:
– How does the shock wave slow the air down? Does it simply apply a force to the air (much like an explosive blast would) which slows it down? Or is there some more complicated underlying principle? (Dylan Cleaver)
– Well, it is somewhat complicated. The Bernoulli principle says restricting flow reduces pressure and increases speed, but that only holds for incompressible fluids; for supersonic flow, restricting the flow increases pressure and decreases speed. (Jan Hudec)
– Read any of the SR-71 manuals for a great discussion on operating supersonic engines. (rbp)
– Not Bernoulli, but Hugoniot is the right name here. See the Wikipedia page for the Hugoniot equation for details. (Peter Kämpf)

A compressor blade works best in subsonic flow. Supersonic flow introduces additional drag sources which should be avoided if efficiency is important. Thus, the intake has to slow down the air to a Mach number between 0.4 and 0.5. Note that the high circumferential speed of a large fan blade will still mean that its tips work at around Mach 1.5, but the subsequent compressor stages will operate in subsonic conditions.

A scramjet is possible with fuels with supersonic flame front speeds and rapid mixing of fuel and air. If the engine burned regular kerosene, the flame would be blown out like a candle if the internal airspeed were supersonic, and even if flame holders keep the flame in place, most combustion would take place only after the fuel-air mixture has left the engine due to the slow mixing of kerosene and air. By using hydrogen, a stable combustion can be achieved even in supersonic flow. Due to the high flight speeds, compression is possible by a cascade of shocks, so no moving turbomachinery is needed in ramjets and scramjets.

Background: Maximum heating of air

All jets decelerate air in their intake in order to increase air pressure. This compression heats the air, and in order to achieve a combustion which produces thrust, this heating must be restricted. If air is heated above approx. 6,000 K, adding more energy will result in dissociation of the gas with little further heat increase. Since thrust is produced by expanding air through heating, burning air that enters the combustion process already at 6,000 K will not achieve much thrust. If the air enters the intake at Mach 6, it must not be decelerated below approx. Mach 2 to still achieve combustion with a meaningful temperature increase - that is why scramjets are used in hypersonic vehicles. Full disclosure: Oxygen starts to dissociate already between 2,000 and 4,000 K, depending on pressure, while nitrogen will dissociate mainly above 8,000 K. The 6,000 K figure above is a rough compromise for the boundary where adding more energy starts to make less and less sense. Of course, even a 6,000 K flame temperature is a challenge for the materials of the combustion chamber, and ceramics with film cooling are mandatory.

The equation for the stagnation temperature $T_0$ of air shows how important the flight speed $v$ is:
$$T_0 = T_{\infty} + \frac{v^2}{2\,c_p} = T_{\infty} \cdot \left(1 + \frac{\kappa - 1}{2}\cdot Ma^2 \right)$$
$T_{\infty}$ is the ambient temperature, $c_p$ the specific heat at constant pressure and $\kappa$ the ratio of specific heats. For two-atomic gases (like oxygen and nitrogen), $\kappa$ is 1.405. Temperature increases with the square of flight speed, so at Mach 2 the factor of heat increase over ambient is only 3.8, while at Mach 6 this becomes 26.3. Even at 220 K air temperature, the air will be heated to 5,800 K when it is ideally compressed in case of a hypersonic vehicle traveling at Mach 6.
Note that real compression processes will heat air even more due to friction.

Compression with shocks

Supersonic flow is slowed down by a pressure rise along the flow path. Since no "advance warning" of what is coming is possible, this pressure rise is sudden: Pressure jumps from a fixed value ahead to a higher, fixed value past the jump. This is called a shock. The energy for the pressure rise is taken from the kinetic energy of the air, so past the shock all other parameters (speed, density and temperature) take on new values.

[Figure: F-16 air intake]

The simplest shock is a straight shock. This can be found at the face of pitot intakes like the one of the F-16 (see the picture above) in supersonic flight. More common are oblique shocks which are tilted according to the Mach number of the free flow. They happen on leading and trailing edges, fuselage noses and contour changes in general: Whenever something bends the airflow due to its displacement effect, the mechanism for this bending of the flowpath is an oblique shock.

[Figure: straight and oblique shock]

The index 1 denotes conditions ahead of the shock, and 2 those downstream of the shock. For weak straight shocks the product of the speed ahead of the shock $v_1$ and the speed past the shock $v_2$ equals the square of the speed of sound: $$v_1\cdot v_2 = a^2$$ If $Ma_1 > 1$, then $Ma_2$ must be smaller than 1, so the flow is always decelerated to subsonic speed by a straight shock. The same equation works for the normal speed component $v_n$ ahead and past a weak oblique shock: $$v_{1n}\cdot v_{2n} = a^2$$ Note that the tangential component $v_t$ is unaffected by the shock! Only the normal component is reduced. Now the speed $v_2$ is still supersonic, but lower than $v_1$, so a weak oblique shock produces a modest increase of pressure, density and temperature. The angle of the oblique shock wave is determined by the Mach number ahead of the shock.

Supersonic intakes

Weak shocks are desired, because they produce only small losses due to friction. Pitot intakes with their single, straight shocks work well at low supersonic speeds, but incur higher losses at higher Mach numbers. As a rule of thumb, a pitot intake is the best compromise at speeds below Mach 1.6. If the design airspeed is higher, more complex and heavier intakes are needed to decelerate the air efficiently. This is done by a sequence of weak, oblique shocks and by means of a wedge intake. The picture below shows the intake of the supersonic Concorde airliner:

[Figure: Concorde intake]

Gradually increasing the angle of the wedge causes a cascade of ever steeper, oblique shocks which gradually decelerate the air. The design goal is to position this cascade of shocks caused by the wedge on top such that they hit the lower intake lip. This is done by a moveable contour of the upper intake geometry and/or the lip. The goal is to achieve a uniform speed over the intake cross section and not to waste any of the compressed air to the flow around the intake. See the picture of the Eurofighter intake below for an example of a moveable intake lip (which admittedly is mainly for increasing the capture area at low speed and for avoiding flow separation even with a small intake lip radius).

[Figure: Eurofighter intake]

Once the air has entered the intake, it is only mildly supersonic and can be further decelerated by a final, straight shock at the narrowest point of the intake.
After that point, the intake contour is gradually widened, such that the air decelerates further without separation. To achieve this, a very even flow across the intake area is mandatory, and even the slight disturbance caused by the boundary layer of anything which is ahead of the intake must be avoided. This is achieved by a splitter plate which is clearly visible in the pictures of the F-16 and Eurofighter intakes. The splitter plate of the Eurofighter intake is even perforated to suck away the early boundary layer there.

The deceleration of the intake flow results in a significant pressure rise: In case of the Concorde at Mach 2.02 cruise, the intake caused a pressure rise by a factor of more than 6, so the engine compressor had to add "only" a factor of 12, such that the pressure in the combustion chamber of the four Olympus 593 engines was 80 times that of the ambient pressure (admittedly, this ambient pressure was only 76 mbar in the cruise altitude of 18 km). This pressure increase means that a supersonic intake must be built like a pressure vessel, and the rectangular face of the intake must quickly be changed to a round cross section downstream to keep the mass of the intake structure low.

Intakes at higher speed

Going faster means the intake pressure recovery increases with the square of flight speed: In case of the SR-71 intake at Mach 3.2, the pressure at the engine face was already almost 40 times higher than the ambient pressure. Now it becomes clear that going faster than Mach 3.5 does away with the need for a turbocompressor: At these speeds a properly designed intake can achieve enough compression by itself for the combustion to produce enough thrust, and going above Mach 5 will need restraint in slowing down the intake flow in order to have enough temperature margin for combustion, requiring supersonic flow in the combustion chamber.

– Peter Kämpf

Comments:
– Even if adding heat to air hotter than 6,000 K would cause it to dissociate rather than heating it up further, wouldn't that still increase the thrust of the engine (by increasing the combustion chamber pressure and thus the speed at which the superheated gas goes out the tailpipe)? (Sean)
– All energy that goes into ionisation will not further expand the gas and will be wasted for propulsion. Thrust is generated by accelerating the gas, and this acceleration happens because the gas expands when heated. (Peter Kämpf)
– So why didn't you say "ionisation"? I thought you meant "dissociation", which is when the molecules in the air break apart into individual atoms. (Sean)
– It is both. Molecules are stripped of their electrons and fall apart. (Peter Kämpf)

Besides the fact that beyond those 6,000 K combustion does not provide much expansion, there is also the fact that decelerating the flow to subsonic increases the drag of the engine, because shock waves are not reversible and therefore the pressure is not recovered at the back (imagine a shut-off engine with inner subsonic flow travelling at that speed; it would have a high drag due to shock waves). At hypersonic speeds, overcoming that drag on top of the airframe drag would be a no-no.
That is why I doubt whether the solution for the SABRE engine (you can google it), which has an internal subsonic flow, may be feasible even if it achieves a high degree of cooling before reaching the compressor.

– Aeroguy

Why can't jet engines operate with supersonic air? "Because there has been no business case to develop an engine with supersonic flow at entry to the compressor." The advantages would be the same that led to today's transonic (supersonic relative flow over part of the blade span) compressors, i.e. smaller and lighter. Compressors with supersonic relative flow over the whole blade span have been rig-tested at steady-state speeds, e.g. see NACA RM E55A27. Problems to be addressed (there are many) would include shock-induced boundary-layer thickening and separation in the compressor blade passages, which causes unacceptably high loss of the potentially "useful" energy that the compressor rotor is putting into the air (there would be too much temperature rise and not enough density and pressure rise).

However, they can and do operate with supersonic air, but only over the outer portion of fan and core compressor front stages. Note this air is only supersonic relative to the fast-spinning rotor blades and is self-generated within the engine, i.e. it is not received as supersonic air from the intake (see the reason that air leaving the intake and entering the engine is subsonic in the following answer). The compressor's job is to compress, and so the rotor, having first grabbed the air and whirled it around at high speed, also has to slow it down within the passage between the spinning rotor blades (and also through the following stator vane passages), i.e. it has to compress it if it wants to be called a compressor (no slowing down would mean no increase in pressure). The compressor rotor blade profiles and the diverging area of the passages between them give rise to the type of shock waves that have subsonic flow behind them. The shock waves that are the natural mechanism to get from supersonic to subsonic flow interact with the blade boundary layers, and boundary-layer thickening and separation mean high losses, and losses are what the compressor efficiency is a measure of. So everything has to be done with as much finesse as possible to minimise the effects of boundary-layer separation, and that means limiting the Mach number of the air relative to the blades to low supersonic values; these occur where the blade speed is highest, i.e. at the tips.

How do jet engines slow supersonic air down? The question asks how the engine slows the air down. It is often said that the intake slows the air down. However, the air is going to slow down anyway, with or without an intake. The air flow through the engine, and hence the subsonic velocity at entry to the compressor, is set in the first instance by the pilot's request, i.e. compressor speed/fuel flow. At supersonic speed, if there is no intake, the air slows down to the subsonic entry speed through a plane shockwave. To improve the 'overall pressure-ratio' part of the engine efficiency an intake is added which is a more efficient supersonic compressor than the free stream, i.e. it has features which produce a higher ram rise at compressor entry and less spillage drag round the outside of the engine (see later when it doesn't). This requirement is extreme at high supersonic speeds and is the reason for ramps/cones/lip shaping before entry and more ramps/cones/boundary layer bleed/duct shaping inside the intake.

When the intake doesn't do its job.
This occurred many times when flying YF-12 and SR-71 aircraft at high supersonic speeds. In a split second the intake would increase the total pressure loss of the air entering the compressor from its low design value of about 20% to about 70%. The intake had changed (i.e. unstarted) from being an efficient supersonic intake to being the most inefficient type possible, i.e. a pitot intake, with slowing down of the air from Mach 3 to subsonic in one violent step instead of a number of gentler ones.

The air in the intake slows down "because the engine has controlling areas inside the engine which set the average axial speed of the air through the engine (which has to be low to keep pressure losses to an acceptably low level) and hence at entry to the engine, and this speed is subsonic". High air speeds only occur where energy exchange is taking place, i.e. from the compressor rotors to the incoming air and from the outgoing combustion gases to the turbine, and where the low Mach number flow in the jetpipe (it's low to keep the pressure losses to an acceptable value) accelerates to sonic speed at the nozzle throat. Controlling areas are the throat areas of the turbine nozzle guide vanes and the exhaust nozzle, where the gas Mach number is 1 and cannot go higher. As stated in a previous answer, the low air speed requirement through the combustor sets the air speed at entry to the compressor. From this subsonic flow the compressor can generate its own supersonic flow relative to its rotor blades if it is driven fast enough by its turbine.
International Tax and Public Finance, April 2019, Volume 26, Issue 2, pp 317–356

How sensitive is the average taxpayer to changes in the tax-price of giving?

Peter G. Backus, Nicky L. Grant

First Online: 26 June 2018

There is a substantial literature estimating the responsiveness of charitable donations to tax incentives for giving in the USA. One approach estimates the price elasticity of giving based on tax return data of individuals who itemize their deductions, a group substantially wealthier than the average taxpayer. Another estimates the price elasticity for the average taxpayer based on general population survey data. Broadly, results from both arms of the literature present a counterintuitive conclusion: the price elasticity of donations of the average taxpayer is larger than that of the average, wealthier, itemizer. We provide theoretical and empirical evidence that this conclusion results from a heretofore unrecognized downward bias in the estimator of the price elasticity of giving when non-itemizers are included in the estimation sample (generally with survey data). An intuitive modification to the standard model used in the literature is shown to yield a consistent and more efficient estimator of the price elasticity for the average taxpayer under a testable restriction. Strong empirical support is found for this restriction, and we estimate a bias in the price elasticity around − 1, suggesting the existing literature significantly over-estimates (in absolute value) the price elasticity of giving. Our results provide evidence of an inelastic price elasticity for the average taxpayer, with a statistically significant and elastic price response found only for households in the top decile of income.

Keywords: Charitable giving, Tax incentives, Bias. JEL codes: D64, H21, H24, D12

Some commentators have voiced the suspicion that, while a few sophisticated taxpayers (and their tax or financial advisors) might be sensitive to variations in tax rates, the average taxpayer is too oblivious or unresponsive to the marginal tax rate for anything like the economic model to be a realistic representation of reality. (Clotfelter 2002)

1 Introduction

Do tax incentives for charitable giving lead people to give more? In the USA, taxpayers can deduct their charitable donations from their taxable income if they choose to itemize, or list, their deductible expenditures (e.g., donations, mortgage interest paid, state taxes paid) in their annual filing. Taxpayers can choose to subtract the sum of their itemized deductions or the standard deduction amount, whichever is greater, from their taxable income. The tax deductibility of donations was introduced into the US tax code in 1917 and has survived every tax reform since, fundamentally unchanged (Fack and Landais 2016). It has been called 'probably the most popular tax break in the Internal Revenue Code' (Reid 2017, p. 82). This deductibility of donations produces a price (or tax-price) of giving equal to 1 minus the marginal tax rate faced by the donor if she itemizes and equal to 1 if not. This fact has been exploited in a sizeable literature aimed at estimating the elasticity of charitable giving with respect to this price. In general, estimates of the price elasticity of giving have been obtained using either tax-filer data (i.e., data from annual income tax forms), or from surveys.
Estimating this elasticity using tax-filer data limits the sample to individuals who itemize their tax returns as no information on donations is recorded for non-itemizers.1 However, itemizers are substantially wealthier than non-itemizers on average.2 As such, the estimated price elasticity obtained using tax-filer data estimates the responsiveness of the average itemizer and may not reflect that of the relatively poorer average taxpayer. In order to estimate the elasticity of the general taxpayer, we must consider non-itemizers who compose about a fifth of total donations (Duquette 1999).3 This is often achieved using survey data from the general population of taxpayers, including non-itemizers. In their meta-analysis, Peloza and Steel (2005) report that studies using tax-filer data (40 of the 69 studies they surveyed) estimate a price elasticity on average of − 1.08 compared to a mean elasticity of − 1.29 from studies using survey data (the remaining 29 studies), rejecting the null hypothesis that the mean responses are equal.4 This suggests the economically counterintuitive result that the average taxpayer is more responsive to changes in the price of giving than itemizers with higher average income.5 Such a conclusion is in contrast to what has been found in a related literature that estimates the elasticity of taxable income, where higher income individuals are found to be the most sensitive to changes in tax rates (e.g., Feldstein 1995; Saez 2004; Emmanuel Saez and Giertz 2012 for an overview). This paper provides an explanation for this result, showing it to arise from a downward bias in the estimator of the price elasticity using survey data in the standard model considered in the literature. Theoretical and empirical evidence is provided demonstrating this bias, which follows from a hitherto unrecognized source of endogenous price variation arising from changes in itemization status. It is shown that controlling for itemization status yields a consistent, and more efficient estimator (relative to two stage least squares estimators) of the price elasticity under a simple testable restriction which we find is strongly supported by the data. Results from this model find that the price response of the average taxpayer is inelastic, consistent with recent work in Hungerman and Ottoni-Wilhelm (2016). Only for those with income in the top decile do we find evidence of an elastic and statistically significant price response. This provides one explanation for the observation of Clotfelter (1985, 2002), and others (e.g., Aaron 1972), that the estimated price responsiveness of charitable giving seems unrealistically large for the average taxpayer. Our findings are also significant for public policy analysis as a price elasticity less than unity is indicative that the tax deductibility of charitable donations may not be 'treasury efficient.'6 Moreover, the optimal subsidies of giving derived in Saez (2004) depend heavily on the sensitivity of donors to the price of giving. For example, the optimal subsidy with a price elasticity of − 1 is eight times larger than with a price elasticity of − 0.5. This is important since we find with 95% probability that the price elasticity in the model removing the bias is bounded below by − 0.59 compared to − 1.59 for the standard model. The literature in this area has long recognized two main sources of endogeneity in the price of giving. 
First, that the marginal tax rate is a function of taxable income, which, in turn, is a function of donations for itemizers (Auten et al. 2002). We follow a common practice in addressing this source of endogeneity (detailed below in Sect. 3). Second, that the price of giving is a function of itemization status itself, and hence donations, for so-called 'endogenous itemizers' (Clotfelter 1980), i.e., people that, conditional on their other deductible expenditures, are itemizers only because of the level of their donation. A common solution to this issue in the literature, using both tax-filer and survey data, has been to omit these endogenous itemizers, generally a small share of the sample, leaving only exogenous itemizers in the sample. In studies using tax-filer data, this exclusion is sufficient to expunge the endogenous price variation (e.g., Lankford and Wyckoff 1991; Randolph 1995; Auten et al. 2002; Bakija and Heim 2011), providing consistent estimation of the price elasticity of giving for the average itemizer. However, if the interest is in consistently estimating the price elasticity of the average taxpayer, then we must use samples which include those who may itemize their tax returns in certain years and not in others. We show that in such samples a third, and heretofore unacknowledged, source of endogeneity remains even if endogenous itemizers are excluded. This is because non-itemizers face a price equal to 1, and not the lower price of 1 minus the marginal tax rate as for itemizers, because their donations are sufficiently small (conditional on their other tax deductible expenses). In short, as itemizing is a function of donations for endogenous itemizers, so is not itemizing a function of donations for all non-itemizers. As a result, estimators of the price elasticity of giving based on data which includes non-itemizers (e.g., Brown and Lankford 1992; Andreoni et al. 2003; Bradley et al. 2005; Brown et al. 2012; Yöruk 2010, 2013; Brown et al. 2015; Zampelli and Yen 2017) will be downward biased.7 To understand the intuition of the price endogeneity arising from the inclusion of non-itemizers in the sample, consider the case where a taxpayer switches from being an itemizer one year to a non-itemizer the next. By definition, her donations have decreased (holding other deductible expenditure constant) and the price of donating has increased. As such, a negative relationship will be found between the change in donations and the change in price, by construction; even in the extreme case where donation decisions are made at random.8 This leads to a difference in the mean donation of itemizers and non-itemizers (conditional on expenditures and other controls) that cannot be picked up in a fixed effect for those who switch itemization status in some years and not in others, being inherently time varying. A natural approach to address this bias would be to form a two stage least squares (2SLS) estimator, instrumenting for the change in price. We consider two exogenous instruments: the 'synthetic' and the actual change in marginal tax rates. Despite finding evidence that these instruments satisfy the identification condition, they only explain a small variation in the price of giving, as most of the price variation comes from changes in itemization status. Consequentially, we find that the 2SLS estimators yield standard errors too large to make any economically meaningful inference. Instead, we develop an alternative approach. 
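The mechanical nature of this correlation can be seen in a small simulation, which is ours and purely illustrative rather than part of the paper's analysis: donations are drawn at random, so the true price elasticity is zero, yet the first-differenced OLS coefficient on the change in log price comes out negative once some households switch itemization status; adding the change in itemization status as a control, as in the approach developed below, pulls the estimate back toward zero. The tax schedule, the standard deduction and all distributions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50000          # households
S = 10.0           # standard deduction (same units as donations and expenditures)

# Donation decisions are pure noise, so the true price elasticity is zero by construction.
log_D = rng.normal(loc=2.0, scale=1.0, size=(N, 2))   # two years of log donations
D = np.exp(log_D)

E = np.repeat(rng.uniform(0.0, 20.0, size=(N, 1)), 2, axis=1)    # other deductible expenditure
tau1 = rng.choice([0.1, 0.2, 0.3, 0.4], size=N)                  # household marginal tax rates, year 1
tau2 = tau1 + rng.choice([-0.03, 0.0, 0.03], size=N)             # small 'reform' between the two years
tau = np.column_stack([tau1, tau2])

I = (D + E > S).astype(float)       # itemize when total deductions exceed the standard deduction
log_P = np.log(1.0 - I * tau)       # log price of giving: log(1 - tau) if itemizing, log(1) = 0 if not

dlogD = np.diff(log_D, axis=1).ravel()
dlogP = np.diff(log_P, axis=1).ravel()
dI = np.diff(I, axis=1).ravel()

def ols(y, *cols):
    """OLS with an intercept; returns the slope coefficients only."""
    X = np.column_stack((np.ones_like(y),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

beta_standard = ols(dlogD, dlogP)[0]           # standard first-differenced model
gamma, beta_itemizer = ols(dlogD, dI, dlogP)   # 'itemizer model': also controls for the change in I

print(f"share of switchers:            {np.mean(dI != 0):.3f}")
print(f"beta, standard FD model:       {beta_standard:.3f}   (true value is 0)")
print(f"beta, controlling for Delta I: {beta_itemizer:.3f}")
```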
We show formally that the ordinary least squares (OLS) estimator of the price elasticity in a model which controls for the change in itemization status removes this bias when the average change in price for those who stop and start itemizing is of the same magnitude, a testable restriction. This restriction is shown to hold with probability close to 1, suggesting this estimator is consistent. Moreover, since it exploits the maximal exogenous variation in the price and is estimated via OLS, it is more efficient than any 2SLS estimator. In fact, we find the standard error of the OLS estimator of the price elasticity in this model to be one half, or less, of those obtained via 2SLS. Finally, another benefit of this approach is that it estimates the average treatment effect, and not the local average treatment effect estimated by 2SLS. The paper proceeds as follows. Section 2 provides the formal theoretical results and discusses the bias in the standard model based on survey data. Section 3 discusses the data and our instruments, while Sect. 4 presents the empirical results. Finally, conclusions are drawn in Sect. 5. Proofs of the theoretical results along with extra empirical output are provided in the Appendix.

2 Estimating price elasticity of donations

The standard empirical approach in estimating the price elasticity of donations has minimal theoretical underpinnings, modeling donations as a linear function of price, income and various controls. This empirical approach was first introduced in the seminal work of Taussig (1967), where

$$\log (D_{it})=\alpha _{i}+\beta \log (P_{it})+\omega 'X_{it}+e_{it} \qquad (1)$$

$$P_{it}=1-I_{it}\tau _{it}, \qquad I_{it}=1(D_{it}+E_{it}>S_{it})$$

and \(\beta \) is the price elasticity of interest, \(D_{it}=D_{it}^{*}+1\) where \(D_{it}^{*}\) is the level of donation for household i at time t, \(S_{it}=S_{it}^{*}+1\) where \(S_{it}^{*}\) is the standard deduction, \(E_{it}\) is all other tax deductible expenditure, \(\tau _{it}\) is the marginal rate of income tax, \(P_{it}\) is the price of giving, \(X_{it}\) is a vector of personal characteristics including income and \(E_{it}\) (with corresponding parameter \(\omega \)), \(\alpha _{i}\) is all time invariant unobserved heterogeneity, and \(e_{it}\) is a random error term.9 Here \(I_{it}=1\) if an agent itemizes, namely if the sum of deductible expenditures \(\left( D^*_{it}+E_{it}\right) \) is larger than the standard deduction \(\left( S^*_{it}\right) \). At any time t, a household is either an exogenous itemizer (\(I_{it}=1\) where \(E_{it}>S_{it}^{*}\)), an endogenous itemizer (\(I_{it}=1\) where \(D_{it}+E_{it}>S_{it}\) and \(E_{it}\le S_{it}^{*}\)) or a non-itemizer (\(I_{it}=0\), i.e., \(D_{it}+E_{it}\le S_{it}\)). As noted above, including endogenous itemizers in the estimation sample has long been recognized to cause the OLS estimator to be downward biased. A common solution to this issue in the literature omits endogenous itemizers, generally a small share of the sample, leaving only non-itemizers and exogenous itemizers in the estimation sample. This approach only addresses one side of the problem as \(I_{it}\) is in general a function of \(D_{it}\), not just for endogenous itemizers. A non-itemizer has donations bounded above (since \(D_{it}\le S_{it}-E_{it}\)) and faces a higher price than an itemizer, whose donations are unbounded.
This is the converse to the bias caused by endogenous itemizers, who have donations bounded below \(\left( S_{it}-E_{it}>0\text { and }D_{it}>S_{it}-E_{it}\right) \) and face a lower price than non-itemizers (as the marginal tax rate is greater than zero). We show formally that even omitting endogenous itemizers, a large bias remains as a result of households itemizing in some years and not in others, and this bias is not expunged by removing individual fixed effects. To show this issue, we consider a model where endogenous itemizers are omitted (as is commonly done in the literature) and individual effects (\(\alpha _{i}\)) are removed via first differencing (FD).11 We omit endogenous itemizers for simplicity and to maintain comparability with the results in the literature. First differencing Eq. (1) gives

$$\Delta \log (D_{it})=\beta \Delta \log (P_{it})+\omega '\Delta X_{it}+u_{it} \quad \text {where} \quad u_{it}=\Delta e_{it}. \qquad (2)$$

There are three sources of price variation: (1) changes in taxable income and other observables which determine \(\tau _{it}\) (which we control for), (2) exogenous variation in the marginal tax rate schedule (which can be exploited to identify the price effect) and (3) changes in itemization status, \(I_{it}\), which we show are endogenous. We define the following dynamic itemization behaviors for any i, t:

(I1) Continuing itemizer: \(\Delta I_{it}=0\), \(I_{i,t-1}=1\), \(I_{it}=1\)
(I2) Stop itemizer: \(\Delta I_{it}=-1\), \(I_{i,t-1}=1\), \(I_{it}=0\)
(I3) Start itemizer: \(\Delta I_{it}=1\), \(I_{i,t-1}=0\), \(I_{it}=1\)
(I4) Continuing non-itemizer: \(\Delta I_{it}=0\), \(I_{i,t-1}=0\), \(I_{it}=0\).

Note we refer to I2 and I3 collectively as 'switchers.' Define \(V_{it}=S_{it}-E_{it}\), which is the standard deduction minus expenses plus one. So, \(I_{it}=0\) where \(V_{it}\ge 1\) and \(I_{it}=1\) where \(V_{it}<1\). Table 1 summarizes the changes in price and the bounds on changes in donations (if any) for the four dynamic itemization behaviors (I1–I4).

Table 1 Changes in donations and price for I1–I4
I1 (\(I_{i,t-1}=1\), \(I_{it}=1\)): \(\Delta \log (D_{it})\) is unbounded; \(\Delta \log (P_{it})=\Delta \log (1-\tau _{it})\)
I2 (\(I_{i,t-1}=1\), \(I_{it}=0\)): \(\Delta \log (D_{it})\le \log (V_{it})\); \(\Delta \log (P_{it})=-\log (1-\tau _{i,t-1})\)
I3 (\(I_{i,t-1}=0\), \(I_{it}=1\)): \(\Delta \log (D_{it})\ge -\log (V_{i,t-1})\); \(\Delta \log (P_{it})=\log (1-\tau _{it})\)
I4 (\(I_{i,t-1}=0\), \(I_{it}=0\)): \(-\log (V_{i,t-1})\le \Delta \log (D_{it})\le \log (V_{it})\); \(\Delta \log (P_{it})=0\)

To show the bias, we decompose the correlation between \(u_{it}\) and \(\Delta \log (P_{it})\) into four component parts corresponding to each quadrant of Table 1. For continuing non-itemizers (I4), the change in price equals zero and hence does not introduce any bias in the OLS estimator in Eq. (2). For continuing itemizers (I1), there is no bound on \(\Delta \log (D_{it})\) and since \(u_{it}\) is exogenous and uncorrelated with \(\Delta \log (1-\tau _{it})\) no bias is introduced by this group either. However, when \(\Delta I_{it}=1\) (start itemizers, I3) then \(\Delta \log (D_{it})\) (and hence \(u_{it}\)) are bounded below where \(\Delta \log (P_{it})<0\) and so the two variables are negatively correlated.
To see this more formally, note that for start itemizers \(I_{it}=1\) (i.e., \(E_{it}\ge S_{it}^{*}\) as we consider only exogenous itemizers) and \(I_{i,t-1}=0\) (i.e., \(D_{i,t-1}\le S_{i,t-1}-E_{i,t-1}\)), so donations in \(t-1\) are bounded from above and donations in t are unbounded.12 It then follows that \(\Delta \log (D_{it})\) is bounded from below for start itemizers. Formally,

$$\Delta I_{it}=1\;\Rightarrow \;\Delta \log (D_{it})\ge \log (D_{it})-\log (S_{i,t-1}-E_{i,t-1}) \qquad (3)$$

$$\Delta \log (D_{it})\ge -\log (S_{i,t-1}-E_{i,t-1}) \qquad (4)$$

where (4) follows since \(\log (D_{it})\ge 0\). Given that \(\Delta \log (D_{it})\) is bounded below for start itemizers, the residuals, \(u_{it}\), are also bounded below. Since \(u_{it}\) are mean zero (with the inclusion of a constant) the residuals are skewed to the positive for start itemizers; a group who also faces a decrease in price from 1 to \(1-\tau _{it}\). The same argument holds in reverse for stop itemizers. Hence, changes in itemization status lead to a negative correlation between \(\Delta \log (P_{it})\) and \(u_{it}\), even when \(\beta =0\). Theorem 1 demonstrates this result (see proof in Appendix A), showing that the OLS-FD estimator of \(\beta \) in (2) is downward biased in the presence of switchers. For ease of exposition, we assume \(\omega =0\) and \(E[u_{it}]=0\).13 Equation (2) then collapses to

$$\Delta \log (D_{it})=\beta \Delta \log (P_{it})+u_{it} \qquad (5)$$

and the OLS-FD estimator of \(\beta \) in (5) is \(\hat{\beta }_\mathrm{FD}=\frac{\sum _{i=1}^{N}\sum _{t=2}^{T}\Delta \log (D_{it})\Delta \log (P_{it})}{\sum _{i=1}^{N}\sum _{t=2}^{T}\Delta \log (P_{it})^{2}}\).14 To simplify the proof, we assume that \((D_{it},\tau _{it},u_{it})'\) is i.i.d.15 We also assume that \(\tau _{it}\) conditional on income is strictly exogenous, which we achieve by controlling for income. While the marginal tax rate schedule itself is exogenous, \(\tau _{it}\) will be a nonlinear function of taxable income. As such \(\Delta \log (P_{it})\) is highly nonlinear in income and if we fail to control for any potential nonlinearity between \(\Delta \log (D_{it})\) and \(\Delta \log (Y_{it})\) then we introduce a correlation between \(\Delta \log (P_{it})\) and \(u_{it}\). In light of this, we check the robustness of our results to nonlinear specifications in income, results provided in Appendix D.16 Define \(p_{1}=\mathcal {P}\{\Delta I_{it}=1\}\), \(p_{-1}=\mathcal {P}\{\Delta I_{it}=-1\}\), \(\xi _{1}=E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=1]\) and \(\xi _{-1}=E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=-1]\).

Theorem 1: \(\hat{\beta }_\mathrm{FD}\overset{p}{\rightarrow }\beta +(p_{1}\xi _{1}+p_{-1}\xi _{-1})/E[(\Delta \log (P_{it}))^{2}]\), where \(\xi _{1},\xi _{-1}<0\).

Theorem 1 shows that there is a downward bias in the OLS-FD estimate of \(\beta \) when the probability of either stop or start itemizing is nonzero. In our sample, \(p_{1}\) and \(p_{-1}\) are approximately 0.1 and 0.08, respectively. The conditional covariance between \(u_{it}\) and \(\Delta \log (P_{it})\) (\(\xi _{1}\), \(\xi _{-1}\)) is negative for both forms of switchers, so Theorem 1 implies a downward bias in the estimator of \(\beta \) in the standard model.17 The first thought toward a solution to this bias would be to search for an instrument for \(\Delta \log (P_{it})\). An obvious choice is the exogenous change in the tax rate (conditioning on a given level of taxable income).
Exogenous variation in marginal tax rates has been explicitly relied upon in both tax-filer and survey data studies to estimate price elasticities of giving in the past (e.g., Feldstein 1995; Bakija and Heim 2011). We pursue an instrumental variable approach and find evidence that our proposed instruments (detailed in Sect. 3.1 below) satisfy the identification condition. However, the correlation between these instruments and \(\Delta \log (P_{it})\) is small, as much of the variation in \(\Delta \log (P_{it})\) arises from variations in \(\Delta I_{it}\). As such the 2SLS estimator yields large standard errors that make any meaningful economic inference implausible. We therefore seek a more efficient method to estimate the price elasticity. The source of the endogeneity in this problem differs from that commonly found in many instrumental variable settings, as the source of the endogenous variation in \(\Delta \log (P_{it})\) is measurable (arising from changes in \(I_{it}\)). One complication arises as \(\Delta \log (P_{it})\) is a nonlinear function of \(I_{it}\) and \(I_{i,t-1}\). As such it is not immediately clear how to transform the standard model to expunge this endogenous variation in \(\Delta \log (P_{it})\). Intuitively, controlling for \(\Delta I_{it}\) removes the variation in \(\Delta \log (P_{it})\) from the change in itemization status and should (possibly under some restrictions) remove the endogenous price variation in \(\Delta \log (P_{it})\). This would then leave the maximal exogenous variation in price with which to consistently estimate \(\beta \), and with more precision than a 2SLS-FD estimator.18 Theorem 2 below formalizes this intuitive argument, showing that controlling for the change in itemization status removes the bias in Theorem 1 under a testable restriction that the average change in price for stop and start itemizers is of the same magnitude. We define the 'itemizer model', which, in contrast to the standard model of Eq. (2), controls for \(\Delta I_{it}\):

$$\Delta \log (D_{it})=\gamma \Delta I_{it}+\beta \Delta \log (P_{it})+\omega '\Delta X_{it}+e_{it}. \qquad (6)$$

Define \(z_{it}=(\Delta I_{it},\Delta \log (P_{it}))'\) and \(w_{it}=(z_{it}',X_{it}')'\). The OLS-FD estimator in the 'itemizer model' is \(\hat{\theta }_\mathrm{FD}^{I}=\left( \sum _{i=1}^{N}\sum _{t=2}^{T}w_{it}w_{it}'\right) ^{-1}\sum _{i=1}^{N}\sum _{t=2}^{T}w_{it}\Delta \log (D_{it})\), where we express \(\hat{\theta }_\mathrm{FD}^{I}=(\hat{\gamma }_\mathrm{FD}^{I},\hat{\beta }_\mathrm{FD}^{I},\hat{\omega }_\mathrm{FD}^{I'})'\). Intuitively, the coefficient \(\gamma \) on \(\Delta I_{it}\) allows the mean change in donations for switchers (conditional on a given marginal tax rate and set of characteristics) to differ relative to non-switchers (by \(\gamma \) for start itemizers and \(-\gamma \) for stop itemizers). In this sense, this coefficient 'mops up' the bias derived in Theorem 1 by accommodating this mean shift in donations for switchers, which is inherently correlated with the price, causing a bias in the OLS estimator of \(\beta \) from Eq. (2).19 Note further that \(\gamma \) in this case has no real economic interpretation but is a nuisance parameter which allows consistent estimation of \(\beta \). Even if donations were unresponsive to price, and indeed any other factors, it must follow that \(\gamma >0\), as by definition the mean change in donations (conditional on other deductible expenses) is negative for stop itemizers, and vice versa for start itemizers.
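In practice, estimating the itemizer model on first-differenced panel data is just an OLS regression with one extra regressor. The sketch below is ours and purely illustrative: the data frame is a randomly generated stand-in (in an application each row would hold a household's first differences built from consecutive survey waves), the column names are invented, and clustering standard errors by household is our choice rather than a detail taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in data purely so the snippet runs end to end.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "hh_id": np.repeat(np.arange(200), 5),
    "d_I": rng.choice([-1.0, 0.0, 1.0], size=n, p=[0.08, 0.82, 0.10]),
    "d_log_P": rng.normal(0.0, 0.1, size=n),
    "d_log_income": rng.normal(0.0, 0.2, size=n),
    "d_log_D": rng.normal(0.0, 1.0, size=n),
})

# Standard model, Eq. (2): Delta log D on Delta log P and controls.
standard = smf.ols("d_log_D ~ d_log_P + d_log_income", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["hh_id"]})

# Itemizer model, Eq. (6): the same regression with the change in itemization status added.
itemizer = smf.ols("d_log_D ~ d_I + d_log_P + d_log_income", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["hh_id"]})

# With real data, and under the restriction that stop and start itemizers face price changes
# of equal average magnitude, the gap between the two price coefficients estimates the
# Theorem 1 bias.
print(standard.params["d_log_P"], itemizer.params["d_log_P"])
```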
It could be the case that \(\gamma \) will in part reflect a price effect, e.g., if there is an 'itemization effect' (Boskin and Feldstein 1977), namely that the response to a price change from a change in \(I_{it}\) might differ from that of a corresponding price change from a change in \(\tau _{it}\) (or more broadly if there is any nonlinear relationship between \(\Delta \log (P_{it})\) and \(\Delta \log (D_{it})\)). In either case, \(\gamma \) would partly pick up this price effect and we would need to model this nonlinear price relationship. This issue is discussed further in Sect. 4.20 Define \(\bar{\tau }_{1}=E[\log (1-\tau _{it})|\Delta I_{it}=1]\), \(\bar{\tau }_{-1}=E[\log (1-\tau _{i,t-1})|\Delta I_{it}=-1]\) and \(C=\det (E[w_{it}w_{it}'])>0\) (ruling out any multi-collinear regressors in \(X_{it}\)).

Theorem 2: If \(E[e_{it}X_{it}]=0\) (exogenous controls), then

$$\hat{\beta }_\mathrm{FD}^{I}\overset{p}{\rightarrow }\beta +\frac{p_{1}p_{-1}}{C}(\bar{\tau }_{1}-\bar{\tau }_{-1})(E[e_{it}|\Delta I_{it}=-1]+E[e_{it}|\Delta I_{it}=1]).$$

By Theorem 2 (formally proven in Appendix A), there is no bias when either \(p_{1}\) or \(p_{-1}\) is zero, which, as noted above, is not the case in our sample. More importantly, it shows there is no asymptotic bias in \(\hat{\beta }_\mathrm{FD}^{I}\) if the average price increase for stop itemizers (\(\bar{\tau }_{-1}\)) is of the same magnitude as the average price decrease for start itemizers (\(\bar{\tau }_{1}\)). If (for a given \(\Delta X_{it}\)) both stop and start itemizers have the same price elasticity (\(\beta \)), then the size of the endogenous response of \(\Delta \log (D_{it})\) conditional on \(\Delta X_{it}\) will be of equal magnitude (but opposite sign) provided they face the same magnitude of price change on average. This restriction \(\left( \bar{\tau }_{1}=\bar{\tau }_{-1}\right) \) is testable, and we find strong empirical support that it holds (discussed below). Moreover, if \(\bar{\tau }_{1}=\bar{\tau }_{-1}\) then Theorems 1 and 2 imply that \(\hat{\beta }_\mathrm{FD}-\hat{\beta }_\mathrm{FD}^{I}\) consistently estimates the bias in \(\hat{\beta }_\mathrm{FD}\) shown in Theorem 1.

3 Data description, specification of the tax-price of giving and instruments

Our analysis uses data from the Panel Study of Income Dynamics (PSID) covering 2000–2012 biennially.21 The PSID contains information on socioeconomic household characteristics, with substantial detail on income sources and amounts, certain types of expenditure, employment, household composition and residential location. In 2000, the PSID introduced the Center on Philanthropy Panel Study (COPPS) module which includes questions about charitable giving.22 The raw sample of data has 58,993 observations. Following Wilhelm (2006), we remove the low-income oversample, leaving us with a representative sample of American households. Households donating more than 50% of their taxable income, households with taxable income less than the standard deduction and households appearing only once during the observed period are omitted. These restrictions leave us with a working sample of 28,480 observations (6325 households appearing on average 4.5 years). The unit of analysis is the household. All monetary figures are in 2014 prices.23 Actual itemization status \(\left( I_{it}\right) \) is reported in the survey.
To identify the endogenous itemizers, we predict itemization status by determining if the sum of deductible expenditures of each household (donations, paid property taxes, mortgage interest, state taxes and medical expenses in excess of 7.5% of gross income) is larger than the standard deduction faced by the household (about $6,000 for single person households and $12,000 for married couples, moving roughly in line with inflation each year).24 Following convention, endogenous itemizers are defined as households who report that they itemize and are predicted to itemize, but only when donations are included among itemized deductions, i.e., (\(0<S-E<D)\). Endogenous itemizers comprise approximately 3% of the overall sample and 7% of itemizers. Exogenous itemizers \((E>S)\) make up 46% of the sample and 93% of the itemizers.25 The marginal tax rates used to calculate the price are obtained using the National Bureau of Economic Research's Taxsim program (Feenberg and Coutts 1993). This allows for the calculation of rates and liabilities at both the state and federal level given a number of tax relevant household characteristics including earned income, passive income, various deductible expenditures, capital gains and marital status. As a result, the calculated marginal tax rates are a function of the observable characteristics we submit to Taxsim and the exogenous federal and state tax codes. We define the marginal tax rate as $$\begin{aligned} \tau _{it}=\frac{\tau {}_{it}^\mathrm{Fed}+\delta _{it}^\mathrm{State}\tau {}_{it}^\mathrm{State}-\tau {}_{it}^\mathrm{State}\tau {}_{it}^\mathrm{Fed}\delta _{it}^\mathrm{Fed}-\tau _{it}^\mathrm{State}\tau _{it}^\mathrm{Fed}\delta _{it}^\mathrm{State}}{1-\tau _{it}^\mathrm{State}\tau _{it}^\mathrm{Fed}\delta _{it}^\mathrm{Fed}} \end{aligned}$$ where \(\tau _{it}^\mathrm{Fed}\) is the federal marginal income tax rate faced by household i in year t, \(\tau _{it}^\mathrm{State}\) is the state marginal income tax rate (42 states have a state income tax), \(\delta _{it}^{S}\) is a dummy equal to 1 if donations can be deducted from state tax returns (75% of these states allow donations to be deducted), and \(\delta _{it}^{F}\) is a dummy equal to one if federal taxes can be deducted from state returns (allowed in six states) and \(I_{it}\) is equal to 1 if i itemizes in year t and 0 otherwise. The actual marginal tax rate, \(\tau _{it}^{a}\), is calculated using i's tax relevant characteristics in t and i's actual level of giving in t. The price of giving for this household is then \(P_{it}^{a}=1-I_{it}\tau _{it}^{a}\). However, as noted in Auten et al. (2002), \(\tau _{it}^{a}\), and thus \(P_{it}^{a}\), will be endogenous, even for exogenous itemizers, as donations may be large enough to push i down to a lower tax bracket. To address this source of endogeneity, distinct from the source we focus on in this paper, we follow Auten et al. (2002) and Brown et al. (2012) in constructing an alternative marginal tax rate, \(\tau _{it}^{b}\) calculated as the mean of the marginal tax rate setting \(i's\) giving in t to 0 (sometimes called the 'first-dollar' marginal tax rate in the literature), and the marginal tax rate calculated by setting i's giving in t at 1% of median household income (the level used in Auten et al. (2002) which corresponds roughly to the median level of giving in our sample). The price variable we use in the regression analysis below is then \(P_{it}^{b}=1-I_{it}\tau _{it}^{b}\) which, as Auten et al. 
The correlation between \(P_{it}^{a}\) and \(P_{it}^{b}\) is 0.992.26
To help clarify the intuition of the bias derived in Theorem 1, we present descriptive statistics for changes in price and donations for the four types of dynamic itemization behaviors (I1 to I4 from Sect. 2) in Table 2. We present complete descriptive statistics for all other control variables in Appendix B.
[Table 2: Descriptive statistics of primary variables in first differences. Columns: I1 Continuing itemizer; I2 Continuing non-itemizer; I3 Start itemizer; I4 Stop itemizer. Rows: \(\Delta \log P\), \(\Delta \log P|\Delta \log P>0\), \(\Delta \log P|\Delta \log P<0\), \(\Delta \log D\), \(\Delta \log D|\Delta \log P>0\), \(\Delta \log D|\Delta \log P<0\). Notes: All monetary figures are in 2014 prices, deflated using the Consumer Price Index. Standard errors are reported in parentheses under the corresponding estimate.]
Taking all the taxpayers together, the mean change in price is essentially 0 (− 0.002 in column (1)) and the mean change in donations is 0.037, though there is a mass point at 0 with 21% of the observations experiencing no change in donations. Price changes for continuing itemizers, which come from changes in marginal tax rates (and taxable income for which we control), are essentially 0 on average (0.004 in column (2)). However, the mean increase in the price of giving (\(\Delta \log P|\Delta \log P>0\)) for continuing itemizers is 0.080 (median \(=\) 0.042) and the mean decrease is − 0.089 (median \(=\) 0.063). The implied elasticity for continuing itemizers is 0.337, i.e., small and positive.27 Start itemizers, for whom the price necessarily falls, see an average decrease in the log of price of 0.257. Stop itemizers, who necessarily face a price increase, see an average log price increase of 0.261. The price changes for start and stop itemizers are driven largely by the change in itemization status. Note that the price for continuing non-itemizers does not change, being equal to 1 by definition. As we argue above, the mean change in donations for start and stop itemizers (conditional on deductible expenditures, which we control for) must be larger and smaller, respectively, than the change in donations for non-switchers. For switchers, the implied elasticities are much larger (in absolute value), being − 2.351 for start itemizers and − 1.832 for stop itemizers, with a weighted mean elasticity of − 2.107. By Theorem 1, the negative bias in the standard model comes from the price variation from switchers. As such we would expect to find larger implied elasticities, in absolute value, from the switchers relative to the continuing itemizers, which is consistent with these results.
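For concreteness, a minimal sketch (not part of the original analysis) of how the implied elasticities just described could be computed from first-differenced data. The column names and the weighting scheme are assumptions based on the description in the text and its notes; the paper's own calculation may differ in detail.

```python
import pandas as pd

def implied_elasticity(d_log_d, d_log_p):
    """Ratio of the mean change in log donations to the mean change in log price."""
    return d_log_d.mean() / d_log_p.mean()

def implied_elasticities(df):
    """df: first-differenced panel with hypothetical columns
       d_log_d, d_log_p and group in {'I1', 'I2', 'I3', 'I4'}."""
    cont = df[df["group"] == "I1"]    # continuing itemizers
    start = df[df["group"] == "I3"]   # start itemizers (price falls)
    stop = df[df["group"] == "I4"]    # stop itemizers (price rises)

    # Continuing itemizers: proportion-weighted mean of the elasticities implied
    # by price increases and by price decreases.
    up, down = cont[cont["d_log_p"] > 0], cont[cont["d_log_p"] < 0]
    w_up = len(up) / (len(up) + len(down))
    e_cont = (w_up * implied_elasticity(up["d_log_d"], up["d_log_p"])
              + (1 - w_up) * implied_elasticity(down["d_log_d"], down["d_log_p"]))

    # Switchers: weight the start- and stop-itemizer elasticities by sample shares.
    e_start = implied_elasticity(start["d_log_d"], start["d_log_p"])
    e_stop = implied_elasticity(stop["d_log_d"], stop["d_log_p"])
    w_start = len(start) / (len(start) + len(stop))
    e_switch = w_start * e_start + (1 - w_start) * e_stop
    return e_cont, e_start, e_stop, e_switch
```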
3.1 Instrumental variables for price
Any attempt to identify the price elasticity of giving, via 2SLS or otherwise, relies on exogenous variation to the tax code to introduce variation in the marginal tax rates and thus price. The largest changes to federal tax rates during our observed period occurred in the Economic Growth and Tax Relief Reconciliation Act of 2001 and the Jobs and Growth Tax Relief Reconciliation Act of 2003, which saw changes to the federal income tax brackets and marginal rates in those brackets. Other changes included adjustment of the manner in which dividends are taxed and changes to the Alternative Minimum Tax exemption levels (Tax Increase Prevention and Reconciliation Act of 2005), though Congress introduces a multitude of changes each year. In fact, the US Congress made nearly 5000 changes to the federal tax code between 2001 and 2012 (Olson 2012). Moreover, forty-three states impose some form of income tax and rates range from 0.36% in Iowa on income below $1539 up to 11% on income over $200,000 in Hawaii. As state income tax rates are set by state legislatures, the evolution of those rates over time differs from state to state, providing temporal as well as cross-sectional exogenous variation in the state marginal income tax rates. Though the most significant changes to the federal tax code took place in the early 2000s, this exogenous tax variation is not isolated to that particular period. We isolate an exogenous change in the marginal tax rate following Gruber and Saez (2002) by constructing a 'synthetic' marginal tax rate, \(\tau _{it}^{s}\), in a manner analogous to \(\tau _{it}^{b}\), using i's tax relevant characteristics in t, including giving set to 0, but the tax code in place at \(t+2\). Any difference between \(\tau _{it}^{s}\) and \(\tau _{it}^{b}\) is necessarily due to changes in the federal or state tax codes. Figure 1 plots the mean exogenous increases \(\left( \left. \overline{\tau _{it}^{b}-\tau {}_{it}^{s}}\right| \tau _{it}^{b}-\tau {}_{it}^{s}>0\right) \) and decreases \(\left( \left. \overline{\tau _{it}^{b}-\tau {}_{it}^{s}}\right| \tau _{it}^{b}-\tau {}_{it}^{s}<0\right) \) in marginal tax rates.
Fig. 1 Mean exogenous increases and decreases in marginal tax rates. Notes: The figure plots \(\left( \left. \overline{\tau _{it}^{b}-\tau {}_{it}^{s}}\right| \tau _{it}^{b}-\tau {}_{it}^{s}>0\right) \) and \(\left( \left. \overline{\tau _{it}^{b}-\tau {}_{it}^{s}}\right| \tau _{it}^{b}-\tau {}_{it}^{s}<0\right) \) on the left-hand axis and the proportion of the sample in each year experiencing an exogenous change in their marginal tax rate on the right-hand axis.
Between about 40 and 60% of the sample experiences an exogenous change in the marginal tax rate they face in a given year. Around 79% of households experience at least one exogenous change to their marginal tax rate. The mean exogenous increase in a household's marginal tax rate is 0.032 (median \(=\) 0.006), and the mean exogenous decrease in a household's marginal tax rate is − 0.035 (median \(=-\) 0.017). The first instrumental variable we consider for \(\Delta \log (P_{it})\) is the synthetic change in the marginal tax rate (\(\tau _{it}^{b}-\tau {}_{it}^{s}\)) à la Gruber and Saez (2002). The correlation between \(\tau _{it}^{b}-\tau {}_{it}^{s}\) and \(\Delta \text {log}(P_{it})\) is, however, small (\(\rho =-\,0.067\)); the majority of the variation in \(\Delta \text {log}(P_{it})\) (about 70%) arises from changes in itemization status. The exogenous change in the marginal tax rates accounts for only 1.7% of the variation in \(\Delta \text {log}(P_{it})\). Our second instrument is \(\Delta \tau _{it}^{b}\), which is excludable as the tax rate where \(D=0\) and the tax rate calculated by setting i's giving in t at 1% of median household income are unrelated to the household's level of donations conditional on our set of controls. This implicit assumption is frequently relied upon in the literature for identification.
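A minimal sketch of the price and instrument constructions described above. The first function transcribes the combined federal–state marginal tax rate formula from the text; the remaining two assume that marginal rates under the year-t and year-\(t+2\) tax codes have already been computed (e.g., via Taxsim), and all function and argument names are hypothetical.

```python
def combined_mtr(fed, state, ded_don_state, ded_fed_from_state):
    """Combined federal-state marginal tax rate, following the formula in the text.
    fed, state          : federal and state marginal income tax rates
    ded_don_state       : 1 if donations are deductible on the state return, else 0
    ded_fed_from_state  : 1 if federal taxes are deductible on the state return, else 0
    """
    num = (fed + ded_don_state * state
           - state * fed * ded_fed_from_state
           - state * fed * ded_don_state)
    den = 1.0 - state * fed * ded_fed_from_state
    return num / den

def price_of_giving(itemizer, tau_b):
    """P^b = 1 - I * tau^b, with itemizer in {0, 1}."""
    return 1.0 - itemizer * tau_b

def synthetic_instrument(tau_b_current_law, tau_s_future_law):
    """Gruber-Saez style instrument: the marginal rate under the year-t tax code
    minus the rate under the t+2 code, both evaluated at year-t characteristics
    with giving set to zero."""
    return tau_b_current_law - tau_s_future_law
```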
The correlation between \(\Delta \tau _{it}^{b}\) and \(\Delta \text {log}(P_{it})\) is 0.341 and about 10% of the variation in \(\Delta \text {log}(P_{it})\) is explained by variation in \(\Delta \tau _{it}^{b}\).28 The primary results of our paper are presented in Table 3.29 We estimate Eq. (2) including logged net taxable income, logged non-donation deductible expenditures (sum of mortgage interest, state taxes paid, medical expenditure and property tax paid plus $1), logged age of the household head, the number of dependent children in the household as well as dummies for male household heads, being married, highest degree earned and home ownership.30 All estimated models control for state and year fixed effects.31
[Table 3: Estimates of the price elasticity of giving. Columns: (1) Standard model; (2) 2SLS with \(\tau _{it}^{s}-\tau _{it}^{b}\); (3) 2SLS with \(\Delta \tau _{it}^{b}\); (4) Itemizer model. Rows: \(\Delta \log P^{b}\) (− 1.237*** in the standard model), \(\Delta \)itemizer (0.437***), \(\Delta \)log net income (0.122**), \(R^2\), the p value for \(H_{0}\): the 2SLS estimator satisfies the identification condition, and the one-sided test of \(H_{0}:\beta _{\Delta \log P^{b}}\le -1\).]
Results in column (1) are obtained from OLS-FD estimation of Eq. (2). Results in columns (2) and (3) are from 2SLS-FD estimation of Eq. (3) using \(\tau _{it}^{s}-\tau _{it}^{b}\) and \(\Delta \tau ^{b}_{it}\) as instruments, respectively. Results in column (4) are from OLS-FD estimation of Eq. (6). All standard errors are clustered (at the household level). The penultimate row shows the p value from the first-stage F test that the identification condition holds. The tests reported in the last row are the one-sided t tests that the price elasticity is elastic (\(\le -1\)) against the alternative hypothesis it is price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10%
Column (1) presents results from OLS-FD estimation of Eq. (2), an estimate of the price elasticity of the average taxpayer. The estimated elasticity is − 1.24 (95% confidence interval − 1.59 to − 0.88). This result is closely in line with those surveyed in Peloza and Steel (2005) and Batina and Ihori (2010) and with more recent work also using the PSID (Brown et al. 2012; Yöruk 2010, 2013; Brown et al. 2015; Zampelli and Yen 2017). Note that the elasticities reported here are the total elasticities, the measure most relevant to determining the efficiency of the tax incentive for giving, not the intensive-margin elasticities as is reported in some papers (e.g., McClelland and Kokoski 1994). Though we exclude endogenous itemizers and construct the price in line with Auten et al. (2002) to address the two long-recognized sources of endogeneity in \(\tau \), the estimate in column (1) still derives from an estimator with a downward bias from inclusion of non-itemizers (Theorem 1). To address this, we instrument for price using the approach outlined in Sect. 3.1. Column (2) provides results applying the 2SLS-FD estimator to Eq. (2) using the 'synthetic' change in the marginal tax rate, \(\left( \tau _{it}^{b}-\tau {}_{it}^{s}\right) \), as an instrument for \(\Delta \text {log}\left( P_{it}^{b}\right) \). Though the correlation between \(\left( \tau _{it}^{b}-\tau _{it}^{s}\right) \) and \(\Delta \text {log}\left( P_{it}^{b}\right) \) is small, there is strong evidence to support that it satisfies the identification condition. The point estimate of − 2.54 is, however, very imprecisely estimated (95% confidence interval − 6.33 to 1.26).
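A minimal sketch of the 2SLS-FD procedure just described, implemented as two OLS stages on first-differenced data. This is an illustration only: it omits the fixed effects, the full control set and the clustered standard errors used in the paper, and returns just the point estimate; variable names are hypothetical.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def tsls_fd(d_log_d, d_log_p, instrument, controls):
    """Simplified 2SLS on first-differenced data: instrument the change in the
    log price with an exogenous change in the marginal tax rate."""
    n = len(d_log_d)
    const = np.ones(n)
    Z = np.column_stack([const, instrument, controls])  # instrument + exogenous controls
    X = np.column_stack([const, d_log_p, controls])     # endogenous price + controls

    # First stage: project the endogenous regressor on the instrument set.
    d_log_p_hat = Z @ ols(Z, d_log_p)

    # Second stage: replace the endogenous regressor with its fitted values.
    X2 = np.column_stack([const, d_log_p_hat, controls])
    beta = ols(X2, d_log_d)
    return beta[1]  # coefficient on d_log_p: the price elasticity
```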
Column (3) proceeds similarly to column (2), now using \(\Delta \tau _{it}^{b}\) as an instrument for \(\Delta \text {log}\left( P_{it}^{b}\right) \).32 The point estimate is closer to zero than in (2) with a corresponding reduction in the standard error, though the confidence interval is still quite wide and there is not sufficient evidence to rule out the hypothesis that the price elasticity is elastic (95% confidence interval − 1.33 to 0.52). The scope of inference on the true price elasticity in columns (2) and (3) is limited since the exogenous variation in the price from the instruments is small. Consequently, t tests of the null hypotheses that \(\beta =0\) and \(\beta \le -1\) both have low power and there is little of economic interest that we can draw from these results. There are also other issues with inference based on columns (2) and (3). Our interest is in estimating the price elasticity of the average taxpayer, but these models estimate the elasticity from variation in the price from exogenous changes in the marginal tax rate, the local average treatment effect (LATE). This is not the same as the effect of a change in the price of giving over the whole population, i.e., the average treatment effect, the parameter of interest. Moreover, identifying the LATE requires that the instrument affects the endogenous variable in the same direction for everyone (i.e., the monotonicity condition holds). This condition may not hold in our setting as we may have 'defiers,' i.e., people for whom the realized change in the price is the opposite of the change predicted by the instrument. To understand 'defiers' in our setting, consider the case of an exogenous increase in marginal tax rates. For continuing and start itemizers, this will lower the price of giving. However, for stop itemizers the price will increase in spite of the exogenous increase in the marginal tax rates, i.e., they are 'defiers'. Such 'defiers' (and their converse) make up about 12% of our sample. Violations of the monotonicity assumption (Imbens and Angrist 1994) imply the 2SLS estimator does not necessarily estimate the LATE. Column (4) presents results for the OLS-FD estimator of the itemizer model in Eq. (6). As shown in Theorem 2, this specification will yield a consistent estimator of \(\beta \) (the average treatment effect) when \(\bar{\tau }_{1}-\bar{\tau }_{-1}=0\); we find evidence that this restriction holds (p value \(=\) 0.797), with a sample estimate of \(\bar{\tau }_1-\bar{\tau }_{-1}\) of − 0.001. The point estimate of the price elasticity in column (4), − 0.08 (95% confidence interval − 0.58 to 0.42), is very close to and not significantly different from 0.33 This result provides strong evidence the true price response for the average taxpayer is inelastic. Given there is strong evidence that the OLS-FD estimator in column (4) is consistent, with a standard error about one half of those from the 2SLS estimators in columns (2) and (3), we conclude, unlike the findings from the 2SLS-FD estimators, that the price elasticity is not elastic. This finding is also in contrast to the general findings of many previous studies, though is consistent with the more recent work in Hungerman and Ottoni-Wilhelm (2016). As noted above, consistency of the OLS-FD estimator in the itemizer model implies that we can consistently estimate the size of the bias in the estimates obtained from the standard model via the difference between the estimated elasticities in columns (1) and (4) of Table 3, which is \(-1.16\). This sizeable bias, about the same size as the average estimated price elasticity from survey data, could explain why such strong price responses have been found in the literature using survey data.
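The tables report one-sided tests of the null that giving is price elastic (\(H_{0}:\beta \le -1\)) against the alternative that it is inelastic (\(\beta >-1\)). A minimal sketch of such a test under a normal approximation is given below; the figures used in the example are illustrative round numbers, not the paper's estimates, and the paper's own tests rest on clustered standard errors.

```python
import math

def one_sided_elasticity_test(beta_hat, se, threshold=-1.0):
    """One-sided test of H0: beta <= threshold (price elastic) against
    H1: beta > threshold (price inelastic), using a normal approximation."""
    t_stat = (beta_hat - threshold) / se
    p_value = 0.5 * math.erfc(t_stat / math.sqrt(2.0))  # P(Z >= t_stat)
    return t_stat, p_value

# Illustrative example: an estimate of -0.1 with standard error 0.25
# strongly rejects H0 (i.e., rejects that giving is price elastic).
print(one_sided_elasticity_test(-0.1, 0.25))
```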
4.1 The extensive margin
Recent work (Hungerman and Ottoni-Wilhelm 2016; Almunia et al. 2017) focuses greater attention on the impact of tax incentives at the extensive margin, i.e., the decision to give any nonzero amount. We estimate the effect of the price of giving on the decision to donate using a linear probability model in first differences and report results in Table 4.34 While we do not derive the bias formally in the models considered in Table 4, the intuition for the bias is the same as in Theorem 1 for Eq. (2), namely itemization status is a function of whether or not one gives, so the price is endogenous.
[Table 4: The price effect at the extensive margin. Columns as in Table 3: (1) standard model, (2) 2SLS with \(\tau _{it}^{s}-\tau _{it}^{b}\), (3) 2SLS with \(\Delta \tau ^{b}_{it}\), (4) itemizer model; only a single coefficient (0.018*) survives from the flattened table.]
Results in column (1) are obtained from OLS-FD estimation of the linear probability analogue of Eq. (2). Results in columns (2) and (3) are from 2SLS-FD estimation of Eq. (3) using \(\tau _{it}^{s}-\tau _{it}^{b}\) and \(\Delta \tau ^{b}_{it}\) as instruments, respectively. Results in column (4) are from OLS-FD estimation of Eq. (6). All standard errors are clustered (at the household level). The penultimate row shows the p value from the first-stage F test that the identification condition holds. The tests reported in the last row are the one-sided t tests that the price elasticity is elastic (\(\le -1\)) against the alternative hypothesis it is price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10%
The pattern of the results is similar to those in Table 3. The lack of evidence for an effect at the extensive margin in column (4) of Table 4 is consistent with the findings of both Hungerman and Ottoni-Wilhelm (2016), who also consider the American context, and Almunia et al. (2017), who study tax incentives for giving in the UK and find an extensive-margin elasticity of about − 0.1. The above results suggest that the average taxpayer is not responsive to changes in the price of giving, consistent with Clotfelter's observation. To test the robustness of the results in Table 3 to mis-specification, we follow the good practice outlined in Athey and Imbens (2015) and re-estimate Eqs. (2) and (6) under various specifications and estimation samples. We present the results of these robustness checks in Appendix D. In summary, the results presented in Table 3, and the estimated size of the bias therefrom, remain stable across the various models considering changes to the sub-sample used in estimation, nonlinearities in income, different specifications of the dependent variable and the exclusion of other itemizable expenditures as a control.
4.2 Testing for nonlinearities in the price effect
Note that the estimate of \(\gamma \), the coefficient on \(\Delta I_{it}\), in Table 3 suggests that (conditional on \(\Delta X_{it}\)) the average log donations of start and stop itemizers relative to non-switchers are \(+\,0.44\) and \(-\,0.44\), respectively, which corresponds with the intuition in Sect. 2. At first sight, this could be interpreted as the donors' response to the price change from the change in itemization status, and hence part of the true price effect. However, by the discussion in Sect. 2 we know that \(\gamma \) must be greater than zero and reflects the response to endogenous price changes of switchers, not purely a true price effect.
It may be that the price response for switchers differs from non-switchers, in which case \(\gamma \) may indeed pick up some genuine responsiveness of donations to changes in the price, and we may overestimate the bias. While controlling for itemization status allows consistent estimation of the price elasticity of giving for the average taxpayer, as seen above, further complications arise if there are other problems with the standard specification (Eq. (2)). Another key restriction of Eq. (2) is that the price effect is linear in \(\Delta \text {log}\left( P_{it}\right) \) and is the same for switchers and continuous itemizers. However, if the average response (ceteris paribus) to a, say, 30% price drop is more than 10 times the change from a 3% price drop, then the intercept would shift for switchers even aside from a bias in the standard model. In this case, part of \(\gamma \) reflects endogenous movement in \(\Delta \text {log}\left( P_{it}\right) \) and part will pick up a price response. There are economic reasons to think the response to a change in P coming from a change in itemization status may differ from the response to a change in P from changes in the marginal tax rate. Such an 'itemization effect' was posited early in the literature (Boskin and Feldstein 1977). Dye (1978) points out that taxpayers are more likely to know their itemization status than their marginal tax rate. The change induced in P by a change in itemization status is large and thus likely to be more salient, whereas changes in the marginal tax rate can be very small. Dye (1978) estimates a specification very similar to the itemizer specification we study. He, like us, finds that itemization status is a highly significant determinant of giving. However, Dye misinterprets this estimated effect, claiming that the identified price effect in the literature is really an itemization effect, failing to attribute any of the estimated effect to the bias demonstrated above.35 Caution must therefore be taken in how we interpret \(\gamma \) and \(\beta \) in the presence of omitted nonlinearities in the price effect. When changes in itemization status are controlled for, the price response we estimate (\(\beta \)) is the average price response to changes in the marginal tax rate, which are quite small. If there are strong nonlinearities we cannot infer that this estimated elasticity reflects the response to larger changes in price such as those coming from changes in itemization status. We consider the possibility of an itemization effect and more general nonlinearities in the effect of \(\Delta \text {log}\left( P_{it}\right) \) on \(\Delta \text {log}\left( D_{it}\right) \) in our model, with corresponding results presented in Table 5.
[Table 5: Nonlinear effect of \(\Delta \log (P_{it})\). Columns correspond to alternative nonlinear specifications: a switcher interaction, a quadratic in \(\Delta \log P\), and interactions of \(\Delta \log P\) with indicators for large price changes such as \(1(|\Delta \log P|>0.15)\). Rows: \(\Delta \log P\), \(\Delta \log P^{2}\), Switcher \(\times \Delta \log P\), \(\Delta \log P\times 1(|\Delta \log P|>0.15)\), and one-sided hypothesis tests of \(\beta _{\Delta \log P^{b}}+\beta _{\text {Switcher}\times \Delta \log P}\le -1\), \(\beta _{\Delta \log P^{b}}+2\beta _{\Delta \log P^{2}}E[\Delta \log P^{b}]\le -1\) and \(\beta _{\Delta \log P^{b}}+\beta _{\Delta \log P\times 1(|\Delta \log P|>0.15)}\le -1\). Notes: All standard errors are clustered (at the household level). The hypothesis tests reported in the bottom five rows are the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10%.]
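A minimal sketch of how the nonlinear price terms in Table 5 might be constructed. Column names are hypothetical, and the quantile-based cutoffs for 'large' price changes are an assumption intended to mirror the indicator shown in the table and the quartile/decile/percentile groupings discussed in the text that follows.

```python
import pandas as pd

def nonlinear_price_terms(df):
    """Construct nonlinear transformations of the change in log price.
    Assumed (hypothetical) columns:
      d_log_p  : change in log price
      switcher : 1 if the household starts or stops itemizing, else 0
    """
    out = pd.DataFrame(index=df.index)
    out["d_log_p"] = df["d_log_p"]
    out["d_log_p_sq"] = df["d_log_p"] ** 2                    # quadratic term
    out["switcher_x_dlogp"] = df["switcher"] * df["d_log_p"]  # switcher interaction

    # Indicators for 'large' price changes (top quartile, decile and percentile
    # of |d_log_p|), interacted with d_log_p.
    abs_dp = df["d_log_p"].abs()
    for label, q in [("q75", 0.75), ("q90", 0.90), ("q99", 0.99)]:
        big = (abs_dp > abs_dp.quantile(q)).astype(float)
        out[f"dlogp_x_big_{label}"] = big * df["d_log_p"]
    return out
```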
In column (1) we re-estimate the itemizer model allowing the price elasticity to differ for start or stop itemizers ('switchers'). The estimated price elasticity for switchers (\(\hat{\beta }=-\,0.205\), 95% confidence interval − 1.11 to 0.70) does not significantly differ from that of non-switchers or from 0. Note, however, that the high correlation (multicollinearity) between \(\Delta I\) and \(\text {Switcher}\times \Delta \text {log}P\) (\(\hat{\rho }=-\,0.936\)) yields a large standard error, making it difficult to identify the price elasticity for switchers and, similarly, to estimate precisely the coefficients on the nonlinear terms in columns (2)–(5). In column (2), we include the square of \(\Delta \text {log}(P_{it})\) in Eq. (2) but find little evidence in favor of a quadratic price specification. We then interact \(\Delta \text {log}(P_{it})\) with dummies taking a value of 1 if \(\Delta \text {log}(P_{it})\) is in the top quartile of the \(\Delta \text {log}(P_{it})\) distribution (column (3)), in the top decile (column (4)) or in the top percentile (column (5)). In columns (3), (4) and (5), the coefficient on the interaction term is close to 0 and statistically insignificant at conventional levels. In the last rows of Table 5, we present results from the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that donations are price inelastic for those facing larger price changes. For column (1), this corresponds to a test that the price response of switchers is inelastic, and for columns (3)–(5) to tests for those experiencing price changes in the top quartile, decile and percentile, respectively, as defined above. We find strong evidence to reject the hypothesis that giving is price elastic for larger price changes in columns (3)–(5).36 It is less clear whether the response to prices faced by switchers is elastic, with \(p=0.084\). A key feature here is the stability of the coefficient on \(\Delta \)itemizer over the different nonlinear specifications of the price. If there were a strong itemization or nonlinear price effect we would expect the estimate of \(\gamma \) to fall. However, we find stable estimates of \(\gamma \) around 0.43 even allowing for different possible nonlinearities in \(\Delta \text {log}(P_{it})\). As such we conclude there is little evidence of a nonlinear price effect, and consequently little evidence that we have overstated the bias found in Table 3. We next turn to potential heterogeneity in the price elasticity of giving over income. This may be interesting in its own right, but we consider it in light of our results above which suggest the average taxpayer is not responsive to changes in the price of giving. However, studies using samples of (wealthier than average) itemizers consistently find evidence that itemizers are indeed responsive. An interesting question is then whether those people are responsive because they itemize or because they are wealthier.
4.3 Heterogeneity in the price elasticity over income
Studies using tax-filer data do not suffer from the bias derived in Theorem 1. An example of this kind of study is Bakija and Heim (2011) who find evidence of a price elasticity of around − 1.
Itemizers are, on average, higher income earners than non-itemizers. For example, the sample of itemizers in Bakija and Heim (2011) has a mean income of about $1 million. Given itemizers in this sample are on average extremely wealthy, we cannot easily discern if the price effect estimated in Bakija and Heim, and elsewhere (e.g., Randolph 1995; Auten et al. 2002), reflects the responsiveness of the average itemizer. Some researchers (e.g., Feldstein and Taylor 1976; Reece and Zieschang 1985) have found the economically counterintuitive result that the price elasticity is largest for those with the lowest incomes. Peloza and Steel (2005) find that the price elasticities for higher income donors seem to be slightly greater than, though not significantly different from, those for lower income donors. Bakija and Heim (2011) find little evidence the magnitude of the price effect varies with income, though their sample is disproportionately wealthy even for tax-filer data. In Table 6, we present some descriptive statistics for taxable income decile groups.37 Note that while the probability of being a continuing itemizer increases monotonically with income, the probability of switching itemization status rises with income and then falls. We return to this feature below. In column (6), we show the results of the test of the restriction outlined in Theorem 2. The restriction holds for every decile group (at the 10% level). In the analysis that follows we combine the bottom two decile groups due to the lack of price variation among the lowest income earners. As can be seen in Table 6, the variance of \(\Delta \text {log}(P_{it})\) at the bottom of the income distribution is about 1/4 of that at the top, making identification of the price effect difficult for these relatively poorer households.38
[Table 6: Descriptive statistics by income decile group. Columns: mean income ($'000) for non-itemizers and itemizers, P[Switcher], P[Cont. itemizer], Var[\(\Delta \text {log}P\)] and the p value for \(H_{0}: \bar{\tau }_{1}=\bar{\tau }_{-1}\). Notes: This table presents some relevant descriptive statistics by income decile group. These income groups, but combining the bottom two decile groups, form the basis of Figs. 2, 3 and 4.]
In Fig. 2, we plot the estimated price elasticities from both the standard model and the itemizer model across these income groups.
Fig. 2 Variation in estimated price elasticities over income. Notes: The markers plot \(\hat{\beta }_\mathrm{FD}\) (triangles) and \(\hat{\beta }_\mathrm{FD}^{I}\) (circles) for each income group (bottom quintile, upper eight deciles). Gray markers are statistically insignificant at the 10% level, and black markers are significant at the 10% level.
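A minimal sketch of the by-income-group estimation underlying Fig. 2 and the bias estimates discussed below. Column and helper names are hypothetical; the full control set, fixed effects and clustered standard errors used in the paper are omitted. Households are grouped by mean income over the observed period, the bottom two decile groups are combined, and both the standard and itemizer first-difference regressions are run within each group, with the difference between the two price coefficients giving the group-level bias estimate.

```python
import numpy as np
import pandas as pd

def ols_coef(X, y, k):
    """OLS via least squares (constant added); return the k-th slope coefficient."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][k + 1]

def elasticities_by_income_group(df, n_groups=10):
    """df: first-differenced panel with hypothetical columns
       mean_income, d_log_d, d_log_p, d_item."""
    df = df.copy()
    df["group"] = pd.qcut(df["mean_income"], n_groups, labels=False)
    df["group"] = df["group"].replace({0: 1})  # merge the bottom two decile groups

    rows = []
    for g, sub in df.groupby("group"):
        y = sub["d_log_d"].to_numpy()
        beta_std = ols_coef(sub[["d_log_p"]].to_numpy(), y, 0)            # standard model
        beta_item = ols_coef(sub[["d_log_p", "d_item"]].to_numpy(), y, 0)  # itemizer model
        rows.append({"group": g,
                     "beta_std": beta_std,
                     "beta_item": beta_item,
                     "bias_hat": beta_std - beta_item,
                     "p_switch": (sub["d_item"] != 0).mean()})
    return pd.DataFrame(rows)
```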
Black and gray markers indicate that we reject and accept the null hypothesis that \(\beta =0\) respectively (at the 10% level) against the alternative that \(\beta <0\) within the various income deciles. Estimates from the standard model are triangles, and the circles are estimates from the itemizer model. With the standard model, we find large and significant price elasticities for the bottom quintile and the next five decile groups as well as for the top decile group. The estimated price elasticities for the eighth and ninth decile groups are close to, and not statistically different from, 0. These results suggest a nonlinear relationship between the price responsiveness of taxpayers and their income with lower/middle income taxpayers as well as the wealthiest taxpayers being most sensitive to changes in the price of giving. In contrast, the results from the itemizer specification suggest that the bottom 90% of the income distribution is not sensitive to changes in the price of giving. We do find some evidence in the itemizer model that the highest income earners are sensitive, as the estimated elasticity for the top decile group (p value \(=\) 0.094) is statistically significant. Note that the estimate from the itemizer model lies below that of the standard model save for the top decile group, where they are virtually equivalent. We fail to reject the required restriction for the consistency of the itemizer model, i.e., \(\bar{\tau }_{1}=\bar{\tau }_{-1}\), for every decile group at the 10% level (see the last column of Table 6). As such, by Theorems 1 and 2 (now applied within each decile group) the difference between the estimated price elasticities in the two models is a consistent estimator of the bias in the price elasticity within each income decile from the standard model. The mean of the estimated biases over the decile groups is − 1.06 and is largest (in absolute value) for the middle deciles, where the probability of switching status is highest. Below we plot the size of the estimated bias (\(\hat{\beta }_\mathrm{FD}-\hat{\beta }^I_\mathrm{FD}\)) against the probability of switching within each income decile group.
Fig. 3 Estimated bias plotted against probability of switching itemization status across income decile groups. Notes: The markers plot \(\hat{\beta }_\mathrm{FD}-\hat{\beta }_\mathrm{FD}^{I}\) by the probability of switching itemization status in each income group (bottom quintile, upper eight deciles). The line is the linear fit to these points.
By Theorem 1, the size of the bias increases in \(p_{1},p_{-1}\) and decreases in \(\text {Var}(\Delta \text {log}P)\) for a given \(\xi _{1},\xi _{-1}\), which are unobservable (though we know both are negative by Theorem 1). If \(\xi _{1},\xi _{-1}\) were roughly equal across income deciles, or did not move in any systematic way, we should expect to see some negative (though not necessarily linear) relationship between the bias in the OLS estimator in the standard model and the probability of switching across income deciles. We see some support for this in Fig. 3, which shows the magnitude of the estimated bias by the probability of switching. The correlation between the probability of switching status and the size of the bias is \(-\,0.44\). It is difficult to conceive of an economic rationale for the standard model's finding that lower income households would be more responsive to tax incentives than richer households. The results and discussion in this section, utilizing Theorems 1 and 2, provide some evidence that this finding is at least in part due to a bias from utilizing endogenous price variation from switching itemization status. While we find evidence that the average taxpayer is not sensitive to changes in the price of giving, it remains the case that previous studies using tax-filer data have regularly found price elasticities close to − 1. We find evidence that the average higher income earner also exhibits sensitivity to changes in the price of giving with price elasticities of around − 1 for the top decile group. However, higher income people are also more likely to itemize, as can be seen in Table 6. An obvious question is then whether the significant effects found here for the average high earner and the significant effects found in, for example, Bakija and Heim (2011) are driven by the fact that people are itemizers or higher income earners.
As noted above, estimates obtained from tax-filer data are consistent and do not suffer from the bias derived in Theorem 1. To test this we estimate our model for continuing itemizers (equivalent to using tax-filer data) over different income decile groups and present results in Fig. 4.
Fig. 4 Price elasticity by income group for continuing itemizers. Notes: Each marker is the estimated price elasticity of giving for the group (bottom quintile, upper eight deciles). The whiskers show the 95% confidence interval around each estimate.
Note from Table 6 that non-itemizers have lower average within-decile-group income than itemizers (columns 1 and 2). We find evidence that the highest earning continuing itemizers, those in the top decile group, do exhibit a rather substantial sensitivity to changes in the price of giving, with elasticities around − 2, though we cannot reject a unitary price elasticity. Similar results are found for itemizers among the wealthiest 5% (\(\hat{\beta }=-1.99\), se \(=\) 0.81). However, continuing itemizers at lower levels of income do not seem to be sensitive to changes in the price of giving. We estimate the model for all continuing itemizers below the top income decile together and obtain an estimated price elasticity of − 0.25 (95% confidence interval − 0.97 to 0.46), which we find to be statistically different from the estimated price elasticity for continuing itemizers in the top decile of income (p value \(=\) 0.047). These results, taken together with those in Fig. 2, suggest that it is the fact that one is a higher earner that corresponds to being more sensitive to changes in the price, not simply the fact that a person is an itemizer: we show that the average person (not the average itemizer) in the top income decile is sensitive to price changes, but we do not find evidence that lower income itemizers are sensitive to price changes.
5 Conclusions
Many studies estimating the price elasticity of donations use survey data as it includes data on the donation behaviors of those in the general population, including price variation from changes in itemization status not often seen in tax-filer data, allowing estimation of the response of the average taxpayer (and not the average, wealthier itemizer). In this paper, we show that the estimator of the price elasticity utilizing variation in price from changes in itemization status (largely in survey data) is severely biased downwards, even omitting endogenous itemizers as is done in the literature. We derive the form of the bias of the OLS-FD estimator in the standard model and show a downward bias when agents switch itemization status. It is shown that the approach of instrumenting the change in price with exogenous changes in the marginal tax rate, though identified, produces standard errors so large as to make economically meaningful inference difficult. We seek to improve inference by developing an estimator which has no asymptotic bias and is more efficient than the instrumental variable estimators. We do this by deriving the bias of the OLS estimator of the price elasticity in a model which controls for the change in itemization status (a measurable source of endogeneity in the price) and show that it is zero under a testable restriction which is strongly supported by the data. The standard errors of the price elasticity from this estimator are also roughly half of those from the 2SLS estimators we consider.
Empirically, we find that the consistent estimates of the price elasticity for the average taxpayer obtained using the itemizer model are not price elastic. However, even with the consistent and efficient OLS estimator, the standard errors are fairly large, such that the question of whether the price elasticity is closer to 0 or − 1 remains open, though the lower bound of the 95% confidence interval we estimate is − 0.59. The bias in the estimator obtained from the standard model in the literature is large, approximately of the order − 1. This finding is robust to numerous variations in specification and sample. Our results suggest that Clotfelter may be right in suggesting that the average taxpayer is unlikely to be responsive to the price of giving. Estimates of the price elasticity in the standard model across different income levels show the size of the price elasticity is generally decreasing (in absolute value) in income. We provide evidence that this perhaps surprising result is at least in part explained by the bias in the estimator of the price elasticity in the standard literature model. Correcting for this bias with the itemizer model, we no longer find evidence that lower income households respond most to tax incentives, with estimates of the price elasticities in the lower income decile groups being closer to, and not significantly different from, 0. We do find evidence that higher income households are indeed responsive. This result differs from the findings in the literature using tax-filer data as our result is for the average taxpayer or average higher income taxpayer, whereas results from tax-filer data are for the average itemizer. We find that it is the higher income people, who are also more likely to be itemizers, that are sensitive to changes in the price of giving. Itemizers with incomes in the bottom 90% of the income distribution do not appear to respond to changes in the price of giving. This suggests it is the fact that people are higher income that corresponds to them being sensitive to changes in the price, not the fact that they itemize. Considering these results together with the existing work using tax-filer data suggests that a rethinking of the tax deductibility of donations may be called for. It is well established in the literature that itemizing households are sensitive to changes in the price of giving (e.g., Bakija and Heim 2011). Lowry (2014) shows that taxpayers claimed $134.5 billion of charitable deductions in 2010, 53% of which is from taxpayers with income below $250,000, roughly the income level of the top decile group in our data. Our results suggest the cost of tens of billions of dollars in lost tax revenue is not resulting in the benefit found in the literature, in the form of increased charitable donations, for the average taxpayer and indeed for the bottom 90% of the income distribution. As such, and given the evidence presented here, the government may consider amending the charitable deduction for those households below the top marginal tax bracket or revising the subsidy in line with Saez (2004).

One exception to this rule occurred between 1982 and 1986 when non-itemizers could deduct some or all of their donations. According to IRS records, the mean income of taxpayers who itemized their tax returns in 2013 was $147,938 compared to $48,050 for non-itemizers. A similar proportion is found in our data. In Batina and Ihori (2010), another survey of this literature, the mean price elasticity for tax-filer studies is − 1.25 versus − 1.62 in studies using survey data.
A similar pattern is found in Steinberg (1990), which surveys 24 early studies. More recently, Bakija and Heim (2011) find elasticities very close to − 1 using a panel of tax-filer data, and Yöruk (2010, 2013), Reinstein (2011), Brown et al. (2012) and Brown et al. (2015) generally find price elasticities in excess, sometimes substantially so, of − 1 using the same survey panel data we use. In their working paper, Andreoni et al. (1999) use a Gallup survey of household giving and find price elasticities ranging from − 1.73 to − 3.35, magnitudes that they note are 'consistent with the body of literature' (p. 11). More recently, Yöruk (2013, p. 1708) notes that 'most estimates in the literature suggest that a 1% increase in the tax-price of giving is associated with more than 1% decrease in the amount of charitable gifts'. Brown (1987) also points out this result, but ultimately concludes this finding arises from the failure to estimate the price effect using a Tobit-type estimator. Tax deductibility of charitable donations is treasury efficient when the foregone tax revenue (and thus the decrease in the public provision of a public good) is exceeded by the increase in aggregate giving (the private provision of the public good). Conventionally, the threshold for efficiency has been a price elasticity of at least − 1 (Feldstein and Clotfelter 1976). However, some have argued that the threshold ought to be larger (in absolute value) due to concerns about tax evasion (Slemrod 1988), while others have argued that the deduction might be efficient even at price elasticities smaller than − 1 (Roberts 1984). Some of these studies do not exclude the endogenous itemizers (e.g., Brown and Lankford 1992; Bradley et al. 2005; Yöruk 2010), meaning estimated price elasticities will suffer from both the known bias from endogenous itemizers and the bias outlined here from endogenous non-itemizers. Gruber (2004) and Reinstein (2011) impute itemization status, though such an approach can introduce nonclassical measurement error. In neither case, however, is the main aim of the study the consistent estimation of the price elasticity of giving. The same argument holds in reverse for those who start itemizing. As is conventional in the literature, donations (\(D_{it}\)) are measured as a transformation of \(D_{it}^{*}\) which is strictly greater than zero so that \(\log (D_{it})\) exists and is nonnegative. In Appendix D, Table 11, we test the sensitivity of our results to other transformations considered in the literature (e.g., the inverse hyperbolic sine transformation or \(D_{it}=D^*_{it}+10\)). Note that itemization status is not assigned, but rather people must choose to itemize themselves and some people may not itemize despite their deductible expenditure exceeding the standard deduction. One possible reason for this was found in Benzarti (2015), who shows that there is a cost of itemizing in terms of effort that amounts to about $644 on average, though with substantial heterogeneity around that figure. In this paper, we use actual itemization status as reported by the surveyed household. The FD estimator is used to simplify the exposition of the issue, which will also occur more generally when using within-group (WG) type estimators. Note that \(S_{it}-E_{it}\ge 1\) when \(I_{it}=0\) since \(D_{it}\le S_{it}-E_{it}\), where \(S_{it}=S_{it}^{*}+1\) and \(S_{it}^{*}\ge E_{it}\) by definition when \(I_{it}=0\), and \(D_{it}\ge 1\) as \(D_{it}=D_{it}^{*}+1\).
This assumption is made without loss of generality as we can make all the arguments below partialling out \(X_{it}\), which we assume is exogenous. This method is used in the proof of Theorem 2 below. In practice, a constant would be included in (5) so that the OLS-FD estimator would be demeaned, ensuring \(E[u_{it}]=0\). All the arguments in the proof of Theorem 1 will go through unchanged on the demeaned variables, and this restriction is enforced for simplicity to clarify the exposition of the result. Extensions to non-i.i.d. data hold straightforwardly utilizing more general weak law of large numbers results allowing quite general forms of heteroskedasticity and dependence in the data. Theorem 1 can be generalized to much weaker assumptions on the correlation of \(u_{it}\) and \(\tau _{it}\), though we wish to highlight that even when \(\tau _{it}\) is exogenous the change in price will not be, as changes in itemization status are endogenous. Note that this problem as outlined here is unique to the US tax system, though the literature on tax incentives for charitable giving extends to other countries. For example, Fack and Landais (2010) use data from France, Bönke et al. (2013) use data from Germany and Scharf and Smith (2010) and Almunia et al. (2017) use UK data. Each study contends with different issues surrounding the estimation of the price elasticity given the differently structured tax incentives for giving in each country. Our results here may be of limited use in applications to similar studies in a different setting. Another possible benefit of OLS versus a 2SLS approach is that a 2SLS estimator based on instruments with a small correlation with the endogenous variable can have a poor normal approximation to its distribution, even in large samples, e.g., Hansen et al. (1996), Staiger and Stock (1997). Hence, the OLS estimator may provide more accurate inference than our 2SLS-FD estimators. Note that we do not posit that the OLS-FD estimator in this auxiliary regression provides a consistent estimator of \(\beta \) by this argument alone. Equation (6) includes two endogenous variables, both \(\Delta \log (P_{it})\) and \(\Delta I_{it}\), where we derive the bias in the estimate of \(\beta \) in this estimator in Theorem 2 below. We can show this bias is zero under an intuitive and testable restriction. If there is an itemization effect then the standard model is fundamentally misspecified, even aside from the bias in Theorem 1. To identify this itemization effect would prove problematic as we know \(\gamma \) would be a biased estimate of this itemization effect since it in part reflects the mean differences in \(\Delta \log (D_{it})\) arising purely from the definition of different types of itemizers. We consider the possibility of an itemization effect in Sect. 4.2. A significant topic of interest in this area has been the timing of donations and the responsiveness to permanent and transitory changes in the price (e.g., Randolph 1995; Bakija and Heim 2011). Due to the biannual nature of our data, we do not consider this in our paper. As we are using survey data, one might be concerned with measurement error in the donations variable. Wilhelm (2006, 2007) contends that the data collected in the COPPS module are of better quality than most household giving survey data given the experience of the PSID staff. Recent work in Gillitzer and Skov (2017) suggests that it is in tax data that the measurement error might be found, not survey data.
Moreover, the measurement error that might be of concern is in donations. If that error is random with mean 0, then the precision of the estimates of the price elasticity will be reduced, but the estimator will not necessarily suffer from inconsistency or bias. In our case, one might reasonably argue that the error is in fact not centered at 0, as people may systematically over-report giving (this could be the case in both survey and tax records). In such a case, only the constant in our regression would be biased. Moreover, if we assume the measurement error is constant over time within households, i.e., households consistently over- or under-report by the same proportion, then it will be washed out via the first differencing of the data. While measurement error can be a serious problem in tax data or survey data, we do not believe it to be prohibitively so in our analysis. Deflated using the US Consumer Price Index: http://www.bls.gov/cpi/. Self-reporting itemizers make up 48% of the sample. Our predicted itemization status gives an itemization rate of 53% and matches the declared itemization status in 78% of the cases. Our 'over-prediction' of itemization status is consistent with findings in Benzarti (2015), who shows taxpayers systematically forego savings they might accrue from itemizing in order to avoid the hassle of itemizing. There is a smaller share of the sample (6.5%) who report themselves as itemizers but whom we fail to predict as such. We include these households as exogenous itemizers. We have re-estimated all our models excluding them, and results are qualitatively the same. Replacing \(P_{it}^a\) with \(P_{it}^b\) in our regression may lead to measurement error. Instead, some (e.g., Yöruk 2010, 2013; Brown et al. 2012, 2015) have used the price calculated using the first-dollar marginal tax rate as an instrument for \(P_{it}^{a}\) to address the endogeneity identified by Auten et al. (2002). Given the very high correlation between \(P_{it}^{a}\) and the first-dollar price in our data, we find that the use of the first-dollar price as an instrument or as a proxy provides qualitatively similar results. It is important to note that such a 2SLS approach is valid in studies which exclude non-itemizers (e.g., Auten et al. 2002; Bakija and Heim 2011) as the bias caused by switching itemization status (Theorem 1) is not present in their sample. However, this is not a valid instrument for \(\Delta \log (P_{it}^{a})\) when switchers are included in the sample as \(\Delta \log (P_{it}^b)\) is a function of the switch in itemization status. We calculate this as the sample proportion weighted mean of the implied elasticity for continuing itemizers facing a price increase and that of those continuing itemizers facing a price decrease. A potential alternative is to use the price constructed with the 'synthetic' marginal tax rate as an instrument for \(P_{it}^{a}\). This approach has been effectively used in studies of tax-filer data (e.g., Bakija and Heim 2011). Because the change in the 'synthetic price' is a function of the switch in itemization status, this synthetic change in the price (unlike the synthetic change in the marginal tax rate) would not be a valid instrument in our setting, which includes switchers. We present and briefly discuss full regression results, including estimates of the parameters on control variables, in Appendix C.
In general, non-donation itemizable expenditures (E) are not measured in survey data and even when information on E is available, as is the case with the PSID, it has not been, to our knowledge, included in models of donations in the literature to date. Such expenditures will be correlated with price via itemization status and likely correlated with donations since changes in, say, medical expenditures may affect one's donation amount. As such, omitting other expenses will result in a biased estimator of the price elasticity. Including them, however, can be problematic as donations and non-donation deductible expenditures may be co-determined. We consider this issue further and check the robustness of our results controlling for expenditures in Appendix D. Note that conventionally models with a dependent variable distributed with a mass point at 0 might be treated as censored and thus require sophisticated econometric techniques (e.g., McClelland and Kokoski 1994 and a double hurdle model in Huck and Rasul 2008). However, such a mass point does not necessarily indicate censoring. In our case, it is not that we do not observe donations below a particular level; rather, a donation of zero is part of the choice set of the (non-)donor. Angrist and Pischke (2009) note that despite the convention, the use of nonlinear models like Tobits when a bound is not indicative of censoring is not appropriate. We therefore use OLS to estimate the effect of changes in the price on the mean of the donations distribution including zero donations. Results were qualitatively the same when using a correlated random effects Tobit (see Backus and Grant 2016). We also estimated columns (2) and (3) including higher order polynomials of each instrument, though no meaningful increases in precision were obtained in either case. Another issue stemming from measurement error in donations (see footnote 22) could be that it infects the price variable since the price variable is a function of taxable income which is a function of donations. This more 'classical' measurement error would produce a bias toward 0 in our estimator of the price elasticity. However, such a bias would be present in both the 'traditional' and 'itemizer' specifications and it is not clear how the pattern of our results could be strictly the product of such a bias. We also perform this estimation using a conditional logit, and results were qualitatively similar. Despite featuring in some prominent early publications, the 'itemizer effect' has largely been ignored in the literature since, with Brown (1987) being an exception. The test reported in the final row in column (2) evaluates the quadratic price response at the mean change in log price and we find strong evidence to reject the null that the price response is elastic, similar to the case of Eq. (3) in Table 3. To avoid losing observations that become singletons when the subsamples are defined, we calculate the mean net household income over the observed period and then estimate the model for different levels of mean household income \(\left( \bar{y_{i}}\right) \) rather than annual income \(\left( y_{it}\right) \). Results are similar keeping the bottom two decile groups separate, though we find very large standard errors for the bottom decile group which wash out some of the features we are interested in showing in Fig. 3 below. Note that a full and formal treatment of the simultaneous nature of the determination of E and D is beyond the scope of the current paper.
The fact that our empirical results are not sensitive to the inclusion or exclusion of E suggests that concerns over endogeneity bias arising from the co-determination of E and D or the omission of E may be minor in practice. We are grateful for comments from seminar attendees at the University of Cape Town, University of Pretoria, University of Manchester, University of Barcelona, CREST, PEUK 2016 and the European Economic Association Conference 2017 as well as James Banks, Matias Cortes, Manasa Patnam and Jacopo Mazza. This paper was previously circulated under the title 'Consistent Estimation of the Tax-Price Elasticity of Charitable Giving with Survey data'. To simplify the proofs of Theorems 1 and 2, we make the assumption that \(\tau _{it}\) is independent of \(u_{it}\), which is slightly stronger than the assumption that \(\tau _{it}\) is strictly exogenous. The results do not hinge on this slight strengthening of the exogeneity assumption, but this simplifies the proof and exposition.
Proof of Theorem 1
Define \(p_{1}=\mathcal {P}\{\Delta I_{it}=1\}\), \(p_{-1}=\mathcal {P}\{\Delta I_{it}=-1\}\), \(p_{0}=\mathcal {P}\{\Delta I_{it}=0\}\), \(\xi _{1}=E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=1]\), \(\xi _{-1}=E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=-1]\). Under the i.i.d. assumption, by the Khintchine Weak Law of Large Numbers (KWLLN)
$$\begin{aligned} \hat{\beta }_\mathrm{FD}\overset{p}{\rightarrow }\beta +\frac{E[u_{it}\Delta \log (P_{it})]}{E[\Delta \log (P_{it})^{2}]} \end{aligned}$$
where we now show that
$$\begin{aligned} E[u_{it}\Delta \log (P_{it})]=p_{1}\xi _{1}+p_{-1}\xi _{-1} \end{aligned}$$
where both \(\xi _{1},\xi _{-1}<0\), which establishes the result. We use the Law of Iterated Expectations (LIE) to rewrite \(E[u_{it}\Delta \log (P_{it})]\) as a weighted sum of the conditional expectations of \(u_{it}\Delta \log (P_{it})\) for the I1–I4 itemizer types defined in Sect. 2. Firstly, note that when \(\Delta I_{it}=0\) and \(I_{it}=I_{i,t-1}=0\) (I4) then \(\Delta \log (P_{it})=0\) and for \(I_{i,t}=I_{i,t-1}=1\), \(\Delta \log (P_{it})=\Delta \log (1-\tau _{it})\) so
$$\begin{aligned} E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=0]=E[u_{it}|I_{it}=I_{i,t-1}=1] E[\Delta \log (1-\tau _{it})|I_{it}=I_{i,t-1}=1]p_{0,1} \end{aligned}$$
as \(u_{it}\) is assumed independent of \(\Delta \log (1-\tau _{it})\) where \(p_{0,1}=\mathcal {P}\{I_{it}=I_{i,t-1}=1\}\) and
$$\begin{aligned} E[u_{it}|I_{it}=I_{i,t-1}=1]=E[u_{it}|E_{it}>S_{it},E_{i,t-1}>S_{i,t-1}]=E[u_{it}]=0 \end{aligned}$$
since \(\omega =0\). More generally, when \(\omega \ne 0\) the same result follows assuming \(E[u_{it}|E_{it}]=E[u_{it}]\), which could be achieved by controlling for (polynomials of) \(E_{it}\). By the LIE, utilizing \(E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=0]=0\), we can re-express
$$\begin{aligned} E[u_{it}\Delta \log (P_{it})]= & {} E[\log (1-\tau _{it})u_{it}|\Delta I_{it}=1]p_{1}\nonumber \\&\quad -E[\log (1-\tau _{i,t-1})u_{it}|\Delta I_{it}=-1]p_{-1} \end{aligned}$$
$$\begin{aligned}= & {} \xi _{1}p_{1}+\xi _{-1}p_{-1}, \end{aligned}$$
noting \(\Delta \log (P_{it})=\log (1-\tau _{it})\) for \(\Delta I_{it}=1\) and \(\Delta \log (P_{it})=-\log (1-\tau _{i,t-1})\) for \(\Delta I_{it}=-1\). The event \(\Delta I_{it}=1\) (I2) is equivalent to \(E_{it}\ge S_{it}^*\) (itemizer at time t) and \(D_{i,t-1}\le S_{i,t-1}-E_{i,t-1}\) (non-itemizer at time \(t-1\)) so that
$$\begin{aligned} \Delta \log {D_{it}}\ge \log (D_{it})-\log (S_{i,t-1}-E_{i,t-1}) \end{aligned}$$
where \(\Delta \log {D_{it}}=\beta \log (1-\tau _{it})+u_{it}\) (as \(\Delta \log (P_{it})=\log (1-\tau _{it})\)) so that
$$\begin{aligned} u_{it}\ge & {} \log (D_{it})-\log (S_{i,t-1}-E_{i,t-1})-\beta \log (1-\tau _{it}) \end{aligned}$$
$$\begin{aligned}\ge & {} -\log (S_{i,t-1}-E_{i,t-1})-\beta \log (1-\tau _{it}) \end{aligned}$$
where (8) follows as \(\log (D_{it})\ge 0\) as \(D_{it}=D_{it}^{*}+1\) where \(D_{it}^{*}\ge 0\). Define \(h_{it}:=-\log (S_{i,t-1}-E_{i,t-1})-\beta \log (1-\tau _{it})\), then
$$\begin{aligned} E[u_{it}|\Delta I_{it}=1]= & {} E[u_{it}|u_{it}\ge h_{it},E_{it}\ge S_{it}] \end{aligned}$$
$$\begin{aligned}\ge & {} E[u_{it}|u_{it}\ge h_{it}] \end{aligned}$$
$$\begin{aligned}> & {} 0 \end{aligned}$$
where (11) follows by (9) and noting \(E_{it}\) is mean independent of \(u_{it}\). The final inequality follows as \(E[u_{it}]=0\), where defining \(p_{11}=\mathcal {P}\{u_{it}\ge h_{it}\}\)
$$\begin{aligned} 0=E[u_{it}]=E[u_{it}|u_{it}\ge h_{it}]p_{11}+E[u_{it}|u_{it}\le h_{it}](1-p_{11}) \end{aligned}$$
where \(E[u_{it}|u_{it}\le h_{it}]<0\) since \(h_{it}\le 0\) (as \(\beta \le 0\) and \(S_{i,t-1}-E_{i,t-1}\ge 1\) when \(I_{i,t-1}=0\)) and \(\Pr \{h_{it}<0\}>0\), noting that \(E[u_{it}|u_{it}\le h_{it}]=E[u_{it}|u_{it}\le h_{it}, h_{it}=0]\Pr \{h_{it}=0\} + E[u_{it}|u_{it}\le h_{it}, h_{it}<0]\Pr \{h_{it}<0\}\). Hence
$$\begin{aligned} E\left[ u_{it}|u_{it}\ge h_{it}\right] >0 \end{aligned}$$
follows from (13) noting that \(0<p_{11}<1\). Finally, since \(\log (1-\tau _{it}) \le 0\) for all i,t and is strictly less than zero for some i,t, then
$$\begin{aligned} E[\log (1-\tau _{it})|\Delta I_{it}=1]<0. \end{aligned}$$
By independence of \(\tau _{it}\) and \(u_{it}\)
$$\begin{aligned} E[\log (1-\tau _{it})u_{it}|\Delta I_{it}=1]=E[\log (1-\tau _{it})|\Delta I_{it}=1]E[u_{it}|\Delta I_{it}=1] \end{aligned}$$
where (14) and (16) imply
$$\begin{aligned} E[\log (1-\tau _{it})u_{it}|\Delta I_{it}=1]\le E[\log (1-\tau _{it})|\Delta I_{it}=1]E[u_{it}|u_{it}\ge h_{it}] \end{aligned}$$
which together with the inequality in (15) implies
$$\begin{aligned} \xi _{1}:=E[\log (1-\tau _{it})u_{it}|\Delta I_{it}=1]<0. \end{aligned}$$
A similar argument holds in reverse for the second term on the RHS of (6) for \(\Delta I_{it}=-1\), where
$$\begin{aligned} \xi _{-1}:=-E[\log (1-\tau _{i,t-1})u_{it}|\Delta I_{it}=-1]<0, \end{aligned}$$
establishing the result.
Proof of Theorem 2
We specify our itemizer specification (Eq. (6) in Sect. 2)
$$\begin{aligned} \Delta \log (D_{it})=\gamma \Delta I_{it}+\beta \Delta \log (P_{it})+\omega '\Delta X_{it}+e_{it} \end{aligned}$$
where \(X_{it}\) is a \(k\times 1\) vector of controls and \(u_{it}=e_{it}+\gamma \Delta {I}_{it}\). To show the result, decompose \(\Delta X_{it}\) as
$$\begin{aligned} \Delta X_{it}=\Xi z_{it}+v_{it}^{\Delta X} \end{aligned}$$
where \(\Xi \) is a \(k\times 2\) matrix of OLS coefficients from the population regression of \(\Delta X_{it}\) on \(z_{it}=(\Delta I_{it},\Delta \log (P_{it}))'\), so that by definition \(E[z_{it}v_{it}^{\Delta X'}]=0\).
Plugging (21) in to (20) $$\begin{aligned} \Delta \log (D_{it})=\gamma ^{*}\Delta I_{it}+\beta ^{*}\Delta \log (P_{it})+\omega 'v_{it}^{\Delta X}+e_{it} \end{aligned}$$ where \(\gamma ^{*}=\gamma +\omega '\Xi _{1}\), \(\beta ^{*}=\beta +\omega '\Xi _{2}\) where \(\Xi _{j}\) is the jth column of \(\Xi \) for \(j=\{1,2\}\). We see in the population regressions in (20) and (22) that $$\begin{aligned} \beta =\beta ^{*}-\omega '\Xi _{2} \end{aligned}$$ likewise it is straightforward to show that the sample estimator satisfies $$\begin{aligned} \hat{\beta }_\mathrm{FD}^{I}=\hat{\beta }_\mathrm{FD}^{I,*}-\hat{\omega }_\mathrm{FD}^{I,*'}\hat{\Xi }_{2} \end{aligned}$$ where \(\hat{\beta }_\mathrm{FD}^{I,*}\), \(\hat{\omega }_\mathrm{FD}^{I,*}\) are the OLS estimators in (22) and \(\hat{\Xi }_{2}\) is the estimator of \(\Xi _{2}\) from OLS regression in (21). Namely, we have 'partialled out' \(\Delta X_{it}\). Below we show the following two results $$\begin{aligned} \hat{\beta }_\mathrm{FD}^{I,*}\rightarrow \beta ^{*}+\frac{p_{1}p_{-1}}{C}(\bar{\tau }_{1}-\bar{\tau }_{-1})(E[e_{it}|\Delta I_{it}=-1]+E[e_{it}|\Delta I_{it}=1]) \end{aligned}$$ $$\begin{aligned} \hat{\omega }_\mathrm{FD}^{I,*}\rightarrow \omega \end{aligned}$$ where \(\hat{\Xi }_{2}\overset{p}{\rightarrow }\Xi _{2}\) by KWLLN together this result along with the fact that \(\beta =\beta ^{*}-\omega '\Xi _{2}\) and the results in (29), (25) and (26) imply To show (25) and (26) define \(w_{it}^{*}=(z_{it}',v_{it}^{\Delta X'})'\) and the OLS estimator in (22) $$\begin{aligned} \hat{\theta }_\mathrm{FD}^{I,*}:=\left( \sum _{i=1}^{N}\sum _{t=2}^{T}w_{it}^{*}w_{it}^{*'}\right) ^{-1}\sum _{i=1}^{N}\sum _{t=2}^{T}w_{it}^{*}\Delta \log (D_{it}) \end{aligned}$$ where \(\hat{\theta }_\mathrm{FD}^{I,*}:=(\hat{\gamma }_\mathrm{FD}^{I,*},\hat{\beta }_\mathrm{FD}^{I,*},\hat{\omega }_\mathrm{FD}^{I,*'})'\). Under the i.i.d assumption by an application of KWLLN $$\begin{aligned} \hat{\theta }_\mathrm{FD}^{I,*}&\overset{p}{\rightarrow }&E[w_{it}^{*}w_{it}^{*'}]^{-1}E[w_{it}^{*}\Delta \log (D_{it})] \end{aligned}$$ $$\begin{aligned}= & {} \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*}\\ \omega \end{array}\right) +\left( \begin{array}{cc} E[z_{it}z_{it}'] &{} E[z_{it}v_{it}^{\Delta X'}]\\ E[v_{it}^{\Delta X}z_{it}'] &{} E[v_{it}^{\Delta X}v_{it}^{\Delta X'}] \end{array}\right) ^{-1}\left( \begin{array}{c} E[e_{it}z_{it}]\\ E[e_{it}v_{it}^{\Delta X}] \end{array}\right) \end{aligned}$$ $$\begin{aligned}= & {} \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*}\\ \omega \end{array}\right) +\left( \begin{array}{cc} E[z_{it}z_{it}']^{-1} &{} 0\\ 0 &{} E[v_{it}^{\Delta X}v_{it}^{\Delta X'}]^{-1} \end{array}\right) \left( \begin{array}{c} E[e_{it}z_{it}]\\ 0 \end{array}\right) \end{aligned}$$ where (30) follows plugging in \(\Delta \log (D_{it})=\gamma ^{*}\Delta I_{it}+\beta ^{*}\Delta \log (P_{it})+\omega ^{'}v_{it}^{\Delta X}+e_{it}\) and (31) follows as \(E[e_{it}v_{it}^{\Delta X}]=0\) and \(E[z_{it}v_{it}^{\Delta X'}]=0\). Hence we establish (26). 
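To make the partialling-out step concrete, here is a minimal numerical sketch (not part of the original appendix) of the identity in (24), namely that the full-regression coefficient equals the 'starred' coefficient minus \(\hat{\omega }_\mathrm{FD}^{I,*'}\hat{\Xi }_{2}\); the data-generating process and variable names are purely illustrative, and the derivation continues below.

```python
import numpy as np

# Illustrative check of the partialling-out identity in (24): the coefficient on a
# regressor from the full OLS regression equals the 'starred' coefficient minus
# omega_hat' times the corresponding column of Xi_hat. Simulated data only.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=(n, 2))                                # stand-in for (Delta I, Delta log P)
x = z @ np.array([0.5, -0.3]) + rng.normal(size=n)         # a single control Delta X
y = 1.0 * z[:, 0] - 0.8 * z[:, 1] + 0.4 * x + rng.normal(size=n)

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

coef_full = ols(np.column_stack([z, x]), y)   # regression of y on (z, x)
Xi = ols(z, x)                                # projection of x on z (the matrix Xi)
v = x - z @ Xi                                # residual v^{Delta X}
coef_star = ols(np.column_stack([z, v]), y)   # 'starred' regression of y on (z, v)

beta_full = coef_full[1]                      # coefficient on z[:, 1]
beta_check = coef_star[1] - coef_star[2] * Xi[1]
print(beta_full, beta_check)                  # identical up to rounding error
```

Because \(v\) is constructed to be exactly orthogonal to \(z\) in the sample, the two printed numbers coincide up to rounding, not merely asymptotically.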
It follows by (30) (noting \(z_{it}=(\Delta I_{it},\Delta \log (P_{it}))'\)) that $$\begin{aligned}&\left( \begin{array}{c} \hat{\gamma }_\mathrm{FD}^{I,*}\\ \hat{\beta }_\mathrm{FD}^{I,*} \end{array}\right) \overset{p}{\rightarrow } \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*} \end{array}\right) +E[z_{it}z_{it}']^{-1}E[e_{it}z_{it}]\\&= \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*} \end{array}\right) +\left( \begin{array}{cc} E[(\Delta I_{it})^{2}] &{} E[\Delta I_{it}\Delta \log (P_{it})]\\ E[\Delta I_{it}\Delta \log (P_{it})] &{} E[(\Delta \log (P_{it}))^{2}] \end{array}\right) ^{-1}\left( \begin{array}{c} E[e_{it}\Delta I_{it}]\\ E[e_{it}\Delta \log (P_{it})] \end{array}\right) \\&= \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*} \end{array}\right) +\frac{1}{\text {det}(E[z_{it}z_{it}'])}\left( \begin{array}{cc} E[(\Delta \log (P_{it}))^{2}] &{} -E[\Delta I_{it}\Delta \log (P_{it})]\\ -E[\Delta I_{it}\Delta \log (P_{it})] &{} E[(\Delta I_{it})^{2}] \end{array}\right) \left( \begin{array}{c} E[e_{it}\Delta I_{it}]\\ E[e_{it}\Delta \log (P_{it})] \end{array}\right) . \end{aligned}$$ Expanding out the second element in the limit and defining \(C=\text {det}(E[z_{it}z_{it}'])\), which is greater than zero by assumption (no multi-collinear instruments), $$\begin{aligned}&\hat{\beta }_\mathrm{FD}^{I,*}-\beta ^{*} \overset{p}{\rightarrow } \frac{1}{C}\left( E[e_{it}\Delta \log (P_{it})]E[(\Delta I_{it})^{2}]\nonumber \right. \\&\quad \left. -E[\Delta I_{it}\Delta \log (P_{it})]E[e_{it}\Delta I_{it}]\right) \end{aligned}$$ $$\begin{aligned}&= \frac{1}{C}\left( (p_{1}+p_{-1})E[e_{it}\Delta \log (P_{it})]\nonumber \right. \\&\quad \left. -(\bar{\tau }_{1}p_{1}+\bar{\tau }_{-1}p_{-1})E[e_{it}\Delta I_{it}]\right) \end{aligned}$$ $$\begin{aligned}&= \frac{1}{C}\left( \left( E[e_{it}\Delta \log (P_{it})]-\bar{\tau }_{1}E[e_{it}\Delta I_{it}]\right) p_{1}\nonumber \right. \\&\quad \left. +\left( E[e_{it}\Delta \log (P_{it})]-\bar{\tau }_{-1}E[e_{it}\Delta I_{it}]\right) p_{-1}\right) \end{aligned}$$ $$\begin{aligned}&= \frac{1}{C}p_{1}p_{-1}(\bar{\tau }_{1}-\bar{\tau }_{-1})(E[e_{it}|\Delta I_{it}=-1]+E[e_{it}|\Delta I_{it}=1]) \end{aligned}$$ and the second equality follows as $$\begin{aligned} E[(\Delta I_{it})^{2}]= & {} E[(\Delta I_{it})^{2}|\Delta I_{it}=1]p_{1}+E[(\Delta I_{it})^{2}|\Delta I_{it}=-1]p_{-1}\\= & {} p_{1}+p_{-1} \end{aligned}$$ $$\begin{aligned} E[\Delta I_{it}\Delta \log (P_{it})]= & {} E[\Delta \log (P_{it})|\Delta I_{it}=1]p_{1}\nonumber \\&\quad -E[\Delta \log (P_{it})|\Delta I_{it}=-1]p_{-1} \end{aligned}$$ $$\begin{aligned}= & {} E[\log (1-\tau _{it})|\Delta I_{it}=1]p_{1}\nonumber \\&\quad +E[\log (1-\tau _{i,t-1})|\Delta I_{it}=-1]p_{-1} \end{aligned}$$ $$\begin{aligned}= & {} \bar{\tau }_{1}p_{1}+\bar{\tau }_{-1}p_{-1} \end{aligned}$$ and the final equality (35) uses the LIE and strict exogeneity of \(\tau _{it}\) so that $$\begin{aligned} E[e_{it}\Delta \log (P_{it})]= & {} E[e_{it}|\Delta I_{it}=1]\bar{\tau }_{1}p_{1}-E[e_{it}|\Delta I_{it}=-1]\bar{\tau }_{-1}p_{-1} \end{aligned}$$ $$\begin{aligned} E[e_{it}\Delta I_{it}]= & {} E[e_{it}|\Delta I_{it}=1]p_{1}-E[e_{it}|\Delta I_{it}=-1]p_{-1} \end{aligned}$$ $$\begin{aligned} E[e_{it}\Delta \log (P_{it})]-\bar{\tau }_{1}E[e_{it}\Delta I_{it}]=(\bar{\tau }_{1}-\bar{\tau }_{-1})E[e_{it}|\Delta I_{it}=-1]p_{-1} \end{aligned}$$ where by a similar argument we can show $$\begin{aligned} E[e_{it}\Delta \log (P_{it})]-\bar{\tau }_{-1}E[e_{it}\Delta I_{it}]=(\bar{\tau }_{1}-\bar{\tau }_{-1})E[e_{it}|\Delta I_{it}=1]p_{1}. 
\end{aligned}$$ Table 7 presents descriptive statistics for all other control variables. There is substantial variation over the dynamic itemizer types (columns (1) to (4)). Continuing itemizers (column (1)) are the most likely to have donated and give the largest donations on average; more than five times that of continuing non-itemizers (column (2)) and more than double the mean donations of start and stop itemizers. Continuing itemizers also have the highest mean income and lowest mean price. The donating probability, mean donation and mean income of the start (column (3)) and stop (column (4)) itemizers are quite similar. Descriptive statistics for all variables by itemizer type Continuing itemizer Continuing non-itemizer Start itemizer Stop itemizer itemizer Non-itemizer Net taxable income 51,725.998 (119,696.605) (43,073.655) (2630.332) \(1-I\tau ^{a}\) Age (head) Married\(^d\) No high school\(^d\) Some college\(^d\) College grad\(^d\) Graduate school\(^d\) # of dependent children Deductible expenses Homeowner\(^d\) All monetary figures are in 2014 prices, deflated using the Consumer Price Index. Standard deviations are shown in (). Variables with \(^d\) are 0/1 dummies In Table 8, we present fuller regression results from our main analysis summarized in Table 3. The first three variables show the same information as in Table 3. Full regression results of Table 3 IV with \(\tau _{it}^{s}-\tau _{it}^{b}\) IV with \(\Delta \tau _{it}^{b}\) \(\Delta \)Age (head) \(\Delta \)Age (head)\(^2\) \(\Delta \)Married\(^d\) \(\Delta \)Dependent children \(\Delta \)Homeowner\(^d\) \(\Delta \)Not HS grad\(^d\) \(\Delta \)Some college\(^d\) \(\Delta \)College grad\(^d\) \(\Delta \)Graduate school\(^d\) \(\Delta \)Log deductions \(H_{0}:\) IV estimator is identified Results in column (1) are obtained from OLS-FD estimation of Eq. (2). Results in columns (2) and (3) are from 2SLS-FD estimation of Eq. (3) using \(\tau _{it}^{s}-\tau _{it}^{b}\) and \(\Delta \tau ^{b}_{it}\) as the instrument, respectively. Results in column (3) are from OLS-FD estimation of Eq. (6). All standard errors are clustered (at the household level). The penultimate row shows the p value from the first stage F test. The tests reported in the last row is the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10% The superscript d indicates that the variable is a 0/1 dummy In addition to the price and income effects discussed in detail above, we find evidence of a quadratic relationship between age and donations with, looking at the results in column (4), households headed by people aged 49.3 years giving the most, ceterisparibus. We also find that married people give substantially more than unmarried people consistent with Mesch et al. (2011) and Rooney et al. (2005). We do not find evidence that the number of dependent children affects giving. We do not find evidence that more educated people give more. This might in part be due to the lack of within household variation in the level of education of the head. Some studies using cross-sectional data find strong positive effects of education on giving (e.g., Mesch et al. 2011), but other studies have found no evidence of an effect (e.g., Andreoni et al. 2003). We do not find evidence that non-giving itemizable expenditure affects giving. 
We include this here because its exclusion will likely result in an omitted variable bias (correlated with price and giving), but it might also be a 'bad control' (Angrist and Pischke 2009) and so we test the robustness of our results to its exclusion in Appendix D below. We present results of a number of robustness checks starting with the inclusion of the PSID poor oversample and the exclusion of the 'never' itemizers in Table 9. Robustness checks I, include poor oversample and 'never itemizers' Include 'poor' sample No never itemizers \(\Delta \) Log net income \(H_{0}:\beta _{\Delta \text {log}P}\le -1\) Results in columns (1) and (2) are obtained via OLS-FD estimation of Eqs. (2) and (6), respectively, including the PSID oversample of poor households. Results in columns (3) and (4) are obtained via OLS-FD estimation of Eqs. (2) and (6), respectively, and excluding those households who never itemize during the observed period. All standard errors are clustered (at the household level). The test reported in the bottom row is the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10% Robustness checks II, allowing for a nonlinear income effect Quadratic income Cubic income Income decile groups Income deciles \(\times \)Year \(\Delta \)Log net income\(^2\) − 0.068* \(H_{0}:\beta _{\Delta Log price}<-1\) Results are obtained from OLS-FD estimation of Eq. (2). All standard errors are clustered (at the household level). The test reported in the bottom row is the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10% Results in columns (1) and (2) are obtained from OLS-FD estimation of Eqs. (2) and (6), respectively, including observations that are in the PSID oversample of poor households. These are excluded from our primary analysis. Results in columns (3) and (4) are obtained from OLS-FD estimation of Eqs. (2) and (6), respectively, excluding those households who never itemize during the observed period and therefore experience no change in the price of giving. In both cases the pattern is the same: price elasticities in excess of − 1 from the standard model and price elasticities close to and not different from 0, but different from − 1, from the itemizer model. We also test the robustness of the results to the inclusion of nonlinear income effects in Table 10 by re-estimating Eq. (6) including quadratic (column (1)), cubics (column (2)), income decile group dummies (columns (3)) and, following Bakija and Heim (2011) decile groups interacted with years. The results do not qualitatively differ from our main findings in columns (1) and (4) of Table 3. We next test the robustness of our results to the specification of the dependent variable. In our analysis, we use the log of donations plus $1 as the dependent variables. But this is an arbitrary choice, though common in the literature. In Table 11, we re-estimate Eq. (2) using different specifications for the dependent variable. Robustness checks III, allowing for different specifications of donations \(+\) $5 \(+\) $10 − 46.583 46.364*** Results are obtained from OLS-FD estimation of Eq. (2), but we vary the specification of the dependent variable. 
All standard errors are clustered (at the household level). The test reported in the bottom row is the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10% Robustness checks IV, excluding other itemizable expenditures (E) Results in column (1) are obtained from OLS-FD estimation of Eq. (2). Results in columns (2) and (3) are from 2SLS-FD estimation of Eq. (2) using \(\tau _{it}^{s}-\tau _{it}^{b}\) and \(\Delta \tau ^{b}_{it}\) as the instrument, respectively. Results in column (4) are from OLS-FD estimation of Eq. (6). All standard errors are clustered (at the household level). The penultimate row shows the p value from the first stage F test. The tests reported in the last row are the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10% In column (1) we use level donations, in column (2) we use logged donations plus $5, in column (3) we use logged donations plus $10, and in column (4) we use an inverse hyperbolic sine transformation instead of taking logs. Again, our result maintains. Finally, we check the robustness of our results to the exclusion of other itemizable expenditures (E). As noted above, non-donation E will be correlated with price via itemization status and likely correlated with donations, and therefore, its omission, as is done throughout the literature, will result in a biased estimator of the price elasticity. Including E, however, might be problematic as donations, and non-donation deductible expenditures may be co-determined though this may be mitigated by the fact that more than half of non-donation deductible expenditures are accounted for by mortgage interest payments and real estate taxes (Lowry 2014) which are likely to be predetermined in most cases. That is, non-donation itemizable expenditure may be a 'bad control' (Angrist and Pischke 2009). The role of E in estimating the price elasticity of giving has received very little attention in the literature. E is not generally available in survey data and is therefore omitted, and the studies using tax-filer data have not addressed this issue to our knowledge.39 In Table 12, we present results analogous to those presented in Table 3 above, but obtained excluding log E from the model. Again, our result maintains as the estimated price elasticities are not sensitive to the exclusion of E. While such robustness checks are not exhaustive, the stability of our result to variation in data transformation, estimation sample, estimator and specification provides further support of our main result. Aaron, H. (1972). Federal encouragement of private giving. In D. Dillon (Ed.), Tax impacts on philanthropy. Princeton: Tax Institute of America.Google Scholar Almunia, M., Lockwood, B., & Scharf, K. (2017). More giving or more givers? The effects of tax incentives on charitable donations in the UK. CESifo Working Paper Series No. 6591.Google Scholar Andreoni, J., Brown, E, & Rischall, I. (1999). Charitable giving by married couples: Who decides and why does it matter? Working Paper.Google Scholar Andreoni, J., Brown, E., & Rischall, I. (2003). Charitable giving by married couples: Who decides and why does it matter? 
Journal of Human Resources, 38(1), 111–133.CrossRefGoogle Scholar Angrist, J., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.CrossRefGoogle Scholar Athey, S., & Imbens, G. (2015). A measure of robustness to misspecification. American Economic Review, 105(5), 476–80.CrossRefGoogle Scholar Auten, G. E., Sieg, H., & Clotfelter, C. T. (2002). Charitable giving, income, and taxes: An analysis of panel data. The American Economic Review, 92(1), 371–382.CrossRefGoogle Scholar Backus, P. & Grant, N. (2016). Consistent estimation of the tax-price elasticity of charitable giving with survey data. Manchester Economics Discussion Paper EDP-1606.Google Scholar Bakija, J., & Heim, B. (2011). How does charitable giving respond to incentives and income? New estimates from panel data. National Tax Journal, 64(2), 615–650.CrossRefGoogle Scholar Batina, R . G., & Ihori, T. (2010). Public goods: Theories and evidence. New York: Springer.Google Scholar Benzarti, Y. (2015). How taxing is tax filing? Leaving money on the table because of hassle costs. Ph.D. thesis, University of California, Berkeley.Google Scholar Bönke, T., Massarrat-Mashhadi, N., & Sielaff, C. (2013). Charitable giving in the german welfare state: Fiscal incentives and crowding out. Public Choice, 154, 39–58.CrossRefGoogle Scholar Boskin, M. J., & Feldstein, M. S. (1977). Effects of the charitable deduction on contributions by low income and middle income households: Evidence from the National Survey of Philanthropy. The Review of Economics and Statistics, 59(3), 351–54.CrossRefGoogle Scholar Bradley, R., Holden, S., & McClelland, R. (2005). A robust estimation of the effects of taxation on charitable contributions. Contemporary Economic Policy, 23(4), 545–554.CrossRefGoogle Scholar Brown, E. (1987). Tax incentives and charitable giving: Evidence from new survey data. Public Finance Quarterly, 15(4), 386–396.CrossRefGoogle Scholar Brown, E., & Lankford, H. (1992). Gifts of money and gifts of time estimating the effects of tax prices and available time. Journal of Public Economics, 47(3), 321–341.CrossRefGoogle Scholar Brown, S., Harris, M. N., & Taylor, K. (2012). Modelling charitable donations to an unexpected natural disaster: Evidence from the U.S. Panel Study of Income Dynamics. Journal of Economic Behavior and Organization, 84, 97–110.CrossRefGoogle Scholar Brown, S., Greene, W. H., & Taylor, K. (2015). An inverse hyperbolic sine heteroskedastic latent class panel tobit model: An application to modelling charitable donations. Economic Modelling, 50, 321–341.CrossRefGoogle Scholar Clotfelter, C. T. (1980). Tax incentives and charitable giving: Evidence from a panel of taxpayers. Journal of Public Economics, 13(3), 319–340.CrossRefGoogle Scholar Clotfelter, C . T. (1985). Federal tax policy and charitable giving. Chicago: University of Chicago Press.CrossRefGoogle Scholar Clotfelter, C. T. (2002). The economics of giving. In J. Barry & B. Manno (Eds.), Giving better, giving smarter. Washington, DC: National Commission on Philanthropy and Civic Renewal.Google Scholar Duquette, C. (1999). Is charitable giving by nonitemizers responsive to tax incentives? New evidence. National Tax Journal, 52(2), 195–206.Google Scholar Dye, R. (1978). Personal charitable contributions: Tax effects and other motives. In Proceedings of the seventieth annual conference on taxation. Columbus: National Tax Association-Tax Institute of America.Google Scholar Emmanuel Saez, J. 
S., & Giertz, S. H. (2012). The elasticity of taxable income with respect to marginal tax rates: A critical review. Journal of Economic Literature, 50(1), 3–50.CrossRefGoogle Scholar Fack, G., & Landais, C. (2010). Are tax incentives for charitable giving efficient? Evidence from France. American Economic Journal: Economic Policy, 2(2), 117–141.Google Scholar Fack, G., & Landais, C. (2016). Introduction. In G. Fack & C. Landais (Eds.), Charitable giving and tax policy: A historical and comparative perspective, CEPR. Oxford University Press.Google Scholar Feenberg, D., & Coutts, E. (1993). An introduction to the TAXSIM model. Journal of Policy Analysis and Management, 12(1), 189.CrossRefGoogle Scholar Feldstein, M., & Clotfelter, C. (1976). Tax incentives and charitable contributions in the United States: A microeconometric analysis. Journal of Public Economics, 5(1–2), 1–26.CrossRefGoogle Scholar Feldstein, M., & Taylor, A. (1976). The income tax and charitable contributions. Econometrica, 44(6), 1201–1222.CrossRefGoogle Scholar Feldstein, M. S. (1995). Behavioral responses to tax rates: Evidence form the tax reform act of 1986. The American Economic Review, 85(2), 170–174.Google Scholar Gillitzer, C., & Skov, P. (2017). The use of third-party information reporting for tax deductions: Evidence and implications from charitable deductions in Denmark. Working paper.Google Scholar Gruber, J. (2004). Pay or pray? The impact of charitable subsidies on religious attendance. Journal of Public Economics, 88(12), 2635–2655.CrossRefGoogle Scholar Gruber, J., & Saez, E. (2002). The elasticity of taxable income: Evidence and implications. Journal of Public Economics, 84, 2657–2684.CrossRefGoogle Scholar Hansen, L., Heaton, J., & Yaron, A. (1996). Finite-sample properties of some alternative GMM estimators. Journal of Business and Economic Statistics, 14(3), 262–280.Google Scholar Huck, S., & Rasul, I. (2008). Testing consumer theory in the field: Private consumption versus charitable goods. ELSE Working Paper #275, Department of Economics, University College London.Google Scholar Hungerman, D., & Ottoni-Wilhelm, M. (2016). What is the price elasticity of charitable giving? Toward a reconciliation of disparate estimates. Working Paper.Google Scholar Imbens, G., & Angrist, J. (1994). Identification and estimation of local average treatment effects. Econometrica, 62(2), 467–475.CrossRefGoogle Scholar Lankford, R. H., & Wyckoff, J. H. (1991). Modeling charitable giving using a Box–Cox standard tobit model. The Review of Economics and Statistics, 73(3), 460–470.CrossRefGoogle Scholar Lowry, S. (2014). Itemized tax deductions for individuals: Data analysis. Technical Report 7-5700, Congressional Research Service.Google Scholar McClelland, R., & Kokoski, M. F. (1994). Econometric issues in the analysis of charitable giving. Public Finance Review, 22(4), 498–517.CrossRefGoogle Scholar Mesch, D. J., Brown, M. S., Moore, Z. I., & Hayat, A. D. (2011). Gender differences in charitable giving. International Journal of Nonprofit and Voluntary Sector Marketing, 16, 342–355.CrossRefGoogle Scholar Olson, N. (2012). 2012 annual report to congress. National Taxpayer Advocate: Technical report.Google Scholar Peloza, J., & Steel, P. (2005). The price elasticities of charitable contributions: A meta-analysis. Journal of Public Policy and Marketing, 24(2), 260–272.CrossRefGoogle Scholar Randolph, W. (1995). Dynamic income, progressive taxes, and the timing of charitable contributions. 
Journal of Political Economy, 103(4), 709–738.CrossRefGoogle Scholar Reece, W., & Zieschang, K. (1985). Consistent estimation of the impact of tax deductibility on the level of charitable contributions. Econometrica, 53(2), 271–293.CrossRefGoogle Scholar Reid, T. (2017). A fine mess: A global quest for a simpler, fairer, and more efficient tax system. New York, NY: Penguin.Google Scholar Reinstein, D. A. (2011). Does one contribution come at the expense of another? The B.E. Journal of. Economic Analysis and Policy, 11(1), 1–54.Google Scholar Roberts, R. (1984). A positive model of private charity and wealth transfers. Journal of Political Economy, 92(1), 136–148.CrossRefGoogle Scholar Rooney, P. M., Mesch, D. J., Chin, W., & Steinberg, K. (2005). The effects of race, gender, and survey methodologies on giving in the US. Economics Letters, 86, 173–180.CrossRefGoogle Scholar Saez, E. (2004). The optimal treatment of tax expenditures. Journal of Public Economics, 88, 2657–2684.CrossRefGoogle Scholar Scharf, K., & Smith, S. (2010). The price elasticity of charitable giving: Does the form of tax relief matter? IFS Working Paper W10/07.Google Scholar Slemrod, J. (1988). Are estimated tax elasticities really just tax evasion elasticities? The case of charitable contributions. The Review of Economics and Statistics, 71(3), 517–22.CrossRefGoogle Scholar Staiger, D., & Stock, J. H. (1997). Instrumental variables regression with weak instruments. Econometrica, 65(3), 557–586.CrossRefGoogle Scholar Steinberg, R. (1990). Taxes and giving: New findings. Voluntas, 1(2), 61–79.CrossRefGoogle Scholar Taussig, M. (1967). Economic aspects of the personal income tax treatment of charitable contributions. National Tax Journal, 20(1), 1–19.Google Scholar Wilhelm, M. O. (2006). New data on charitable giving in the PSID. Economics Letters, 92(1), 26–31.CrossRefGoogle Scholar Wilhelm, M. O. (2007). The quality and comparability of survey data on charitable giving. Nonprofit and Voluntary Sector Quarterly, 36(1), 65–84.CrossRefGoogle Scholar Yöruk, B. (2010). Charitable giving by married couples revisited. The Journal of Human Resources, 45(2), 497–516.CrossRefGoogle Scholar Yöruk, B. (2013). The impact of charitable subsidies on religious giving and attendance: Evidence from panel data. The Review of Economics and Statistics, 95(5), 1708–1721.CrossRefGoogle Scholar Zampelli, E., & Yen, S. (2017). The impact of tax price changes on charitable contributions. Contemporary Economic Policy, 35(1), 113–124.CrossRefGoogle Scholar Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 1.3.011 Arthur Lewis BuildingUniversity of ManchesterManchesterUK Backus, P.G. & Grant, N.L. Int Tax Public Finance (2019) 26: 317. https://doi.org/10.1007/s10797-018-9500-9 First Online 26 June 2018
Why didn't Lorentz conclude that no object can go faster than light? Based on Lorentz factor $\gamma = \frac{1}{\sqrt {1-\frac{v^2}{c^2}}}$ it is easy to see $v < c$ since otherwise $\gamma$ would be either undefined or a complex number, which is non-physical. Also, as far as I understand this equation was known before Einstein's postulates were published. My question is: why didn't Lorentz himself conclude that no object can go faster than speed of light? Or maybe he did, I do not know. I feel I am missing some contexts here. special-relativity speed-of-light inertial-frames faster-than-light history $\begingroup$ This is an interesting question, but you might get better answers on hsm.stackexchange.com . Usually the way people thought about these things at the time (in this case 130 years ago!) is very hard to wrap your head around if you've had modern training. $\endgroup$ – Ben Crowell Feb 20 at 5:54 $\begingroup$ I'd agree with the HSM SE idea, but also consider reading the rather more involved history of the Lorentz Transformations page on Wikipedia for a broader context. $\endgroup$ – StephenG Feb 20 at 6:09 $\begingroup$ @BenCrowell Thanks for the suggestion! I did not know about HSM SE. Should I ask the same question there or is there any possibility of moving this question? I am new here so it seems I cannot move my own question. $\endgroup$ – Rob Feb 20 at 6:16 $\begingroup$ I flagged it for a moderator that you want to move the question to HSM $\endgroup$ – anna v Feb 20 at 6:48 $\begingroup$ @DvijMankad Oh, I just meant how you phrased the custom close reason you entered. It's best to say something like "I'm voting to close this question as off-topic because [reason which doesn't reference other sites]. It may fit on [other site]." $\endgroup$ – David Z♦ Feb 27 at 22:37 If I had to sum up my findings in a sound bite it would be this: Einstein was the first to derive the Lorentz transformation laws based on physical principles--namely that the speed of light is constant and the principle of relativity. The fact that Lorentz and Poincaré were not able to do this naturally leads to why they were not able to justify making any fundamental statements about the nature of space and time--namely that nothing can go faster than light. This is seen by a careful reading of the Einstein (1905) – Special relativity section of the History of Lorentz Transformations Wikipedia article On June 30, 1905 (published September 1905) Einstein published what is now called special relativity and gave a new derivation of the transformation, which was based only on the principle on relativity and the principle of the constancy of the speed of light. [Emphasis mine] Furthermore, it is stated that (idem) While Lorentz considered "local time" to be a mathematical stipulation device for explaining the Michelson-Morley experiment, Einstein showed that the coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. My reading of this seems so indicate that that at the time of publishing, Lorentz considered the notion of "local time" (via his transformations) to be just a convenient theoretical device, but didn't seem to have a justifiable reason for why it it should be physically true. It looks obvious in hindsight I know, but model building is tough. 
So the reason, in short, seems (to me) to be this: As far as Lorentz saw it, he was able to "explain" the Michaelson-Morely experiment in a way not unlike the way that Ptolemy could explain the orbits with epicycles. Did it work? Yes, but its mechanism lacked physical motivation. That is, he didn't have a physical reason for such a transformation to arise. Rather it was Einstein who showed that these transformation laws could be derived from a single, physical assumption--the constancy of the speed of light. This insight was the genius of Einstein. Picking up at the end of the last blockquote, we further have that (idem) For quantities of first order in v/c, this was also done by Poincaré in 1900; while Einstein derived the complete transformation by this method. Unlike Lorentz and Poincaré who still distinguished between real time in the aether and apparent time for moving observers, Einstein showed that the transformations concern the nature of space and time. This implies actually that Lorentz and Poincaré were able to derive the Lorentz transformations to first order in $\beta$, but since they believed that the Aether existed they failed to be able to make the fundamental connection to space, time and the constancy of the speed of light. The failure to make this connection means that there would have been no justifiable reason to take it physically serious. So, to Lorentz and Poincaré the Lorentz transformation laws would remain ad-hoc mathematical devices to explain the Michaelson-Morley experiment within the context of the Aether but not saying anything fundamental about space and time. This failure to conclude any fundamental laws about the nature of spacetime subsumes, by implication, making any statements such as no moving object can surpass the speed of light. Edit: @VladimirKalitvianski has pointed me to this source, which provides the opinions of historians on the matter. Poincaré's work in the development of special relativity is well recognised, though most historians stress that despite many similarities with Einstein's work, the two had very different research agendas and interpretations of the work. Poincaré developed a similar physical interpretation of local time and noticed the connection to signal velocity, but contrary to Einstein he continued to use the Aether in his papers and argued that clocks at rest in the Aether show the "true" time, and moving clocks show the local time. So Poincaré tried to keep the relativity principle in accordance with classical concepts, while Einstein developed a mathematically equivalent kinematics based on the new physical concepts of the relativity of space and time. Indeed this resource is useful, as it adds an additional dimension as to why Lorentz didn't publish any claims about a maximum signal velocity. It reads rather clearly, so I won't bother summarizing it. InertialObserverInertialObserver $\begingroup$ H. Poincaré has derived the Lorentz transformations exactly. His presentation in US, his "compte rendu" (résumé) and his full article on this subject show that he followed and mastered well the physics. His words about "coup de pouce" (experimental data) and other things were motivated by the observable physics. So A. Einstein postulated what had beed already been established from physical motivation by others. $\endgroup$ – Vladimir Kalitvianski Feb 20 at 9:03 $\begingroup$ @VladimirKalitvianski That's extremely interesting. What year? would you mind giving me a citation so I can update my answer? 
I by no means claim to be an expert on this matter, rather I'm just closely reading the wiki and related sources. $\endgroup$ – InertialObserver Feb 20 at 9:07 $\begingroup$ en.wikipedia.org/wiki/Henri_Poincar%C3%A9 $\endgroup$ – Vladimir Kalitvianski Feb 20 at 9:11 $\begingroup$ Poincare always believed in the ether as an absolute rest frame, an idea which Einstein abandoned in his 1905 paper. So Einstein usually gets credit. However, the absence of an absolute rest frame is not absolute - consider the frame in which the microwave background radiation is the same color in all directions. $\endgroup$ – Paul Young Feb 20 at 18:58 $\begingroup$ Actually, Lorentz did have a physical reason, it was a dynamic effect based on the hypothesis of molecular forces. And Einstein considered relativity and the speed of light invariance to be phenomenological postulates, not "physical" principles, and was dissatisfied with them. So the real difference between them is the opposite to the one described: Lorentz and Poincare derived the transformations from hypothetical dynamics, whereas Einstein did from kinematic postulates. Moral: less Wikipedia, see Zahar Why Did Einstein's Programme Supersede Lorentz's?. $\endgroup$ – Conifold Feb 20 at 20:46 Because typically if you find an expression that seems to break down at some value of $v$, you would conclude that the expression simply loses its validity for that value of $v$, not that the value isn't attainable. Presumably this was the conclusion of Lorentz and others. The reason Einstein concluded otherwise is that special relativity gives a physical argument for "superluminal speeds are equivalent to time running backwards" -- the argument is "does a superluminal ship hit the iceberg before or after its headlight does?" This depends on the observer, and because the headlight would melt the iceberg, the consequences of each observation are noticeably different. The only possible conclusions are "superluminal ships don't exist", "time runs backwards for superluminal observers", or "iceberg-melting headlights don't exist". Abhimanyu Pallavi SudhirAbhimanyu Pallavi Sudhir $\begingroup$ They are not cost-effective, apparently. $\endgroup$ – JdeBP Feb 20 at 11:44 $\begingroup$ @JdeBP Funny! What search term did you use to uncover that? $\endgroup$ – Abhimanyu Pallavi Sudhir Feb 20 at 18:03 $\begingroup$ Not sure if this is where he started, but it's related, and it's what I thought of. $\endgroup$ – Ross Presser Feb 20 at 20:39 Not the answer you're looking for? Browse other questions tagged special-relativity speed-of-light inertial-frames faster-than-light history or ask your own question. Deriving the Lorentz Transformation Can something travel faster than light if it has always been travelling faster than light? Faster than light possibility? Does this* refraction experiment correctly conclude faster than light travel? What is the relative speed of two near-light speed particles headed towards each other? How is it possible for the wavelength of light to change in a medium? What's wrong with using the 4-current to find the potential of a moving particle? How to achieve speeds faster than light? can light go faster than light speed? Where is the scientific evidence to support the assertion that nothing can travel faster than light in vacuum?
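A small numerical illustration of the ordering argument in the last answer (my own addition, with units in which c = 1 and arbitrary numbers): for a superluminal trip, whether the ship's departure precedes its arrival depends on the observer.

```python
# Frame-dependence of event order for a superluminal trip (units with c = 1).
# Event A: ship leaves at (t, x) = (0, 0); event B: ship reaches the iceberg
# at (t, x) = (d/u, d) with speed u > 1, so A and B are spacelike separated.
import math

def boosted_dt(dt, dx, v):
    """Time separation of the two events in a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

u, d = 2.0, 1.0            # ship speed 2c, distance 1 (arbitrary units)
dt, dx = d / u, d          # separation between departure and arrival

for v in (0.0, 0.3, 0.6, 0.9):
    print(v, boosted_dt(dt, dx, v))
# For v > c**2/u = 0.5 the sign flips: those observers see the ship arrive
# at the iceberg before it departs, which is why superluminal motion plus
# the relativity principle leads to causal trouble.
```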
Find $a,b,c$ so that $\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1\\ a & b & c \end{bmatrix} $ has the characteristic polynomial $-\lambda^3+4\lambda^2+5\lambda+6=0$ Linear Algebra Matrices Algebra Mathtutor You should offer some more for this question. Let's compute the characteristic equation \[0=\det \begin{bmatrix} -\lambda & 1 & 0 \\ 0 & -\lambda & 1 \\ a & b & c-\lambda \end{bmatrix} \] \[=-\lambda \det \begin{bmatrix} -\lambda & 1 \\ b & c-\lambda \end{bmatrix}-1 \det \begin{bmatrix} 0 & 1 \\ a & c-\lambda \end{bmatrix} \] \[=-\lambda [\lambda (\lambda-c)-b]-[-a]=-\lambda^3+c\lambda^2+b\lambda+a\] \[=-\lambda^3+4\lambda^2+5\lambda+6.\] Matching coefficients gives \[a=6, \quad b=5, \quad c=4.\]
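A quick machine check of the result above; this snippet is illustrative, assumes SymPy is available, and simply recomputes the characteristic polynomial symbolically.

```python
import sympy as sp

lam, a, b, c = sp.symbols('lambda a b c')
M = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [a, b, c]])

# Characteristic polynomial det(M - lambda*I) = -lambda**3 + c*lambda**2 + b*lambda + a
char_poly = sp.expand((M - lam * sp.eye(3)).det())
print(char_poly)

# Substituting a = 6, b = 5, c = 4 reproduces -lambda**3 + 4*lambda**2 + 5*lambda + 6
print(sp.expand(char_poly.subs({a: 6, b: 5, c: 4})))
```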
OSA Publishing > Applied Optics > Volume 59 > Issue 34 > Page 10808 Gisele Bennett, Editor-in-Chief Simple and compact diode laser system stabilized to Doppler-broadened iodine lines at 633 nm F. Krause, E. Benkler, C. Nölleke, P. Leisching, and U. Sterr F. Krause,1,* E. Benkler,1 C. Nölleke,2 P. Leisching,2 and U. Sterr1 1Physikalisch-Technische Bundesanstalt, Bundesalle 100, 38116 Braunschweig, Germany 2TOPTICA Photonics AG, Lochhamer Schlag 19, 82166 Gräfelfing, Germany *Corresponding author: [email protected] E. Benkler https://orcid.org/0000-0001-7907-849X U. Sterr https://orcid.org/0000-0001-5661-769X F Krause E Benkler C Nölleke P Leisching U Sterr pp. 10808-10812 •https://doi.org/10.1364/AO.409308 F. Krause, E. Benkler, C. Nölleke, P. Leisching, and U. Sterr, "Simple and compact diode laser system stabilized to Doppler-broadened iodine lines at 633 nm," Appl. Opt. 59, 10808-10812 (2020) Tunable extended-cavity diode laser stabilized on iodine at λ = 633 nm (AO) Frequency stabilization of the 1064-nm Nd:YAG lasers to Doppler-broadened lines of iodine (AO) High-frequency-stability diode-pumped Nd:YAG lasers with the FM sidebands method and Doppler-free iodine lines at 532 nm (AO) Distributed feedback lasers Frequency combs Frequency modulated lasers Single mode fibers Original Manuscript: September 7, 2020 Revised Manuscript: November 2, 2020 Manuscript Accepted: November 2, 2020 EXPERIMENTAL SETUP We present a compact iodine-stabilized laser system at 633 nm, based on a distributed-feedback laser diode. Within a footprint of $27 \times 15\,\,{\rm{cm}}^2$, the system provides 5 mW of frequency-stabilized light from a single-mode fiber. Its performance was evaluated in comparison to Cs clocks representing primary frequency standards, realizing the SI unit Hz via an optical frequency comb. With the best suited absorption line, the laser reaches a fractional frequency instability below ${10^{- 10}}$ for averaging times above 10 s. The performance was investigated at several iodine lines, and a model was developed to describe the observed stability on the different lines. Due to their simplicity and reliability helium–neon (He–Ne) lasers at a wavelength of 633 nm are used widely for interferometric length measurements [1] and metrology applications [2]. Without any additional reference, Zeeman-stabilized and two-mode frequency-stabilized He–Ne lasers have shown instabilities of $2 \cdot {10^{- 11}}$ and $3 \cdot {10^{- 10}}$, respectively, at averaging times of 1000 s, and frequency drifts of about $2 \cdot {10^{- 8}}$ over several months [3,4] with typical output powers of less than a milliwatt. These properties well meet the requirements of commercial laser interferometers. He–Ne lasers with internal iodine cells stabilized to Doppler-free molecular hyperfine lines of iodine achieve an instability down to $1 \cdot {10^{- 13}}$ at 1000 s averaging time, and an absolute uncertainty of $2.1 \cdot {10^{- 11}}$ [5]. The stabilization of these systems is more demanding, and they are used mostly for calibration. However, He–Ne lasers require relatively large volumes even at low output powers, have a low power efficiency, and do not offer the possibility for wide-bandwidth frequency tuning. Furthermore, the technical know-how for building and maintaining He–Ne lasers is vanishing, and hence alternative techniques in this wavelength range are needed. 
Stabilizing a diode laser (DL) to a molecular or atomic reference is a promising substitute for He–Ne lasers, as this eliminates their drawbacks [6]. A narrow linewidth 633 nm DL stabilized to Doppler broadened iodine absorption lines has reached an instability of $1 \cdot {10^{- 9}}$ at 1000 s averaging time as evaluated with a wavelength meter. This laboratory setup employs an iodine cell with a length of 30 cm. Because of a low pressure (14°C saturation temperature), a relatively long effective interaction length of 90 cm has to be used for the spectroscopy [7]. Here we present a simple and compact shoe-box-sized DL system at 633 nm with fiber output, as direct one-to-one replacement of stabilized He–Ne lasers in industrial applications, such as interferometric length measuring systems or laser trackers [8]. As these systems employ specific optical components, the operational wavelength of 633 nm is a mandatory requirement. This rules out the use of diode-pumped solid state (DPSS) lasers or robust DLs stabilized to rubidium (Rb) at 780 nm. So far, available narrow-linewidth DLs at 633 nm have been complex extended cavity DLs (ECDLs) that are not robust enough for industrial application. In our system, a robust distributed-feedback (DFB) laser diode is stabilized to Doppler-broadened iodine $({^{127}{{\rm{I}}_2}})$ absorption lines, and the system includes a double-stage isolator and a fiber coupling (Fig. 1). To achieve a compact design, Doppler-free saturation spectroscopy is not used because a more complicated setup with a larger cell and higher optical power to saturate the weak molecular transition would be required [9]. The 633 nm DL system presented here with a relatively small iodine cell (3.3 cm length) reaches a fractional frequency instability of $1.9 \cdot {10^{- 11}}$ at 1000 s averaging time. The stability of the laser system stabilized to different iodine lines was evaluated using an optical frequency comb. Fig. 1. Picture of the diode laser system with DFB laser diode (LD), isolator, beam splitters (BS), photodetectors (PD), and iodine cell inside a temperature controlled environment. To the left, a fiber coupler is attached to the housing (not shown here). 2. EXPERIMENTAL SETUP A. Stabilized Laser System Figure 1 shows the system in its $27\,\,{\rm{cm}} \times 15\,\,{\rm{cm}}$ housing. Light emitted from the DFB laser diode first passes an isolator to prevent perturbations by reflections back into the laser diode. Behind the isolator, the light is split by a beam splitter (BS), and the main beam is coupled to a single-mode fiber by a fiber coupler mounted to the housing. A small part of the light is used for spectroscopy, which is further split by a second BS. The reflected part is sent through the iodine cell onto a photo diode (PD), while the transmitted beam is monitored by a second PD to provide a reference signal for normalization of the absorption spectrum. To achieve strong absorption, the 3.3 cm long iodine cell (with purity according to the manufacturer ${\gt}{{98}}\%$) was heated to a temperature of 60°C. This temperature is a good trade-off between strong absorption (of approximately 50%) for the strongest lines and small heating power. Using published iodine vapor pressure data [10] interpolated by the Antoine equation [11], we estimate a saturated iodine vapor pressure of 616 Pa inside the cell. 
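As a rough illustration of how such an estimate can be reproduced, the Antoine interpolation is evaluated below; the coefficients are taken from a standard compilation (e.g., the NIST WebBook) rather than from the paper, so the exact number should be treated as indicative only.

```python
# Antoine equation: log10(p / bar) = A - B / (T/K + C)
# Illustrative coefficients for molecular iodine (not quoted in the paper).
A, B, C = 3.36429, 1039.159, -146.589

def iodine_vapor_pressure_pa(temp_k):
    """Saturated iodine vapor pressure in Pa from the Antoine equation."""
    return 10.0 ** (A - B / (temp_k + C)) * 1.0e5

print(iodine_vapor_pressure_pa(333.15))   # cell at 60 deg C -> roughly 6e2 Pa,
                                          # consistent with the 616 Pa quoted above
```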
By varying the diode temperature, the optical frequency can be tuned over a range of $\Delta \nu = 245\,\,{\rm{GHz}}$ without mode hops to scan the iodine spectrum (Fig. 2). The laser current has a much smaller impact on the laser frequency of about 1 GHz per 1 mA with significant power variation [12]. However, due to its high actuator bandwidth, the current is used in the lock to iodine to correct for fast laser frequency fluctuations. Fig. 2. Measured normalized iodine transmission spectrum as a function of the laser diode temperature. The iodine lines used for frequency stabilization are marked in color. Depending on the diode temperature, the power at the fiber output is between 4.5 and 6.5 mW. To stabilize the laser frequency to a peak of an absorption line via a 1f-lock-in technique, the laser current is modulated at a frequency ${f_{{\rm{mod}}}} = 21\,\,{\rm{kHz}}$. The corresponding peak-to-peak frequency modulation deviation of the emitted light was kept as small as $\Delta {\nu _{{\rm{mod}}}} = 5\,\,{\rm{MHz}}$, which is much smaller than the Doppler-broadened iodine linewidth. When the laser is used for interferometry, this modulation leads to a phase modulation of the interference signal, depending on the path difference. Hence, the interference contrast is reduced if the data acquisition averages over the modulation. If we allow for a maximum peak-to-peak phase modulation of $\pi$ (contrast reduced to the zeroth-order Bessel function ${J_0}(\pi /2) \approx 0.5$), this limits the path difference to ${L_{{\rm{coh,}}\pi}} \approx \frac{c}{{2\Delta {\nu _{{\rm{mod}}}}}} = 30\,\,{\rm{m}}$. Instead, if the data acquisition is fast enough to follow the modulation of the interference fringes, it will not limit the coherence length. The coherence length is then determined by the linewidth $\delta \nu$ of the laser diode, which is less than 1.5 MHz. Assuming a Lorentzian line profile, the coherence length is then ${L_{{\rm{coh}}}} = \frac{c}{{\pi \delta \nu}} \gt 63\,\,{\rm{m}}$ [13]. The system automatically scans the iodine spectrum, identifies the correct absorption line, and locks to that line within less than 5 min. After initial power-on, the system needs 10–15 min until all the parameters (especially the temperature of the iodine cell) are settled and the laser frequency is stabilized. B. Frequency Measurement The long-term instability and the absolute frequency $\nu$ of the laser system are characterized in comparison to two Cs fountain clocks via a hydrogen maser and an optical frequency comb (Fig. 3). The comb spectrum of the comb-generating Er:fiber-based fs-laser oscillator is centered around 1560 nm. After amplification in an Er-doped fiber amplifier (EDFA), the second-harmonic comb spectrum around a wavelength of 780 nm is generated [second-harmonic generation (SHG)], which in a nonlinear fiber (NLF) generates a super-continuum spanning 600–750 nm [14]. The fields of the super-continuum and the DFB-laser system (DL) are overlapped using a BS. With a volume Bragg grating (VBG), most of the comb lines besides those near the line of the DFB laser at 633 nm are filtered out to improve the signal-to-noise ratio (SNR) of the beat note detected with a PD. For band-pass filtering of the radiofrequency beat signal, tracking oscillators (TOs) are phase-locked to the signal. Thus, clean signals are provided to the inputs of frequency counters (CNTs) in $\Lambda$-averaging mode [15] (K + K FXE) for dead-time-free synchronous measurement and recording of the RF frequencies. 
To make sure that the center frequency of the beat signal is tracked correctly despite the frequency modulation of the laser, different TOs with slightly different, asymmetrically chosen parameters and two CNTs are used. The frequency difference between these two channels is in the range of a few kilohertz ($\Delta \nu /\nu \approx {10^{- 11}}$); thus, significant measurement errors due to cycle slips of the tracking filters can be excluded. The optical absolute frequency $\nu$ is calculated from the beat frequency ${f_{{\rm{beat}}}}$: (1)$$\nu = 2{f_{{\rm{CEO}}}} + n \cdot {f_{{\rm{rep}}}} + {f_{{\rm{beat}}}},$$ where ${f_{{\rm{CEO}}}}$ is the carrier–envelope offset frequency and ${f_{{\rm{rep}}}}$ the repetition rate. Both frequencies are also recorded by K + K FXE CNTs. All CNTs use a reference signal at frequency ${f_{{\rm{ref}}}} = 10\,\,{\rm{MHz}}$ from an active hydrogen maser. The H-maser is referenced to a Cs fountain clock, which is a primary frequency standard realizing the unit hertz. The mode number $n$ of the comb line and the sign of ${f_{{\rm{beat}}}}$ are determined from a rough frequency measurement with a wavelength meter having a few tens of megahertz uncertainty. Figure 4 shows the measured absolute frequency of the laser stabilized to the line R(74) 8-4. Fig. 3. Sketch of the frequency comb and experimental setup for optical frequency measurement of the diode laser light (DL) with Er-doped fiber amplifier (EDFA), second-harmonic generation (SHG), nonlinear fiber (NLF), beam splitter (BS), volume Bragg grating (VBG), tracking oscillators (TO), frequency counters (CNT), and photo diode (PD). Fig. 4. Absolute frequency of the diode laser system stabilized to R(74) 8-4 over several days averaged over 10 s (black) and 100 s (red), with an offset of ${\nu _0} = 473 099 403\,\,{\rm{MHz}}$ subtracted. C. Simulation of Doppler-Broadened Iodine Spectra Each absorption line in Fig. 2 consists of several hyperfine transitions with Voigt line shapes. Their Gaussian widths are given by the Doppler broadening ($\delta {\nu _{{\rm{DB}}}} = 388\,\,{\rm{MHz}}$), and their Lorentzian widths contain natural and collision broadening. In the investigated frequency range, the upper state lifetime of iodine $^{127}{{\rm{I}}_2}$ is in the range of 300–400 ns [16], leading to a natural line width of about 400–500 kHz. At our vapor pressure and temperature, collision broadening amounts to ($\delta {\nu _{{\rm{co}}}} = 76.6\,\,{\rm{MHz}}$) [17]. Compared to these broadening contributions, additional transit time broadening can be neglected. For our simulations, the Voigt profile was calculated as the real part of the Faddeeva function [18]. The absorption lines around 633 nm are between rovibrational levels in the $X$ and $B$ potentials of iodine $^{127}{{\rm{I}}_2}$ and consist of several hyperfine structure (HFS) lines in a range of about 1 GHz. The number of hyperfine lines depends on the rotational quantum number $J^{\prime \prime} $ of the molecular ground state. Transitions with an even $J^{\prime \prime} $ have 15 hyperfine components and with an odd $J^{\prime \prime} $ have 21, due to the required symmetry of the homonuclear $^{127}{{\rm{I}}_2}$ molecular wavefunction [19]. The frequency and the relative intensity of the iodine hyperfine transitions were calculated using the program IodineSpec [20,21]. 
This software is based on molecular potentials for the two electronic states involved and an interpolation for hyperfine splittings and achieves a standard ($1\sigma$) frequency uncertainty of 1.5 MHz. The transmission spectrum $T(\nu)$ of iodine in the frequency region of the laser system was simulated by summing the Voigt profiles of the individual hyperfine transitions. The relative intensities given by the program were scaled to match the simulated transmission to the experimental data (Fig. 2). A pressure shift of about ${-}{5.9}\;{\rm{MHz}}$ was included due to the high temperature of 333 K and corresponding vapor pressure of 616 Pa. The shift was obtained by scaling the pressure shift for a He–Ne laser stabilized to the R(127) 11-5 line of ${-}{9.5}\;{\rm{kHz/Pa}}$ at a temperature of 288 K (18 Pa) [22]. Figure 5 presents the simulated line shapes of the transmission spectra of line R(77) 8-4 (21 HFS components) and R(74) 8-4 (15 HFS components). The two lines illustrate the influence of the number of HFS components on the total profile. The profile with 15 HFS components is more sharply peaked and shows a smaller full width at half maximum (FWHM) compared to the one with 21 components. This behavior is typical for all lines in the tuning range of the DL. Fig. 5. Simulated transmission spectra of two Doppler-broadened iodine lines $^{127}{{\rm{I}}_2}$ with the frequencies of the HFS components (blue), measured center frequency ${\nu _{{c}}}$ of the laser system (red), and the FWHM. Table 1. Comparison of the Number of Hyperfine Transitions, Measured Center Frequency of the Laser ${\nu _{{c}}}$, Frequency Position of the Simulated Minimum ${\nu _{{\rm{sim}}}}$, Difference $\Delta \nu = {\nu _{{c}}} - {\nu _{{\rm{sim}}}}$, Modified Allan Deviation ${\sigma _y}(\tau)$ at $\tau = 128\,{\rm\,{s}}$, Measured Minimum Transmission ${T_{{\min}}}$, and Calculated Curvature ${\kappa _{{\rm{sim}}}}$ for the Four Investigated Iodine Lines 3. EXPERIMENTAL RESULTS To investigate the frequency-stabilized laser system, Doppler-broadened lines with 15 hyperfine components (P(54) 6-3, R(74) 8-4) and lines with 21 components (R(59) 6-3, R(77) 8-4) were used as reference. For both kinds of lines, we have chosen a strong line with a minimum transmission ${T_0} \approx 0.66$ and a weaker line with ${T_0} \approx 0.76$. Stabilized on line R(74) 8-4, the optical frequency was measured over several days. For the other lines, the measurements lasted about 2 h. For each of these iodine lines, Table 1 shows the number of HFS components, the measured center frequency ${\nu _c}$ calculated from the beat measurements data, the frequency position of the simulated minimum ${\nu _{{\rm{sim}}}}$, and the difference $\Delta \nu = ({\nu _{\rm{c}}} - {\nu _{{\rm{sim}}}})$ between simulation and experimental results. Compared to the line width of around 850 MHz, the residual frequency differences smaller than 10 MHz represent a good agreement. Figure 6 shows the modified Allan deviation mod ${\sigma _y}(\tau)$ of the measured DL frequency as a function of the averaging time $\tau$ for the four investigated lines. At short averaging times, the instability decreases proportionally to $1/\sqrt \tau$ as expected for white frequency noise. However, at longer averaging times $\tau \gt 100 \ldots 1000\,\,{\rm{s}}$, it starts to rise again. The short-term instability can be explained by the SNR of the error signal. 
To generate the 1f-error signal, the laser frequency is modulated with rms amplitude $\Delta \nu _{{\rm{mod}}}^{{\rm{rms}}}$ near the tip of the peak. In the neighborhood of the transmission minimum ${T_0}$ at frequency ${\nu _0}$, the transmission can be approximated as $T(\nu) = {T_0} + {\kappa _{{\rm{sim}}}}/2\times(\nu - {\nu _0}{)^2}$. The error signal is given as the 1f-rms component of the corresponding photocurrent ${I_s}$: (2)$${I_s}(\nu) = \frac{{\eta {P_0}e}}{{h\nu}}{\kappa _{{\rm{sim}}}}(\nu - {\nu _0})\Delta \nu _{{\rm{mod}}}^{{\rm{rms}}}.$$ Thus, the slope $D = {{\rm{d}}{I_s}/{{\rm{d}}\nu}}$ of the error signal is proportional to the curvature ${\kappa _{{\rm{sim}}}}$ at the peak: (3)$$D = {\kappa _{{\rm{sim}}}}\frac{{\eta {P_0}e}}{{h\nu}}\Delta \nu _{{\rm{mod}}}^{{\rm{rms}}}.$$ Here ${P_0}$ denotes the power at the input of the iodine cell and $\eta$ the detector quantum efficiency. Using the slope $D$ and the noise spectral power density ${S_I}$ of the photocurrent, the short-term instability for a laser frequency ${\nu _{\rm{L}}}$ can be estimated [23] as (4)$${\sigma _y}(\tau) = \frac{{\sqrt {{S_I}}}}{{\sqrt 2 D {\nu _{\rm{L}}}}}{\tau ^{- 1/2}}.$$ The stability is thus inversely proportional to the transmission curvature of the absorption peak. The width close to the peak is smaller for 15 HFS lines compared to the peak with 21 lines, and therefore their curvature is increased by approximately a factor of two. For all investigated lines, the curvature ${\kappa _{{\rm{sim}}}}$ is calculated from the simulated line shapes. Line R(59) has the smallest peak curvature with ${\kappa _{{\rm{sim}}}} = 1.09\,\,{\rm{GH}}{{\rm{z}}^{- {{2}}}}$ followed by lines R(77) (${\kappa _{{\rm{sim}}}} = 1.77\,\,{\rm{GH}}{{\rm{z}}^{- {{2}}}}$), P(54) (${\kappa _{{\rm{sim}}}} = 1.96\,\,{\rm{GH}}{{\rm{z}}^{- {{2}}}}$), and R(74) (${\kappa _{{\rm{sim}}}} = 2.83\,\,{\rm{GH}}{{\rm{z}}^{- {{2}}}}$). This order is also visible in the modified Allan deviation at intermediate averaging times (Fig. 6). The measured short term instability of the line R(74) (${\sigma _y}(1 {\rm{s}}) = 2 \cdot {10^{- 10}}$) can be compared with the fundamental limit due to photon shot noise of the detected light. With an incident power of ${P_0}{T_0} = 58\,\,\unicode{x00B5}{\rm{W}}$ at the photodetector, the photocurrent shot noise amounts to ${S_I} = \frac{{2{e^2}\eta {T_0}{P_0}}}{{h\nu}}$, resulting in the shot-noise-limited instability of ${\sigma _y}(\tau) = 2.6 \cdot {10^{- 11}} {(\tau /{\rm{s}})^{- 1/2}}$. This instability is a magnitude smaller than the measured Allan deviation at 1 s. Measuring the relative intensity noise spectrum of the laser diode, we discovered that in the frequency range near the modulation frequency, electronic noise of the detector is about a factor of 10 above the shot noise. With this noise, an instability of ${\sigma _y}(\tau) = 1.4 \cdot {10^{- 10}} {(\tau /{\rm{s}})^{- 1/2}}$ would be reached, which is in good agreement with the measured short-term instability. Fig. 6. Modified Allan deviation mod ${\sigma _y}(\tau)$ of the laser system stabilized to four different Doppler-broadened iodine lines. The data for the two-mode He–Ne are taken from [4] and for the Zeeman-stabilized He–Ne from [3]. The fact that the observed frequency instability is determined by the lock to iodine is further supported by analyzing the free running laser fractional frequency noise. 
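As a rough numerical cross-check of the shot-noise estimate quoted above (the analysis of the free-running laser noise continues below), the following sketch evaluates Eqs. (2)-(4) with the numbers given in the text; the detector quantum efficiency and the rms modulation deviation are not stated in this excerpt and are assumed for illustration.

```python
import math

h = 6.62607015e-34        # Planck constant, J s
e = 1.602176634e-19       # elementary charge, C

# Values quoted in the text for line R(74) 8-4
nu_L  = 473.0994e12       # laser frequency, Hz
kappa = 2.83e-18          # peak curvature kappa_sim, Hz^-2 (2.83 GHz^-2)
P0T0  = 58e-6             # detected power P0*T0, W
T0    = 0.66              # minimum transmission of the strong lines
P0    = P0T0 / T0         # power at the cell input, W

# Assumed (not stated in the excerpt): quantum efficiency and the rms deviation
# of the 5 MHz peak-to-peak sinusoidal frequency modulation.
eta     = 0.8
dnu_rms = 5e6 / (2.0 * math.sqrt(2.0))

D   = kappa * (eta * P0 * e / (h * nu_L)) * dnu_rms      # error-signal slope, Eq. (3)
S_I = 2.0 * e**2 * eta * T0 * P0 / (h * nu_L)            # photocurrent shot noise

sigma_y_1s = math.sqrt(S_I) / (math.sqrt(2.0) * D * nu_L)  # Eq. (4) at tau = 1 s
print(sigma_y_1s)   # a few 1e-11, the same order as the 2.6e-11 quoted in the text
```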
We observe that the power spectral density of the free running laser frequency fluctuations ${S_y}$ for small frequencies ($f \lt 10\,\,{\rm{kHz}}$) shows flicker noise behavior (${S_y}\!(f) = 4.5 \cdot {10^{- 19}}{f^{- 1}}$). This leads to a constant modified Allan deviation for the unstabilized laser ${\sigma _y}\!(\tau) = 6.5 \cdot {10^{- 10}}$ [24] at averaging times $\tau$ below 1 s. The measured short-term instability of the DL stabilized to line R(74) is below this value, which indicates that the short-term stability is limited by the stabilization to the iodine vapor cell. Much stronger variations between the lines are seen in the long-term stability. Stabilized to line R(74) 8-4 (red) that shows the highest SNR, the laser system achieves the best frequency stability of all compared lines with a modified Allan deviation of ${\sigma _y} = 2.0 \cdot {10^{- 10}}$ at an averaging time $\tau = 1\,\,{\rm{s}}$ and ${\sigma _y} = 1.9 \cdot {10^{- 11}}$ at $\tau = 1000\,\,{\rm{s}}$. When stabilized to the second line with 15 HFS components, the Allan deviation rises at $\tau = (600\,\,{\rm{s}}$ –$1500\,\,{\rm{s}})$, while on the lines with 21 HFS, it rises already at $\tau = (100\,\,{\rm{s}}$ –$\;200\,\,{\rm{s}})$. We attribute this behavior to the different susceptibilities of these lines to environmental perturbations. Datasets shown in the figures in this paper are available (see Ref. [25]). We have presented a compact, iodine stabilized DL system at 633 nm with relative frequency instability below ${10^{- 10}}$ ($2 \cdot {10^{- 11}}$ at $\tau = 1000\,\,{\rm{s}}$), which is competitive to Zeeman- and two-mode-stabilized He–Ne lasers. In addition, we have shown that the laser can be tuned over a wide frequency range so that a large number of possible Doppler-broadened iodine lines can be used. The absolute frequency and the observed behavior of the stability was in good agreement with simulations based on molecular potentials of iodine. We found a significant dependence of the stability on the hyperfine structure of the Doppler-broadened absorption lines that were used for stabilization. Because of its small size, lower electrical power consumption, and high optical output power, such stabilized DL systems using an external iodine cell can become a valuable alternative to Zeeman- or two-mode-stabilized He–Ne lasers at 633 nm. Bundesministerium für Bildung und Forschung (FKZ 13N13954 (FinDLiNG)); European Metrology Programme for Innovation and Research (17FUN03 (USOQS)). Part of this work has received funding in the 17FUN03 USOQS project from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. CN, PL: TOPTICA Photonics (E). 1. G. Jäger, E. Manske, T. Hausotte, A. Müller, and F. Balzer, "Nanopositioning and nanomeasuring machine NPMM-200—a new powerful tool for large-range micro- and nanotechnology," Surf. Topogr. Metrol. Prop. 4, 034004 (2016). [CrossRef] 2. T. J. Quinn, "Practical realization of the definition of the metre, including recommended radiations of other optical frequency standards (2001)," Metrologia 40, 103–133 (2003). [CrossRef] 3. W. R. C. Rowley, "The performance of a longitudinal Zeeman-stabilised He-Ne laser (633 nm) with thermal modulation and control," Meas. Sci. Technol. 1, 348–351 (1990). [CrossRef] 4. P. E. Ciddor and R. M. Duffy, "Two-mode frequency-stabilised He-Ne (633 nm) lasers: studies of short- and long-term stability," J. Phys. E 16, 1223–1227 (1983). [CrossRef] 5. 
BIPM, "Recommended values of standard frequencies," online at https://www.bipm.org/en/publications/mises-en-pratique/standard-frequencies.html (page last updated: 30 November 2018). 6. W. Gawlik and J. Zachorowski, "Stabilization of diode-laser frequency to atomic transitions," Opt. Appl. 34, 607–618 (2004). 7. S. Rerucha, A. Yacoot, T. M. Pham, M. Cizek, V. Hucl, J. Lazar, and O. Cip, "Laser source for dimensional metrology: investigation of an iodine stabilized system based on narrow linewidth 633 nm DBR diode," Meas. Sci. Technol. 28, 045204 (2017). [CrossRef] 8. B. Muralikrishnan, S. Phillips, and D. Sawyer, "Laser trackers for large-scale dimensional metrology: a review," Precis. Eng. 44, 13–28 (2016). [CrossRef] 9. H. Talvitie, M. Merimaa, and E. Ikonen, "Frequency stabilization of a diode laser to Doppler-free spectrum of molecular iodine at 633 nm," Opt. Commun. 152, 182–188 (1998). [CrossRef] 10. R. Honig and H. Hook, Vapor Pressure Data for Some Common Gases (RCA Review, 1960), Vol. 21, pp. 360–368. 11. G. W. Thomson, "The Antoine equation for vapor-pressure data," Chem. Rev. 38, 1–39 (1946). [CrossRef] 12. C. Nölleke, P. Leisching, G. Blume, D. Jedrzejczyk, J. Pohl, D. Feise, A. Sahm, and K. Paschke, "Frequency locking of compact laser-diode modules at 633 nm," Proc. SPIE 10539, 1053907 (2018). [CrossRef] 13. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (Wiley, 1991). 14. R. Holzwarth, T. Udem, T. W. Hänsch, J. C. Knight, W. J. Wadsworth, and P. St.J.Russell, "Optical frequency synthesizer for precision spectroscopy," Phys. Rev. Lett. 85, 2264–2267 (2000). [CrossRef] 15. E. Benkler, C. Lisdat, and U. Sterr, "On the relation between uncertainties of weighted frequency averages and the various types of Allan deviations," Metrologia 52, 565–574 (2015). [CrossRef] 16. K. C. Shotton and G. D. Chapman, "Lifetimes of 127I2 molecules excited by the 632.8 nm He/Ne laser," J. Chem. Phys. 56, 1012–1013 (1972). [CrossRef] 17. A. Brillet and P. Cerez, "Quantitative description of the saturated absorption signal in iodine stabilized He-Ne lasers," Metrologia 13, 137–139 (1977). [CrossRef] 18. A. K. Hui, B. H. Armstrong, and A. A. Wray, "Rapid computation of the Voigt and complex error functions," J. Quant. Spectrosc. Radiat. Transfer 19, 509–516 (1978). [CrossRef] 19. M. Kroll, "Hyperfine structure in the visible molecular-iodine absorption spectrum," Phys. Rev. Lett. 23, 631–633 (1969). [CrossRef] 20. B. Bodermann, H. Knöckel, and E. Tiemann, "Widely usable interpolation formulae for hyperfine splittings in the 127I2 spectrum," Eur. Phys. J. D 19, 31–44 (2002). [CrossRef] 21. H. Knöckel, B. Bodermann, and E. Tiemann, "High precision description of the rovibronic structure of the I2 B–X spectrum," Eur. Phys. J. D 28, 199–209 (2004). [CrossRef] 22. C. S. Edwards, G. P. Barwood, P. Gill, and W. R. C. Rowley, "A 633 nm iodine-stabilized diode-laser frequency standard," Metrologia 36, 41–45 (1999). [CrossRef] 23. C. Affolderbach and G. Mileti, "A compact laser head with high-frequency stability for Rb atomic clocks and optical instrumentation," Rev. Sci. Instrum. 76, 073108 (2005). [CrossRef] 24. S. T. Dawkins, J. J. McFerran, and A. N. Luiten, "Considerations on the measurement of the stability of oscillators with frequency counters," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 54, 918–925 (2007). [CrossRef] 25. F. Krause, E. Benkler, C. Nölleke, P. Leisching, and U. 
Sterr, "Additional data for the publication 'Simple and compact diode laser system stabilized to Doppler-broadened iodine lines at 633 nm'," PTB Open Access Repository, https://doi.org/10.7795/720.20201111.
CommonCrawl
Basic performance and future developments of BeiDou global navigation satellite system Yuanxi Yang ORCID: orcid.org/0000-0002-5553-948X1, Yue Mao1 & Bijiao Sun1 Satellite Navigation volume 1, Article number: 1 (2020) Cite this article The core performance elements of global navigation satellite system include availability, continuity, integrity and accuracy, all of which are particularly important for the developing BeiDou global navigation satellite system (BDS-3). This paper describes the basic performance of BDS-3 and suggests some methods to improve the positioning, navigation and timing (PNT) service. The precision of the BDS-3 post-processing orbit can reach centimeter level, the average satellite clock offset uncertainty of 18 medium circular orbit satellites is 1.55 ns and the average signal-in-space ranging error is approximately 0.474 m. The future possible improvements for the BeiDou navigation system are also discussed. It is suggested to increase the orbital inclination of the inclined geostationary orbit (IGSO) satellites to improve the PNT service in the Arctic region. The IGSO satellite can perform part of the geostationary orbit (GEO) satellite's functions to solve the southern occlusion problem of the GEO satellite service in the northern hemisphere (namely the "south wall effect"). The space-borne inertial navigation system could be used to realize continuous orbit determination during satellite maneuver. In addition, high-accuracy space-borne hydrogen clock or cesium clock can be used to maintain the time system in the autonomous navigation mode, and stability of spatial datum. Furthermore, the ionospheric delay correction model of BDS-3 for all signals should be unified to avoid user confusion and improve positioning accuracy. Finally, to overcome the vulnerability of satellite navigation system, the comprehensive and resilient PNT infrastructures are proposed for the future seamless PNT services. The BeiDou navigation satellite system (BDS) provides more services compared with other global navigation satellite systems (GNSSs). The BeiDou global navigation satellite system, (BDS-3), is the third step of China's satellite navigation system construction [1, 2]. In addition to the positioning, navigation and timing (PNT) service provided by all GNSSs, BDS-3 also provides regional message communication (1000 Chinese characters per time) and global short message communication (40 Chinese characters per time), global search and rescue service (SAR), regional precise point positioning (PPP) service, embedded satellite-based augmentation service (BDSBAS), and space environment monitoring function [3,4,5]. By the end of 2019, 28 BDS-3 satellites had been successfully launched, including 24 medium circular orbit (MEO) satellites, 3 inclined geostationary orbit (IGSO) satellites and 1 geostationary orbit (GEO) satellite. However, only 18 MEO satellites can provide services thus far, and other satellites are in test phase. The BDS-3 baseline system with 18 MEO satellites has begun providing initial services to global users on December 27, 2018, and at least five satellites are visible for global users. Many scholars have described the BDS-3 status and the main service function indexes [2,3,4,5,6], including the constellation design, service type, navigation signal system, space–time datum and orbit determination method of BDS-3 system, etc. 
The space–time datum, signal-in-space quality, accuracy of satellite broadcast ephemeris, the accuracy of the post-processing precise orbit, time synchronization accuracy, satellite clock offset accuracy and the basic PNT service performance of the BDS-3 baseline system have be calculated and analyzed [3,4,5, 7,8,9,10]. Because BDS-3 satellites are equipped with inter-satellite links (ISL), more research achievements have been made in the fields of ISL supported determination of satellite orbit and clock offset [3,4,5, 11,12,13]. According to the preliminary calculation results, the ranging accuracy of the ISLs of BDS-3 is about 4 cm; if the satellite orbits are determined using only regional station observations, the three-dimensional orbit accuracy of the overlapping arc is about 60 cm; with ISL measurements, the orbit accuracy is about 30 cm, the 24-h orbit prediction accuracy is also raised from 140 to 51 cm [3,4,5], and the radial accuracy can reach 10 cm, as evaluated through laser observations. The performance of BDS-3 presented by different scholars, using different data sources, are highly similar to each other. From the aspect of signal, the ratio bias of signal component effective power is better than 0.25 dB, the S curve bias is less than 0.3, the signal-in-space accuracy calculated with post-processing ephemeris and broadcast ephemeris is approximately 0.5 m (root mean square, RMS), and the signal-in-space continuity and availability are approximately 99.99% and 99.78%, respectively [7]. The timing accuracy is better than 19.1 ns (95%), which satisfies the system service performance specifications. Compared with BDS-2, BDS-3 exhibits significant improvement in system coverage, spatial signal accuracy, availability and continuity [2,3,4,5]. At the user's end, service improvements such as the BDS-3 signal design and optimal receiving mode [14], precision orbit determination [3,4,5, 9, 15], and precise point positioning technology [16] have been reported in relevant literature and will not be repeated herein. The main performance evaluation methods and formulae are given in the second section. The current performance of BDS-3 by the end of August 2019 is presented and analyzed in the third section. Under normal circumstances, the main performance indicators provided by BDS-3 can satisfy or surpass the design indicators, which are comparable to the service performance of other GNSSs. The possible improvement strategies for BDS-3 are presented in the fourth section from the aspects of system construction, satellite constellation, satellite payload, etc. Conclusions are presented in the final section of the paper. Performance evaluation methods The spatial signal accuracies of BDS-3 include broadcast orbit accuracy, broadcast clock offset accuracy, signal-in-space ranging error (SISRE), broadcast ionospheric delay model accuracy, etc. Specific evaluation methods are as follows: Broadcast orbit accuracy: the post precise satellite positions are taken as references, and the accuracy of the positions calculated by the broadcast ephemeris is the evaluation object. The root mean square error (RMSE) of the broadcast orbit is evaluated using the differences (errors of the broadcast orbit) of the broadcast and precise satellite orbits in the same coordinate system, i.e. China Geodetic Coordinate System 2000 (CGCS2000) [17] and the same time scale, i.e. BDS time (BDT) [18]. 
The orbit error can be obtained using the following formula [19,20,21,22] $$\Delta \vec{R} = \left( \vec{R}_{brd} + A \cdot PCO_{brd} \right) - \vec{R}_{pre}$$ where \(\Delta \vec{R}\) is the broadcast orbit error vector; \(\vec{R}_{brd}\) is the satellite position vector calculated by the broadcast ephemeris with the unit m; \(A\) denotes the transformation matrix from the satellite-fixed coordinate system to the earth-fixed system; \(PCO_{brd}\) is the correction from the satellite antenna phase center to the satellite mass center with the unit m; \(\vec{R}_{pre}\) is the precise satellite position vector calculated by the precise ephemeris with the unit m. The broadcast orbit accuracy over a specific period can then be obtained from the statistics of these satellite position differences. Broadcast clock offset accuracy: the post precise satellite clock offsets are taken as the references, and the accuracy of the satellite clock offsets calculated by the broadcast ephemeris is the evaluation object. The RMS of the broadcast clock offset is evaluated using the differences between the broadcast and precise clock offsets. The broadcast clock offset is calculated using the following formula $$\Delta \tilde{c}_{k}^{t} = \Delta c_{k}^{t} - \frac{\sum_{k = 1}^{N} \Delta c_{k}^{t}}{N}$$ where \(\Delta \tilde{c}_{k}^{t}\) is the broadcast clock offset of satellite \(k\) after subtracting the time datum difference at the epoch \(t\); \(\Delta c_{k}^{t}\) is the difference between the broadcast and precise clock offsets after time group delay (TGD) and antenna phase center corrections; \(N\) denotes the number of satellites; \(k\) denotes the satellite number; \(t\) denotes the epoch time.
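Anticipating the TGD and antenna phase-centre corrections detailed in the next formula, the datum-removal and RMS step of Eq. (2) can be sketched as follows. The input array of broadcast-minus-precise clock differences is synthetic; a real evaluation would use BDS-3 broadcast and post-processed clock products.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sat, n_epoch = 18, 96                 # e.g. 18 MEO satellites, one day at 15-min epochs

# Synthetic broadcast-minus-precise clock differences (ns): a common time-datum
# offset per epoch plus satellite-specific errors (TGD/PCO assumed already applied).
datum   = rng.normal(0.0, 5.0, size=(1, n_epoch))
sat_err = rng.normal(0.0, 1.5, size=(n_sat, n_epoch))
dclk    = datum + sat_err               # Delta c_k^t

# Eq. (2): remove the time-datum difference by subtracting the epoch-wise constellation mean.
dclk_tilde = dclk - dclk.mean(axis=0, keepdims=True)

# Broadcast clock accuracy per satellite: RMS of the datum-free differences.
rms_per_sat = np.sqrt((dclk_tilde**2).mean(axis=1))
print("per-satellite clock RMS (ns):", np.round(rms_per_sat, 2))
print(f"constellation mean RMS      : {rms_per_sat.mean():.2f} ns")
```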
\(\Delta c_{k}^{t}\) is calculated using the following formula $$\Delta c_{k}^{t} = B_{k}^{t} - \frac{TGD_{m1} f_{m1}^{2} - TGD_{m2} f_{m2}^{2}}{f_{m1}^{2} - f_{m2}^{2}} - \frac{e_{k}^{pre} - e_{k}^{brd}}{c} - \bar{C}_{k}^{t}$$ where \(B_{k}^{t}\) is the clock offset of the \(k\)-th satellite calculated by the broadcast clock offset parameters at the epoch \(t\); \(\bar{C}_{k}^{t}\) is the precise clock offset of the \(k\)-th satellite at the epoch \(t\) with the unit \(s\); \(\frac{TGD_{m1} f_{m1}^{2} - TGD_{m2} f_{m2}^{2}}{f_{m1}^{2} - f_{m2}^{2}}\) is the TGD correction of the satellite clock offset obtained using the broadcast TGD parameters with the unit \(s\); \(TGD_{m1}\) is the time delay difference of on-board equipment corresponding to the \(f_{m1}\) frequency signal with the unit \(s\); \(f_{m1}\) and \(f_{m2}\) denote the signal carrier frequencies adopted in the dual-frequency ionosphere-free combination; \(\frac{e_{k}^{pre} - e_{k}^{brd}}{c}\) is the difference between the antenna phase center corrections of the broadcast and precise clock offsets with the unit \(s\); \(e_{k}^{pre}\) and \(e_{k}^{brd}\) denote the Z components of the satellite antenna phase center corrections adopted by the precise and broadcast clock offset products, respectively, with the unit m. The accuracy of the broadcast clock offset in the evaluation period can then be obtained through the statistical calculation of the clock offset errors. Signal-in-space ranging error (SISRE): the post orbit parameters are taken as the reference, and the evaluation object is the accuracy of the projection of the combined satellite orbit and clock offset errors onto the sightline between the satellite and the user. It is usually calculated using the differences of the broadcast and post-processed precise orbit parameters and the differences of the broadcast and post-processed precise clock offsets [23,24,25] $$SISRE = \sqrt{\left(\alpha \delta_{R} - c\delta_{T}\right)^{2} + \beta \left(\delta_{A}^{2} + \delta_{C}^{2}\right)}$$ where \(\delta_{R}\), \(\delta_{A}\) and \(\delta_{C}\) denote the errors of the broadcast orbit in the radial, tangential and normal components, respectively, with the unit m; \(\delta_{T}\) denotes the clock offset error with the unit s; \(\alpha\) denotes the contribution factor of the radial component error, and \(\beta\) denotes that of the tangential and normal errors. The contribution factors of different GNSS satellites are listed in Table 1. Table 1 Average contribution factors of satellites on SISRE (elevation mask angle: 5°) Broadcast ionospheric model accuracy: the post-processing high-precision grid ionospheric delay model is taken as the reference model, and the accuracy of the broadcast ionospheric zenith delay value is the evaluation object. In this paper, the accuracy of the BeiDou global broadcast ionospheric delay correction model (BDGIM) is evaluated [26]. Four indexes are used for evaluation, including average bias, standard deviation, RMSE and correction percentage.
The formulae are as follows [26, 27]: $$bias = \frac{{\mathop \sum \nolimits_{i = 0}^{N} \left( {vTEC_{model}^{i} - vTEC_{ref}^{i} } \right)}}{N}$$ $$std = \sqrt {\frac{{\mathop \sum \nolimits_{i = 0}^{N} \left( {vTEC_{model}^{i} - vTEC_{ref}^{i} - bias} \right)^{2} }}{N - 1}}$$ $$RMS = \sqrt {\frac{{\mathop \sum \nolimits_{i = 0}^{N} \left( {vTEC_{model}^{i} - vTEC_{ref}^{i} } \right)^{2} }}{N}}$$ $$\left\{ {\begin{array}{*{20}l} {per_{i} = \left( {1 - \frac{{\left| {vTEC_{model}^{i} - vTEC_{ref}^{i} } \right|}}{{vTEC_{ref}^{i} }}} \right) \cdot 100\% } \hfill \\ {per = \left( {\frac{{\mathop \sum \nolimits_{i = 1}^{{N_{{\left( {per_{i} \ge 0} \right)}} }} per_{i} }}{{N_{{\left( {per_{i} \ge 0} \right)}} }}} \right) \cdot 100\% ,\left( {1 - \frac{{N_{{\left( {per_{i} \ge 0} \right)}} }}{N}} \right) \cdot 100\% } \hfill \\ \end{array} } \right.$$ where \(i\) denotes the ith grid point; \(N\) is the total number of grid points; \( vTEC_{model} \) is the correction value calculated by the model at the grid point; \( vTEC_{ref} \) is the reference value. Latest performance of BDS-3 Broadcast satellite orbit accuracy With the continuous improvement of the BDS-3 basic constellation, the stability and connectivity of the ISL continue to increase, and the performance of the basic constellation improves gradually. The basic service performance of BDS-3 first depends on the performance of the satellite orbit and satellite clock. The performance of the BDS-3 post-processing precision orbit was estimated from the observations of international GNSS monitoring assessment service (iGMAS) stations deployed all over the world until July 2019. It indicated that the accuracies of the radial, tangential and normal components of the satellite orbit were 1.5, 5.7 and 4.1 cm respectively (seen in Fig. 1a), and the accuracies were 8.0, 34 and 37 cm, respectively, when the system started offering global service in December 2018. a Precise orbit accuracy of the BDS-3 basic constellation. b Broadcast orbit accuracy of the BDS-3 basic constellation As shown, the post-processing orbit accuracy of the BDS-3 satellites are increased by approximately five, six and nine times, respectively. The improvement in the normal accuracy is the most significant, as the ISL observation is more abundant and stable. The accuracy of the three components (radial, tangential and normal) of the broadcast orbit of the BDS-3 by referring to the post accurate orbits are 0.059, 0.323 and 0.343 m respectively (seen in Fig. 1b). Compared with the launch in December 2018, the broadcast orbit has also been significantly improved. Furthermore, with the significant improvement of post-processing orbit accuracy, the estimation reliability of broadcast ephemeris based on the post-processing precise orbit has also been increased accordingly. Broadcast satellite clock offset accuracy The uncertainty of the broadcast satellite clock offset is a main factor affecting the user PNT service. The accuracy of the satellite broadcast clock offset is evaluated using the differences of the broadcast satellite clock offsets and post-processing precise ones. The RMS are displayed in Fig. 2. Broadcast clock offset of the BDS-3 satellites As shown in the figure, the average uncertainty of satellite clock offset of 18 MEO satellites is 1.55 ns, but the satellite clock offsets of M02, M06 and M15 satellites are 3.068, 2.555 and 2.832 ns, respectively, which are larger and unstable. 
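These orbit and clock errors combine into the signal-in-space ranging error through the SISRE expression given in the evaluation-methods section. The sketch below evaluates that expression for synthetic errors whose standard deviations mirror the accuracies quoted above (roughly 6, 32 and 34 cm for the orbit components and 1.55 ns for the clock); the contribution factors α and β are set to values commonly used for MEO satellites and merely stand in for the entries of Table 1.

```python
import numpy as np

c = 299792458.0                    # speed of light (m/s)
alpha, beta = 0.98, 1.0 / 49.0     # MEO contribution factors (assumed, cf. Table 1)

rng = np.random.default_rng(1)
n  = 10000                         # number of satellite/epoch samples
dR = rng.normal(0.0, 0.06, n)      # radial orbit error (m)
dA = rng.normal(0.0, 0.32, n)      # along-track orbit error (m)
dC = rng.normal(0.0, 0.34, n)      # cross-track orbit error (m)
dT = rng.normal(0.0, 1.55e-9, n)   # clock error (s)

# SISRE per sample: sqrt((alpha*dR - c*dT)^2 + beta*(dA^2 + dC^2))
sisre = np.sqrt((alpha * dR - c * dT)**2 + beta * (dA**2 + dC**2))

print(f"RMS SISRE       : {np.sqrt((sisre**2).mean()):.3f} m")
print(f"95th percentile : {np.percentile(sisre, 95):.3f} m")
```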
Estimation of SISRE SISRE of the BDS-3 are estimated based on the post-processing ephemeris and broadcast ephemeris, which are shown in Fig. 3. SISRE of the BDS-3 satellites Figure 3 shows that the average SISRE is approximately 0.474 m, and all the SISREs are in agreement with the satellite clock offsets. The SISRE of the M02, M06 and M15 satellites are significantly greater than those of other satellites; when BDS-3 started offering global service in December 2018, the SISRE was approximately 0.7 m. Accuracy of ionospheric delay correction models BDS-3 can broadcast four civil signals, including B1C, B1I, B2a/B2b and B3I. Among them, the B1I and B3I signals are broadcast by both BDS-2 and BDS-3. Therefore, the civil signals of BDS-3 adopt two types of ionospheric models, namely BDSK8 and BDGIM [28]. The former is used for B1I and B3I which is consistent with those of the BDS-2, while the latter is used for the new signals of BDS-3, i.e. B1a, B1C, B2a and B2b. To analyze the ionospheric correction accuracy of these two models, we used the global ionospheric maps model constructed by the Center for Orbit Determination in Europe (CODG model) as a reference, and compared the nine-month (from January to September) average accuracies of Nequick, GPSK8, BDGIM and BDSK8 models. The results are shown in Fig. 4, where "high latitude" ranges from 55° to 85°, "mid latitude" from 30° to 55°, and "low latitudes" from 0° to 30°. Evaluation of the ionospheric model accuracy in half a year based on CODG model Figure 4 shows that the correction accuracy of BDGIM model is obviously better than that of all other models in the middle and low latitude zones; in the northern middle and low latitudes, the correction ratio of BDGIM is approximately 75%; in the southern middle latitude zones, the correction ratio is approximately 65%; in the high latitude zones, BDGIM is slightly worse than the Nequick model; the BDGIM performs better than the GPSK8 and BDSK8 model at any latitude. The accuracy evaluation details of BDGIM are shown in Table 2, and the correction percentage and data rejection percentage are listed in the last row of low, middle and high latitude columns. Table 2 Accuracy evaluation of BeiDou global ionospheric model Table 2 shows that the BDGIM maintains a good correction accuracy, regardless of the ionospheric effect. According to the performance evaluation of the BDS-3 orbit accuracy, satellite clock offset accuracy and SISRE, the basic functions and performance of BDS-3 satisfy the design requirements. Future possible development Although BDS-3 has developed many new designs and functions, such as satellite orbit design, service function innovation and payload improvements [3,4,5], there is still room for improvements in the future. The PNT service performance in high latitude area is poor. The BDS-3 constellation is composed of 24 MEO, 3 GEO and 3 IGSO satellites. The inclination of MEO and IGSO satellites is 55°, which indicates that the service performance of BDS-3 will be significantly degraded in high latitude area. First, all visible BDS-3 satellites relative to the users in the polar regions are in low-elevation angle. Calculations show that the maximum altitude angle of the satellites is generally less than 50° when the users are in regions of latitude greater than 85° [29]. 
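The geometric origin of this elevation limit is easy to check: for a spherical Earth, a satellite at geocentric angle ψ from the user is seen at elevation arctan((cos ψ − Re/r)/sin ψ), and for a user at the pole the smallest achievable ψ equals 90° minus the orbital inclination. The sketch below evaluates this for approximate MEO and IGSO orbital radii and for the inclination values considered in the following analysis; the polar-user and spherical-Earth assumptions are simplifications.

```python
import numpy as np

RE = 6378.0e3                            # Earth radius (m), spherical approximation
R_MEO, R_IGSO = 27878.0e3, 42164.0e3     # approximate BDS MEO and IGSO orbital radii (m)

def max_polar_elevation(inclination_deg, r_orbit):
    """Maximum elevation (deg) of a circular-orbit satellite as seen from the pole.

    The satellite culminates at a latitude equal to its inclination, so the smallest
    geocentric user-satellite angle is psi = 90 deg - inclination, and
    tan(elevation) = (cos(psi) - Re/r) / sin(psi).
    """
    psi = np.radians(90.0 - inclination_deg)
    return np.degrees(np.arctan2(np.cos(psi) - RE / r_orbit, np.sin(psi)))

for inc in (55.0, 65.0, 75.0):
    print(f"inclination {inc:4.0f} deg:  max elevation  "
          f"MEO {max_polar_elevation(inc, R_MEO):5.1f} deg,  "
          f"IGSO {max_polar_elevation(inc, R_IGSO):5.1f} deg")
```

With the current 55° inclination, both shells stay below 50° elevation for a polar user, consistent with the statement above; raising the IGSO inclination pushes the maximum elevation up accordingly.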
The low-elevation satellites will decrease the user ranging accuracy; next, the BDSK8 model is unavailable in the Arctic and Antarctic regions, and the accuracy of BDGIM ionospheric model is relatively low, which will degrade high-accuracy positioning. Furthermore, owing to the global climate change, sea ice in the polar regions of the Earth is melting rapidly. Especially after ice and snow in the Arctic has melted in the summer, the demand for PNT service in the Arctic region becomes more urgent considering the significant route value and rich resource reserve [29, 30]. To ensure the efficiency of all kinds of scientific research and the safety of transportation in the Arctic region, the satellite navigation service must be improved. For BDS-3, a reasonable and simplified method is to increase the inclination of IGSO satellites. Using the actual constellation of BDS-3, we analyzed the user's elevation angle variation according to changes in the orbit inclination of the IGSO satellites in high-latitude areas by adjusting the inclination of the IGSO constellation to 55°, 65° and 75°. Seven-day simulation data with a sampling interval of 10 min were used. To calculate the average dilution of precision (DOP), grids with the longitude interval of 2° and latitude interval of 1° were used, and the minimum altitude angle was set as 5°. The latitude of key test area ranged from 75° to 90°(N) whereas the longitude ranged from − 180° to 180°. Table 3 shows that with a 10° increase of the inclination angle of the IGSO satellites in the high latitude region, the average altitude angle of the visible satellite increases by approximately 5°, and the maximum altitude angle increases by approximately 12°. With the increase in satellite elevation angle, the number of visible satellites will increase, and the effect of ionosphere will be weakened. Table 3 Statistics of mean elevation angle with changes of the IGSO inclinations An obvious "south wall effect" exists in BDSPPP and regional short message communication (RSMC) services based on GEO satellites. For users in the northern hemisphere, GEO satellites are always located south of the user, and for those at higher latitudes, the BDSPPP and short message communication service will likely be interrupted once obstacles appear in the south. The "south wall effect" is particularly serious for users of PPP, as 20–30 min are required to obtain the converged results after the PPP service recovers from the GEO satellites. Furthermore, the repeated convergence will result in a significant reduction in the application efficiency of BDSPPP in cities with high-rise buildings. Similarly, users in southern hemisphere will experience the "north wall effect". To overcome the "south wall effect" of BDSPPP and RSMC provided by BeiDou GEO satellites, the most effective method is to use the BeiDou IGSO satellites to transmit rapid precise orbit and clock offset parameters, as well as the regional short message. IGSO and GEO satellites working together, can eliminate the "south wall effect" and effectively improve the featured service efficiency of BDS. Furthermore, if the inclination angle of IGSO satellites are increased and an appropriate ground reference station is built in the Arctic region, then the BDSPPP and RSMC services can be provided to users in the Arctic region. Some satellites' services are interrupted during the orbit maneuver of the satellites. Satellite orbit maneuver is inevitable. 
Almost every month, BeiDou GEO satellite must adjust its East-West orbit, and conduct a North-South orbit maneuver every half a year. IGSO and MEO satellites generally conduct an orbit maneuver every half a year. During the period of orbit adjustment, the satellite ephemeris cannot reflect the orbit maneuver quickly and is typically expressed as "unavailable" or "unhealthy". Furthermore, the PNT service of the corresponding satellite will be suspended. The duration is approximately 7 h. For IGSO and MEO satellites, owing to their large number, low frequencies of adjustment and the sole functions, the PNT service will not be significantly affected in general. However, the BDSPPP and RSMC service provided by the three GEO satellites of the BDS-3 will significantly affect many authorized users owing to the unavailability caused by GEO satellites' orbit maneuver. As we know, the maneuvering satellites cannot provide normal service because the satellite ephemeris cannot accurately reflect the actual positions of the satellites. If inertial navigation system (INS) equipment was installed on the satellite, the orbit change in the satellite maneuver could be measured during the satellite maneuver, and thus the orbit parameters of the moving satellites could be obtained. However, one of the main problems is that during the non-maneuverable period of the satellite, the on-board INS must be calibrated using a precise satellite orbit or an error correction model must be established. Once the satellite enters the maneuvering state, INS observations and corresponding corrections can be used to provide the satellite position during the maneuvering period, which will improve the availability of the maneuvering satellites. Datum drift and rotation of BeiDou satellite constellation will occur in autonomous navigation. If only ISL measurements are used in the autonomous navigation, then only the relative positions of constellation can be solidified. The overall drifts of the spatial datum and time datum cannot be determined or solidified, and neither can the overall rotation of the entire constellation. These types of systematic errors may not significantly affect high-accuracy differential positioning users; however, they will significantly affect the real-time navigation and timing service users. Regarding the drift and rotation of space and time reference of the BeiDou constellation in autonomous navigation, we may use high-accuracy and high stable hydrogen clocks or cesium clocks onboard as references, and then the autonomous time keeping of satellite clocks can be realized to reduce the time drift of the entire constellation. In addition, the onboard GNSS receiver can receive the signals of other GNSS satellites and determine the onboard satellite orbit; thus, the high-precision orbits of BeiDou satellites can be determined based on the joint adjustment with the autonomous orbit determination data. Therefore, the space and time references of the BeiDou satellite constellation can be maintained, and the datum drift and rotation of the autonomous orbit determination can be reduced. Inconsistency of the BDS-3 ionospheric model. As previously analyzed, the B1I and B3I signals of BDS-2 are maintained in BDS-3, and the corresponding ionospheric model adopts the BDSK8 model. However, new signals such as B1C, B2a and B2b use the BDGIM ionospheric correction model. 
Different models result in the inconsistent of accuracies, which not only causes confusion among terminal manufacturers and users, but also reduces the ranging accuracy of B1I and B3I. Regarding the inconsistency of the BDS-3 ionospheric models, using the BDGIM as the unified model of all signals of the BDS-3 can reduce the confusion and inconvenience in users and terminal manufacturers. Since the accuracy of BDGIM model [28] is higher than that of BDSK8, the user equivalent range error can be improved slightly after the unification. Considering that BDS-2 is close to being obsolete, and to not affect the development of the BDS-3 terminal and the PNT service performance of BDS-3 users, we suggest unifying the ionospheric models for BDS-3 the soonest possible time for reducing loss to users and receiver manufacturers. In addition, the ionospheric delay correction model in high-latitude areas should be refined. Vulnerability exists inherently in satellite navigation systems, such as the satellite constellation, the ground operational control system (OCS) and the signals. Once the core subsystem fails or malfunctions, e.g., power interruption, time system failure, and other core equipment failures, the PNT service may be interrupted. To resolve the vulnerability of the BDS constellation, OCS and signals, we can expand PNT information sources by building a comprehensive PNT system [31,32,33], and then use the resilient PNT theoretical framework [34] to realize the resilient integration of multiple PNT sensors, and build the resilient function model, the resilient stochastic model, and resilient data fusion methodology, with the aim of realizing a seamless PNT service from deep space to deep sea, and from outdoor to indoor. BDS-3 satisfied the requirements of design indexes in orbit determination accuracy, satellite clock accuracy, signal-in-space accuracy and PNT service performance. Additionally, the featured services such as BDSPPP, BDSBAS, regional message communication, and the global SAR function have wide application prospects. In the future, the inclination angle of IGSO satellites may be increased to improve the service performance of BDS in polar regions. The IGSO satellites may be designed for providing the RSMC and BDSPPP services to overcome the "south wall effects". The INS payloads are suggested to be added to various satellites, and thus the continuity and availability of satellite orbit parameters can be guaranteed in satellite maneuver. The high accuracy hydrogen clocks or cesium atomic clocks onboard might be used to control the time drift of satellite constellation during the autonomous navigation with the support of ISLs. The two commonly used BDS ionospheric models should be unified for all signals to reduce the users' confusion and improve the range accuracy for the B1I and B3I signals. The comprehensive and resilient PNT infrastructure should be established for the seamless PNT services. The datasets analyzed during the current study are available in the data repositories of Test and Assessment Research Center of China Satellite Navigation Office (www.csno-tarc.cn) and the Crustal Dynamics Data Information System of NASA (ftp://cddis.nasa.gov/gnss/products/ionex). Yang, Y. X., Tang, J., & Montenbruck, O. (2017). Chinese satellite navigation system. In P. Teunissen & O. Montenbruck (Eds.), Handbook of global navigation satellite system (pp. 273–304). New York: Springer. Yang, Y. X., Xu, Y., Li, J., & Yang, C. (2018). 
Progress and performance evaluation of BeiDou global navigation satellite system: Data analysis based on BDS-3 demonstration system. Science China Earth Sciences, 61(5), 614–624. Yang, Y. F., Yang, Y. X., Hu, X., Chen, J., Guo, R., Tang, C., et al. (2019). Inter-satellite link enhanced orbit determination for BeiDou-3. Navigation, 66(1), 1–16. https://doi.org/10.1017/S0373463319000523. Yang, Y. X., Gao, W., Guo, S., Mao, Y., & Yang, Y. (2019). Introduction to BeiDou-3 navigation satellite system. Navigation, 66(1), 7–18. Yang, Y. X., Yang, Y., Hu, X., Tang, C., Zhao, L., & Xu, J. (2019). Comparison and analysis of two orbit determination methods for BDS-3 satellites. Acta Geodaetica et Cartographica Sinica, 48(7), 831–839. https://doi.org/10.11947/j.AGCS.2019.20180560. Liu, L., & Zhang, T. (2019). Improved design of operational system in BDS-3. Navigation, 66(1), 37–47. Guo, S., Cai, H., Meng, Y., Geng, C., Jia, X., Mao, Y., et al. (2019). BDS-3 RNSS technical characteristics and service performance. Acta Geodaetica et Cartographica Sinica, 48(7), 820–821. https://doi.org/10.11947/j.AGCS.2019.20190091. Xie, X., Geng, T., Zhao, Q., Liu, J., & Wang, B. (2017). Performance of BDS-3: Measurement quality analysis, precise orbit and clock determination. Sensors, 17(6), 1233. https://doi.org/10.3390/s17061233. Xu, X., Wang, X., Liu, J., & Zhao, Q. (2019). Characteristics of BDS3 global service satellites: POD, open service signal and atomic clock performance. Remote Sensing, 11(13), 1559. https://doi.org/10.3390/rs11131559. Ren, X., Yang, Y., Zhu, J., & Xu, T. (2017). Orbit determination of the next-generation BeiDou satellites with intersatellite link measurements and a priori orbit constraints. Advances in Space Research, 60(10), 2155–2165. https://doi.org/10.1016/j.asr.2017.08.024. Ren, X., Yang, Y., Zhu, J., & Xu, T. (2019). Comparing satellite orbit determination by batch processing and extended Kalman filtering using inter-satellite link measurements of the next-generation BeiDou satellites. GPS Solutions, 23, 25. https://doi.org/10.1007/s10291-018-0816-9. Yang, Y. X., & Ren, X. (2018). The maintenance of space datum for autonomous satellite navigation. Geomatics and Information Science of Wuhan University, 43(12), 1780–1786. Lu, M., Li, W., Yao, Z., & Cui, X. (2019). Overview of BDS III new signals. Navigation, 66(1), 19–35. Wang, C., Zhao, Q., Guo, J., Liu, J., & Chen, G. (2019). The contribution of inter-satellite links to BDS-3 orbit determination: Model refinement and comparisons. Navigation, 66(1), 71–82. Li, X. (2018). Triple frequency PPP ambiguity resolution with BDS2 and BDS3 observations. In 31st International technical meeting of the satellite division of the institute of navigation (ION GNSS+ 2018) (pp. 3833–3858), Miami, Florida. Yang, Y. X. (2009). Chinese geodetic coordinate system 2000. Chinese Science Bulletin, 54(16), 2714–2721. Han, C., Yang, Y., & Cai, Z. (2011). BeiDou navigation satellite system and its time scales. Metrologia, 48(4), 1–6. China Satellite Navigation Office. (2016). BeiDou navigation satellite system signal in space interface control document: Open service signal (version 2.1). Retrieved October 20, 2019, from http://www.beidou.gov.cn/xt/gfxz/201805/P020180507527106075323.pdf. China Satellite Navigation Office. (2017). BeiDou navigation satellite system signal in space interface control document: Open service signal B1C (version 1.0). Retrieved October 20, 2019, from http://www.beidou.gov.cn/xt/gfxz/201712/P020171226741342013031.pdf. 
China Satellite Navigation Office. (2017). BeiDou navigation satellite system signal in space interface control document: Open service signal B2a (version 1.0). Retrieved October 20, 2019, from http://www.beidou.gov.cn/xt/gfxz/201712/P020171226742357364174.pdf. China Satellite Navigation Office. (2018). BeiDou navigation satellite system signal in space interface control document: Open service signal B3I (version 1.0). Retrieved October 20, 2019, from http://www.beidou.gov.cn/xt/gfxz/201802/P020180209623601401189.pdf. Hu, Z. (2013). BeiDou navigation satellite system performance assessment theory and experimental verification. Doctoral Dissertation, Wuhan University, Wuhan Spilker, J. J., Jr., Axelrad, P., Parkinson, B., & Enge, P. (1994). Global positioning system: Theory and application volume I (p. 1994). Stanford: American Institute of Aeronautics and Astronautics. U.S. Department of Defense. (2008). Global positioning system standard positioning service performance standard (4th ed). Retrieved September 18, 2008, fromhttps://www.gps.gov/technical/ps/2008-SPS-performance-standard.pdf. Wang, N., Yuan, Y., Li, Z., Li, M., & Huo, X. (2017). Performance analysis of different NeQuick ionospheric model parameters. Acta Geodaetic et Cartographica Sinica, 46(4), 421–429. Zhang, Q., Zhao, Q. L., Zhang, H. P., Hu, Z. G., & Wu, Y. (2014). Evaluation on the precision of Klobuchar model for BeiDou navigation satellite system. Geomatics and Information Science of Wuhan University, 39(2), 142–146. Yuan, Y., Wang, N., Li, Z., & Huo, X. (2019). The BeiDou globally broadcast ionospheric delay correction model (BDGIM) and its preliminary test. Navigation, 66(1), 55–69. Peng, H., Yang, Y., Wang, G., & He, H. (2016). Performance analysis of BDS satellite orbits during eclipse periods: Results of satellite laser ranging validation. Acta Geodaetica et Cartographica Sinica, 45(6), 639–645. https://doi.org/10.11947/j.AGCS.2016.20150637. Yang, Y. X., & Xu, J. (2016). Navigation performance of Beidou in polar area. Geomatics and Information Science of Wuhan University, 41(1), 15–20. Li, J., Yang, Y., He, H., & Guo, H. (2017). An analytical study on the carrier-phase linear combinations for triple-frequency GNSS. Journal of Geodesy, 91(2), 151–166. https://doi.org/10.1007/s00190-016-0945-2. Yang, Y. X. (2016). Concepts of comprehensive PNT and related key technologies. Acta Geodaetica et Cartographic Sinica, 45(5), 505–510. Yang, Y. X., & Li, X. (2016). Micro-PNT and comprehensive PNT. Acta Geodaetica et Cartographic Sinica, 45(10), 1249–1254. Yang, Y. X. (2018). Resilient PNT concept frame. Acta Geodaetica et Cartographica Sinica, 47(07), 893–898. We are immensely grateful to the BeiDou Navigation Satellite Project Development Team for years of hard work and all major breakthroughs they have achieved in a series of core technologies represented by constellation design, inter-satellite links and the expansion of featured service. We would also like to thank our colleagues from the State Key Laboratory of Geo-Information Engineering who provided insight and expertise that greatly assisted the research. This work was supported by the National Natural Science Foundation of China (Grant No. 41931076) and the National Key Technologies R&D Program of China (Grant No. 2016YFB0501700). 
State Key Laboratory of Geo-Information Engineering, Xi'an, 710054, China: Yuanxi Yang, Yue Mao & Bijiao Sun. YY proposed the idea and drafted the article; YM carried out the evaluation and assisted in data analysis; BS assisted in data collection and article revision. All authors read and approved the final manuscript. Correspondence to Yuanxi Yang. Yang, Y., Mao, Y. & Sun, B. Basic performance and future developments of BeiDou global navigation satellite system. Satell Navig 1, 1 (2020). https://doi.org/10.1186/s43020-019-0006-0
CommonCrawl
The deeper the better? A thermogeological analysis of medium-deep borehole heat exchangers in low-enthalpy crystalline rocks Kaiu Piipponen ORCID: orcid.org/0000-0002-2969-75211, Annu Martinkauppi2, Kimmo Korhonen1, Sami Vallin1, Teppo Arola1, Alan Bischoff1 & Nina Leppäharju1 Geothermal Energy volume 10, Article number: 12 (2022) Cite this article The energy sector is undergoing a fundamental transformation, with a significant investment in low-carbon technologies to replace fossil-based systems. In densely populated urban areas, deep boreholes offer an alternative over shallow geothermal systems, which demand extensive surface areas to attain large-scale heat production. This paper presents numerical calculations of the thermal energy that can be extracted from the medium-deep borehole heat exchangers in the low-enthalpy geothermal setting at depths ranging from 600 to 3000 m. We applied the thermogeological parameters of three locations across Finland and tested two types of coaxial borehole heat exchangers to understand better the variables that affect heat production in low-permeability crystalline rocks. For each depth, location, and heat collector type, we used a range of fluid flow rates to examine the correlation between thermal energy production and resulting outlet temperature. Our results indicate a trade-off between thermal energy production and outlet fluid temperature depending on the fluid flow rate, and that the vacuum-insulated tubing outperforms a high-density polyethylene pipe in energy and temperature production. In addition, the results suggest that the local thermogeological factors impact heat production. Maximum energy production from a 600-m-deep well achieved 170 MWh/a, increasing to 330 MWh/a from a 1000-m-deep well, 980 MWh/a from a 2-km-deep well, and up to 1880 MWh/a from a 3-km-deep well. We demonstrate that understanding the interplay of the local geology, heat exchanger materials, and fluid circulation rates is necessary to maximize the potential of medium-deep geothermal boreholes as a reliable long-term baseload energy source. The progressive displacement of fossil fuels by clean, affordable, and reliable energy sources requires implementation of low-carbon technologies that will meet large-scale commercial demands across all energy sectors. Renewable heat production at large scales is a challenging target for most cold climate countries, today accounting for only a minor proportion of the energy used for space heating worldwide (IEA, 2021). Whereas countries like Finland and Denmark can provide over 50% of their space heating needs from renewables, petroleum-based sources are still taking the most significant shares of heat sectors of many nations (IRENA, 2017). Unlike the geothermal production of electricity that requires high-temperature fluids for power generation, unconventional low-enthalpy geothermal resources can provide energy to meet space and district heating applications (Jolie et al., 2021). Areas with low heat flux density have generally been overlooked in the geothermal prospecting due to their poor potential, but locally their contribution to the heating sector can be significant. 
Such low-temperature areas are found for example in the stable continental Archean–Precambrian settings like the Baltic Shield (Kukkonen et al., 2003), Canadian Shield (Guillou et al., 1994; Rolandone et al., 2002), the Kalahari craton in South Africa (Rudnick and Nyblade, 1999), the West African craton (Chapman and Pollack, 1974), the Western Shield of Australia (Neumann et al., 2000) and in the Siberian Craton (Duchkov, 1991). In Finland, geothermal energy has been researched since the 1970s (Kukkonen, 2000), but the geothermal market in the Nordic countries has been dominated by the shallow geothermal energy systems (up to 300 m), mainly using borehole heat exchangers (BHEs) and ground-source heat pumps (GSHPs) to deliver space heating for single-family dwellings (Gehlin et al., 2016). In the last 10 years, the share of larger installations has been increasing, as more efficient BHEs and GSHP systems are supplying heat for offices, industrial buildings, and residential blocks (Statistics Finland, 2021). Whereas shallow geothermal wells are likely to take a good share of future heat spacing markets (IRENA, 2017), in the urban areas, the challenge of shallow geothermal heat production is the lack of available surface land area required by large BHE fields. Deeper BHEs are gaining the interest of energy companies because they use less land area and, with the use of heat pumps and the rapidly developing drilling technologies, may offer more extractable energy per area, with higher fluid temperature outcomes. To date, there are only a few medium-deep single well geothermal boreholes in crystalline rocks in Europe. Commercial examples include two 1.5-km-deep boreholes installed to keep Oslo airport in Norway ice-free (Kvalsvik et al., 2019), and a medium-deep borehole operating in Weggis, Switzerland, which produces 220 MWh/a of heat for a residential area (Kohl et al., 2002). The highest peak power recorded for any medium-deep borehole produced near 400 kW of heat in a test of a 2-km-deep borehole in Cornwall, UK (Collins and Law, 2014). In Finland, there is one pilot 1300-m-deep well using a coaxial heat collector and five new projects are in the drilling phase across the country (Arola and Wiberg, 2022). Conversely, some commercial failures have also been reported, such as the Aachen SuperC project in Germany, which experienced several technical issues with plastic pipes, resulting in a maximum recovery temperature of 35 ºC (Falcone et al., 2018). In the scientific literature, the terms medium-deep and deep borehole heat exchanger are used interchangeably, typically corresponding to an arbitrary depth of geothermal production. For example, in China and Central Europe, the depth limit for shallow borehole heat exchangers is defined as 200 m (e.g., Pan et al., 2020; Welsch, 2019), whereas in Northern Europe, conventional shallow borehole heat exchangers can reach depths of 400 m (Korhonen et al., 2019). For medium-deep geothermal boreholes, some authors have considered the depth of 3000 m (Chen et al., 2019; Pan et al., 2020), while others have suggested 1000 m as the deep-end boundary (Holmberg, 2016; Schulte, 2016; Welsch, 2019). Here, we use the term medium-deep to describe geothermal systems with a range of depths of 600–3000 m, based on our experience dealing with the emerging geothermal heat industry in Nordic countries. 
Irrespective of these depth markers, a common feature of medium-deep borehole technology is the integration of a coaxial heat collector to exchange heat from the host rock to the fluid in the borehole (Cai et al., 2019; Kohl et al., 2002, 2000; Pan et al., 2020). The BHEs are designed to extract geothermal energy from the host rock by circulating the working fluid in a well without necessarily extracting fluids from the enclosing rock formation (Renaud et al., 2019). BHE systems have been modelled analytically, semi-analytically and numerically over the past four decades, as early as Horne (1980) and Eskilson and Claesson (1988) and numerous models are reviewed by Li and Lai (2015) and Zhao et al. (2020). The advantage of the analytical models is their computational speed and that they are not limited by the softwares or licence expenses. Numerical models, on the other hand, can be easier modified to have more complex geometries and boundary conditions. Here, we investigate the effects of geological and engineering variables on the heat outcome of medium-deep geothermal boreholes. We apply a numerical finite element method to model how the interplay of thermogeological, climatological and engineering parameters affect the geothermal well performance. The input data rely on verified measurements and best practices of energy and drilling companies. We selected three locations in different parts of Finland to estimate how their geological and climatological parameters influence the productivity of geothermal energy wells at different depths. Our work aims to answer three fundamental questions that will leverage the exploration of geothermal systems in crystalline rocks in Finland and globally: (i) How medium-deep low-enthalpy geothermal energy production is affected by the underlying geological and climatological conditions? (ii) What are the outlet temperatures of medium-deep BHE systems with increasing drilling depths compared to the production of shallow BHEs systems? and (iii) Is there a relevant performance difference between heat collector types? Although the current drilling costs are slowing down the development of the medium-deep geothermal in crystalline rocks, our resulting models will increase the likelihood of locating and designing profitable systems, scaling-up the existing low-carbon solutions for the heating sector. Geological and geothermal setting Finland is located in the central Fennoscandian Shield, characterized by a cold and thick (150–250 km) lithosphere (Artemieva, 2019; Grad et al., 2014; Kukkonen et al., 2003) with a mean heat flux density of 42 ± 4 mWm−2 (Veikkolainen and Kukkonen, 2019). The age of the crystalline bedrock varies from Archean (3100–2500 Ma) to Proterozoic (2500–1200 Ma), comprising rocks formed by multiple tectonic plate collisions and continental terrain accretions (Nironen et al., 2017). The Precambrian bedrock of Finland is covered by a continuous, thin layer of glacial and postglacial sediment deposited during the Weichselian glacial stage and the Holocene, varying in thickness from a few metres to some tens of metres (Lahermo et al., 1990; Lunkka et al., 2004). Typically, the average thermal conductivity of crystalline rocks in Finland is around 3.2 W/(m∙K), depending mainly on their mineral composition (Peltoniemi and Kukkonen, 1995). 
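A classical example of the analytical models mentioned above is the infinite line-source solution, which converts exactly these rock properties into the temperature drawdown around a borehole. The sketch below evaluates it with the thermal conductivity, density and specific heat capacity quoted for Finnish crystalline rock, a constant specific heat-extraction rate of 30 W/m and a borehole-wall radius of 0.07 m; the last two values are illustrative assumptions, not taken from this study.

```python
import numpy as np
from scipy.special import exp1

LAM   = 3.2                # thermal conductivity of crystalline rock (W/(m K))
RHO_C = 2725.0 * 728.0     # volumetric heat capacity (J/(m^3 K)) = density x specific heat
ALPHA = LAM / RHO_C        # thermal diffusivity (m^2/s)

def line_source_drawdown(q_per_m, r, t_seconds, lam=LAM, alpha=ALPHA):
    """Temperature change (K) at radius r after time t for a constant line heat
    extraction rate q_per_m (W/m), infinite line-source solution."""
    return q_per_m / (4.0 * np.pi * lam) * exp1(r**2 / (4.0 * alpha * t_seconds))

year = 365.25 * 24 * 3600.0
for years in (0.1, 1.0, 10.0, 25.0):
    dT = line_source_drawdown(30.0, 0.07, years * year)   # assumed 30 W/m, r = 0.07 m
    print(f"after {years:4.1f} a: borehole-wall cooling = {dT:4.1f} K")
```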
The geothermal gradient of Finland varies between 8 and 17 K/km, and the mean annual ground temperature is controlled by climatological conditions, varying from +7 °C in Southern Finland to +1 °C in the northern parts of the country (Aalto et al., 2016). Our study focuses on three areas across Finland: (i) Vantaa, (ii) Jyväskylä, and (iii) Rovaniemi, aiming to obtain information on the potential of medium-deep geothermal heat production in different geothermal settings in southern, central, and northern Finland, respectively (Fig. 1). The Vantaa area comprises Paleo-to-Mesoproterozoic granitoids and high-grade metamorphic rocks, much of which has experienced intense partial melting (i.e., migmatization) during the Svecofennian Orogeny (Korsman et al., 1997). Granitic rocks in Vantaa often have a high percentage of microcline minerals, conferring a potassium-rich composition and consequently high radiogenic heat production properties. However, remnants of older felsic and ultramafic rocks also occur, forming a heterogeneous crustal block (Nironen, 2005). The Jyväskylä area, like Vantaa, also sits within the Svecofennian tectonic province. Most rocks in the Jyväskylä region are part of the Central Finland Granitoid Complex, a crustal block characterized by syn- and post-kinematic tonalites, granodiorites, quartz monzonites and granites that provide the lowest radiogenic heat production properties examined in this study. In contrast, the Rovaniemi area is part of the Karelian tectonic province, characterized by the Central Lapland Granite Complex in the north and by the Paleoproterozoic Peräpohja Schist Belt in the south (Nironen, 2005). Rocks in Rovaniemi have a complex interrelationship between porphyritic granites, granodiorites, gneissic inclusions and various migmatitic bodies that have relatively high radiogenic heat production properties (Nironen et al., 2017).

Fig. 1 Simplified geological map of Finland and the study area locations. Bedrock of Finland 1:5 000 000, © Geological Survey of Finland

Thermogeological parameters

To estimate the potential of medium-deep geothermal systems in Finland, we use data primarily obtained from the literature and information derived from these original datasets (Table 1). The mean annual ground temperatures (Tg) were calculated using the relationship suggested by Kukkonen (1986): $$T_{\text{g}} = 0.71 \cdot T_{\text{a}} + 2.93,$$ where Ta is the mean annual air temperature for the climatological normal period 1931–1960 from the data of the Finnish Meteorological Institute (Aalto et al., 2016). The cell size of the mean annual air temperature data is 1 km2. The geological map is adapted from the digital Bedrock map of Finland at the scale of 1:5 M (Fig. 1). Each lithological unit was assigned a thermal conductivity value based on laboratory measurements presented by Peltoniemi and Kukkonen (1995). Radiogenic heat production rates (A0) were compiled from Veikkolainen and Kukkonen (2019). The radiogenic heat production rate is based on the equation of Rybach (1973): $$A_{0} = \rho \cdot \left( 9.52 \cdot c_{\text{U}} + 2.56 \cdot c_{\text{Th}} + 3.48 \cdot c_{\text{K}} \right) \cdot 10^{-5},$$ where \(\rho\) is the rock density and c is the concentration of U, Th and K, respectively.
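For illustration, Eqs. (1) and (2) can be written out as a minimal Python sketch. The numerical values in the example calls are illustrative only (they are not the Table 1 entries), and the unit convention assumed here (density in kg/m3, U and Th in ppm, K in %, giving A0 in µW/m3) is the one commonly used with Rybach's formula.

def ground_temperature(t_air):
    # Eq. (1): mean annual ground temperature (degC) from mean annual air temperature (degC)
    return 0.71 * t_air + 2.93

def radiogenic_heat_production(rho, c_u, c_th, c_k):
    # Eq. (2), after Rybach (1973); unit convention as noted above (assumed)
    return rho * (9.52 * c_u + 2.56 * c_th + 3.48 * c_k) * 1e-5

# Illustrative values only, not the Table 1 entries
print(ground_temperature(5.0))                           # about 6.5 degC
print(radiogenic_heat_production(2725, 3.0, 10.0, 3.0))  # about 1.8 (µW/m3 in the convention above)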
Further, we estimated the geothermal heat flux from the radiogenic heat production rates described above, using Birch's law: $$q = q_{0} + D A_{0},$$ where q0 is the reduced heat flux density (the heat flux density below the heat-producing crustal layer), D is the thickness of the heat-producing crustal layer, and A0 is the near-surface radiogenic heat production rate (Eppelbaum et al., 2014). Veikkolainen and Kukkonen (2019) calculated the coefficients q0 and D by fitting Eq. (3) to paleoclimatically corrected heat flux data, resulting in a reduced heat flux density of 33.79 mW/m2 and a heat-producing crustal layer thickness of 5.919 km.

Table 1 Thermogeological parameters of each location investigated in this study

Rock density was assigned a constant value of 2725 kg/m3 based on the average lithology of the study areas, as constrained by the density data of Pirttijärvi et al. (2013), and the specific heat capacity of the rock at each location was assigned a constant value of 728 J/(kg∙K) based on laboratory measurements conducted on Finnish rock samples by Kukkonen (2015).

Borehole design, heat exchangers, and heat collector pipes

We modelled borehole heat exchangers of four lengths: 600, 1000, 2000, and 3000 m (Fig. 2). In the 600 m U-tube model, the working fluid was an ethanol–water mixture with 28 wt% ethanol. In the coaxial BHE models, water was used as the heat carrier fluid. The topmost 300 m of the boreholes were cased to avoid any fluid exchange with the shallow environment. Typically, crystalline rocks have low natural permeability, and the groundwater can only enter a borehole if it intersects a fracture zone. We assume that the boreholes do not intersect any permeable zones deeper than 300 m, so the deeper part of the borehole was left uncased, and the working fluid of the coaxial models was in direct contact with the rock below the casing. Casing the top part of the borehole prevents cool groundwater from entering the borehole, but the low thermal conductivity of the concrete casing acts as an insulator, so leaving most of the borehole uncased allows more efficient heat transfer between the rock and the working fluid. The fluid flow direction was set to be down the annulus and up the central pipe, as optimized by, e.g., Horne (1980) and Holmberg et al. (2016). The present modelling options, including the casing procedure, are based on the suggestions and the interest of industry and companies operating in Finland.

Fig. 2 The borehole heat exchangers modelled in this study. a A 600-m-deep open-loop coaxial borehole heat exchanger, b a 600-m-deep U-tube borehole heat exchanger, and c a 1- to 3-km-deep open-loop coaxial borehole heat exchanger with casing in the topmost 300 m (green). Arrows indicate the direction of working fluid circulation

We compared two types of collector pipes: vacuum-insulated tubing (VIT) and standard high-density polyethylene (HDPE) pipe. The most important parameter that distinguishes these two pipe types is their thermal conductivity. VIT consists of two concentric steel pipes separated from each other by an evacuated air space. An effective thermal conductivity of 0.02 W/(m∙K) was used for the VIT in this study (Śliwa et al., 2018; Zhou et al., 2015). As an industrial standard, HDPE pipes have a higher thermal conductivity of 0.42 W/(m∙K); HDPE is nevertheless still widely used because of its significantly lower price (Chen et al., 2019; Saaly et al., 2014).
The efficiency of the VIT was experimentally tested in, for example, Weggis, Switzerland (Kohl et al., 2002), and HDPE pipes have been modelled by Wang et al. (2017). For the 1- to 3-km-deep BHEs, we considered the boreholes to have a diameter of 8.5 in. (215.9 mm). The topmost 300 m of the wellbores was 12.25 in. (311 mm) in diameter and cased with 33-mm-thick cement casing. The 600-m-deep coaxial BHE was uncased, and its diameter was 160 mm. For the 600-m-deep BHEs, two diameter options for the VIT and one for the HDPE pipe were considered. As a comparison, we also modelled a standard U-tube BHE with an HDPE collector, an existing and tested BHE system. All BHE parameters were chosen based on existing or planned pilot experiments in Finland, and the parameters are summarized in Table 2.

Table 2 Summary of borehole heat exchanger parameters adopted in this study

The maximum volumetric flow rate was separately determined for each borehole depth based on the pressure loss in the well, calculated with the Moody friction factor approximation for smooth pipes, $$f = \begin{cases} 0.316 \cdot Re^{-1/4} & \text{if } Re \le 20{,}000 \\ 0.184 \cdot Re^{-1/5} & \text{if } Re > 20{,}000 \end{cases},$$ where Re is the Reynolds number, calculated as \(Re = uD/\nu\), where u is the mean fluid velocity in the pipe, D is the inner pipe diameter and \(\nu\) is the kinematic viscosity of the fluid (Incropera and DeWitt, 1996). The pressure loss in the pipe was calculated using $$\Delta p = \frac{f L \rho u^{2}}{2D},$$ where L is the pipe length, u is the velocity of the fluid in the pipe, and ρ is the fluid density. Pipe roughness and minor losses caused by velocity changes at pipe diameter changes, the well bottom, valves, etc., were considered to have little impact on the total pressure loss and consequently were not assessed in detail. The acceptable pressure drop was set as 2 bars for 1-km pipes, 4 bars for 2-km pipes and 5 bars for 3-km pipes. In the case of 600-m pipes, the increase in flow rate was dictated by temperature losses more than by pressure losses, and the maximum volumetric flow rate was therefore set to 3 L/s.

Numerical modelling

The finite element modelling and simulation platform COMSOL Multiphysics® was used to construct the models of the BHEs illustrated in Fig. 2 and to simulate their operation. The open-loop coaxial BHEs were modelled using two-dimensional axisymmetric models and the U-tube BHE was modelled using a three-dimensional model. The equation used to describe the temperature field T was $$\rho C_{p} \frac{\partial T}{\partial t} - k\nabla^{2} T + \rho C_{p} \mathbf{u} \cdot \nabla T - A_{0} = 0,$$ where t is time, ρ is density, Cp is the specific heat capacity, k is the thermal conductivity, u is the velocity vector, and A0 is the radiogenic heat production rate. All models comprised domains representing the piping, working fluid, and rock, together with casing when included. Furthermore, the working fluid was assumed to instantaneously conduct heat horizontally to simulate turbulent flow. The boundary conditions applied to the model were the surface geothermal heat flux density estimated using Birch's law (Eq. 3) at the ground surface boundary and the reduced geothermal heat flux density at the bottom boundary of the model.
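For illustration, the flow-rate selection criterion of Eqs. (4) and (5) above can be sketched in a few lines of Python. This is only a sketch: the helper name, the water properties and the pipe dimensions in the example call are assumed for illustration and do not correspond to the Table 2 values.

import math

def pressure_drop_bar(flow_l_per_s, pipe_length_m, inner_diameter_m,
                      kinematic_viscosity=1.0e-6, density=1000.0):
    # mean fluid velocity in the pipe from the volumetric flow rate
    area = math.pi * inner_diameter_m**2 / 4.0
    u = (flow_l_per_s / 1000.0) / area
    re = u * inner_diameter_m / kinematic_viscosity
    # Eq. (4): Moody friction factor approximation for smooth pipes
    f = 0.316 * re**-0.25 if re <= 20000 else 0.184 * re**-0.2
    # Eq. (5): pressure loss over the pipe length, converted from Pa to bar
    dp_pa = f * pipe_length_m * density * u**2 / (2.0 * inner_diameter_m)
    return dp_pa / 1.0e5

# Example: 5 L/s through a 2000-m pipe with a 76-mm inner diameter (illustrative values only)
print(round(pressure_drop_bar(5.0, 2000.0, 0.076), 1), "bar")   # roughly 3 bar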
Heat extraction was assumed to be constant throughout the year, which corresponded to a heat pump dropping the entering water temperature (Tewt) by $$\Delta T = \frac{E_{\text{ann}}}{H} \cdot \frac{1}{\rho C_{p} Q},$$ where Eann is the amount of heat annually extracted from the ground, H is the number of hours in a year (8760 h), ρCp is the volumetric heat capacity of the working fluid, and Q is its flow rate. Thus, the leaving water temperature was \(T_{\text{lwt}} = T_{\text{ewt}} - \Delta T\) and was imposed as a temperature boundary condition on the BHE inlet.

Solving maximal annual energy yield

COMSOL Multiphysics® was used to simulate 25 years of operation of the BHE systems. A COMSOL simulation was expressed as $$f\left( \boldsymbol{\theta}; E_{\text{ann}} \right) = T_{\text{min}},$$ where f is a function that maps the vector of model parameters θ and the annually extracted energy Eann to Tmin, which is the minimum temperature at the borehole wall at the end of the simulation. The maximal amount of thermal energy Emax that can be extracted annually from the ground using a BHE without dropping the borehole outer boundary temperature below the freezing point of 0 °C was determined by solving $$E_{\text{max}} = \underset{E_{\text{ann}}}{\text{arg min}} \left| f\left( \boldsymbol{\theta}; E_{\text{ann}} \right) \right|.$$ The minimization problem in Eq. (9) was solved for each location, borehole length, collector type, and volumetric fluid flow rate using MATLAB® and COMSOL through LiveLink™ for MATLAB. In this study, the energy values are pure geothermal heat from the subsurface, and the additional energy from running the heat pumps is not assessed. Therefore, our results provide a first-order estimation of the amount of thermal energy from each location, informing decision-makers and investors about the options for drilling deeper in low-enthalpy crystalline settings in Finland and elsewhere.

We validated our model against the analytical coaxial borehole heat exchanger model of Beier et al. (2014), using the Matlab code written by Chen et al. (2019). The analytical model calculates the transient heat conduction in the BHE and, as a result, provides temperature profiles along the inner pipe, annulus and grout. The analytical model does not include parameters for the geothermal gradient, heat flux and radiogenic heat production, so for the model validation we simplified our COMSOL model to reproduce the results. We used a constant temperature, equal to the undisturbed temperature at the mid-depth of the borehole, across the entire borehole length.

Energy production estimation

The energy production estimation was based on different geological and borehole design parameters (Tables 1, 2). Our models indicate that the thermal energy yield is highest in the southernmost Vantaa area (up to 1880 MWh/a), slightly lower in Jyväskylä in central Finland (up to 1830 MWh/a), and lowest in the northernmost Rovaniemi location (up to 1590 MWh/a) (Table 3). In Rovaniemi, the thermal energy yield is 15–26% lower than in Vantaa, and 13–18% lower than in Jyväskylä. In addition, the thermal energy yield of each location is directly proportional to the borehole depth. We observe that as the BHE depth increases from 600 to 1000 m, the thermal energy yield roughly doubles, nearly triples from 1000 to 2000 m, and almost doubles again when increasing the depth from 2000 to 3000 m. These results show that the BHE energy production can increase over tenfold from 600- to 3000-m-deep boreholes.
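For illustration, the search for the maximal annual energy yield in Eqs. (7)–(9) above can be summarised schematically as follows. This is a sketch of the procedure only, not the LiveLink/MATLAB implementation used in this study: simulate_min_wall_temperature is a hypothetical placeholder for the COMSOL forward model f(θ; Eann), and the bisection assumes that the minimum borehole-wall temperature decreases monotonically with the annually extracted energy.

def delta_t(e_ann_mwh, rho_cp, flow_m3_s, hours_per_year=8760.0):
    # Eq. (7): temperature drop imposed by the heat pump; rho_cp in J/(m3*K), flow in m3/s
    avg_power_w = e_ann_mwh * 1.0e6 / hours_per_year      # 1 MWh = 1e6 Wh
    return avg_power_w / (rho_cp * flow_m3_s)

def max_annual_energy(simulate_min_wall_temperature, e_low=0.0, e_high=5000.0, tol=1.0):
    # Eqs. (8)-(9): largest E_ann (MWh/a) keeping the borehole wall at or above 0 degC
    while e_high - e_low > tol:
        e_mid = 0.5 * (e_low + e_high)
        if simulate_min_wall_temperature(e_mid) > 0.0:    # wall still above freezing
            e_low = e_mid
        else:
            e_high = e_mid
    return e_low

# Example with a toy stand-in for the forward model (purely illustrative):
toy_model = lambda e_ann: 8.0 - 0.01 * e_ann              # degC, linear in E_ann
print(max_annual_energy(toy_model))                        # about 800 MWh/a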
Table 3 Maximum thermal energy production (MWh/a) from three locations with VIT pipe and a flow rate of 3 L/s for the 600-m-deep well and 5 L/s for well depths of 1000 m, 2000 m, and 3000 m

At the highest volumetric flow rates used, the energy yield does not depend on the pipe type (Fig. 3a). With 600–1000-m-deep wells, the working fluid outlet temperature is 1.2–2 °C (northern to southern location) regardless of the collector type (Fig. 3b). As the borehole depth increases to 2 and 3 km, the outlet temperatures attained with VIT are twice as high as those with HDPE pipes. At a depth of 3 km, the VIT temperatures reach 8.7–10 °C, whereas HDPE pipes can only produce 4.2–5 °C, i.e., half of the VIT heat outcome. With an increase in borehole depth from 1 to 2 km, the specific heat rate increases by 31–35% with VIT and by 30–50% with HDPE pipe, depending on the location (Fig. 3c). The specific heat rate of both collectors grows by around 14–20% when the borehole depth increases from 600 m to 1 km, and by 20% when the borehole depth increases from 2 to 3 km.

Fig. 3 Comparison of thermal energy produced by constant heat extraction over 25 years, with the outlet temperature and specific heat rate at high and low flow rates for each modelled depth, location, and two collector pipe types. The highest flow rates for VIT are 5 L/s at all pipe lengths, while for HDPE pipe they are 5 L/s at 1 km and 10 L/s at 2 km and 3 km. The lowest flow rates are 1 L/s for VIT at all pipe lengths and for 1 km of HDPE pipe, and 2 L/s for 2 km and 3 km HDPE pipe. Because the flow rate is not the same in all cases, the solid and dashed lines do not represent interpolation

At the lowest modelled volumetric flow rates, the increase in the energy yield as a function of depth is linear with VIT, rising by a factor of 1.8 between 600 m and 1 km, 2.4 between 1 and 2 km, and 1.5 between 2 and 3 km (Fig. 3d). With HDPE pipe, the energy yield remains practically the same between 600 m and 1 km, but increases by a factor of 3.3 when the borehole depth increases from 1 to 2 km, and again by only 1.2 when deepening the borehole from 2 to 3 km. Differences in the working fluid outlet temperature depending on the collector type are more evident at low flow rates. In the VIT case, the 600-m-deep well outlet temperature increases linearly from 3 to 3.9 °C (northern to southern), and, respectively, from 23 to 27 °C using a 3-km-deep borehole. With HDPE pipe, the 600-m-deep well outlet temperature increases linearly from 2.4 to 3 °C, and to 6.6–7.9 °C in a 3-km-deep well (Fig. 3e). With VIT, the specific heat rate increases by 7–15% between 600 m and 1 km, 13–20% between 1 and 2 km, and less than 10% between 2 and 3 km. With HDPE pipe, the specific heat rate decreases by 28–36% between 600 and 1000 m, increases by 40% between 1 and 2 km, and decreases again by 20% between 2 and 3 km (Fig. 3f).

Effect of thermal short-circuiting

The effects of the pipe material and fluid circulation rate are crucial for understanding the performance of medium-deep BHEs. We observe that a 2-km-deep well with VIT produces 980 MWh/a at the highest circulation rate (5 L/s). To obtain the same amount of thermal energy, the flow rate in HDPE pipe needs to be doubled. With these energy production values, VIT yields an outlet temperature of 5.4 °C, while HDPE pipe yields 2.6 °C. The vertical temperature profiles of a 2-km-deep BHE in Vantaa from the 25-year production simulation at different flow rates are presented in Fig. 4.
In comparison, when the flow rate in the HDPE pipe is lower, the temperature increases towards the bottom of the borehole and decreases again as the fluid returns to the surface (Fig. 4, blue, green, and red lines). This temperature difference reflects the thermal short-circuiting effect, and the lower the volume flow is, the greater the effect. Thermal short-circuiting is observed at a much smaller scale in the vertical profile of VIT with a flow rate of 1 L/s (Fig. 4, black line), while at a high flow rate there are practically no heat losses in the inner pipe (Fig. 4, grey line).

Fig. 4 Vertical temperature profiles of a 2000-m BHE with a VIT or HDPE pipe collector and different volumetric flow rates

System efficiency as a function of production time

Our models show that the outlet temperature drops drastically during the first days of utilization of the geothermal systems, and then slowly decreases over time (Fig. 5). After one year (close-up in Fig. 5), we see that after the initial drop, the temperature begins to stabilize but is still 2–5 °C higher after the first year of production than in the 25th production year. For all flow rates and collector cases, inlet temperatures drop to 0 °C after 25 years, as the model optimization parameters require.

Fig. 5 Outlet (solid line) and inlet (dashed line) temperatures of a 2000-m BHE with VIT or HDPE pipe and different volumetric rates. The left figure presents temperature development over 25 years of simulation and the figure on the right is a close-up of the first year of production

600-m-deep wells

Parametrization of the 600-m-deep BHE was conducted with four different collector pipes, so the results for thermal energy production and the outlet temperature as a function of flow rate are presented here separately, including only results from the southernmost Vantaa area. The smaller diameter VIT pipe outperforms the other pipes in thermal energy production, regardless of the flow rate, but HDPE pipe performs better in terms of the outlet temperature at the highest flow rate of 3 L/s (Fig. 6). The thermal energy yield is low with the U-tube, but the poorest yield and lowest outlet temperature are obtained with a larger diameter VIT.

Fig. 6 Thermal energy and outlet temperature from 600-m wells as a function of flow rate with four different collectors

Model validation

Our results are in good agreement with the analytical solution at all times and depths. The results are presented as annual inlet and outlet temperatures of the well and vertical profiles of the inner pipe and annulus. The results show that the analytical model with the same calculation parameters yields an approximately 4% lower outlet temperature than the COMSOL model (Fig. 7). The solid line of the vertical profile (Fig. 7b) is analogous to the grey profile of Fig. 4. The differences in the profile shape and temperature are due to the replacement of the geothermal gradient with a constant temperature and the omission of boundary heat fluxes and radiogenic heat production.

Fig. 7 Comparison of the analytical and numerical models: a annual inlet and outlet temperatures and b vertical temperature profile from the 25th production year. Both results are of a 2000 m BHE with a VIT collector

Geological and climatological influence on geothermal production

The lowest ground temperature levels and consequently the lowest thermal energy yield are observed in Rovaniemi, the northernmost location.
The highest thermal energy yield is observed in the southernmost location, Vantaa, as a result of three factors: the highest ground surface temperature, a high geothermal heat flux density, and high radiogenic heat production values due to the presence of microcline granites. The geothermal gradient in Jyväskylä is higher than in the two other locations due to the lower thermal conductivity and radiogenic heat production, so in both Jyväskylä and Vantaa the temperature reaches 13.4 °C at a depth of 600 m, while at greater depths the temperature is higher in Jyväskylä than in Vantaa. Regardless of the higher geothermal gradient in Jyväskylä, the outlet temperatures and thermal energy yield are mainly lower than in Vantaa. This difference in outlet temperature is attributed to the lower thermal conductivity of granodiorite compared with microcline granite and consequently the less efficient heat exchange between the host rock and the fluid. Compared to other locations where deep geothermal wells have been drilled or modelled, Finland typically has a lower heat flow and geothermal gradient, and therefore our models generally result in either lower outlet temperatures or thermal power. Chen et al. (2019) modelled a 2.6-km borehole in "standard" and "elevated" geothermal gradients of 30 and 40 K/km, respectively, and their models resulted in a specific heat rate of 125–200 W/m. Correspondingly, our specific heat rates for a geothermal gradient of 13–14 K/km are 50 W/m for a 2-km well and 70 W/m for a 3-km well, approximately 0.4 times the values reported by Chen et al. (2019). The simulated values of 886–1129 MWh/a for the 2.1-km-deep well in Weggis, Switzerland (Kohl et al., 2002) correlate with our results for a 2-km-deep borehole. In Weggis, the temperature recorded at the bottom of the 2.3-km-deep well was 73 °C (Kohl et al., 2002), which is roughly double that in Jyväskylä, but their simulated energy yield values correlate with our results because the volumetric flow rate used in Weggis was lower than in our study. In conclusion, we observe that the initial ground surface temperature and consequently the geothermal gradient have a significant impact on the geothermal energy yield, while the thermal conductivity of the local rock type has a more complex impact that depends on production properties such as the fluid flow rate. This is apparent from the comparison of the results obtained in the thermogeological settings of Switzerland and Finland, as well as from the results of our study comparing thermogeological variations across Finland. Another essential finding is that, as the borehole depth increases from 600 to 1000 m, the thermal energy yield roughly doubles, nearly triples from 1000 to 2000 m, and almost doubles again when increasing the depth from 2000 to 3000 m.

Collector type and flow rate

Our models suggest that the efficient insulation of VIT pipes results in thermal energy production comparable with that of HDPE pipes with a larger hydraulic diameter and consequently higher flow rates (Fig. 3, Table 2). This relationship between flow rate and pipe type is an essential finding if we consider that HDPE pipe is significantly lower in price than VIT. However, the fluid temperatures produced with HDPE pipe are relatively low at all flow rates and borehole lengths due to the thermal short-circuiting effect, being at their highest around 8 °C from a 3-km-deep well at a low volumetric flow rate (2 L/s).
On the other hand, fluid outlet temperatures produced with VIT at low flow rates are 2.5–3 times higher than temperatures produced with either HDPE pipe or VIT at high flow rates. The results of this study are similar to those in previous publications based on medium-deep wells in Nordic countries. Holmberg (2016) observed that borehole depth significantly affects the heating power, with the yield increasing eightfold when the borehole length was increased from 300 to 900 m. The power produced from a 600-m-deep well was on average 30 kW and that from a 1000-m-deep well was on average 65 kW (Holmberg, 2016), corresponding to 263 MWh/a and 569 MWh/a, respectively. In addition, the energy yield from a reference 2-km-deep borehole was 1 GWh/a over 25 years (Lund, 2019). We estimate that the energy yields presented in Holmberg (2016) are higher due to the short simulation time of 5000 h (208 days). When the heat outtake is optimized for a more extended period, less annual heat production is expected. This reduction in extractable heat as a function of time was demonstrated by Lund (2019), in which the energy extracted during the first 5 years was 1.2 GWh/a, dropping to 1 GWh/a when averaged over 25 years. The main reason for the higher yield in the latter case is the higher volumetric flow rate. We arrive at the same conclusion as Nalla et al. (2005) that either the thermal power production or the outlet temperature of the working fluid can be individually maximized. Alternatively, they can be simultaneously optimized so that reasonable temperatures for the heat pump(s) as well as a reasonable heat production rate can be expected, as suggested by Horne (1980). The slower the fluid flows in the borehole, the higher the temperature that can be achieved, although less heat can be extracted, as observed by Acuña (2013). This effect is apparent in all the HDPE pipe results: the outlet temperature is generally lower and does not decrease as drastically with an increase in flow rate compared with VIT. Besides the correct flow rate parametrization and efficient insulation of the heat collector, the selection of suitable BHE parameters is essential for coaxial collectors in geothermal systems (Horne, 1980; Pan et al., 2020).

Comparison of medium-deep and shallow geothermal energy utilization

A common misconception of medium-deep wells is that they can produce long-term heating with high working fluid temperatures. Our models show that fluid outlet temperatures from 2-km boreholes can be up to 17 °C and from 3-km boreholes up to 27 °C (Fig. 3e). In Finland, the typical outlet temperature from conventional shallow geothermal wells is around 0–4 °C (Korhonen et al., 2019), depending on the location and subsurface properties. In such small-scale closed-loop systems, the ethanol-based heat carrier fluid heats up by 1–3 °C on average during heat extraction. By drilling deeper, we reach higher ground temperature levels, and the heat carrier fluid is consequently expected to warm up more. The advantages sought from medium-deep wells over shallow geothermal wells are related to two scenarios: (1) maximizing heat production with high flow rates and lower ground area requirements compared with conventional BHE fields; or (2) obtaining a higher return fluid temperature with low flow rates, which allows heat production coupled with a heat pump having good efficiency and thus heat distribution at a higher temperature than conventional BHE systems.
Related to the first scenario, we can compare the specific heat rate of medium-deep and shallow boreholes using the shallow geothermal energy potential reported by Arola et al. (2019). According to a shallow geothermal energy potential dataset produced by the Geological Survey of Finland, a single 300-m-deep borehole has a constant thermal power of 7632 W (25 W/m) in Vantaa, 5887 W (20 W/m) in Jyväskylä and 5320 W (18 W/m) in Rovaniemi. A 1-km-deep borehole with VIT at a high flow rate can produce a specific heat rate of 30–38 W/m, while the respective figures for a 2- and 3-km-deep borehole are 47–56 W/m and 60–70 W/m (Fig. 3). The achievable specific heat rate of a medium-deep borehole is therefore higher than what can be extracted from conventional shallow BHE systems, but this is predominantly valid at high flow rates. At low flow rates, the increase in the specific heat rate is smaller. With HDPE pipe, the specific heat rate is highest with 2-km-deep wells, where it is between 20 and 25 W/m, thus corresponding to the energy extractable from conventional shallow systems. With 1- and 3-km-deep wells, the specific heat rate is lower than that of conventional shallow systems. When comparing the specific heat rates of medium-deep and shallow boreholes, one must bear in mind that in shallow closed-loop systems U-tubes are the dominant technology and the volumetric flow rates are lower.

Further considerations

Considering the temperature levels presented in this study, a heat pump is needed to raise the resulting temperature to the level required by the property or district heating system. District heating can be realized as low-temperature heating, to which medium-deep geothermal energy is well suited (Schmidt et al., 2017). A constant energy outtake was chosen to calculate the baseload that geothermal heat could provide, but intermittent energy outtake could provide a higher peak power (e.g., Kohl et al., 2002), while recharging the wells with, for example, waste or solar heat could not only provide a higher yield, but also prolong the well lifetime. Therefore, further optimization of any geothermal system should be done taking into consideration case-specific characteristics, such as the end use (local or district heating), the required peak power and the availability of rechargeable heat. Questions that must be investigated in further work include a sensitivity study on a wider range of subsurface properties and BHE parameters, such as borehole and collector dimensions and the impact of different casing materials. Moreover, the surface infrastructure is a factor that guides the parameter choices. The parameters for the present study were purely based on the geographical location and on pipe parameters used in pilot projects or considered as industrial alternatives presently available in the market. The next step is to validate the model results with pilot projects in suitable geological conditions. Besides the thermal energy yield and the highest possible production temperature, the CAPEX and OPEX costs of medium-deep geothermal systems are another essential factor for stakeholders to consider. Most importantly, the installation and drilling costs of deeper boreholes are significantly higher, as the costs do not increase linearly and the risk of complications along the borehole increases with depth (Gehlin et al., 2016). However, drilling technology is a rapidly developing field and technological innovations can bring the price down in the future.
While VIT outperforms HDPE pipes in temperature production and the economics depend on the delivered temperature level (Gehlin et al., 2016) and pumping costs, it might still be more cost-effective to use HDPE or new innovative material solutions in the collector pipe. More detailed cost information may help estimate the amount of governmental financial support possibly needed to enable the geothermal industry to replace fossil fuel heating with geothermal heat. Furthermore, an important task is to assess the accuracy of a model that only considers conductive heat transfer in the rock matrix, because it is likely that a well will intersect fracture zones that will impact heat transfer in the formation and consequently in the borehole. Lastly, it is important to examine whether a seismic risk or other environmental risks are posed when installing medium-deep geothermal systems.

Exploration and production of medium-deep geothermal systems can significantly accelerate the transition away from fossil fuels by providing a clean and reliable energy source for residential and industrial heating. The results of this study indicate that: (1) the deeper the borehole is, the better its thermal energy production and specific heat rate; (2) the higher the subsurface temperature, the better the energy yield, when considering minor variations in thermogeological parameters; (3) VIT outperforms HDPE pipe in terms of energy and temperature production; and (4) with an increasing flow rate, we obtain more thermal energy but a significantly lower outlet fluid temperature. All these variables should be taken into consideration during the design and operation phases of medium-deep geothermal systems. Our results indicate that drilling deep (from 600 to 3000 m) in low-enthalpy crystalline rocks can increase the yield of geothermal boreholes by one order of magnitude. This is a substantial increase in the energy yield in these low-temperature areas, and unconventional low-temperature geothermal systems could thereby help to offset the burning of fossil fuels in the district heating sector. Compared to shallow geothermal BHEs, medium-deep geothermal systems offer a higher fluid outlet temperature, which enables a higher coefficient of performance (COP) with a heat pump. Another advantage of deeper geothermal systems is that in densely populated areas fewer wells will be needed to supply the same amount of heat, consequently having a smaller land surface impact than shallow geothermal systems. However, the current upfront cost of drilling can prevent the full expansion of this technology. Nevertheless, our results show that understanding the interplay of thermogeological and engineering parameters is critical for up-scaling sustainable low-carbon geothermal systems in Finland and in areas where low-permeability crystalline rocks and low heat flow dominate the geological setting.

The datasets used and/or analysed during the current study are available from the corresponding author on request.

List of symbols

A: Radiogenic heat production (W/m3)
Cp: Specific heat capacity at constant pressure [J/(kg‧K)]
c: Concentration (ppm/‰)
D: Thickness (m)
E: Energy (MWh)
f: Moody friction factor (dimensionless)
H: Number of hours in a year (dimensionless)
k: Thermal conductivity [W/(m‧K)]
p: Pressure (Pa)
q: Geothermal heat flux density (W/m2)
Q: Volumetric flow rate (L/s)
Re: Reynolds number (dimensionless)
t: Time (s)
u: Velocity (m/s)
z: Depth (m)
ρ: Density (kg/m3)

References

Aalto J, Pirinen P, Jylhä K.
New gridded daily climatology of Finland: permutation-based uncertainty estimates and temporal trends in climate. J Geophys Res Atmos. 2016;121:3807–23. https://doi.org/10.1002/2015JD024651. Acuña J. Distributed thermal response tests New insights on U-pipe and Coaxial heat exchangers in groundwater-filled boreholes. KTH Royal Institute of Technology. 2013. Arola T, Korhonen K, Martinkauppi A, Leppäharju N, Hakala P, Ahonen L, Pashkovskii M. Creating shallow geothermal potential maps for Finland using finite element simulations and machine learning. Eur Geotherm Congr. 2019;2019:6. Arola T, Wiberg M. Geothermal energy use, country update for Finland, in: European Geothermal Congress 2022. European Geothermal Congress. 2022. Artemieva IM. Lithosphere structure in Europe from thermal isostasy. Earth Sci Rev. 2019;188:454–68. https://doi.org/10.1016/j.earscirev.2018.11.004. Beier RA, Acuña J, Mogensen P, Palm B. Transient heat transfer in a coaxial borehole heat exchanger. Geothermics. 2014;51:470–82. https://doi.org/10.1016/j.geothermics.2014.02.006. Cai W, Wang F, Liu J, Wang Z, Ma Z. Experimental and numerical investigation of heat transfer performance and sustainability of deep borehole heat exchangers coupled with ground source heat pump systems. Appl Therm Eng. 2019;149:975–86. https://doi.org/10.1016/j.applthermaleng.2018.12.094. Chapman DS, Pollack HN. 'Cold spot' in West Africa: anchoring the African plate. Nature. 1974;250:477–8. https://doi.org/10.1038/250477a0. Chen C, Shao H, Naumov D, Kong Y, Tu K, Kolditz O. Numerical investigation on the performance, sustainability, and efficiency of the deep borehole heat exchanger system for building heating. Geotherm Energy. 2019. https://doi.org/10.1186/s40517-019-0133-8. Collins MA, Law R. The Development and deployment of deep geothermal single well (DGSW) Technology in the United Kingdom. Eur Geol. 2014;43:63–8. Duchkov AD. Review of Siberian Heat Flow Data. Berlin: Springer; 1991. https://doi.org/10.1007/978-3-642-75582-8_21. Eppelbaum L, Kutasov I, Pilchin A. Applied Geothermics Endeavour Lecture Notes in Earth System Sciences. Heidelberg: Springer; 2014. Eskilson P, Claesson J. Simulation model for thermally interacting heat extraction boreholes. Numer Heat Transf. 1988;13:149–65. https://doi.org/10.1080/10407788808913609. Falcone G, Liu X, Okech RR, Seyidov F, Teodoriu C. Assessment of deep geothermal energy exploitation methods: the need for novel single-well solutions. Energy. 2018;160:54–63. https://doi.org/10.1016/j.energy.2018.06.144. Gabolde G, Nguyen J-P. Drilling data handbook. Paris: Tech; 1999. Gehlin, S.E., Spitler, J.D., Hellström, G., 2016. Deep Boreholes for Ground Source Heat Pump Systems – Scandinavian Experience and Future Prospects. Am. Soc. Heating, Refrig. Air-Conditioning Eng. 1–8. Grad M, Tiira T, Olsson S, Komminaho K. Seismic lithosphere–asthenosphere boundary beneath the Baltic Shield. GFF. 2014;136:581–98. https://doi.org/10.1080/11035897.2014.959042. Guillou L, Mareschal J-C, Jaupart C, Gariépy C, Bienfait G, Lapointe R. Component Parts of the World Heat Flow Data Collection. Pangaea. 1994. https://doi.org/10.1594/PANGAEA.804760. Holmberg H, Acuña J, Næss E, Sønju OK. Thermal evaluation of coaxial deep borehole heat exchangers. Renew Energy. 2016;97:65–76. https://doi.org/10.1016/j.renene.2016.05.048. Holmberg H. Transient heat transfer in boreholes with application to non-grouted borehole heat exchangers and closed loop engineered geothermal systems. Doctoral thesis. 2016. Horne RN. 
Design considerations of a down-hole coaxial geothermal heat exchanger. Trans Geotherm Resour Counc. 1980;4:569–72. IEA. Global Energy Review 2021, Global Energy Review 2021. Paris. 2021. Incropera FP, DeWitt DP. Fundamentals of heat and mass transfer. Hoboken: Wiley; 1996. IRENA. Renewable energy in district heating and cooling: a sector roadmap for remap, int renew energy agency. Abu Dhabi: IRENA; 2017. Jolie E, Scott S, Faulds J, Chambefort I, Axelsson G, Gutiérrez-Negrín LC, Regenspurg S, Ziegler M, Ayling B, Richter A, Zemedkun MT. Geological controls on geothermal resources for power generation. Nat Rev Earth Environ. 2021;2:324–39. https://doi.org/10.1038/s43017-021-00154-y. Kohl T, Brenni R, Eugster W. System performance of a deep borehole heat exchanger. Geothermics. 2002;31:687–708. https://doi.org/10.1016/S0375-6505(02)00031-7. Kohl T, Salton M, Rybach L. Data analysis of the deep borehole heat exchanger plant Weissbad (Switzerland). World Geotherm. Congr. 2000. Korhonen K, Leppäharju N, Hakala P, Arola T. 2019. Simulated temperature evolution of large BTES—case study from Finland, in: Proceedings of the IGSHPA Research Track 2018. International Ground Source Heat Pump Association, pp. 1–9. https://doi.org/10.22488/okstate.18.000033 Korsman K, Koistinen T, Kohonen J, Wennerström M, Ekdahl E, Honkamo M, Idman H, Pekkala Y. Bedrock map of Finland 1: 1 000 000. Geol. Surv. Finland, Espoo, Finland. 1997. Kukkonen IT. Geothermal Energy in Finland. Proc World Geotherm Congr. 2000;2000:277–82. Kukkonen I. Thermal properties of rocks in Olkiluoto: results of laboratory measurements 1994–2015. Posiva Work Rep. 2015;30:110. Kukkonen IT, Kinnunen KA, Peltonen P. Mantle xenoliths and thick lithosphere in the Fennoscandian Shield. Phys. Chem Earth, Parts a/b/c. 2003;28:349–60. https://doi.org/10.1016/S1474-7065(03)00057-3. Kukkonen IT. The effect of past climatic changes on bedrock temperatures and temperature gradients in Finland, Geol. Surv. Finland, Nucl. Waste Dispos. Res. Espoo. 1986. Kvalsvik KH, Midttømme K, Ramstad RK. Geothermal Energy Use, Country Update for Norway. in: European Geothermal Congress 2019, Den Haag, The Netherlands, 11–14 June. den Haag. 2019. Lahermo P, Ilmasti M, Juntunen R, Taka M. Suomen geokemian atlas, osa 1: Suomen pohjavesien hydrogeokemiallinen kartoitus, The geochemical atlas of Finland, Part 1: The hydrogeochemical mapping of Finnish groundwater. 1990. Li M, Lai ACK. Review of analytical models for heat transfer by vertical ground heat exchangers (GHEs): a perspective of time and space scales. Appl Energy. 2015;151:178–91. https://doi.org/10.1016/J.APENERGY.2015.04.070. Lund A. Analysis of deep-heat energy wells for heat pump systems. Master's Thesis. 2019. https://doi.org/10.1109/ISGT-Europe47291.2020.9248748. Lunkka JP, Johansson P, Saarnisto M, Sallasmaa O. Glaciation of Finland. In: Ehlers J, Gibbard PL, Hughes PD, editors. Developments in Quaternary Science. Amsterdam: Elsevier; 2004. p. 93–100. https://doi.org/10.1016/S1571-0866(04)80058-7. Nalla G, Shook GM, Mines GL, Bloomfield KK. Parametric sensitivity study of operating and design variables in wellbore heat exchangers. Geothermics. 2005;34:330–46. https://doi.org/10.1016/j.geothermics.2005.02.001. Neumann N, Sandiford M, Foden J. Regional geochemistry and continental heat flow: Implications for the origin of the South Australian heat flow anomaly. Earth Planet Sci Lett. 2000;183:107–20. https://doi.org/10.1016/S0012-821X(00)00268-5. Nironen M. Proterozoic orogenic granitoid rocks. 
Precambrian Geol. Finl—Key to Evol. Fennoscandian Shield. 2005;14:443–79. https://doi.org/10.1016/S0166-2635(05)80011-8. Nironen M, Luukas J, Kousa J, Vuollo J, Holtta P, Heilimo E. Bedrock of Finland at the scale 1:1 000 000 – Major stratigraphic units, metamorphism and tectonic evolution, Geol. Surv. Finland, Spec. Pap. 2017. Pan S, Kong Y, Chen C, Pang Z, Wang J. Optimization of the utilization of deep borehole heat exchangers. Geotherm Energy. 2020;8:6. https://doi.org/10.1186/s40517-020-0161-4. Peltoniemi S, Kukkonen I. Kivilajien Lämmönjohtavuus Suomessa: Yhteenveto Mittauksista. 1995;1964–1994:14. Pirttijärvi M, Elo S, Säävuori H. Lithologically constrained gridding of petrophysical data. Geophysica. 2013;49:33–51. Renaud T, Verdin P, Falcone G. Numerical simulation of a Deep Borehole Heat Exchanger in the Krafla geothermal system. Int J Heat Mass Transf. 2019;143: 118496. https://doi.org/10.1016/j.ijheatmasstransfer.2019.118496. Rolandone F, Jaupart C, Mareschal JC, Gariépy C, Bienfait G, Carbonne C, Lapointe R. Surface heat flow, crustal temperatures and mantle heat flow in the Proterozoic Trans-Hudson Orogen, Canadian Shield. J Geophys Res Solid Earth. 2002. https://doi.org/10.1029/2001jb000698. Rudnick RLR, Nyblade A. The thickness and heat production of Archean lithosphere: constraints from xenolith thermobarometry and surface heat flow. Mantle Petrol. F. Obs. High Press. Exp. (a Tribut. to Fr. R. Boyd) 6: 3–12. 1999. Rybach L. Wärmeproduktionsbestimmungen an Gesteinen der Schweizer Alpen. 1973. Saaly M, Sinclair R, Kurz D. Assessment of a Closed-Loop Geothermal System for Seasonal Freeze-Back Stabilization of Permafrost. 2014. Schmidt D, Kallert A, Blesl M, Svendsen S, Li H, Nord N, Sipilä K. Low temperature district heating for future energy systems. Energy Procedia. 2017;116:26–38. https://doi.org/10.1016/j.egypro.2017.05.052. Schulte D. Simulation and Optimization of Medium Deep Borehole Thermal Energy Storage Systems. Doctoral dissertation. 2016. Śliwa T, Kruszewski M, Zare A, Assadi M, Sapińska-Śliwa A. Potential application of vacuum insulated tubing for deep borehole heat exchangers. Geothermics. 2018;75:58–67. https://doi.org/10.1016/J.GEOTHERMICS.2018.04.001. Statistics Finland, Energy in Finland. ISBN 978–952–244–678–7. 2021. Veikkolainen T, Kukkonen IT. Highly varying radiogenic heat production in Finland, Fennoscandian Shield. Tectonophysics. 2019;750:93–116. https://doi.org/10.1016/j.tecto.2018.11.006. Wang Z, Wang F, Liu J, Ma Z, Han E, Song M. Field test and numerical investigation on the heat transfer characteristics and optimal design of the heat exchangers of a deep borehole ground source heat pump system. Energy Convers Manag. 2017;153:603–15. https://doi.org/10.1016/j.enconman.2017.10.038. Welsch B. Technical, Environmental and Economic Assessment of Medium Deep Borehole Thermal Energy Storage Systems, Technische, ökonomische und ökologische Bewertung mitteltiefer Erdwärmesondenspeicher. Technische Universität Darmstadt. 2019. Zhao Y, Pang Z, Huang Y, Ma Z. An efficient hybrid model for thermal analysis of deep borehole heat exchangers. Geotherm Energy. 2020. https://doi.org/10.1186/s40517-020-00170-z. Zhou C, Zhu G, Xu Y, Yu J, Zhang X, Sheng H. Novel methods by using non-vacuum insulated tubing to extend the lifetime of the tubing. Front Energy. 2015;9:142–7. https://doi.org/10.1007/s11708-015-0357-7. The writers dedicate this work to M.Sc. (tech) Jarmo Kosonen who started the project, but sadly passed away during the writing of this article. 
We acknowledge Matti Pentti and Kristian Savela from ST1 Ltd. and Risto Lahdelma from Aalto University for initial work and help. We are grateful for the insightful comments of two anonymous reviewers. The project was partly funded by Business Finland's Smart Otaniemi subproject Smart integration of energy flexible buildings and local hybrid energy systems. Business Finland diary number: 52/31/2019.

Author affiliations:
Geological Survey of Finland, Vuorimiehentie 5, PL 96, 02151, Espoo, Finland: Kaiu Piipponen, Kimmo Korhonen, Sami Vallin, Teppo Arola, Alan Bischoff & Nina Leppäharju
Geological Survey of Finland, Teknologiankatu 7, PL 97, 67101, Kokkola, Finland: Annu Martinkauppi

Author contributions: KP created the COMSOL model, ran necessary computations and wrote the sections on borehole parameters, results, and co-wrote the introduction, discussion, and conclusion sections. AM wrote the sections on geology and co-wrote discussion and conclusion sections. KK wrote the sections on numerical modelling and solving for maximum annual energy yield, created the initial Matlab code applied in this study and worked on the model validation. SV and NL co-wrote the introduction and made valuable comments on the manuscript. TA and AB supervised the work and made valuable comments on the manuscript. All authors read and approved the final manuscript.

Correspondence to Kaiu Piipponen.

Cite this article: Piipponen, K., Martinkauppi, A., Korhonen, K. et al. The deeper the better? A thermogeological analysis of medium-deep borehole heat exchangers in low-enthalpy crystalline rocks. Geotherm Energy 10, 12 (2022). https://doi.org/10.1186/s40517-022-00221-7

Keywords: Medium-deep, Low enthalpy, Borehole heat exchanger, Crystalline rock, COMSOL Multiphysics
Mathematicians shocked(?) to find pattern in prime numbers There is an interesting recent article "Mathematicians shocked to find pattern in "random" prime numbers" in New Scientist. (Don't you love math titles in the popular press? Compare to the source paper's Unexpected Biases in the Distribution of Consecutive Primes.) To summarize, let $p,q$ be consecutive primes of form $a\pmod {10}$ and $b\pmod {10}$, respectively. In the paper by K. Soundararajan and R. Lemke Oliver, here is the number $N$ (in million units) of such pairs for the first hundred million primes modulo $10$, $$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline &a&b&\color{blue}N&&a&b&\color{blue}N&&a&b&\color{blue}N&&a&b&\color{blue}N\\ \hline &1&3&7.43&&3&7&7.04&&7&9&7.43&&9&1&7.99\\ &1&7&7.50&&3&9&7.50&&7&1&6.37&&9&3&6.37\\ &1&9&5.44&&3&1&6.01&&7&3&6.76&&9&7&6.01\\ &1&1&\color{brown}{4.62}&&3&3&\color{brown}{4.44}&&7&7&\color{brown}{4.44}&&9&9&\color{brown}{4.62}\\ \hline \text{Total}& & &24.99&& & &24.99&& & &25.00&& & &24.99\\ \hline \end{array}$$ As expected, each class $a$ has a total of $25$ million primes (after rounding). The "shocking" thing, according to the article, is that if the primes were truly random, then it is reasonable to expect that each subclass will have $\color{blue}{N=25/4 = 6.25}$. As the present data shows, this is apparently not the case. Argument: The disparity seems to make sense. For example, let $p=11$, so $a=1$ . Since $p,q$ are consecutive primes, then, of course, subsequent numbers are not chosen at random. Wouldn't it be more likely the next prime will end in the "closer" $3$ or $7$ such as $q=13$ or $q=17$, rather than looping back to the same end digit, like $q=31$? (I've taken the liberty of re-arranging the table to reflect this.) However, what is surprising is the article concludes, and I quote, "...as the primes stretch to infinity, they do eventually shake off the pattern and give the random distribution mathematicians are used to expecting." Question: What is an effective way to counter the argument given above and come up with the same conclusion as in the article? (Will all the $N$ eventually approach $N\to 6.25$, with the unit suitably adjusted?) Or is the conclusion based on a conjecture and may not be true? P.S: A more enlightening popular article "Mathematicians Discover Prime Conspiracy". (It turns out the same argument is mentioned there, but with a subtle way to address it.) number-theory prime-numbers modular-arithmetic congruences random Tito Piezas III Tito Piezas IIITito Piezas III $\begingroup$ I caught that sentence too and also wondered if it had actually been proved, and if so what the big deal is. Reminds me of the situation with Chebyshev's bias. Your argument for why the disparity exists for small primes makes sense to me, but I am curious how it shakes out quantitatively. I also read this article with an even catchier title. Excellent question and cheers. $\endgroup$ – Dan Brumleve Mar 15 '16 at 3:44 $\begingroup$ Here is a 2010 paper considering the same question mod $4$. I found it linked by a commenter in the Quanta article. 
$\endgroup$ – Dan Brumleve Mar 15 '16 at 4:12 $\begingroup$ Check also this article on Tao's blog: terrytao.wordpress.com/2016/03/14/… $\endgroup$ – PITTALUGA Mar 15 '16 at 12:54 $\begingroup$ Symmetry, at least when viewed as crosstabulated and then horizontally mirrored $$ \begin{array} {r|rrrr} & -1 & -3& 3&1 \\ \hline 1& 5.44& 7.50& 7.43& 4.62\\ 3& 7.50& 7.04& 4.44& 6.01\\ -3& 7.43& 4.44& 6.76& 6.37\\ -1& 4.62& 6.01& 6.37& 7.99 \end{array} $$ where the row- and coulmn-headers are 1,3,7,9 set as 1,3,-3,-1 (because being residues modulo 10) $\endgroup$ – Gottfried Helms Mar 15 '16 at 14:01 $\begingroup$ @achillehui: If we define $F(n) = \frac{10}{4 \log n}$, then, $$F(10^8) =0.135,\,F(10^{10}) =0.108,\,F(10^{12}) =0.090$$ rounded off. With increasing denominator, it's getting smaller and approaching $\frac{1}{16}=0.0625$. Ok, got it. $\endgroup$ – Tito Piezas III Mar 15 '16 at 15:04 $ \qquad \qquad $ Remark: see also [update 3] at end 1. First observations I think there is at least one artifact (=non-random) in that list of frequencies. If we rewrite this as a "correlation"-table, (the row-header indicate the residue classes of the smaller prime p and the column-header that of the larger prime q): $$ \small \begin{array} {r|rrrr} & 1&3&7&9 \\ \hline 1& 4.62& 7.43& 7.50& 5.44\\ 3& 6.01& 4.44& 7.04& 7.50\\ 7& 6.37& 6.76& 4.44& 7.43\\ 9& 7.99& 6.37& 6.01& 4.62 \end{array}$$ then a surprising observation is surely the striking symmetry around the antidiagonal. But also the asymmetric increase of frequencies from top-right to bottom-left on the antidiagonal is somehow surprising. However, if we look at this table in terms of primegaps, then residue-pairs $(1,1)$ $(3,3)$ $(7,7)$,$(9,9)$ (the diagonal) refer to primegaps of the lenghtes $(10,20,30,...,10k,...)$ and those are the entries in the table with lowest frequencies, residue-pairs $(1,3)$, $(7,9)$ and $(9,1)$ refer to primegaps of the lenghtes $(2,12,22,32,...,10k+2,...)$ and those contain the entry with the highest frequencies residue-pairs $(3,7)$ $(7,1)$ ,$9,3$ refer to primegaps of the lenghtes $(4,14,24,34,...,10k+4,...)$ residue-pairs $(1,7)$ $(3,9)$ and $(7,3)$ refer to primegaps of the lenghtes $(6,16,26,36,...,10k+6,...)$ and have the two next-largest frequencies residue-pairs $(1,9)$ $(3,1)$ and $(9,7)$ refer to primegaps of the lenghtes $(8,18,28,38,...,10k+8,...)$ so the -in the first view surprising- different frequencies of pairs $(1,9)$ and $(9,1)$ occurs because one collects the gaps of (minimal) length 8 and the other that of (minimal) length 2 - and the latter are much more frequent, but which is completely compatible with the general distribution of primegaps. The following images show the distribution of the primegaps modulo 100 (whose greater number of residue classes should make the problem more transparent). (I've left the primes smaller than 10 out of the computation): in logarithmic scale We see the clear logarithmic decrease of frequencies with a small jittering disturbance over the residue classes. It is also obvious, that the smaller primegaps dominate the larger ones, so that a "slot" which catches the primegaps of lengthes $2,12,22,...$ has more occurences than the "slot" which catches $8,18,28,...$ - just by the frequencies in the very first residue class. The original table of frequencies in the residue classes modulo 10 splits this into 16 combinations of pairs of 4 residue classes and the observed non-smoothness is due to that general jitter in the resdiue classes of the primegaps. 
It might also be interesting to see that primegap-frequencies separated into three subclasses - : That trisection shows the collected residue classes $6,12,18,...$ (the green line) as dominant over the two other collections and the two other collection change "priority" over the single residue classes. The modulo-10-problem overlays that curves a bit and irons the variation a bit out and even makes it a bit less visible - but not completely: because the general distribution of residue classes in the primegaps has such a strong dominance in the small residue-classes. So I think that general distribution-characteristic explains that modulo-10 problem, however a bit less obvious... 2. Further observations (update 2) For further analysis of the remaining jitter in the previous image I've tried to de-trend the frequencies distribution of the primegaps (however now without modulo considerations!). Here is what I got on base of 5 700 000 primes and the first 75 nonzero lenghtes g. The regression-formula was simply created by the Excel-spreadsheet: De-trending means to compute the difference between the true frequencies $\small f(g)$ and the estimated ones; however, the frequency-residuals $\small r_0(g)=f(g) - 16.015 e^{-0.068 g }$ decrease in absolute value with the value of g. Heuristically I applied a further detrending function at the residuals $\small r_0(g)$ so that I got $\small r_1(g) = r_0(g) \cdot 1.07^g $ which look now much better de-trended. This is the plot of the residuals $\small r_1(g)$: Now we see that periodic occurences of peaks in steps of 6 and even some apparent overlay. Thus I marked the small primefactors $\small (3,5,7,11)$ in g and we see a strong hint for a additive composition due to that primefactors in $g$ The red dots mark that g divisible by 3, green dots that by 5, and we see, that at g which are divisible by both the frequency is even increased. I've also tried a multiple regression using that small primefactors on that residuals, but this is still in process.... 3. observations after Regression/Detrending (update 3) Using multiple regression to detrend the frequencies of primegaps by their length g and additionally by the primefactors of g I got initially a strong surviving pattern with peaks for the primefactor 5. But those peaks could be explained by the observation, that (mod 100) there are 40 residues of primefactor p where the gaplength g=0 (mod 10) can occur, but only 30 residues where the other gaplengthes can occur. Thus I computed the relative (logarithmized) frequencies as $\text{fl}(g)=\ln(f(g)/m_p(g))$ where $f(g)$ is the frequency of that gaplength, and $m_p(g)$ the number of possible residue classes of the (first) prime p (in the pair (p,q) ) at where the gaplengthes g can occur. The first result is the following picture where only the general trend of decreasing of frequencies of larger gaps is detrended: This computation gives a residue $\text{res}_0$ which is the relative (logarithmized) frequency after the length of the primegap is held constant (see the equation in the picture). The regular pattern of peaks at 5-steps in the earlier pictures is now practically removed. However, there is still the pattern of 3-step which indicates the dominance of gaplength 6. I tried to remove now the primefactorization of g as additional predictors. 
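For anyone wanting to reproduce the raw gap-length counts behind these frequency plots, here is a rough Python sketch (sympy assumed; it is my own small-scale equivalent, using a much smaller prime range than the 5 700 000 primes mentioned above):

from collections import Counter
from sympy import primerange

# consecutive primes below 2e6, skipping the primes below 10 as above
primes = list(primerange(11, 2_000_000))
gaps = [q - p for p, q in zip(primes, primes[1:])]

gap_counts = Counter(gaps)                        # frequency of each gap length
mod100_counts = Counter(g % 100 for g in gaps)    # gap lengths collected modulo 100

print(gap_counts.most_common(10))                 # gap 6 is typically the most common
print(sorted(mod100_counts.items())[:10])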
I included marker variables for primefactors q from 3 to 29 into the multiple regression equation and the following picture shows the residues $\text{res}_1(g)$ after the systematic influence of the primefactorization of g is removed. This picture has besides a soft long hill-like trend no more -for me- visible systematic pattern, which would indicate non-random influences. (For me this is now enough, and I'll step out - but still curious whether there will come out more by someone else) Gottfried HelmsGottfried Helms $\begingroup$ +1 Gottfried: Good observations! I totally missed the other symmetries, though I did notice the $4.62,\, 4.44,\, 4.44,\, 4.62$ when I highlighted them. $\endgroup$ – Tito Piezas III Mar 15 '16 at 14:48 $\begingroup$ The lengths divisible by six are very strongly favored (in my plot of $\frac{\phi(n)}{n}$ those are the most prominent white lines). It looks like highly composite numbers are the most favorable distances between primes. In particular, products of two primes are avoided, which would disfavor small distances between primes in the same residue class. $\endgroup$ – Reikku Kulon Mar 20 '16 at 16:35 $\begingroup$ @Reikku : yes, the composite primegaps seem to be more frequent (see the red dots for divisibles by 6 - they are on top of the (rescaled) frequency list in the last image). What I was additionally trying was to find, whether there is some exact functional relation of the -scaled- frequencies and the primfactors in g (with or without multiplicity) - thus I tried multiple regression including primefactor markers. But my results are not yet convincing, at least not convincing me... I'll update my answer when I've more. $\endgroup$ – Gottfried Helms Mar 20 '16 at 20:54 $\begingroup$ Very good observations! Just out of curiosity, what program did you use to make those graphs? $\endgroup$ – KKZiomek Mar 8 '18 at 22:13 $\begingroup$ @KKZiomek : Pari/GP for the data and Excel for the pictures $\endgroup$ – Gottfried Helms Mar 8 '18 at 23:01 It seems unreasonable to expect the prime numbers to "know" which primes are adjacent. The consecutive prime bias must be a symptom of a more general phenomenon. Some experimentation shows that each prime seems to repel others in its residue class, over considerable distance. Fix a prime $p$ and select primes $q \gg p$. Let $r$ be a reasonably large radius, such as $p \log q$, and let $n$ range over the interval $(q - r, q + r)$. Ignoring $q$, $n$ seems to be prime less often for $n \equiv q \pmod p$ than for $n \not\equiv q \pmod p$. For example, with $p = 7$ and $q$ ranging from $7^7$ to $7^8$, these are the primes counted in each residue class (with many overlaps): $$ \small \begin{array} {r|rrrrrr} [q] & n \equiv +1 & +2 & \mathit{+3} & -3 & \mathit{-2} & \mathit{-1}\\ \hline +1 & 108980 & 128952 & 126384 & 127903 & 128088 & 126665\\ +2 & 128952 & 108641 & 128836 & 126463 & 127911 & 127999\\ \mathit{+3} & 126386 & 128838 & 108915 & 128655 & 126043 & 128555\\ -3 & 127904 & 126464 & 128655 & 108843 & 129049 & 126684\\ \mathit{-2} & 128087 & 127910 & 126040 & 129046 & 109062 & 129065\\ \mathit{-1} & 126665 & 128001 & 128553 & 126686 & 129068 & 109293\\ \end{array}$$ Italics indicate the quadratic nonresidues, which do not account for the smaller biases. 
The repulsion persists even for intervals $(q + p \log q, q + \sqrt{q})$, which gives 10-30 primes per residue class around each $q$: $$ \small \begin{array} {r|rrrrrr} [q] & n \equiv +1 & +2 & \mathit{+3} & -3 & \mathit{-2} & \mathit{-1}\\ \hline +1 & 1009455 & 1015043 & 1015079 & 1014692 & 1012735 & 1014648\\ +2 & 1010366 & 1006394 & 1015175 & 1012825 & 1014562 & 1011749\\ \mathit{+3} & 1014932 & 1010510 & 1008805 & 1014377 & 1015580 & 1017266\\ -3 & 1012473 & 1013167 & 1011447 & 1007058 & 1015711 & 1014626\\ \mathit{-2} & 1017126 & 1011133 & 1014870 & 1010336 & 1008950 & 1016188\\ \mathit{-1} & 1015821 & 1014746 & 1012491 & 1014960 & 1012051 & 1010063\\ \end{array}$$ Since there's nothing special about $p = 7$, the repulsion likely occurs for all $p$. This means that simply by determining $q$ to be prime, we learn something about many composite numbers in arbitrary residue classes, without locating any of them precisely. The following is a plot of $\frac{\phi(n)}{n}$ for odd $n$ with $p = 11, r = 2 \cdot 11^2$ (horizontal), in residue classes modulo $11^2$ (vertical), averaged over all intervals about $q \in (11^5, 11^6)$ and normalized, with $n \equiv q \pmod{11}$ in green, scaled to 2x2 tiles. Dark tiles rarely correspond to primes. First differences: Reikku KulonReikku Kulon $\begingroup$ Cortana found the "conspiracy" link two days ago, and since then I've been intermittently experimenting with gp while waiting for responses to this question. I hope something more precise can be said about the extra composites; a similar pattern arises in their prime factors. I'm also thinking about random prime models. A simple model produces a uniform distribution; what would replicate the pair bias? Could repulsion alone replicate the prime number distribution? $\endgroup$ – Reikku Kulon Mar 17 '16 at 23:49 $\begingroup$ The effect on semiprimes is very similar. 
$\endgroup$ – Reikku Kulon Mar 18 '16 at 1:55 $\begingroup$ Here are some average gaps between primes in each residue class ($p = 7$): $$ \small \begin{array} {r|rrrrrr} [q] & n \equiv +1 & +2 & \mathit{+3} & -3 & \mathit{-2} & \mathit{-1}\\ \hline +1 & 14.88 & 13.36 & 12.18 & 12.94 & 12.31 & 12.95\\ +2 & 11.80 & 14.88 & 13.41 & 12.06 & 12.88 & 12.27\\ \mathit{+3} & 12.91 & 11.80 & 14.88 & 13.36 & 12.19 & 12.95\\ -3 & 12.29 & 12.98 & 11.83 & 14.88 & 13.38 & 12.08\\ \mathit{-2} & 12.93 & 12.31 & 13.04 & 11.74 & 14.88 & 13.37\\ \mathit{-1} & 12.10 & 12.89 & 12.33 & 12.94 & 11.81 & 14.88\\ \end{array}$$ $\endgroup$ – Reikku Kulon Mar 18 '16 at 2:27 $\begingroup$ Primes counted in each residue class among the $p$ primes following each $q$: $$ \small \begin{array} {r|rrrrrr} [q] & n \equiv +1 & +2 & \mathit{+3} & -3 & \mathit{-2} & \mathit{-1}\\ \hline +1 & 56590 & 63948 & 68102 & 65538 & 68153 & 65210\\ +2 & 68271 & 56381 & 62178 & 67736 & 64531 & 67891\\ \mathit{+3} & 63872 & 69961 & 56742 & 63525 & 67531 & 65826\\ -3 & 67332 & 63757 & 68853 & 56711 & 62336 & 68272\\ \mathit{-2} & 64741 & 68415 & 63926 & 70040 & 56689 & 63793\\ \mathit{-1} & 66735 & 64530 & 67650 & 63706 & 68373 & 56708\\ \end{array}$$ $\endgroup$ – Reikku Kulon Mar 18 '16 at 2:43 $\begingroup$ Residue classes of distinct prime factors in the composite halfway between $q$ and next congruent prime (note column minima): $$ \small \begin{array} {r|rrr} [q] & n \equiv +1 & +2 & \mathit{+3} & -3 & \mathit{-2} & \mathit{-1}\\ \hline +1 & 14385 & 15347 & 17920 & 19657 & 18259 & 18045\\ +2 & 15861 & 15018 & 19430 & 18436 & 16952 & 17823\\ \mathit{+3} & 16229 & 15374 & 17572 & 18714 & 17928 & 17759\\ -3 & 14505 & 16525 & 18246 & 17891 & 16710 & 19531\\ \mathit{-2} & 14558 & 15525 & 17835 & 20112 & 16424 & 18886\\ \mathit{-1} & 15101 & 17057 & 19163 & 18446 & 16571 & 17304\\ \end{array}$$ $\endgroup$ – Reikku Kulon Mar 18 '16 at 3:20 If I have read the New Scientist article correctly, the so-called "discrepancy" is: If a prime ends in 1 (as in the first class), the observed probability that the next prime also end in 1 is not 1/4. This observation can be explained by elementary probability with the assumption that the classes are indeed truly random. I was also troubled by this and I asked it in the math overflow forum: https://mathoverflow.net/questions/234753/article-in-the-new-scientist-on-last-number-of-prime-number. Let me restate the argument. Let's write the sequence of all numbers ending with 1, 3, 7, 9 (beginning with 7): 7, 9, 11, 13, 17, 19,... and flag each number with a probability of $p$. Let's denote $q=1−p$. Now if a number ending in 1 has been flagged, the probability that the next number being flagged ends in 1 can easy be computed: $\sum_{k=0}^\infty q^{3+4k}p=q^3p\frac{1}{1-q^4}$. That's not 25%! In order to make that more intuitive, suppose that p is close to 1 that is we flag each number with high probability (but nevertheless randomly). If we have flagged a number ending in 1, the probability that the next number being flagged ends in 1 is very small, because we can expect that at least one of the three following numbers (ending in 3, 7, 9) will be flagged (recall that we have expected p being close to 1). Now this model is oversimplificated. The probability that a random number $n$ is prime can be evaluated as $1/ln(n)$ (not as a constant $p$) by the prime counting function. If we know that the number ends in $1, 3, 7, 9$; this probability becomes $\frac{10}{4}\frac{1}{ln(n)}$ (assuming the classes are random). 
Because the sequence $q^{3+4k}p$ tends to zero rapidly for $k\rightarrow\infty$, if a number $n$ ending in 1 has been flagged, the probability that the next number being flagged also ends in 1 can be evaluated as $q_n^3p_n\frac{1}{1-q_n^4}$ with $p_n=\frac{10}{4}\frac{1}{\ln(n)}$ and $q_n=1-p_n$. For $n=100\cdot10^6$, we find 19.8%, which is not much different from the number cited in the article (bear in mind the simplification in my argument; also, the article seems to make the experiment for a random number between 1 and $10\cdot10^6$, not just a number which is approximately equal to $10\cdot10^6$). Moreover, as $n\rightarrow\infty$, $p_n\rightarrow 0$ and we can check that $\lim_{p\rightarrow 0}q^3p\frac{1}{1-q^4}=\frac{1}{4}$, which seems to explain why the "discrepancy" vanishes when we take a longer sequence. There are a lot of mysteries concerning prime numbers, but it seems that the one pointed out by the New Scientist is nothing more than a misconception about elementary probability. olivier $\begingroup$ The article is not overly precise to say the least, there are better popular expositions. A thing to note is the quote: ' "In ignorance, we thought things would be roughly equal," says Andrew Granville of the University of Montreal, Canada.' Pay attention to the 'roughly.' What you sketch is the 'roughly equal' but the bias is stronger than this and this is the news. I feel this popular article does a better job at explaining. Read it to the end; the start can make one jump to conclusions as displayed here. $\endgroup$ – quid♦ Mar 29 '16 at 17:29 $\begingroup$ @Olivier: I have deleted an unnecessary sentence at the very start of your post that could have led to confusion. Also, for readability, your long post has been divided into paragraphs. $\endgroup$ – Tito Piezas III Mar 29 '16 at 18:21 $\begingroup$ @quid. The article you cite makes much more sense than the New Scientist article; it does indeed mention the explanation I give above (without giving much detail) and says that it is still unsatisfactory. Clearly the New Scientist article is badly written; there is no reasonable way to expect 25% as the answer to the question posed by the New Scientist article. $\endgroup$ – olivier Mar 29 '16 at 18:51 $\begingroup$ @quid: I've edited the post to include a link to that article which was also recommended by Dan Brumleve. Yes, it has a subtle way to address the argument I raised. $\endgroup$ – Tito Piezas III Mar 29 '16 at 20:19 $\begingroup$ The difficulty in applying this probabilistic argument (which is similar to Adam Bailey's earlier comment) is the assumption that consecutive prime numbers can be accurately modeled as independent events. The observed bias demonstrates that, at least for "small" primes, the dependencies are less intuitive than previously thought. Cramér's model predicts the primes should be uniformly distributed among residue classes over reasonably small intervals, but the primes are found to behave as if each prime in a chosen class depletes that class over a nearby interval. $\endgroup$ – Reikku Kulon Mar 30 '16 at 4:46
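For reference, the numbers quoted in olivier's answer can be reproduced in a few lines. This is a sketch of the simplified independent-flagging model only, with $p_n = \frac{10}{4}\frac{1}{\ln n}$ as the flagging probability; it is not a statement about the actual primes.

```python
from math import log

def prob_same_last_digit(n):
    """P(next flagged number ends in the same digit), in the simplified model."""
    p = (10 / 4) / log(n)   # approximate chance that a number ending in 1,3,7,9 near n is prime
    q = 1 - p
    return q**3 * p / (1 - q**4)

print(prob_same_last_digit(100e6))   # about 0.198, the 19.8% quoted above
print(prob_same_last_digit(1e100))   # approaches 1/4 as n grows
```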
I already sent it around at work, but you might like it too. NPR piece on Donald Knuth Mostly I think of Knuth in association with TeX, the mark-up language for mathematical notation, but I see the occasional piece by or about him. Often his stuff has a context we lay folk understand, so he stands out for that; I can sometimes relate not only to his desire for aesthetically pleasing typesetting, but also to the curiosity about everyday stuff behind some of the problems he looks into. Here's a funny one that came from his musings in the restrooms at Stanford: MR0761401 (86a:05006) The toilet paper problem. Amer. Math. Monthly 91 (1984), no. 8, 465--470. The toilet paper dispensers are designed to hold two rolls of tissues, and a person can use either roll. There are two kinds of users. A big chooser always takes a piece from the roll that is currently larger, while a little chooser does the opposite. When the two rolls are the same size, or when only one is nonempty, everybody chooses the nearest nonempty roll. Assume that people enter the toilet stalls independently at random, with probability $p$ that they are big choosers and probability $q=1-p$ that they are little choosers. If two fresh rolls of toilet paper, both of length $n$[copy eds---and ALG---missed the comma that should have gone here!] are installed, let $M_n(p)$ be the average number of portions left on one roll when the other one first empties. The purpose of this paper is to study the asymptotic value of $M_n(p)$ for fixed $p$ as $n\to\infty$. Let $M(z)=\sum_{n\ge1}M_n(p)z^n$, and $C(z)=\sum_{n\ge1}c_nz^n$, $c_n={2n-2\choose n-1}/n$ (Catalan numbers) be the generating functions. It is proved that $M(z)=(z/(1-z)^2)((q-C(pqz))/q)$. Let $r$ be any value greater than $4pq$; then $M_n(p)=p/(p-q)+O(r^n)$ if $q<p$, $M_n(p)=((q-p)/q)n+p/(q-p)+O(r^n)$ if $q>p$, and $M_n({1\over 2})=2\sqrt{n/\pi}-\frac 14\sqrt{1/\pi n}+O(n^{-3/2})$. S'okay, when it turns into what is apparently a version of Banach's matchbook problem, most of us just toss our hands up in the air & let it go, but it's fun anyway. Some part of me still likes a puzzle, especially when it has that for-its-own-sake purity that seems in part to embody a childlike innocence in its pursuit of joyous cognitive play. From the story this morning: "He wears his bike helmet indoors because, well, he's going to have to put it back on a little later anyway." Not entirely unlike my personal resistance to, say, closing kitchen cabinets I'm just going to be opening again soon enough. (For the record, I've been much better at closing them lately than I was in the first months after H's departure. I really do like the feel of the kitchen better when it's orderly, yet it's an ongoing struggle---though prob'bly not so much against the mistaken instinct to save energy at all costs as against that old deleteriously rebellious reaction against housekeeping as oppression.) Knuth: "There's ways to amuse yourself [chuckle] while you're doing things, and that's the way I look at efficiency: it's an amusing thing to think about, but not that I, that I'm obsessed that it's got to be efficient, you know, or else I, uh, go crazy." He makes it sound easy to strike a balance there, doesn't he.
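Out of curiosity, the asymptotics in that review are easy to poke at with a quick simulation. Here is a minimal Python sketch of my own (the tie-breaking rule is reduced to "take from the first roll", and the roll length and trial count are arbitrary choices, not from the paper):

```python
import random

def toilet_paper(n, p, trials=20000):
    """Monte Carlo estimate of M_n(p): average portions left on one roll
    when the other roll first empties, with big choosers arriving with probability p."""
    total = 0
    for _ in range(trials):
        a = b = n
        while a > 0 and b > 0:
            if random.random() < p:          # big chooser: use the larger roll
                if a >= b: a -= 1
                else:      b -= 1
            else:                            # little chooser: use the smaller roll
                if a <= b: a -= 1
                else:      b -= 1
        total += a + b                       # whatever is left on the surviving roll
    return total / trials

print(toilet_paper(200, 0.6))   # p > q: should hover near p/(p-q) = 3
print(toilet_paper(200, 0.4))   # p < q: roughly ((q-p)/q)*n + p/(q-p), about 68.7 here
```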
Solving Inequalities

What is an inequality?

In mathematics, inequality is defined as the relationship between any two numbers or algebraic expressions that are not equal. Inequalities can be viewed either as a mathematical question that can be manipulated to find the value of some variables, or as a statement of fact in the form of theorems. We use different mathematical terms and symbols to compare numbers and expressions and so show inequality. The table below shows the four mathematical symbols we use in comparing and showing inequality of numbers or mathematical expressions.

INEQUALITY SYMBOL | MEANING
> | greater than
< | less than
≥ | greater than or equal to
≤ | less than or equal to

What are the rules of inequality?

There are certain rules that we need to follow when solving inequalities. In this section, we will try to understand all the rules of inequality.

Law of Trichotomy. The law of trichotomy on the real line states that for any real numbers a and b, exactly one of 𝑎 < 𝑏, 𝑎 = 𝑏, and 𝑎 > 𝑏 is true. Say we have the statements 7 < 9, 7 = 9, 7 > 9; only one of them is true. Since we know that 9 is larger than 7, we can say that the only true statement is 7 < 9.

Converse Property. The converse property of inequality states that < and >, and ≤ and ≥, are each other's converses, respectively. This means that, for any real numbers a and b, 𝑎 < 𝑏 and 𝑏 > 𝑎 are equivalent, as are 𝑎 ≤ 𝑏 and 𝑏 ≥ 𝑎; or we can simply write: 𝑎 < 𝑏 ↔ 𝑏 > 𝑎 and 𝑎 ≤ 𝑏 ↔ 𝑏 ≥ 𝑎. For example, we have 6 < 7. By the converse property of inequality, 6 < 7 is the same as 7 > 6.

Transitive Property. For any real numbers a, b, and c: if 𝑎 < 𝑏 and 𝑏 < 𝑐, then 𝑎 < 𝑐; if 𝑎 > 𝑏 and 𝑏 > 𝑐, then 𝑎 > 𝑐. Suppose 14 > 12 and 12 > 6. By the transitive property of inequality, 14 > 6.

Addition Property. In the addition property of inequality, if a common constant term c is added to both sides of an inequality then, for any real numbers a, b, and c: if 𝑎 < 𝑏, then 𝑎 + 𝑐 < 𝑏 + 𝑐; if 𝑎 ≤ 𝑏, then 𝑎 + 𝑐 ≤ 𝑏 + 𝑐. If we add 15 to the inequality 10 > 5, using the addition property of inequality it follows that 10 + 15 > 5 + 15. The rule holds equally for > and ≥.

Subtraction Property. The subtraction property of inequality states that if a common constant term c is subtracted from both sides of an inequality then, for any real numbers a, b, and c: if 𝑎 < 𝑏, then 𝑎 − 𝑐 < 𝑏 − 𝑐; if 𝑎 ≤ 𝑏, then 𝑎 − 𝑐 ≤ 𝑏 − 𝑐. Suppose we have the inequality 20 > 14 and we are asked to subtract 7. By the subtraction property of inequality, 20 – 7 > 14 – 7, i.e. 13 > 7.

Multiplication Property. For any real numbers a, b, and c ≠ 0: if 𝑎 < 𝑏 and 𝑐 > 0, then 𝑎𝑐 < 𝑏𝑐; if 𝑎 < 𝑏 and 𝑐 < 0, then 𝑎𝑐 > 𝑏𝑐; if 𝑎 ≤ 𝑏 and 𝑐 > 0, then 𝑎𝑐 ≤ 𝑏𝑐; if 𝑎 ≤ 𝑏 and 𝑐 < 0, then 𝑎𝑐 ≥ 𝑏𝑐. For example, suppose we are asked to multiply the inequality 2 < 4 by 𝑐 = 8. By the multiplication property of inequality, 2 x 8 < 4 x 8, i.e. 16 < 32. Another example is when 5 < 7 and 𝑐 = −2.
If we multiply the inequality 5 < 7 by −2, it follows that 5 x (−2) > 7 x (−2), i.e. −10 > −14. Always remember that the inequality symbol needs to be reversed every time we multiply an inequality by a negative number.

Division Property. For any real numbers a, b, and c ≠ 0: if 𝑎 < 𝑏 and 𝑐 > 0, then $\frac{a}{c}$<$\frac{b}{c}$; if 𝑎 < 𝑏 and 𝑐 < 0, then $\frac{a}{c}$>$\frac{b}{c}$. If we divide the inequality 20 < 30 by c = 2 then, by the division property of inequality, $\frac{20}{2}$ < $\frac{30}{2}$, i.e. 10 < 15.

Additive Inverse Property. The additive inverse property of inequality states that for any real numbers a and b: if 𝑎 < 𝑏, then −𝑎 > −𝑏; if 𝑎 ≤ 𝑏, then −𝑎 ≥ −𝑏.

Multiplicative Inverse Property. The multiplicative inverse property of inequality states that for any real numbers a and b that are both positive or both negative: if a<b, then $\frac{1}{a}$>$\frac{1}{b}$; if a>b, then $\frac{1}{a}$<$\frac{1}{b}$.

How to write inequalities in interval notation?

The following considerations must be kept in mind while writing the solutions of an inequality in interval notation. If the intervals of the solution use < or >, use open parentheses '(' or ')'. If the intervals of the solution use ≤ or ≥, use closed brackets '[' or ']'. Always use an open parenthesis to represent ∞ or −∞. Let's take a look at some of the examples below.

INEQUALITY | INTERVAL NOTATION
x < 5 | (-∞, 5)
x > 5 | (5, ∞)
x ≤ 5 | (-∞, 5]
x ≥ 5 | [5, ∞)
1 < x ≤ 5 | (1, 5]

How to graph inequalities on a number line?

When graphing inequalities on a number line, we need to look at the inequality symbol used, as that gives us a hint as to which way the number line is leading.

> : To graph an inequality having the greater than symbol, use an open circle to mark the starting value and point the arrow towards positive infinity.
< : To graph an inequality having the less than symbol, use an open circle to mark the starting value and point the arrow towards negative infinity.
≥ : To graph an inequality with greater than or equal to, use a closed circle to mark the starting value and point the arrow towards positive infinity.
≤ : To graph an inequality with less than or equal to, use a closed circle to mark the starting value and point the arrow towards negative infinity.

How to solve inequalities?

To find the solutions of any inequality, you may follow the steps below: Solve for the value of the variable/s using the rules of inequality. Represent all the values on a number line. Represent included and excluded values by using closed and open circles, respectively. Identify the intervals. Double-check the intervals by picking a random number from each interval and substituting it into the inequality.

Determine the solutions of the inequality x + 5 < 13.
x + 5 < 13 : Write the inequality and observe which rules we need to apply in order to find the value of x.
x + 5 – 5 < 13 – 5 : In order to get the value of x, we need to remove 5 from the left-hand side. Hence, subtract 5 from both sides of the inequality.
x < 8 : Simplify.
Therefore, the solution of the inequality x + 5 < 13 is x < 8.

Find the solution of the inequality 6x – 7 > 3x + 2.
6x – 7 > 3x + 2 : Write the inequality and observe which rules we need to apply in order to find the value of x.
6x – 3x – 7 > 3x – 3x + 2 : Put all the x terms on the left-hand side of the inequality by subtracting 3x from both sides.
3x – 7 > 2 : Simplify.
3x – 7 + 7 > 2 + 7 : Remove the constant on the left-hand side using the addition property of inequality.
3x > 9 : Simplify.
$\frac{3x}{3}$ > $\frac{9}{3}$ : Find the value of x by dividing both sides by 3.
x > 3 : Simplify.
Therefore, the solution of the inequality 6x – 7 > 3x + 2 is x > 3.

How to solve linear inequalities?

A linear inequality is like a linear equation, except that the inequality sign replaces the equality sign. Hence, solving linear inequalities is almost the same as solving linear equations.

Solving one-step linear inequalities. Suppose we have the inequality 6x > 24. We can find the solution of this inequality in a single step: divide both sides of the inequality by 6. Thus, we get x > 4. Therefore, the solution of the inequality is x > 4, or, in interval notation, (4, ∞).

Using interval notation, what is the solution of the inequality 5x ≤ 70?
5x ≤ 70 : Write the inequality and observe which rules we need to apply in order to find the value of x.
$\frac{1}{5}$ ∙ 5x ≤ 70 ∙ $\frac{1}{5}$ : Multiply both sides of the inequality by $\frac{1}{5}$.
x ≤ 14 : Simplify.
Using interval notation, the solution of the inequality can be represented by (-∞, 14].

Solving two-step linear inequalities. Consider the inequality 8x – 11 < 5. To get the solution of this inequality, we need to work it out in two steps. The first step is to add 11 to both sides of the inequality; this gives 8x < 16. Then we divide both sides of the inequality by 8, which results in x < 2. Using interval notation, we can write the solution of 8x – 11 < 5 as (-∞, 2).

What is the solution of the inequality 4x – 17 ≥ 23?
4x – 17 ≥ 23 : Write the inequality and observe which rules we need to apply in order to find the value of x.
4x – 17 + 17 ≥ 23 + 17 : Add 17 to the left- and right-hand sides.
4x ≥ 40 : Simplify.
$\frac{1}{4}$ ∙ 4x ≥ 40 ∙ $\frac{1}{4}$ : Multiply both sides by $\frac{1}{4}$.
x ≥ 10 : Simplify.
Using interval notation, the solution of the inequality can be represented by [10, ∞).

Solving compound inequalities. A compound inequality is a group of two or more inequalities joined by either "and" or "or". To solve inequalities of this kind, we work on each part independently and then find the final solution according to these rules: if the inequalities have "and" between them, the final solution is the intersection of the solutions of the two inequalities; if the inequalities have "or" between them, the final solution is the union of the solutions of the two inequalities.

Find the solution of the compound inequality 3x + 7 > 28 and 5x ≤ 80.
3x + 7 > 28 : Work on the first given inequality.
3x + 7 – 7 > 28 – 7 : Subtract 7 from both sides of the inequality.
3x > 21 : Simplify.
$\frac{1}{3}$ ∙ 3x > 21 ∙ $\frac{1}{3}$ : Multiply both sides of the inequality by $\frac{1}{3}$.
x > 7 : Simplify.
5x ≤ 80 : Work on the second inequality.
$\frac{1}{5}$ ∙ 5x ≤ 80 ∙ $\frac{1}{5}$ : Multiply both sides of the inequality by $\frac{1}{5}$.
x ≤ 16 : Simplify.
x > 7 : (7, ∞) and x ≤ 16 : (-∞, 16] : Represent the solutions of the two inequalities using interval notation.
7 < x ≤ 16 : (7, 16] : Take the intersection of the solutions of the compound inequality.
Therefore, the solution of the given compound inequality is 7 < x ≤ 16, or (7, 16] using interval notation.

What is the solution of the compound inequality 4x + 9 ≤ 29 or 9x + 27 > 108?
4x + 9 ≤ 29 : Work on the first given inequality.
4x + 9 – 9 ≤ 29 – 9 : Subtract 9 from both sides of the inequality.
4x ≤ 20 : Simplify.
$\frac{1}{4}$ ∙ 4x ≤ 20 ∙ $\frac{1}{4}$ : Multiply both sides by $\frac{1}{4}$.
x ≤ 5 : Simplify.
9x + 27 > 108 : Work on the second inequality.
9x + 27 – 27 > 108 – 27 : Subtract 27 from both sides.
9x > 81 : Simplify.
$\frac{1}{9}$ ∙ 9x > 81 ∙ $\frac{1}{9}$ : Multiply both sides by $\frac{1}{9}$.
x > 9 : Simplify.
x ≤ 5 : (-∞, 5] and x > 9 : (9, ∞) : Represent the solutions of the two inequalities using interval notation.
x ≤ 5 or x > 9 : (-∞, 5] ∪ (9, ∞) : Take the union of the solutions of the compound inequality.
Therefore, the solution of the given compound inequality is x ≤ 5 or x > 9, or (-∞, 5] ∪ (9, ∞) using interval notation.

How to solve quadratic inequalities?

A quadratic inequality is a mathematical statement involving a quadratic expression and inequality symbols. To solve quadratic inequalities, these are some of the things that we should do: Re-write the inequality as an equation. Determine the values of x. Represent the solutions using intervals. Check whether the inequality is true for each interval by taking any number from that interval. If the inequality is true for a number in an interval, then that interval belongs to the solution of the quadratic inequality.

What is the solution of the quadratic inequality x² – x – 12 ≤ 0?
Step 1: Re-write the inequality as a quadratic equation. Hence, x² – x – 12 = 0.
Step 2: Using the rules for finding the solutions of a quadratic equation, we can factor the quadratic equation as (x + 3)(x – 4) = 0.
Step 3: The values of x are x = -3 and x = 4.
Step 4: Make a table that represents the solutions of x using interval notation, and check whether a random number from each interval makes the inequality true.

INTERVAL NOTATION | RANDOM NUMBER | SUBSTITUTE X INTO THE INEQUALITY x² – x – 12 ≤ 0
(-∞, -3] | x = -4 | (-4)² – (-4) – 12 ≤ 0 ; 16 + 4 – 12 ≤ 0 ; 20 – 12 ≤ 0 ; 8 ≤ 0; false
[-3, 4] | x = 0 | 0² – 0 – 12 ≤ 0 ; -12 ≤ 0; true
[4, ∞) | x = 5 | 5² – 5 – 12 ≤ 0 ; 25 – 5 – 12 ≤ 0 ; 8 ≤ 0; false

Therefore, the solution of the quadratic inequality x² – x – 12 ≤ 0 is [-3, 4].
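The worked examples above can also be checked with a computer algebra system. Here is a minimal sketch using Python's sympy library (the choice of tool is ours, not part of the original page; the printed forms may differ slightly in style from the intervals written above):

```python
from sympy import symbols, solve_univariate_inequality, reduce_inequalities

x = symbols('x', real=True)

# two-step linear inequalities from the examples above
print(solve_univariate_inequality(6*x - 7 > 3*x + 2, x))    # x > 3
print(solve_univariate_inequality(4*x - 17 >= 23, x))       # x >= 10

# compound inequality: 3x + 7 > 28 and 5x <= 80
print(reduce_inequalities([3*x + 7 > 28, 5*x <= 80], x))    # 7 < x <= 16

# quadratic inequality: x^2 - x - 12 <= 0
print(solve_univariate_inequality(x**2 - x - 12 <= 0, x))   # -3 <= x <= 4
```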
Eur. Phys. J. C, April 2013, 73:2393

Δr in the Two-Higgs-Doublet Model at full one loop level—and beyond

David López-Val, Joan Solà

After the recent discovery of a Higgs-like boson particle at the CERN LHC-collider, it becomes more necessary than ever to prepare ourselves for identifying its standard or non-standard nature. The fundamental parameter Δr, relating the values of the electroweak gauge boson masses and the Fermi constant, is the traditional observable encoding high precision information of the quantum effects. In this work we present a complete quantitative study of Δr in the framework of the general Two-Higgs-Doublet Model (2HDM). While the one-loop analysis of Δr in this model was carried out long ago, in the first part of our work we consistently incorporate the higher order effects that have been computed since then for the SM part of Δr. Within the on-shell scheme, we find typical corrections leading to shifts of ∼20–40 MeV on the W mass, resulting in a better agreement with its experimentally measured value and in a degree no less significant than in the MSSM case. In the second part of our study we devise a set of effective couplings that capture the dominant higher order genuine 2HDM quantum effects on the δρ part of Δr in the limit of large Higgs boson self-interactions. This limit constitutes a telltale property of the general 2HDM which is unmatched by e.g. the MSSM.

Keywords: Higgs Boson; Gauge Boson; Minimal Supersymmetric Standard Model; Higgs Sector; Higgs Boson Masses

The authors are very grateful to Wolfgang Hollik for enlightening conversations on this topic and also for providing useful references. The work of J.S. has been supported in part by the research Grant PA-2010-20807; by the Consolider CPAN project; and also by DIUE/CUR Generalitat de Catalunya under project 2009SGR502.

For the sake of completeness, we provide herewith a more detailed analytical account on selected aspects of our calculation. All UV divergences we handle by means of conventional dimensional regularization in the 't Hooft–Veltman scheme, setting the number of dimensions to d=4−ε. As usual, we introduce an (arbitrary) mass scale μ in front of the loop integrals in order not to alter the dimension of the result in d dimensions with respect to d=4. After renormalization (in the on-shell scheme, in our case) the results for the physical quantities are finite in the limit d→4. Furthermore, in the practical aspect of the calculation all one-loop structures are reduced in terms of standard Passarino–Veltman coefficients in the conventions of Refs. [178, 179].

•One-loop functions at zero momentum: the one-loop vacuum integrals that enter the evaluation of the parameter δρ, which is built upon the weak gauge boson self-energies at vanishing momenta, cf. Eq. (8), read as follows [equation image not reproduced in this extract], where $\Delta_{\epsilon}=2/\epsilon+1-\gamma_{E}+\log(4\pi\mu^{2})$ and the function F(x,y) is defined as follows: $ F(x,y) = F(y,x) = \begin{cases} \frac{x+y}{2}- \frac{xy}{x-y} \log(\frac{x}{y}) &x \neq y, \\ 0 & x=y. \end{cases} $ The tilded notation for the Passarino–Veltman functions indicates that these integrals are evaluated at zero momentum. The parameters A, B can be identified with the squared masses of the virtual particles propagating in the loop, $A\equiv m_{1}^{2}$, $B\equiv m_{2}^{2}$. With these expressions at hand, it is straightforward to write down a compact analytical form for $\delta\rho_{2\mathrm{HDM}}$ at one loop in the 't Hooft–Feynman gauge, starting from the definition of Eq.
(8): From the above equation we can explicitly read off how the size of δρ depends on the mass splitting between the different Higgs bosons, as well as on the strength of the Higgs/gauge boson couplings—which is modulated by tanβ and the mixing angle α. The first two lines of the full expression (52) is the part that we have denoted \(\delta\rho_{2\mathrm{HDM}}^{*}\) in Sect. 2.4, see Eq. (15). We remark that for \(M_{\mathrm{A}^{0}}\to M_{\mathrm{H}^{\pm}}\) and |β−α|→π/2 (in which the h 0 field behaves SM-like) the full δρ 2HDM→0. This is the precise formulation of the decoupling regime for the unconstrained 2HDM. In the case of the SM the Higgs contribution to the δρ-parameter (8) is not finite if taken in an isolated form. The complete bosonic contribution to Δr is of course finite and gauge invariant, and therefore unambiguous. To define a Higgs part of it is then a bit a matter of convention. What is important is that the complete M H-dependence is exhibited correctly and coincides in all conventions. After removing the UV-parts which cancel against other bosonic contributions one arrives at The explicit dependence on the scale μ is unavoidable in quantities which are not UV-finite by themselves. It is, however, natural to set e.g. the EW scale choice μ=M W. In the limit \(M_{\mathrm{H}}^{2}\gg M_{\text{W}}^{2}\) we can see Veltman's screening theorem at work in the SM, as there remain no \(M_{\mathrm{H}}^{2}\) terms but a logarithmic Higgs mass dependence. Indeed, in that limit the expression (53) reduces to $$ \delta\rho^{H}_\mathrm{ SM}\simeq- \frac{3\sqrt{2} G_F M_{\text {W}}^2}{16 \pi^2} \frac{s_{\text{W}}^2}{c_{\text{W}}^2} \biggl\{ \ln\frac{M_\mathrm{H}^2}{M_\mathrm{W}^2}-\frac{5}{6} \biggr\} , $$ which coincides with the result quoted in Eq. (12) of Sect. 2.3. The SM Higgs boson contribution to δρ can also be retrieved from the 2HDM result (52) by selecting the h0 parts of the contributions involving the h0 and the gauge bosons, namely in the last line of that equation. By performing the identification H≡h0 and removing the trigonometric factors we are led to We see that the last expression coincides with Eq. (53) up to finite additive parts, which of course reflects the arbitrariness of the scale setting μ. As we said, this is not important because the full bosonic contribution to Δr is finite and unambiguous. The fact that we can recover the SM result from (52) in such a way suggests that the expression in the first line of (55) should be subtracted from (52) in order to compute the genuine 2HDM effects on δρ, i.e. the Higgs boson quantum effects beyond those associated to the Higgs sector of the SM. This is in fact the practical recipe that we follow in this paper. Finally, let us notice that the \(\delta\rho_{2\mathrm{HDM}}^{*}\) part of (52), i.e. the one which is completely unrelated to the SM Higgs contribution, is precisely the part of the full δρ 2HDM that violates the screening theorem in the 2HDM, as is manifest from Eq. (15) of Sect. 2.4. •2HDM contributions to the gauge boson self-energies We quote herewith their complete analytical form, in terms of the standard Passarino–Veltman coefficients and following the conventions of Ref. [178, 179]. The self-energies are evaluated for on-shell gauge bosons, e.g. \(p^{2} = M^{2}_{V} [V = \mbox{W}^{\pm}, \mathrm {Z}^{0}]\), in the way they enter the calculation of Δr. 
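As a rough numerical illustration of the quantities defined above, the zero-momentum function F(x,y) and the heavy-Higgs limit (54) are easy to evaluate. The following minimal Python sketch is ours, not part of the paper, and uses representative on-shell inputs (M_W ≈ 80.4 GeV, M_Z ≈ 91.19 GeV, G_F ≈ 1.166×10⁻⁵ GeV⁻²) rather than the exact values adopted in the article:

```python
import math

# representative electroweak inputs (ours, not the paper's exact values)
MW, MZ, GF = 80.4, 91.19, 1.1664e-5        # GeV, GeV, GeV^-2
sw2 = 1.0 - MW**2 / MZ**2                   # on-shell sin^2(theta_W)
cw2 = 1.0 - sw2

def F(x, y):
    """Zero-momentum one-loop function F(x, y); x and y are squared masses."""
    if x == y:
        return 0.0
    return (x + y) / 2.0 - x * y / (x - y) * math.log(x / y)

def delta_rho_sm_higgs(MH):
    """Heavy-Higgs limit of the SM Higgs contribution to delta-rho, Eq. (54)."""
    pref = 3.0 * math.sqrt(2.0) * GF * MW**2 / (16.0 * math.pi**2)
    return -pref * (sw2 / cw2) * (math.log(MH**2 / MW**2) - 5.0 / 6.0)

print(F(125.0**2, 80.4**2))                 # vanishes for equal masses, grows with the splitting
for MH in (300.0, 700.0, 1000.0):
    print(MH, delta_rho_sm_higgs(MH))       # small negative values, growing only logarithmically with MH
```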
Two Higgs-boson contributions: Higgs/gauge boson and Higgs/Goldstone boson contributions: Let us notice that, in the last two expressions, we have explicitly removed the overlap with the SM Higgs boson contribution, to wit: •Effective Higgs/gauge boson interactions To better illustrate how we build up in practice the effective Higgs/gauge boson couplings employed in this study, herewith we provide explicit analytical details for the construction of one of such Born-improved interactions. We carry out the calculation with the help of the standard algebraic packages FeynArts and FormCalc [156, 178, 179]. Without loss of generality, let us take the concrete case of the Z boson coupling to the \(\mathcal{CP}\)-odd and the light \(\mathcal{CP}\)-even neutral Higgs bosons \([g_{\mathrm{h}^{0}\mathrm{A}^{0}\mathrm{Z}^{0}}]\). A sample of the Feynman diagrams describing the \(\mathcal{O}(\lambda^{2}_{5})\) corrections to this coupling is displayed in the upper row of Fig. 8. The general structure of the associated form factor \(a_{\mathrm{h}^{0}\mathrm{A}^{0}\mathrm{Z}^{0}}\) may be cast as: Notice that we define our form factors to be real, in order to preserve the hermiticity of the Born-improved Lagrangians derived from them. The different building blocks of Eq. (61) correspond to: \(V_{\mathrm{h}^{0}\mathrm{A}^{0}\mathrm{Z}^{0}}\), the genuine vertex corrections (cf. e.g. the first two diagrams in the upper row of Fig. 8). For illustration purposes, we provide its complete analytical form: The wave-function corrections associated to each of the external Higgs boson legs (including, as we single out in the last term of Eq. (61), the h0–H0 mixing one-loop diagrams): In the last equation, the h0–H0 mixing self-energy \(\hat{\varSigma}_{\mathrm{h}^{0}\mathrm{H}^{0}}(q^{2}) \allowbreak = \varSigma _{\mathrm{h}^{0}\mathrm{H}^{0}}(q^{2}) + \delta Z_{\mathrm{h}^{0}\mathrm{H}^{0}}(q^{2}-M^{2}_{\mathrm{h}^{0}})/2 + \delta Z_{\mathrm{h}^{0}\mathrm{H}^{0}}(q^{2}-M^{2}_{\mathrm{H}^{0}})/2 - \delta M^{2}_{\mathrm{h}^{0}\mathrm{H}^{0}}\) involves the renormalization of the mixing angle α, which we anchor via the relation \(\operatorname{Re}\hat{\varSigma}_{\mathrm{h}^{0}\mathrm{H}^{0}}(q^{2}) = 0\) according to [113], with the renormalization scale chosen at the average mass \(q^{2} \equiv(M^{2}_{\mathrm {h}^{0}}+M^{2}_{\mathrm{H}^{0}})/2\). As mentioned above, the tilded Passarino–Veltman functions are evaluated at vanishing external momentum. Let us note in passing that, for the case of the g hVV-type couplings, and due to he fact that just one single scalar leg is present there, only pieces of type (b) shall give rise to \(\mathcal{O}(\lambda^{2}_{5})\) contributions. The same holds as well for the Higgs/gauge/Goldstone boson couplings [g hVG]. J. Incandela, CERN Seminar. Update on the Standard Model Higgs searches in CMS, July 4 2012. CMS-PAS-HIG-12-020 Google Scholar F. Gianotti, CERN Seminar. Update on the Standard Model Higgs searches in ATLAS, July 4 2012. ATLAS-CONF-2012-093 Google Scholar S. Chatrchyan et al. (CMS Collaboration), Phys. Lett. B (2012). arXiv:1207.7235 [hep-ex] G. Aad et al. (ATLAS Collaboration), Phys. Lett. B (2012). arXiv:1207.7214 [hep-ex] P.W. Higgs, Phys. Lett. 12, 132–133 (1964) ADSGoogle Scholar P.W. Higgs, Phys. Rev. Lett. 13, 508–509 (1964) MathSciNetADSGoogle Scholar F. Englert, R. Brout, Phys. Rev. Lett. 13, 321–322 (1964) MathSciNetADSGoogle Scholar G.S. Guralnik, C.R. Hagen, T.W.B. Kibble, Phys. Rev. Lett. 13, 585–587 (1964) ADSGoogle Scholar J.F. Gunion, H.E. 
Haber, G.L. Kane, S. Dawson, The Higgs Hunter's Guide (Addison-Wesley, Menlo-Park, 1990) Google Scholar A. Djouadi, Phys. Rep. 457, 1–216 (2008). arXiv:hep-ph/0503172 [hep-ph] ADSGoogle Scholar P.H. Chankowski et al., Nucl. Phys. B, Proc. Suppl. 37, 232–239 (1994) ADSGoogle Scholar R. Barbieri, L. Maiani, Nucl. Phys. B 224, 32 (1983) ADSGoogle Scholar G.C. Branco, P.M. Ferreira, L. Lavoura, M.N. Rebelo, M. Sher, J.P. Silva, Phys. Rep. 516, 1 (2012). arXiv:1106.0034 [hep-ph] ADSGoogle Scholar S. Ferrara (ed.), Supersymmetry, vols. 1–2 (North Holland/World Scientific, Singapore, 1987) Google Scholar D.B. Kaplan, H. Georgi, S. Dimopoulos, Phys. Lett. B 136, 187 (1984) ADSGoogle Scholar K. Agashe, R. Contino, A. Pomarol, Nucl. Phys. B 719, 165–187 (2005). arXiv:hep-ph/0412089 ADSGoogle Scholar J. Mrazek et al., Nucl. Phys. B 853, 1–48 (2011). arXiv:1105.5403 [hep-ph] ADSzbMATHGoogle Scholar G. Burdman, C.E.F. Haluch, J. High Energy Phys. 12, 038 (2011). arXiv:1109.3914 [hep-ph] ADSGoogle Scholar M. Geller, S. Bar-Shalom, A. Soni, arXiv:1302.2915 [hep-ph] M. Schmaltz, D. Tucker-Smith, Annu. Rev. Nucl. Part. Sci. 55, 229–270 (2005). arXiv:hep-ph/0502182 ADSGoogle Scholar M. Perelstein, Prog. Part. Nucl. Phys. 58, 247–291 (2007). arXiv:hep-ph/0512128 ADSGoogle Scholar J.F. Gunion, H.E. Haber, Phys. Rev. D 72, 095002 (2005). arXiv:hep-ph/0506227 ADSGoogle Scholar P.M. Ferreira, H.E. Haber, M. Maniatis, O. Nachtmann, J.P. Silva, Int. J. Mod. Phys. A 26, 769–808 (2011). arXiv:1010.0935 [hep-ph] ADSzbMATHGoogle Scholar E. Ma, Phys. Rev. D 73, 077301 (2006). arXiv:hep-ph/0601225 ADSGoogle Scholar S. Kanemura, Y. Okada, E. Senaha, Phys. Lett. B 606, 361–366 (2005). arXiv:hep-ph/0411354 ADSGoogle Scholar J.M. Cline, K. Kainulainen, M. Trott, J. High Energy Phys. 1111, 089 (2011). arXiv:1107.3559 [hep-ph] ADSGoogle Scholar A. Tranberg, B. Wu, J. High Energy Phys. 1207, 087 (2012). arXiv:1203.5012 [hep-ph] ADSGoogle Scholar B. Stech, Phys. Rev. D 86, 055003 (2012). arXiv:1206.4233 [hep-ph] ADSGoogle Scholar N.G. Deshpande, E. Ma, Phys. Rev. D 18, 2574 (1978) ADSGoogle Scholar R. Barbieri, L.J. Hall, V.S. Rychkov, Phys. Rev. D 74, 015007 (2006). arXiv:hep-ph/0603188 ADSGoogle Scholar E. Lündstrom, M. Gustafsson, J. Edsjö, Phys. Rev. D 79, 035013 (2009). arXiv:0810.3924 [hep-ph] ADSGoogle Scholar A. Arhrib, R. Benbrik, N. Gaur, Phys. Rev. D 85, 095021 (2012). arXiv:1201.2644 [hep-ph] ADSGoogle Scholar L. López-Honorez, E. Nezri, J.F. Oliver, M.H.G. Tytgat, J. Cosmol. Astropart. Phys. 0702, 028 (2007). arXiv:hep-ph/0612275 Google Scholar T. Hambye, M.H.G. Tytgat, Phys. Lett. B 659, 651–655 (2008). arXiv:0707.0633 [hep-ph] ADSGoogle Scholar L. López-Honorez, C.E. Yaguna, J. Cosmol. Astropart. Phys. 1101, 002 (2011). arXiv:1011.1411 [hep-ph] Google Scholar M. Gustafsson, in PoS CHARGED2010 (2010), p. 030 Google Scholar B. Gorczyca, M. Krawczyk, Acta Phys. Pol. B 42, 2229–2236 (2011). arXiv:1112.4356 [hep-ph] Google Scholar M. Krawczyk, D. Sokolowska, Fortschr. Phys. 59, 1098–1102 (2011). arXiv:1105.5529 [hep-ph] zbMATHGoogle Scholar R. Schabinger, J.D. Wells, Phys. Rev. D 72, 093007 (2005). arXiv:hep-ph/0509209 ADSGoogle Scholar B. Patt, F. Wilczek, arXiv:hep-ph/0605188 C. Englert, T. Plehn, M. Rauch, D. Zerwas, P.M. Zerwas, Phys. Lett. B 707, 512–516 (2012). arXiv:1112.3007 [hep-ph] ADSGoogle Scholar C. Englert, T. Plehn, D. Zerwas, P.M. Zerwas, Phys. Lett. B 703, 298–305 (2011). arXiv:1106.3097 [hep-ph] ADSGoogle Scholar B. Batell, S. Gori, L.-T. Wang, J. High Energy Phys. 1206, 172 (2012). 
arXiv:1112.5180 [hep-ph] ADSGoogle Scholar E. Cerveró, J.-M. Gérard, Phys. Lett. B 712, 255–260 (2012). arXiv:1202.1973 [hep-ph] ADSGoogle Scholar J.S. Lee, A. Pilaftsis, Phys. Rev. D 86, 035004 (2012). arXiv:1201.4891 [hep-ph] ADSGoogle Scholar G. Panotopoulos, P. Tuzón, J. High Energy Phys. 07, 039 (2011). arXiv:1102.5726 [hep-ph] ADSGoogle Scholar S. Bar-Shalom, S. Nandi, A. Soni, Phys. Rev. D 84, 053009 (2011). arXiv:1105.6095 [hep-ph] ADSGoogle Scholar A. Arhrib et al., Phys. Rev. D 84, 095005 (2011). arXiv:1105.1925 [hep-ph] ADSGoogle Scholar M. Aoki et al., Phys. Rev. D 84, 055028 (2011). arXiv:1104.3178 [hep-ph] ADSGoogle Scholar S. Chang, J.A. Evans, M.A. Luty, Phys. Rev. D 84, 095030 (2011). arXiv:1107.2398 [hep-ph] ADSGoogle Scholar A. Arhrib, C.-W. Chiang, D.K. Ghosh, R. Santos, arXiv:1112.5527 [hep-ph] (2011) S. Kanemura, K. Tsumura, H. Yokoya, Phys. Rev. D 85, 095001 (2012). arXiv:1111.6089 [hep-ph] ADSGoogle Scholar K. Blum, R.T. D'Agnolo, Phys. Lett. B 714, 66–69 (2012). arXiv:1202.2364 [hep-ph] ADSGoogle Scholar W. Mader, J.-h. Park, G.M. Pruna, D. Stöckinger, A. Straessner, J. High Energy Phys. 1209, 125 (2012). arXiv:1205.2692 [hep-ph] ADSGoogle Scholar N. Craig, J.A. Evans, R. Gray, C. Kilic, M. Park, S. Somalwar, S. Thomas, arXiv:1210.0559 [hep-ph] P.M. Ferreira, R. Santos, M. Sher, J.P. Silva, Phys. Rev. D 85, 077703 (2012). arXiv:1112.3277 [hep-ph] ADSGoogle Scholar G. Burdman, C.E.F. Haluch, R.D. Matheus, Phys. Rev. D 85, 095016 (2012). arXiv:1112.3961 [hep-ph] ADSGoogle Scholar D. Carmi, A. Falkowski, E. Kuflik, T. Volansky, J. High Energy Phys. 1207, 136 (2012). arXiv:1202.3144 [hep-ph] ADSGoogle Scholar H.S. Cheon, S.K. Kang, arXiv:1207.1083 [hep-ph] N. Craig, S. Thomas, arXiv:1207.4835 [hep-ph] D.S.M. Alves, P.J. Fox, N.J. Weiner, arXiv:1207.5499 [hep-ph] Y. Bai, V. Barger, L.L. Everett, G. Shaughnessy, arXiv:1210.4922 [hep-ph] S. Chang, S.K. Kang, J.-P. Lee, K.Y. Lee, S.C. Park, J. Song, arXiv:1210.3439 [hep-ph] C.-Y. Chen, S. Dawson, arXiv:1301.0309 [hep-ph] W. Altmannshofer, S. Gori, G.D. Kribs, arXiv:1210.2465 [hep-ph] A. Celis, V. Ilisie, A. Pich, arXiv:1302.4022 [hep-ph] W. Hollik, Fortschr. Phys. 38, 165 (1990) Google Scholar W. Hollik, J. Phys. G 29, 131–140 (2003) ADSGoogle Scholar W. Hollik, J. Phys. Conf. Ser. 53, 7–43 (2006) ADSGoogle Scholar W. Hollik, Renormalization of the standard model, in Precision Tests of the Standard Electroweak Model, ed. by P. Langacker. Advanced Series on Directions in High Energy Physics, vol. 14 (World Scientific, Singapore, 1995) Google Scholar J.D. Wells, arXiv:hep-ph/0512342 A. Sirlin, A. Ferroglia, arXiv:1210.5296 [hep-ph] The LEP Collaborations, the LEP Electroweak Working Group, the Tevatron Electroweak Working Group, the SLD Electroweak and Heavy Flavour Working Groups, Precision electroweak measurements and constraints on the Standard Model, CERN-PH-EP/2009-023. http://www.cern.ch/LEPEWWG J. Beringer et al. (Particle Data Group Collaboration), Phys. Rev. D 86, 010001 (2012) ADSGoogle Scholar G. Bozzi, J. Rojo, A. Vicini, Phys. Rev. D 83, 113008 (2011). arXiv:1104.2056 [hep-ph] ADSGoogle Scholar C. Bernaciak, D. Wackeroth, Phys. Rev. D 85, 093003 (2012). arXiv:1201.4804 [hep-ph] ADSGoogle Scholar A. Sirlin, Phys. Rev. D 22, 971–981 (1980) ADSGoogle Scholar W.J. Marciano, A. Sirlin, Phys. Rev. D 22, 2695 (1980) ADSGoogle Scholar D.A. Ross, M.J.G. Veltman, Nucl. Phys. B 95, 135 (1975) ADSGoogle Scholar M.J.G. Veltman, Acta Phys. Pol. B 8, 475 (1977) Google Scholar M.J.G. Veltman, Nucl. Phys. 
B 123, 89 (1977) ADSGoogle Scholar M.B. Einhorn, D.R.T. Jones, M.J.G. Veltman, Nucl. Phys. B 191, 146 (1981) ADSGoogle Scholar R. Barbieri, M. Frigeni, F. Giuliani, H. Haber, Nucl. Phys. B 341, 309–321 (1990) ADSGoogle Scholar D. Garcia, J. Solà, Mod. Phys. Lett. A 9, 211–224 (1994). Preprint UAB-FT-313 (April 1993) ADSGoogle Scholar P.H. Chankowski et al., Nucl. Phys. B 417, 101–129 (1994). Preprint MPI-Ph/93-79 (November 1993) ADSGoogle Scholar P. Gosdzinsky, J. Solà, Phys. Lett. B 254, 139–147 (1991) ADSGoogle Scholar P. Gosdzinsky, J. Solà, Mod. Phys. Lett. A 6, 1943–1952 (1991) ADSGoogle Scholar J. Grifols, J. Solà, Phys. Lett. B 137, 257 (1984) ADSGoogle Scholar J. Grifols, J. Solà, Nucl. Phys. B 253, 47 (1985) ADSGoogle Scholar A. Dabelstein, W. Hollik, W. Mosle, arXiv:hep-ph/9506251 (1995) S. Heinemeyer, W. Hollik, D. Stöckinger, A.M. Weber, G. Weiglein, J. High Energy Phys. 08, 052 (2006). arXiv:hep-ph/0604147 ADSGoogle Scholar J.R. Ellis, S. Heinemeyer, K.A. Olive, G. Weiglein, arXiv:hep-ph/0604180 (2006) R. Benbrik, M.G. Bock, S. Heinemeyer, O. Stål, G. Weiglein et al., arXiv:1207.1096 [hep-ph] (2012) A. Freitas, S. Heinemeyer, G. Weiglein, Nucl. Phys. Proc. Suppl. 116, 331–335 (2003). arXiv:hep-ph/0212068 ADSGoogle Scholar G. Weiglein, Nucl. Phys. Proc. Suppl. 160, 185–189 (2006) ADSGoogle Scholar J. Haestier, D. Stockinger, G. Weiglein, S. Heinemeyer, arXiv:hep-ph/0506259 (2005) S. Heinemeyer, G. Weiglein, J. High Energy Phys. 10, 072 (2002). arXiv:hep-ph/0209305 ADSGoogle Scholar S. Heinemeyer, G. Weiglein, arXiv:hep-ph/0102317 (2001) J. van der Bij, M. Veltman, Nucl. Phys. B 231, 205 (1984) ADSGoogle Scholar S. Heinemeyer, W. Hollik, F. Merz, S. Peñaranda, Eur. Phys. J. C 37, 481–493 (2004). arXiv:hep-ph/0403228 ADSGoogle Scholar S. Peñaranda, S. Heinemeyer, W. Hollik, arXiv:hep-ph/0506104 (2005) J.M. Frere, J.A.M. Vermaseren, Z. Phys. C 19, 63 (1983) ADSGoogle Scholar S. Bertolini, Nucl. Phys. B 272, 77 (1986) ADSGoogle Scholar W. Hollik, Z. Phys. C 32, 291 (1986) ADSGoogle Scholar C.D. Froggatt, R.G. Moorhouse, I.G. Knowles, Phys. Rev. D 45, 2471–2481 (1992) ADSGoogle Scholar H.-J. He, N. Polonsky, S.-f. Su, Phys. Rev. D 64, 053004 (2001). arXiv:hep-ph/0102144 ADSGoogle Scholar F. Mahmoudi, O. Stål, Phys. Rev. D 81, 035016 (2010). arXiv:0907.1791 [hep-ph] ADSGoogle Scholar A. Pich, P. Tuzón, Phys. Rev. D 80, 091702 (2009). arXiv:0908.1554 [hep-ph] ADSGoogle Scholar M. Jung, A. Pich, P. Tuzón, J. High Energy Phys. 11, 003 (2010). arXiv:1006.0470 [hep-ph] ADSGoogle Scholar A.J. Buras, M.V. Carlucci, S. Gori, G. Isidori, J. High Energy Phys. 10, 009 (2010). arXiv:1005.5310 [hep-ph] ADSGoogle Scholar D. López-Val, J. Solà, Phys. Rev. D 81, 033003 (2010). arXiv:0908.2898 [hep-ph] ADSGoogle Scholar R.A. Jiménez, J. Solà, Phys. Lett. B 389, 53–61 (1996). arXiv:hep-ph/9511292 ADSGoogle Scholar J.A. Coarasa, R.A. Jimenez, J. Solà, Phys. Lett. B 389, 312–320 (1996). arXiv:hep-ph/9511402 ADSGoogle Scholar M.S. Carena, H.E. Haber, Prog. Part. Nucl. Phys. 50, 63–152 (2003). arXiv:hep-ph/0208209 ADSGoogle Scholar S. Heinemeyer, Acta Phys. Pol. B 39, 2673–2692 (2008). arXiv:0807.2514 [hep-ph] ADSGoogle Scholar H. Flacher et al., Eur. Phys. J. C 60, 543–583 (2009). arXiv:0811.0009 [hep-ph] ADSGoogle Scholar S.R. Juárez, D. Morales, P. Kielanowski, arXiv:1201.1876 [hep-ph] (2012) F. Mahmoudi, Comput. Phys. Commun. 178, 745–754 (2008). arXiv:0710.2067 [hep-ph] ADSzbMATHGoogle Scholar F. Mahmoudi, Comput. Phys. Commun. 180, 1579–1613 (2009). arXiv:0808.3144 [hep-ph] ADSGoogle Scholar A.W. 
El Kaffas, P. Osland, O.M. Ogreid, Phys. Rev. D 76, 095001 (2007). arXiv:0706.2997 [hep-ph] ADSGoogle Scholar A.W. El Kaffas, P. Osland, O.M. Ogreid, Nonlinear Phenom. Complex Syst. 10, 347–357 (2007). arXiv:hep-ph/0702097 Google Scholar A. Azatov, S. Chang, N. Craig, J. Galloway, arXiv:1206.1058 [hep-ph] (2012) D. Carmi, A. Falkowski, E. Kuflik, T. Volansky, J. Zupan, arXiv:1207.1718 [hep-ph] (2012) D. Eriksson, J. Rathsman, O. Stål, Comput. Phys. Commun. 181, 189–205 (2010). arXiv:0902.0851 [hep-ph] ADSzbMATHGoogle Scholar P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, K.E. Williams, Comput. Phys. Commun. 181, 138–167 (2010). arXiv:0811.4169 [hep-ph] ADSzbMATHGoogle Scholar P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, K.E. Williams, Comput. Phys. Commun. 182, 2605–2631 (2011). arXiv:1102.1898 [hep-ph] ADSGoogle Scholar G. Ferrera, J. Guasch, D. López-Val, J. Solà, Phys. Lett. B 659, 297–307 (2008). arXiv:0707.3162 [hep-ph] ADSGoogle Scholar G. Ferrera, J. Guasch, D. López-Val, J. Solà, PoS RADCOR2007, 043 (2007). arXiv:0801.3907 [hep-ph] A. Arhrib, R. Benbrik, C.-W. Chiang, Phys. Rev. D 77, 115013 (2008). arXiv:0802.0319 [hep-ph] ADSGoogle Scholar R.N. Hodgkinson, D. López-Val, J. Solà, Phys. Lett. B 673, 47–56 (2009). arXiv:0901.2257 [hep-ph] ADSGoogle Scholar N. Bernal, D. López-Val, J. Solà, Phys. Lett. B 677, 39–47 (2009). arXiv:0903.4978 [hep-ph] ADSGoogle Scholar D. López-Val, J. Solà, Phys. Lett. B 702, 246–255 (2011). arXiv:1106.3226 [hep-ph] ADSGoogle Scholar J. Solà, D. López-Val, Nuovo Cimento C 34S1, 57–67 (2011). arXiv:1107.1305 [hep-ph] Google Scholar F. Cornet, W. Hollik, Phys. Lett. B 669, 58–61 (2008). arXiv:0808.0719 [hep-ph] ADSGoogle Scholar E. Asakawa, D. Harada, S. Kanemura, Y. Okada, K. Tsumura, Phys. Lett. B 672, 354–360 (2009). arXiv:0809.0094 [hep-ph] ADSGoogle Scholar A. Arhrib, R. Benbrik, C.-H. Chen, R. Santos, Phys. Rev. D 80, 015010 (2009). arXiv:0901.3380 [hep-ph] ADSGoogle Scholar E. Asakawa, D. Harada, S. Kanemura, Y. Okada, K. Tsumura, Phys. Rev. D 82, 115002 (2010). arXiv:1009.4670 [hep-ph] ADSGoogle Scholar A. Arhrib, G. Moultaka, Nucl. Phys. B 558, 3–40 (1999). arXiv:hep-ph/9808317 ADSGoogle Scholar J. Guasch, W. Hollik, A. Kraft, Nucl. Phys. B 596, 66–80 (2001) ADSGoogle Scholar D. López-Val, J. Solà, in PoS RADCOR2009 (2010), p. 045. arXiv:1001.0473 [hep-ph] Google Scholar J. Solà, D. López-Val, Fortschr. Phys. 58, 660–664 (2010) Google Scholar M. Consoli, W. Hollik, F. Jegerlehner, Phys. Lett. B 227, 167 (1989) ADSGoogle Scholar A. Freitas, W. Hollik, W. Walter, G. Weiglein, Nucl. Phys. B 632, 189–218 (2002). arXiv:hep-ph/0202131 ADSGoogle Scholar M. Awramik, M. Czakon, Phys. Rev. Lett. 89, 241801 (2002). arXiv:hep-ph/0208113 ADSGoogle Scholar M. Awramik, M. Czakon, A. Onishchenko, O. Veretin, Phys. Rev. D 68, 053004 (2003). arXiv:hep-ph/0209084 ADSGoogle Scholar M. Awramik, M. Czakon, Phys. Lett. B 568, 48–54 (2003). arXiv:hep-ph/0305248 [hep-ph] ADSGoogle Scholar M. Awramik, M. Czakon, A. Freitas, G. Weiglein, Phys. Rev. D 69, 053006 (2004). arXiv:hep-ph/0311148 [hep-ph] ADSGoogle Scholar A. Onishchenko, O. Veretin, Phys. Lett. B 551, 111–114 (2003). arXiv:hep-ph/0209010 ADSGoogle Scholar J.J. van der Bij, K.G. Chetyrkin, M. Faisst, G. Jikia, T. Seidensticker, Phys. Lett. B 498, 156–162 (2001). arXiv:hep-ph/0011373 ADSGoogle Scholar W. Grimus, L. Lavoura, O.M. Ogreid, P. Osland, J. Phys. G 35, 075001 (2008). arXiv:0711.4022 [hep-ph] ADSGoogle Scholar W. Grimus, L. Lavoura, O. Ogreid, P. Osland, Nucl. Phys. B 801, 81–96 (2008). 
Modelling the variation in wood density of New Zealand-grown Douglas-fir Mark O. Kimberley1, Russell B McKinley1, David J. Cown1 & John R. Moore ORCID: orcid.org/0000-0001-8245-93061 Wood density is an important property that affects the performance of Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) timber. In order to develop strategies to achieve certain end-product outcomes, forest managers and wood processors require information on the variation in wood density across sites, among trees within a stand and within trees. Therefore, the aim of this study was to develop models explaining the variation in outerwood density among sites and among trees within a stand, and the radial and longitudinal variation of wood density within a tree. An extensive dataset was assembled containing wood density measurements from historical studies carried out over a period spanning more than 50 years. The dataset contained breast height outerwood density cores from approximately 10,800 individual trees sampled from 312 stands throughout New Zealand, pith-to-bark radial density profiles from 515 trees from 47 stands, and discs taken from multiple heights in 172 trees from 21 stands. Linear and non-linear mixed models were developed using these data to explain the variation in inter- and intra-tree variation in Douglas-fir wood density. Breast height outerwood density was positively related to mean annual air temperature in stands planted after 1969. This relationship was less apparent in older North Island stands, possibly due to the confounding influences of genetic differences. After adjusting for age and temperature, wood density was also positively related to soil carbon (C) to nitrogen (N) ratio in South Island stands where data on soil C:N ratio were available. There was only a minimal effect of stand density on breast height outerwood density, and a weak negative relationship between wood density and tree diameter within a stand. Within a stem, wood density decreased over the first seven rings from the pith and gradually increased beyond ring ten, eventually stabilising by ring 30. Longitudinal variation in wood density exhibited a sigmoidal pattern, being fairly constant over most of the height but increasing in the lower stem and decreasing in the upper stem. The study has provided greater insight into the extent and drivers of variation in Douglas-fir wood density, particularly the relative contributions of site and silviculture. The models developed to explain these trends in wood density have been coupled together and linked to a growth and yield simulator which also predicts branching characteristics to estimate the impact of different factors, primarily site, on the wood density distribution of log product assortments. Further work is required to investigate the impacts of genetic and soil properties on wood density, which may improve our understanding of site-level variation in wood density. Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) is native to the coastal and interior regions of western North America, from west-central British Columbia southward to central California (Pojar and MacKinnon 1994). It produces a highly regarded timber preferred for its superior strength, toughness, durability and decay resistance (Barrett and Kellogg 1991). As well as being popular for light timber framing, larger dimensions are highly sought after for use in exposed interior posts and beams because of the species' good stability and low incidence of twist. 
Outside of its native range, Douglas-fir is an important exotic species in western and central Europe, particularly in France, Germany, the Czech Republic, and Belgium. It was first planted in New Zealand in 1859 (Maclaren 2009), and the coastal variety of Douglas-fir (var. menziesii) is now the country's second most important plantation species with a total planted area of approximately 105,000 ha (Ministry for Primary Industries 2014). Early work by wood scientists and silviculturalists working on the utilisation of New Zealand-grown Douglas-fir established that the single most important factor influencing timber stiffness was branching, and the next most important, wood density (Whiteside et al. 1976). Together, density and branch size explained around 80% of the observed variation in timber stiffness (Tustin and Wilcox 1978; Whiteside 1978). The fact that growth rate was shown to have only a modest impact on density and stiffness (Harris 1978) led to the recommendation that Douglas-fir should be grown as rapidly as possible on a short rotation, subject to maintaining a small branch index and an average wood density above 400 kg m−3.

Management of Douglas-fir stands to achieve particular end product outcomes requires information on how wood density and branch characteristics are affected by site, silviculture and genetics. Quantitative knowledge of the variation in wood density among sites, among trees within a stand and within a tree is also needed. Information on branch characteristics in this species has been used to develop models that can be coupled to growth and yield modelling systems in order to study the impacts of silviculture on log and timber quality (Grace et al. 2015; Maguire et al. 1999; Maguire et al. 1991).

Considerable information on Douglas-fir wood density has been collected in studies undertaken in New Zealand, North America and Europe. A series of studies in Canada and the United States examined the implications of silviculture on wood density and the proportion of corewood (juvenile wood) within a tree (Filipescu et al. 2014; Jozsa and Brix 1989; Kellog 1989; Stoehr et al. 2009; Vargas-Hernandez and Adams 1994; Vikram et al. 2011; Wellwood 1952). Regional differences in Douglas-fir wood density have also been identified that are related to elevation, temperature, rainfall, and soil properties (Cown and Parker 1979; Filipescu et al. 2014; Kantavichai et al. 2010a; Maeglin and Wahlgren 1972; USDA 1965). A trend of increasing wood density with decreasing summer rainfall and decreasing elevation was observed by Lassen and Okkonen (1969), while Filipescu et al. (2014) found wood density increased with temperature. Climatic factors, particularly temperature and rainfall, affect wood density through their impact on xylem cell production, radial expansion and secondary thickening (Antonova and Stasova 1997). This, in turn, affects the transition from earlywood to latewood formation within an annual ring. The timing of this transition affects the proportion of latewood within an annual ring, as does the amount of growth that occurs during the period that latewood is being produced. Summer temperature has been observed to be positively associated with latewood proportion and average latewood density in a range of species, including Douglas-fir (Filipescu et al. 2014; Jordan et al. 2008; Kantavichai et al. 2010a; Wilhelmsson et al. 2002; Wimmer and Grabner 2000).
In New Zealand, a survey of 30- to 40-year-old Douglas-fir stands growing on 19 sites throughout the country revealed a high degree of variation among trees, a latitudinal effect, and a possible influence of provenance (Harris 1966). However, no attempt was made to investigate whether site differences were associated with climatic variables. Early studies in New Zealand-grown Douglas-fir also revealed a high degree of tree-to-tree variation in wood density, which prompted the suggestion that this property could be used as a selection criterion thereby significantly improving the yield of good quality structural timber (Harris 1967, 1978). Models describing the radial variation in Douglas-fir wood density have been developed in several North American studies (e.g. Filipescu et al. 2014; Kantavichai et al. 2010a; Kantavichai et al. 2010b), but up until now similar models have not been produced for Douglas-fir in New Zealand. However, data suitable for developing such models have been collected and are described in several published studies (Cown 1999; Harris 1966; Knowles et al. 2003; Lausberg 1996; Lausberg et al. 1995) and numerous unpublished studies. Similar data for New Zealand-grown radiata pine (Pinus radiata D. Don) have recently been used to develop models explaining both site-level (Palmer et al. 2013) and intra-stem (Kimberley et al. 2015) variation in wood density, and the influence of genetic improvement (Kimberley et al. 2016). Together these models have been implemented in the Forecaster growth and yield simulator (West et al. 2013), enabling forest managers to analyse the effects of different combinations of site and silviculture on wood density. The objective of this study was to develop comprehensive models describing the variation in New Zealand-grown Douglas-fir wood density at the regional (among site), inter-tree and intra-tree scales, and to identify the factors associated with this variation. These models were derived from the extensive database assembled from previous studies, with some additional new data collected from regions where historical coverage was poor. They have been implemented within the Forecaster growth and yield prediction system (West et al. 2013), so that forest managers will be able to predict stem and log densities as functions of site, stand age and silvicultural regime. Wood density data were assembled from studies carried out over a more than 50 year period between 1958 and 2014. A wood density database was created from existing sources of data by collating density values from numerous historical studies, both published and unpublished. This database included the following types of data: (1) outerwood increment cores – basic density of the outer 50 mm of the stem (to allow analysis of site, stand age and climatic variables influencing density); (2) individual ring-level values of density from X-ray densitometry (to provide ring-level radial trends in wood density); and (3) whole-disc density values from felled trees (to enable prediction of log and stem values as well as the longitudinal trend in wood density within a stem). Outerwood density data Outerwood density data were obtained by searching Scion databases along with published and unpublished reports. Priority was given to studies where the location of stands was known in terms of elevation, latitude and longitude, along with age. 
Information on outerwood density was obtained from approximately 10,800 trees (usually two increment cores per tree) sampled in 312 stands aged between 15 and 70 years (Table 1; Fig. 1). Stands less than 15 years old were not considered as the focus of these studies was on characterising wood density in merchantable trees, either through pre-commercial thinning or final harvesting. Stands were on sites that spanned a latitudinal range from 37.9° to 46.2° south, and ranged in elevation from 60 m up to 1200 m a.s.l. The average site elevation was 485 m in the North Island and 430 m in the South Island. The data represent Douglas-fir stands from throughout New Zealand planted in three main periods; 37% were planted before 1940, mainly in the 1920s and 1930s, 51% between 1940 and 1969, and a smaller group of 12% planted after 1969. In all studies, basic density was calculated for outerwood core samples (50 mm from the bark inwards) using the maximum moisture content method (Smith 1954). Table 1 Summary of the different sources of wood density data collected for New Zealand-grown Douglas-fir Map of New Zealand showing the location of the 312 stands where breast height outerwood density data were collected In most cases, only stand mean values were available but density measurements for individual trees were available for a subset of the data consisting of 1489 trees from 55 stands. In most of these stands, one core was taken per tree, but in 20 stands, two cores were taken per tree avoiding the underside of any lean. These data were used to investigate the distribution and variation in wood density among trees within a stand. In a subset of 29 stands containing 883 trees, breast height diameter (DBH – 1.4 m) was also available allowing analysis of the within-stand relationship between DBH and outerwood density. The coordinates of each stand were used to extract annual and seasonal (spring, summer, autumn and winter) mean values for a wide range of climate variables from spatially-interpolated climate normals developed by the National Institute for Water and Atmospheric Research. Variables extracted included various measures of temperature (annual mean, mean daily minimum and maximum), rainfall, solar radiation and vapour pressure deficit. Soil carbon (C) and nitrogen (N) concentrations sampled from the upper 0–5 cm of mineral soil were available for a small subset of 18 South Island stands within the database from a study carried out in 2011. Pith-to-bark radial density data Pith-to-bark radial profiles of ring-level wood density were available for more than 500 trees from 47 stands (Table 1). Breast height pith-to-bark core samples (5 mm in diameter) were collected from each tree and Soxhlet extracted with methanol to remove extractives and resin. Strips were milled from these cores and scanned using the X-ray densitometer at Scion (Cown and Clement 1983). Whole disc data Area-weighted wood density values of discs cut at intervals from stems of 172 felled trees in 21 stands were used to establish within-stem longitudinal patterns of wood density for New Zealand Douglas-fir (Table 1). Only trees greater than 25 m in height were used in this analysis. Wood density of each disc was assessed gravimetrically in 5-ring groups from the pith outwards and combined to provide an estimate for the disc. The dataset included field samples from 51 trees in five stands from the southern South Island collected for this study during 2014. 
These additional samples were deemed necessary as over 75% of the Douglas-fir resource is currently located in the South Island, with approximately 50% in the Otago and Southland regions (Ministry for Primary Industries 2014), due mainly to its better performance than other species on snow prone sites and issues with Swiss Needle Cast caused by Phaeocryptopus gaeumannii in North Island stands (Stone et al. 2007). Effect of environmental and stand variables on outerwood density All statistical analyses and modelling performed in this study were carried out using SAS Version 9.3 statistical analysis software (SAS Institute Inc. 2011). The first analysis was concerned with the effects of site and stand variables on the average outerwood density of a stand. Relationships between outerwood density and stand age, latitude, longitude, elevation, the climate variables extracted from spatially-interpolated climate normals, and for the small subset where it was available, soil C:N ratio, were explored through correlation analysis using the SAS CORR procedure. To eliminate the effect of stand age, partial correlations with environmental variables adjusted for age were calculated. Regression models were then fitted to predict outerwood density as a function of stand age and mean annual temperature (MAT) which proved to be the best overall climate variable. As stand age had a nonlinear (exponential) relationship with density, the final regression model fitted using the SAS NLIN procedure was as follows: $$ {Dow}_i=a+b\times {MAT}_i-c\times \exp \left(-d\times {Age}_i\right)+{e}_i $$ Where Dow i is mean breast height outerwood density of stand i (kg m−3), Age i is stand age (years), MAT i is mean annual temperature (°C), a, b, c and d are model parameters, and e i is the error term. The stand age component of Eq. (1) can be used to adjust any outerwood density measurement to a common age of 40 years. This is achieved by adding the following term to an outerwood density measurement: $$ Age\_ adjustment=c\left[ \exp \left(-d\times Age\right)- \exp \left(-d\times 40\right)\right] $$ The outerwood density dataset included results from ten stocking trials (three in the North Island and seven in the South Island), providing the opportunity to examine the influence of stand density on wood density. Each trial contained between two and four levels of stand density, with the minimum stand density averaging 345 stems ha−1 (range 250–980 stems ha−1) and the maximum averaging 1330 stems ha−1 (range 740–2700 stems ha−1). On average, 20 trees were assessed in each treatment (range 10–30 trees) and the average stand age at the time of assessment was 31 years (range 18–40 years). The data were analysed using the following model fitted with the SAS GLM procedure: $$ {Dow}_{ij}={a}_{\mathrm{i}}+b\times {N}_{ij}+{e}_{ij} $$ Where Dow ij is mean breast height outerwood density (kg m−3) of plot j in trial i, N ij is stand density (stems ha−1), a i and b are model parameters, and e ij is the error term. Within-stand variation between trees The distribution of outerwood density between trees within a stand was examined using the SAS UNIVARIATE procedure. The coefficient of variation (i.e., the ratio of the standard deviation to the mean value) for outerwood density was calculated for each stand. In stands where DBH measurements were available, the pooled within-stand correlation coefficient between outerwood density and DBH was calculated using the SAS DISCRIM procedure. 
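For readers who want to reproduce the site-level analysis outside SAS, the fragment below shows, in Python, how a nonlinear model of the form of Eq. (1) could be fitted with scipy and how the age adjustment of Eq. (2) would then be applied. The stand records, starting values and helper names are invented for illustration (the starting values are only loosely informed by the constants quoted in the Appendix); it is a minimal sketch, not the fitting procedure actually used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stand-level records: stand age (years), MAT (deg C) and
# breast height outerwood density (kg/m3). Illustrative values only.
age = np.array([18, 25, 31, 38, 42, 47, 55, 60, 35, 28, 22, 45], dtype=float)
mat = np.array([8.1, 9.0, 9.6, 10.2, 8.5, 11.0, 10.5, 9.2, 12.1, 11.4, 8.8, 10.0])
dow = np.array([330, 388, 420, 450, 412, 478, 472, 441, 492, 458, 370, 452], dtype=float)

def eq1(X, a, b, c, d):
    """Eq. (1): Dow = a + b*MAT - c*exp(-d*Age)."""
    age, mat = X
    return a + b * mat - c * np.exp(-d * age)

# Starting values roughly in the range of the constants reported in the Appendix.
p0 = [200.0, 20.0, 300.0, 0.07]
(a, b, c, d), _ = curve_fit(eq1, (age, mat), dow, p0=p0, maxfev=10000)

def adjust_to_age40(dow_measured, age_measured, c=c, d=d):
    """Eq. (2): shift a measured outerwood density to the common age of 40 years."""
    return dow_measured + c * (np.exp(-d * age_measured) - np.exp(-d * 40.0))

print(f"fitted parameters: a={a:.1f}, b={b:.1f}, c={c:.1f}, d={d:.4f}")
print(f"400 kg/m3 measured at age 25, adjusted to age 40: {adjust_to_age40(400, 25):.0f} kg/m3")
```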
Using data from stands where two outerwood cores were taken per tree, it was possible to separate out the small-scale variability of cores within a tree from the general variation between trees using variance component analysis carried out using the SAS MIXED procedure.

Within-stem radial pattern in wood density

A regression model expressing density as a function of ring number from pith was used to model the pith-to-bark radial pattern in wood density. The model predicts Dring iR , the mean breast-height density (kg m−3) of the R th ring from pith in stand i: $$ {Dring}_{iR}=a+\left(b-R\right)/\left(c+d\times \exp \left(f\times R\right)\right)+\left(1-g\times \exp \left(-h\times R\right)\right)\times {L}_i+{e}_{iR} $$ The first part of this model describes the radial pith-to-bark pattern in mean density as a function of ring number, R, while the second part accounts for random variation between stands and includes a local parameter L i to calibrate the model for each individual stand. When L i is zero, the model predicts density for an average stand, while a negative value of L i is appropriate for a lower density stand, and a positive value for a higher density stand. The parameters in the model were estimated using the SAS NLMIXED procedure, with L i treated as a normally distributed random term varying between stands with variance \( \sigma_{\mathrm{stand}}^2 \), and a within-radial error term e iR with variance \( \sigma_e^2 \).

Within-stem longitudinal pattern in wood density

Examination of the disc data indicated that a better relationship between wood density and height-within-stem would be obtained if density was predicted using relative height (i.e. the height of a disc in the stem divided by total stem length, (disc height)/(tree height)) rather than absolute height. However, tree heights were measured in only a minority of the felled tree studies and it was therefore necessary to estimate tree height where this had not been measured. This was achieved by fitting quadratic regression models predicting disc diameter from height for each stem. Tree height was then estimated for each stem by extrapolating this model to a diameter of zero. These regression models were fitted using data from discs cut at greater than 5 m height, as above this height, stem taper was found to be well approximated by a quadratic model. Only trees with diameter of the highest disc less than 200 mm, and where the uppermost disc height was at least 70% of the estimated tree height, were used. Actual measured tree height was available for 70 trees in the database, and it was therefore possible to validate the procedure against these measured heights. The mean error (actual – estimated height) was 0.03 m with standard deviation 0.9 m, and the correlation between measured and estimated height was 0.99. Based on this validation, the procedure was judged to perform well.
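A minimal sketch of this height-estimation step is given below, assuming disc heights in metres and disc diameters in millimetres. The example measurements and the function name are hypothetical, and the screening rules described above (top disc less than 200 mm and at no less than 70% of the estimated height) would be applied by the caller.

```python
import numpy as np

def estimate_tree_height(disc_heights_m, disc_diams_mm):
    """Fit a quadratic of disc diameter on disc height using discs above 5 m,
    then extrapolate the fitted curve to a diameter of zero."""
    h = np.asarray(disc_heights_m, dtype=float)
    d = np.asarray(disc_diams_mm, dtype=float)
    use = h > 5.0                      # above 5 m, taper is roughly quadratic
    c2, c1, c0 = np.polyfit(h[use], d[use], deg=2)
    roots = np.roots([c2, c1, c0])     # heights where predicted diameter is zero
    real = roots[np.isreal(roots)].real
    candidates = real[real > h.max()]  # keep the root above the highest disc
    return float(candidates.min()) if candidates.size else float("nan")

# Invented example: discs cut at roughly 5 m intervals up one stem.
heights = [0.5, 5.5, 10.5, 15.5, 20.5, 25.5, 30.5]
diams   = [520, 450, 390, 320, 240, 150, 60]
print(f"estimated tree height: {estimate_tree_height(heights, diams):.1f} m")
```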
Using estimated tree height, the relative height of each disc was calculated. A model was then derived for predicting whole-disc density as a function of relative height. A polynomial model form with an additive random effect for each tree was found to be suitable. By plotting disc density against relative height, it was apparent that density has a sigmoidal pattern with relative height, well approximated by a cubic function. The following regression model was therefore fitted to predict average disc density at the k th height in tree j in stand i, Ddisc ijk (kg m−3), as a function of relative height (Hrel ijk ): $$ {Ddisc}_{ijk}=a+b\times {Hrel}_{ijk}+c\times {Hrel}_{ijk}^2+d\times {Hrel}_{ijk}^3+{L}_{\mathrm{i}}+{T}_{ij}+{e}_{ijk} $$ The parameters in the model were estimated using the SAS MIXED procedure, with the local stand parameter L i treated as a normally distributed random term varying between stands with variance \( \sigma_{\mathrm{stand}}^2 \), a tree effect T ij treated as a normally distributed random term with variance \( \sigma_{\mathrm{tree(stand)}}^2 \), and a within-tree error term e ijk with variance \( \sigma_e^2 \). There was no trend in the variance between trees with relative height, and the random stand, tree and error terms were therefore represented as additive effects in the model.

Linking models to a growth and yield simulator

The wood density models described in this paper have been implemented within the Forecaster growth and yield simulator (West et al. 2013) allowing forest managers to analyse the effects of different combinations of site and silviculture on wood density (see Appendix for details). To demonstrate the utility of these coupled growth and wood density models, we predicted the growth and wood density for stands growing at three different sites: (1) southern South Island (MAT = 8 °C); (2) northern South Island (MAT = 10 °C); and (3) North Island (MAT = 12 °C). At each site, two different silvicultural regimes were simulated: (1) initial planting density of 1667 trees ha−1, pre-commercial thinning to a residual stand density of 750 trees ha−1 at mean top height of 14 m and commercially thinned to a residual stand density of 325 trees ha−1 at mean top height 22 m (a typical regime for Douglas-fir in New Zealand); and (2) initial planting density of 1250 trees ha−1, pre-commercial thinning to a residual stand density of 400 trees ha−1 at mean top height of 14 m (a more aggressive early thinning regime). For each site and silvicultural regime combination, the breast height outerwood density and whole log average wood density were predicted for four different harvest ages – 35, 40, 45 and 50 years. Whole log values were calculated over a 5 m log length and up to four logs were cut from each tree. Site productivity metrics and growth information used to initialise the Forecaster growth and yield prediction system were obtained from Scion's Permanent Sample Plot database (Hayes and Andersen 2007).
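As a simple illustration of one of the components evaluated within Forecaster, the sketch below computes the fixed-effect part of the longitudinal profile in Eq. (5) for an average stand and tree (random stand and tree effects set to zero), using the coefficient values quoted later in the Appendix. It is an illustrative re-implementation in Python, not the Forecaster source code.

```python
import numpy as np

def disc_density(relative_height, stand_effect=0.0, tree_effect=0.0):
    """Whole-disc density (kg/m3) at a given disc height / tree height ratio,
    following the cubic form of Eq. (5) with the Appendix coefficients."""
    h = np.asarray(relative_height, dtype=float)
    fixed = 436.6 - 126.4 * h + 243.7 * h**2 - 167.5 * h**3
    return fixed + stand_effect + tree_effect

rel_heights = np.array([0.05, 0.2, 0.4, 0.6, 0.8])
for rh, dens in zip(rel_heights, disc_density(rel_heights)):
    print(f"relative height {rh:.2f}: {dens:5.1f} kg/m3")
```

For an average stand the profile is nearly flat over most of the stem, declining from roughly 431 kg m−3 near the base to about 406 kg m−3 at 80% of tree height, consistent with the sigmoidal pattern described above.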
Effects of site and environment on outerwood density

Average breast height outerwood density in the 312 stands sampled from across New Zealand was 427 kg m−3, with a standard deviation of 35.8 kg m−3 giving a coefficient of variation of 8.4% (Table 2). There were significant positive correlations between outerwood density and stand age for both North and South Islands (Table 3). For the North Island stands, partial correlations adjusted for stand age showed weak but statistically significant relationships between wood density and elevation (negative), and MAT (positive). South Island stands showed much stronger associations between outerwood density and air temperature, with a partial correlation adjusted for age of 0.72 for MAT. Although all the temperature variables tested were positively associated with density, correlations were strongest for winter temperatures and weakest for summer temperatures (e.g., the partial correlations for winter and summer mean temperature were 0.72 and 0.57, respectively). There were weaker correlations with a number of other variables including rainfall and elevation, although after accounting for age and MAT, partial correlations with these variables became non-significant. The small number of stands where soil carbon (C) and nitrogen (N) were sampled provided strong evidence of a positive relationship between density and soil C:N ratio.

Table 2 Summary of breast height outerwood density values by region

Table 3 Correlations and partial correlations between breast height outerwood density and various site and environmental variables

Outerwood densities were also analysed in relation to the three main planting periods represented in the dataset (i.e., pre-1940, 1940–1969, and post-1969). It is likely that the genotypes represented within each planting period and within each island varied, potentially obscuring environmental effects on wood density. An analysis of covariance was used to estimate mean outerwood density for each period in each island, using as covariates MAT, stand age, and age-squared (included because the relationship between age and density was clearly nonlinear). Adjusting for the effects of age and temperature, North Island stands planted before 1970 had significantly lower average outerwood density (by about 50 kg m−3) than South Island stands or post-1969 North Island stands (Table 4).

Table 4 Mean outerwood density by island and planting period, adjusted for age and MAT using analysis of covariance

Examination of the relationship between breast height outerwood density and stand age indicated that it typically increases with age before stabilising after about age 30 years. The increased density in the inner seven rings apparent in detailed densitometry data (see the following section) was not evident in the outerwood density data, which were all obtained from stands 15 years or older in age. On the other hand, examination of the data for the South Island suggested that the effect of MAT on density follows a linear trend. Therefore, Eq. (1), which uses an exponential term to account for age and a linear term to account for temperature, was used to model outerwood density as a function of age and temperature. Because of the differences in wood density between North Island stands planted before and after 1970, dummy variables were used to fit separate a and b parameters for South Island, North Island post-1969, and North Island pre-1970 stands (Table 5).

Table 5 Parameter estimates for the model Eq. (1) to predict breast height outerwood density from age and MAT, with associated standard errors and tests of significance

A scatter plot of density adjusted to age 40 years versus MAT shows strong positive relationships between breast height outerwood density and MAT for South Island stands planted before 1970, South Island stands planted after 1970, and North Island stands planted after 1970 (Fig. 2). However, pre-1970 North Island stands show little trend in density with temperature and have a much lower density for a given MAT than the other three groups.

Breast height outerwood density corrected to age 40 years versus MAT. Each point is the mean for a site with average number of trees per site = 36.
Blue lines were fitted using ordinary least squares regression and the grey shading indicates the 95% confidence interval.

Apart from age and MAT, the variable most strongly associated with wood density based on the correlation analysis was soil C:N ratio. Because this variable was only available for 18 South Island stands (several measurements from pre-1970 North Island stands were not used because it appeared that genetic differences overrode environmental effects in these stands), it could not be included in the main regression model. However, a regression between the residuals from Model (1) and soil C:N ratio showed a significant trend indicating that outerwood density increases by 3.36 kg m−3 for every unit increase in soil C:N ratio (Table 6). This regression model implies a predicted residual of zero for a soil C:N ratio of 22.4, with this ratio being fairly typical for forest soils in New Zealand.

Table 6 Parameter estimates and their standard errors and tests of significance for the regression model: (Actual – Predicted using Eq. (2)) outerwood density = a + b × soil_C:N_ratio, based on 18 South Island sites. The model R2 is 52.4% and it has a root mean square error of 17.6 kg m−3

Influence of stand density on outerwood density

Although the database contained only limited data from stocking trials, they were sufficient to demonstrate that wood density in New Zealand-grown Douglas-fir is not greatly influenced by stand density (Fig. 3). The slope parameter for the Eq. (3) regression model was 0.0048 ± 0.0052 (estimate ± standard error) and was not significantly different from zero (t21 = 0.92, p = 0.37). The model implies, for example, that an increase in stand density of 1000 stems ha−1 would only produce an increase in wood density of 6 ± 11 kg m−3 (95% confidence interval). Even if the upper limit of this interval is correct, it indicates that stand density has only a minor influence on outerwood density in New Zealand Douglas-fir, at least over the range of stand densities typically used for the species.

Relationship between breast height outerwood density and stand density. Each solid line represents data from a separate stocking trial. The blue line shows the overall regression model for an average site and the grey shading indicates the 95% confidence interval.

Within-stand variation in wood density between trees

The distribution of outerwood density in individual trees across all available studies closely followed a normal distribution (Fig. 4). The coefficient of variation for outerwood density within each stand based on a single core per tree averaged 7.5% and did not vary with mean density (coefficients of variation averaged 7.8, 7.5, 7.3, and 7.6% for stands with mean wood density < 250, 250–400, 400–450, and >450 kg m−3 respectively).

Distribution of outerwood density in individual trees with fitted normal distribution.

The mean coefficient of variation of 7.5% can be considered to slightly overstate the true variation among trees as it is based on a single small core sample per tree, and therefore includes an element of within-tree variation. A variance component analysis was applied to the 20 stands where two cores were taken per tree (from opposite sides of the stem) to estimate the between- and within-tree variance components, which were 997 and 177 respectively. These results show that inter-tree variation is large relative to within-tree variation in a typical stand.
However, they also show that the true inter-tree variation expressed as a standard deviation would be 8% lower than the variation measured using single cores. This indicates that for modelling purposes, an inter-tree coefficient of variation of 6.9% (rather than 7.5%) is appropriate. This value is, therefore, used in the Forecaster simulation system which stochastically varies wood density among stems within a simulation, thus providing a realistic level of variation in wood density among logs at harvest. The pooled within-stand correlation coefficient between wood density and tree diameter was −0.089 indicating a very weak, although statistically significant (p = 0.0095) negative relationship within a typical stand. The mean DBH of stands used in this analysis was 392 mm and the mean coefficient of variation was 17%. Because the relationship between wood density and stem diameter is so weak, no attempt is made to adjust wood density for tree diameter within a stand in the Forecaster simulation system. Within-stem radial pattern of wood density A common radial pattern was observed in the ring-level breast-height density data available from 47 stands. Starting at the pith, wood density decreases over the first seven rings, and then begins a gradual increase after ring ten, eventually stabilising beyond about ring 30 (Fig. 5). Variation among stands increases over the first ten rings, and then stabilises. This behaviour was well modelled by Eq. (4) which explained 81% of the variation in breast-height density in the dataset. This could be partitioned into variation among sites which explained 62% of the variation and the pith to bark trend described by the model which explained a further 19%. Parameter estimates are given in Table 7. Pith-to-bark radial pattern of breast height density by ring. Fine lines show densities for individual stands. The solid line shows the density predicted by Model (3) for an average site (L = 0). The dashed lines for sites at the 5th and 95th percentiles (lower and upper lines, L = −104 and L = +104, respectively) Table 7 Parameter estimates and their standard errors and tests of significance for the model Eq. (4) to predict the radial variation in wood density Within-stem longitudinal pattern of wood density Equation (5) was used to model the within-stem longitudinal pattern of wood density (Fig. 6). This model explained 79% of the variation in disc density in the dataset which could be partitioned into variation among sites explaining 31% of total variation, among trees within stands explaining 37%, and the longitudinal trend in density which explained a further 11%. Parameter estimates are given in Table 8. Longitudinal pattern of disc density within stem by relative height. Fine lines show densities for individual trees. The solid line shows the density predicted by Model (4) for an average site (L = 0). The dashed lines for sites at the 5th and 95th percentiles (lower and upper lines, L = −64 and L = +64, respectively) Table 8 Parameter estimates and their standard errors and tests of significance for the model Eq. (5) to predict the longitudinal variation in wood density Application of the model The wood density models described above, implemented within the Forecaster growth and yield simulator, were used to predict the distribution of densities of logs harvested from stands grown on three different sites. Predicted log average density followed the expected pattern given the differences in mean annual air temperature between the sites (Fig. 7). 
At age 40 years, in the stand growing on the warmest site in the North Island, average wood density for the butt log was 467 kg m−3 compared with 376 kg m−3 for the stand growing in the southern South Island. The stand in the upper South Island was intermediate with the butt log averaging 426 kg m−3 in density at the same age. Whole-log wood density decreased with increasing log height up the stem. At stand age of 40 years, the fourth log was 31 kg m−3 lower in density than the butt log in the North Island site, but only 8–18 kg m−3 lower in the South Island sites. Rotation length had only a small effect; for an equivalent log height, there was an increase in log average density of 2–4 kg m−3 for each 5 year increase in rotation length. Finally, the difference between the two silvicultural regimes was minimal (~1–3 kg m−3). Predicted values of whole-log average density for (a) stands grown at three contrasting sites, (b) under different silvicultural regimes (combinations of initial and post-thinning stand density), (c) for four different rotation lengths, and (d) at different heights up the stem (5-m height classes) This study quantifies the extent of variation in wood density of New Zealand Douglas-fir among sites, among trees within a stand as well as radially and longitudinally within trees. The average outerwood density of the New Zealand grown Douglas-fir in this study was 427 kg m−3, lower than is typical for the species growing in its native range. For example, a survey of coast Douglas-fir across 45 sites found the density of wood formed at age 50 years to average 479 kg m−3 (Lassen and Okkonen 1969). There are likely to be a combination of reasons for the lower densities of New Zealand Douglas-fir including greater fertility and correspondingly faster growth rates, younger measurement ages, and lower stand densities. Also, a considerable proportion of the data in this study were from North Island stands planted before 1970 which are of significantly lower wood density possibly because of their genetics; the North Island stands planted after 1970 averaged 462 kg m−3, which is much closer to the North American average. Wood density was found to be strongly influenced by air temperature, especially winter temperature. Similar positive relationships between wood density and temperature have been found for Douglas-fir in the Pacific Northwest of North America (Filipescu et al. 2014), and also for both Douglas-fir and radiata pine in New Zealand (Cown 1974, 1999; Harris 1985; Palmer et al. 2013). This effect has been explained as being caused by warmer temperatures producing an earlier transition in spring from earlywood to latewood, allowing a longer period of higher density latewood production (Kantavichai et al. 2010a). In this study, South Island-grown Douglas-fir outerwood density was shown to increase by 25.4 kg m−3, and North Island stands planted after 1969 by 19.2 kg m−3, for every 1 °C increase in MAT. This is even greater than the increase of 15.9 kg m−3 per 1 °C increase in MAT for New Zealand-grown radiata pine (Beets et al. 2007). The majority of New Zealand's Douglas-fir resource is now grown on cooler, often more snow-prone sites located at higher elevations in the South Island. Stands on these sites will have lower density compared to stands grown on warmer sites in the North Island. The lack of a strong relationship between wood density and MAT for North Islands stands planted prior to 1970 is difficult to explain. 
However, previous studies in New Zealand have shown large differences in wood density between different Douglas-fir provenances (Lausberg et al. 1995) and it is possible that environmental effects may have been masked by genetic differences between stands. This study shows that North Island stands planted before 1970 were of much lower wood density than those planted after this date. For example, the database contained 115 pre-1970 stands and six post-1970 stands within Kaingaroa Forest (the largest North Island forest), with the stands planted in the earlier period having a mean outerwood density 55 kg m−3 lower than those planted in the latter period after adjusting for stand age. The reason for this dissimilarity is not known, but it is possible that it is due to differences in the genotypes planted in each period.

Other climate variables such as rainfall were not significantly associated with wood density after adjusting for stand age and temperature. A negative association between Douglas-fir wood density and rainfall has been observed in the Pacific Northwest of North America (Filipescu et al. 2014), although summer moisture deficit has been shown to negatively affect Douglas-fir ring-level wood density on a drought-prone site (Kantavichai et al. 2010a). It is possible that rainfall on typical New Zealand Douglas-fir sites is above any threshold where it might affect wood density or is sufficiently well-distributed throughout the growing season that severe seasonal moisture deficits rarely occur.

Previous work in New Zealand-grown radiata pine has shown that there is a negative relationship between wood density and soil fertility (Cown and McConchie 1981), specifically the ratio of C to N (Beets et al. 2007). Similar results have been noted for Douglas-fir with, for example, wood density decreasing after application of biosolids to the soil (Kantavichai et al. 2010a). In our dataset, soil C and N measurements were available for a subset of 18 South Island stands, and these stands displayed a reduction in wood density with soil fertility. After adjusting for age and temperature, wood density increased by 3.36 kg m−3 for every unit increase in C:N ratio. This is a comparable although slightly lower value than that found for radiata pine of 4.1 (Beets et al. 2007). (Note also that the radiata pine model used the adjusted ratio C/(N-0.014); using this adjusted ratio, the coefficient for Douglas-fir is 2.96.) Interpolated spatial surfaces of the soil carbon to nitrogen ratio have been developed (Watt and Palmer 2012) and these could potentially be used to predict site-level differences.

The lack of response of wood density to differences in tree spacing was somewhat unexpected given that many studies across a range of conifer species have shown that wood density is generally lower in wider-spaced stands (e.g. Brazier 1970; Clark III et al. 2008; Savill and Sandels 1983; Watt et al. 2011). Our study focused on breast height outerwood density and it must be remembered that the wood surrounding the pith has lower density than the outerwood. Therefore, increased initial growth rates, while not necessarily causing a decrease in individual ring wood density, will likely result in an increase in the proportion of corewood and a lower whole-stem average wood density. The observed lack of a relationship between wood density and tree spacing does not mean that the stiffness of timber is not influenced by stand density.
Stand density has been shown to have a strong effect on branch size, which is an important determinant of timber stiffness (Auty et al. 2012; Hein et al. 2008; Maguire et al. 1999; Whiteside et al. 1976). In addition, numerous studies have found positive associations between stand density and wood stiffness in a range of species (Lasserre et al. 2009; Moore et al. 2009; Watt et al. 2011; Zhang et al. 2006). This includes a recent study in Germany which showed that higher stiffness Douglas-fir timber is obtained from stands planted at higher initial densities (Rais et al. 2014). Therefore, this result should be interpreted with caution and future work is required in New Zealand to establish robust relationships between timber stiffness values, wood density and knot size incorporating the full range of sites and provenances. Given that radiata pine is the predominant commercial species grown in New Zealand, and is, therefore, familiar to forest managers and wood processors, it is both useful and informative to compare the wood properties of the two species. The average outerwood density value for Douglas-fir observed in this study (429 kg m−3) was almost the same as the average value of 439 kg m−3 obtained in a similar study on radiata pine (Palmer et al. 2013). Both species exhibit strong genetic influences on wood density, and as shown in this study, the environmental effects of temperature and soil fertility are very similar for both species. Within-site variation between trees is also very similar, with the average coefficient of variation in outerwood density of 7.5% observed in this study almost identical to the value of 7.4% in a similar study on radiata pine (Kimberley et al. 2015). The lack of relationship between the coefficient of variation and the mean level of density is similar to the pattern observed for radiata pine (Cown 1999). Neither species exhibited a strong relationship between wood density and tree diameter within a stand. However, within-stem patterns in wood density vary considerably between the two species. Although both species show a tendency for wood density to increase radially from the centre of the stem, and for discs or logs to decrease in wood density with height in the stem, the effects are far more pronounced in radiata pine than Douglas-fir. Wood density typically increases by about 110 kg m−3 from rings 1 to 30 in radiata pine (Kimberley et al. 2015), but only by about 65 kg m−3 between its lowest level at ring 7 to ring 30 in Douglas-fir. Disc density at 80% tree height is about 38 kg m−3 less than at the base of the stem in Douglas-fir, while in radiata pine the difference is about 70 kg m−3 (Kimberley et al. 2015). From a practical perspective, this means that the log and stem average density for a given breast density value will be higher in Douglas-fir than in radiata pine. The density models described in this paper have been implemented in the Forecaster stand simulation system (West et al. 2013), where they can be used in conjunction with growth models and stem volume and taper equations to estimate intra-stem wood density profiles and density variation for log assortments. The simulations undertaken in this paper show that differences in average density of logs in New Zealand grown Douglas-fir are mostly the result of site level differences in mean annual air temperature. 
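To put rough numbers on that site effect, the short sketch below evaluates the age-40 environmental equations from the Appendix for the three example sites used in the simulations, with soil C:N ratio left at its default of 22. Note that these are breast height outerwood densities rather than the whole-log values shown in Fig. 7, which come from the full Forecaster chain, and the function name is ours.

```python
def d40_outerwood(mat_c, island="South", soil_cn=22.0):
    """Predicted age-40 breast height outerwood density (kg/m3) from MAT and
    soil C:N ratio, following the Appendix equations (post-1969 stands)."""
    if island == "South":
        base = 197.2 + 25.4 * mat_c
    else:  # North Island, post-1969 plantings
        base = 259.2 + 19.2 * mat_c
    return base + (soil_cn - 22.0) * 3.36

sites = [("southern South Island", 8, "South"),
         ("northern South Island", 10, "South"),
         ("North Island", 12, "North")]
for label, mat, island in sites:
    print(f"{label} (MAT {mat} deg C): {d40_outerwood(mat, island):.0f} kg/m3")
```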
There is little difference in density between the two silvicultural regimes, which was not unexpected given the lack of a relationship between outerwood density and stand stocking. Similarly, the relatively small influence of rotation length on whole log density was not unexpected given the magnitude of radial and longitudinal trends in density, particularly compared with those found in radiata pine (Kimberley et al. 2015). Given that wood density in Douglas-fir does not appear to be under a high degree of silvicultural control, it is more likely that forest managers would use the model to determine whether a stand growing on a particular site would yield logs capable of meeting certain wood density levels. The model currently does not account for genetic differences in wood density. A similar radiata pine wood density model can predict the effects of genetic differences in wood density using the GF Plus Wood Density Rating assigned to many commercial radiata pine seedlots (Kimberley et al. 2016). A similar rating system for Douglas-fir seedlots does not currently exist, although if such a system was developed, it could be readily implemented within the wood density model (see Appendix). However, it is possible to adjust for genetic effects manually when using the model. For example, if a seedlot is believed to produce wood density 50 kg m−3 higher than an average genotype, predictions from Eq. (1) could be adjusted by this amount enabling the model to produce realistic wood density predictions for that seedlot.

There are several priority areas for further work to develop, refine and validate the models presented in this paper. Because the wood property models are a series of component models that have also been coupled to models that predict tree growth, taper and volume, error propagation is likely to be important. This is a common issue in forest modelling, particularly when several models are linked together or when data obtained from field sampling are used as input into predictive models (Weiskittel et al. 2011). Because the standard errors of the parameter estimates for the wood property models are available along with the root mean square errors for these models, techniques such as Monte Carlo simulations can be used to estimate the magnitude of the prediction error when these models are combined. The error associated with log level density predictions could also be estimated provided similar information can be obtained for the models describing tree growth, volume and taper. Further validation of the models should be undertaken to ensure that predictions are robust for the range of site conditions under which Douglas-fir is grown in New Zealand. A priority would be to further examine the relationship between stand density and wood density.

The study has quantified the sources and magnitude of variation in Douglas-fir wood density at a range of scales based on analyses of a comprehensive dataset collected over a 50 year period. This has highlighted the relative contributions of site and silviculture, with most of the variation in wood density among trees due to site level differences in mean annual air temperature. The models developed to explain these trends in wood density have been coupled together and linked to a growth and yield simulator which enables forest managers to estimate the impact of different factors, primarily site, on the wood density distribution of log product assortments.
By combining this information with branch characteristics, the impacts on the performance of structural timber can be inferred. Antonova, G. F., & Stasova, V. V. (1997). Effects of environmental factors on wood formation in larch (Larix sibirica Ldb.) stems. Trees, 11, 462–468. Auty, D., Weiskittel, A. R., Achim, A., Moore, J. R., & Gardiner, B. A. (2012). Influence of early re-spacing on Sitka spruce branch structure. Annals of Forest Science, 69, 93–104. Barrett, J. D., & Kellogg, R. M. (1991). Bending strength and stiffness of second-growth Douglas-fir dimension lumber. Forest Products Journal, 41(10), 35–43. Beets, P. N., Kimberley, M. O., & McKinley, R. B. (2007). Predicting wood density of Pinus radiata annual growth increments. New Zealand Journal of Forestry Science, 37(2), 241–266. Brazier, J. D. (1970). The effect of spacing on the wood density and wood yields of Sitka spruce. Supplement to Forestry, 22–28. Clark III, A., Jordan, L., Schimleck, L., & Daniels, R. F. (2008). Effect of initial planting spacing on wood properties of unthinned loblolly pine at age 21. Forest Products Journal, 58(10), 78–83. Cown, D. J. (1974). Wood density of radiata pine: Its variation and manipulation. New Zealand Journal of Forestry, 19(1), 84–92. Cown, D. J. (1999). New Zealand pine and Douglas-fir: Suitability for processing. (FRI Bulletin 216). Rotorua, New Zealand: New Zealand Forest Research Institute. Cown, D. J., & Clement, B. C. (1983). A wood densitometer using direct scanning with X-rays. Wood Science and Technology, 17(2), 91–99. Cown, D. J., & McConchie, D. L. (1981). Effects of thinning and fertiliser application on wood properties of Pinus radiata. New Zealand Journal of Forestry Science, 11, 79–91. Cown, D. J., & Parker, M. L. (1979). Densitometric analysis of wood from five Douglas-fir provenances. Silvae Genetica, 28(2/3), 48–53. Filipescu, C. N., Lowell, E. C., Koppenaal, R., & Mitchell, A. K. (2014). Modeling regional and climatic variation of wood density and ring width in intensively managed Douglas-fir. Canadian Journal of Forest Research, 44(3), 220–229. Grace, J. C., Brownlie, R. K., & Kennedy, S. G. (2015). The influence of initial and post-thinning stand density on Douglas-fir branch diameter at two sites in New Zealand. New Zealand Journal of Forestry Science, 45:14. Harris, J. M. (1966). A survey of the wood density of Douglas-fir grown in New Zealand. (Forest Products Report 194). Rotorua: New Zealand Forest Service, Forest research institute. Harris, J. M. (1967). Wood density as a criterion for thinning Douglas fir. New Zealand Journal of Forestry, 12(1), 54–62. Harris, J. M. (1978). Intrinsic wood properties of Douglas fir and how they can be modified. In: Forest Research Institute, Symposium No. 15: A review of Douglas-fir in New Zealand. New Zealand Forest Research Institute, Rotorua, New Zealand. Pp. 235–239. Harris, J. M. (1985). Effects of site and silviculture on wood density of Douglas fir grown in Canterbury conservancy. New Zealand Journal of Forestry, 30(1), 121–132. Hayes, J., & Andersen, C. (2007). The Scion permanent sample plot (PSP) database system. New Zealand Journal of Forestry, 52(1), 31–33. Hein, S., Weiskittel, A. R., & Kohnle, U. (2008). Effect of wide spacing on tree growth, branch and sapwood properties of young Douglas-fir [Pseudotsuga menziesii (Mirb.) Franco] in south-western Germany. European Journal of Forest Research, 127, 481–493. Jordan, L., Clark, A., Schimleck, L., Hall, D. B., & Daniels, R. F. (2008). 
Regional variation in wood specific gravity of planted loblolly pine in the United States. Canadian Journal of Forest Research, 38(4), 698–710. Jozsa, L. A., & Brix, H. (1989). The effects of fertilization and thinning on wood quality of a 24-year-old Douglas-fir stand. Canadian Journal of Forest Research, 19(9), 1137–1145. Kantavichai, R., Briggs, D., & Turnblom, E. (2010a). Modeling effects of soil, climate, and silviculture on growth ring specific gravity of Douglas-fir on a drought-prone site in western Washington. Forest Ecology and Management, 259(6), 1085–1092. Kantavichai, R., Briggs, D. G., & Turnblom, E. C. (2010b). Effect of thinning, fertilization with biosolids, and weather on interannual ring specific gravity and carbon accumulation of a 55-year-old Douglas-fir stand in western Washington. Canadian Journal of Forest Research, 40(1), 72–85. Kellog, R. M. (Ed.). (1989). Second growth Douglas-fir: Its management and conversion for value (Special publication no. SP-32). Vancouver, B.C.: Forintek Canada Corporation. Kimberley, M. O., Cown, D. J., McKinley, R. B., Moore, J. R., & Dowling, L. J. (2015). Modelling variation in wood density within and among trees in stands of New Zealand-grown radiata pine. New Zealand Journal of Forest Science, 45:(22), 1–13. Kimberley, M. O., Moore, J. R., & Dungey, H. S. (2016). Modelling the effects of genetic improvement on radiata pine wood density. New Zealand Journal of Forestry Science, 46:8, 1–8. Knowles, R. L., Hansen, L. W., Downes, G., Lee, J. R., Barr, A. B., Roper, J. G., & Gaunt, D. J. (2003). Modelling within-tree and between-tree variation in Douglas-fir wood and lumber properties. Abstract in Proceedings, IUFRO All Division 5 Conference, "Forest Products Research - Providing for Sustainable Choices", Rotorua, NZ, 11–15 March 2003, 94. Lassen, L. E., & Okkonen, E. A. (1969). Effect of rainfall and elevation on specific gravity of coast Douglas-fir. Wood and Fiber Science, 1, 227–235. Lasserre, J. P., Mason, E. G., Watt, M. S., & Moore, J. R. (2009). Influence of initial planting spacing and genotype on microfibril angle, wood density, fibre properties and modulus of elasticity in Pinus radiata D.Don corewood. Forest Ecology and Management, 258(9), 1924–1931. Lausberg, M. (1996). Wood density variation in Douglas-fir provenances in New Zealand. Proceedings of Wood Quality Workshop '95, FRI Bulletin No. 201, New Zealand Forest Research Institute, Rotorua, New Zealand. Pp. 64–71. Lausberg, M. J. F., Cown, D. J., McConchie, D. L., & Skipwith, J. H. (1995). Variation in some wood properties of Pseudotsuga menziesii provenances grown in New Zealand. New Zealand Journal of Forestry Science, 25, 133–146. Maclaren, J. P. (2009). Douglas-fir manual. Forest Research Bulletin 237. New Zealand Forest Research Institute Limited, Rotorua, New Zealand. Maeglin, R. R., & Wahlgren, H. E. (1972). Western Wood Density Survey Report No. 2. Research Paper FPL 183, USDA Forest Service, Forest Products Laboratory, Madison, Wisconsin. 24pp. Maguire, D. A., Johnston, S. R., & Cahill, J. (1999). Predicting branch diameters on second-growth Douglas-fir from tree-level descriptors. Canadian Journal of Forest Research, 29(12), 1829–1840. Maguire, D. A., Kershaw Jr., J. A., & Hann, D. W. (1991). Predicting the effects of silvicultural regime on branch size and crown wood core in Douglas-fir. Forest Science, 37, 1409–1428. Ministry for Primary Industries. (2014). National Exotic Forest Description as at 1 April 2014. Wellington: Ministry for Primary Industries. 
Moore, J., Achim, A., Lyon, A., Mochan, S., & Gardiner, B. (2009). Effects of early re-spacing on the physical and mechanical properties of Sitka spruce structural timber. Forest Ecology and Management, 258(7), 1174–1180. Palmer, D. J., Kimberley, M. O., Cown, D. J., & McKinley, R. B. (2013). Assessing prediction accuracy in a regression kriging surface of Pinus radiata outerwood density across New Zealand. Forest Ecology and Management, 308, 9–16. Pojar, J., & MacKinnon, A. (Eds.). (1994). Plants of the Pacific Northwest Coast - Washington. Oregon, British Columbia and Alaska: Lone Pine Publishing. Rais, A., Poschenrieder, W., Pretzsch, H., & van de Kuilen, J. W. G. (2014). Influence of initial plant density on sawn timber properties for Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco). Annals of Forest Science, 71(5), 617–626. SAS Institute Inc. (2011). Base SAS® 9.3 Procedures Guide. Cary, NC: SAS Institute Inc.. Savill, P. S., & Sandels, A. J. (1983). The influence of early respacing on the wood density of Sitka spruce. Forestry, 65(2), 109–120. Smith, D. M. (1954). Maximum moisture content method for determining specific gravity of small wood samples. (Report 2014). Madison, WI: United States Department of Agriculture, Forest Service, Forest Products Laboratory. Stoehr, M. U., Ukrainetz, N. K., Hayton, L. K., & Yanchuk, A. D. (2009). Current and future trends in juvenile wood density for coastal Douglas-fir. Canadian Journal of Forest Research, 39(7), 1415–1419. Stone, J. K., Hood, I. A., Watt, M. S., & Kerrigan, J. L. (2007). Distribution of Swiss needle cast in New Zealand in relation to winter temperature. Australasian Plant Pathology, 36(5), 445. Tustin, J. R., & Wilcox, M. D. (1978). The relative importance of branch size and wood density to the quality of Douglas fir framing timber. In: Forest Research Institute, Symposium No. 15: A review of Douglas-fir in New Zealand. New Zealand Forest Research Institute, Rotorua, New Zealand. Pp. 267–272. USDA (1965). Western Wood Density Survey Report No. 1. Research Paper FPL-27. USDA Forest Service, Forest Products Laboratory, Madison, Wisconsin. 60pp. Vargas-Hernandez, J., & Adams, W. T. (1994). Genetic relationships between wood density components and cambial growth rhythm in young coastal Douglas-fir. Canadian Journal of Forest Research, 24(9), 1871–1876. Vikram, V., Cherry, M. L., Briggs, D., Cress, D. W., Evans, R., & Howe, G. T. (2011). Stiffness of douglas-fir lumber: Effects of wood properties and genetics. Canadian Journal of Forest Research, 41(6), 1160–1173. Watt, M. S., & Palmer, D. J. (2012). Use of regression kriging to develop a carbon:nitrogen ratio surface for New Zealand. Geoderma, 183-184, 49–57. Watt, M. S., Zoric, B., Kimberley, M. O., & Harrington, J. (2011). Influence of stocking on radial and longitudinal variation in modulus of elasticity, microfibril angle, and density in a 24-year-old Pinus radiata thinning trial. Canadian Journal of Forest Research, 41(7), 1422–1431. Weiskittel, A. R., Hann, D. W., Kershaw Jr., J. A., & Vanclay, J. K. (2011). Forest Growth and Yield Modeling. Chichester, UK: John Wiley & Sons, Ltd.. Wellwood, R. W. (1952). The effect of several variables on the specific gravity of second-growth Douglas-fir. Forestry Chronicle, 28(3), 34–42. doi:10.5558/tfc28034-28033. West, G. G., Moore, J. R., Shula, R. G., Harrington, J. J., Snook, J., Gordon, J. A., & Riordan, M. P. (2013). Forest management DSS development in New Zealand (pp. 153–163). 
Slovakia: Paper presented at the Implementation of DSS Tools into the Forestry Practice, Technical University of Zvolen. Whiteside, I. D. (1978). Machine stress grading studies and grading rules for Douglas fir. In: New Zealand Forest Service, Forest Research Institute, Symposium No. 15: A review of Douglas-fir in New Zealand, 273–287. Whiteside, I. D., Wilcox, M. D., & Tustin, J. R. (1976). New Zealand Douglas fir timber quality in relation to silviculture. New Zealand Journal of Forestry, 22(1), 24–44. Wilhelmsson, L., Arlinger, J., Spangberg, K., Lundqvist, S.-O., Grahn, T., Hedenberg, O., & Olsson, L. (2002). Models for predicting wood properties in stems of Picea abies and Pinus sylvestris in Sweden. Scandinavian Journal of Forest Research, 17, 330–350. Wimmer, R., & Grabner, M. (2000). A comparison of tree-ring features in Picea abies as correlated with climate. IAWA Journal, 21(4), 403–416. Zhang, S. Y., Chauret, G., Swift, D. E., & Duchesne, I. (2006). Effects of precommercial thinning on tree growth and lumber quality in a jack pine stand in New Brunswick, Canada. Canadian Journal of Forest Research, 36(4), 945–952. Funding for this study was provided from the Forest Growers Levy, Future Forests Research Ltd. and Scion. Ernslaw One Ltd., City Forests Ltd. and Blakely Pacific provided support for the collection of additional data. We are particularly grateful to Mark Dean and Don McConchie for their assistance with field data collection. Christine Dodunski entered some of the historical data into the database and along with Richard Moberly assisted with the gravimetric density assessments. We would like to thank numerous current and former colleagues who collected density samples over many years. More recent outerwood density data were collected by Stuart Kennedy, Stephen Pearce and Peter Beets. Brian Clement and Jeremy Snook incorporated the wood density model into the Forecaster growth and yield prediction system. We would also like to thank two anonymous reviewers for their helpful comments on an earlier version of this manuscript. Scion, Private Bag 3020, Rotorua, 3046, New Zealand Mark O. Kimberley , Russell B McKinley , David J. Cown & John R. Moore Search for Mark O. Kimberley in: Search for Russell B McKinley in: Search for David J. Cown in: Search for John R. Moore in: MOK developed the modelling approach, undertook the data analysis and contributed to writing the manuscript. DJC and RBMcK collected and assembled most of the wood density data, contributed to the interpretation of the results and writing of the manuscript. JRM contributed to writing the manuscript and undertook the Forecaster analysis. All authors read and approved the final version of the manuscript. Correspondence to John R. Moore. Incorporation of the wood density models into the forecaster growth and yield simulator Forecaster uses growth models to simulate the development with age of a list of stems representing a forest stand. At the required clearfell age, or for production thinning, Forecaster predicts the volumes and other characteristics of logs cut from the felled stems. It is also capable of predicting quality attributes for each log such as branch size and wood properties. To simulate properties such as wood density, Forecaster requires models for predicting the property within a stem in terms of, for example, height and ring number, or height and tree age. The Douglas-fir wood density models described in this paper as implemented in Forecaster will operate using the following steps: 1. 
Predict stand mean breast height outerwood density for age 40 years (D 40ow ). This prediction can be obtained using either an environmental model based on MAT and soil C:N ratio, or from a user-supplied measurement of outerwood density (D ow ) (e.g., from cores taken from sample trees). Given the variability between sites and because of genetic variation, it will always be more accurate to use a measurement of outerwood density taken from the stand of interest rather than to rely on the environmental model. If an outerwood density measurement is available, it is adjusted to age 40 years using: $$ {D}_{40ow}={D}_{ow}-331\left[ \exp \left(-0.0731\times 40\right)- \exp \left(-0.0731\times Age\right)\right] $$ The environmental model is based on Eq. (1) evaluated at age 40 years for South Island and post-1969 North Island stands, i.e.: $$ \begin{array}{l}\mathrm{South}\kern0.5em \mathrm{Island}:\kern0.5em {D}_{40ow}=197.2+25.4\times MAT+\left( SoilCN\hbox{-} 22\right)\times 3.36\\ {}\mathrm{North}\ \mathrm{Island}:\kern0.5em {D}_{40ow}=259.2+19.2\times MAT+\left( SoilCN- 22\right)\times 3.36\end{array} $$ The soil C:N ratio adjustment used in the environmental model uses an assumption that stands in the database used to develop the MAT model had an average soil C:N ratio of 22. Note that if soil C:N ratio is not known, a default value of 22 will be used. 2. Predict mean breast height wood densities (D ring ) by ring number from pith (R) using the following equations based on Eq. (4) assuming that density of the outer 50 mm of a breast height increment core is centred on ring 29 from pith. This assumption is used because, (i) it typically takes 3 years for a planted seedling to reach 1.4 m height, (ii) a typical breast height outerwood core at age 40 years contains 15 rings (rings 22–37), and (iii) although this implies a central location at ring 29, it is more correct to use ring 28 because inner rings are wider than outer rings. $$ L=\left({D}_{40ow}-432.4\right)/0.9993 $$ $$ \begin{array}{c}{D}_{ring}=432.4+\left(1.22-R\right)/\left(0.0235+1.25\times \exp \left(0.221\times R\right)\right)\\ {}+\left(1-0.814\times \exp \left(-0.251\times R\right)\right)\times L\end{array} $$ These predictions are used to predict disc density at breast height (D bh ) for any required age based on area weighted averages using ring widths predicted by the growth model. One of the shortcomings of the current model is that it does not account for genetic differences in wood density. Currently, Douglas-fir seedlots sold in New Zealand do not have a wood density rating unlike commercial radiata pine seedlots which are often assigned GF Plus density ratings. However, if a Douglas-fir seedlot were known to produce trees of higher or lower than average D 40ow , this effect could be incorporated into the density predictions by using the adjusted D 40ow when calculating L in Eq. (A4). 3. Predict disc densities (D disc ) at suitable intervals up each stem as a function of breast height disc density (D bh ), disc height (H disc ), and total tree height (H), allowing volume-weighted densities of logs cut from each stem to be calculated. The following equations based on Eq. (5) are used: $$ L={D}_{bh}-\left(436.6-126.4\times 1.4/H+243.7\times {\left(1.4/H\right)}^2-167.5\times {\left(1.4/H\right)}^3\right) $$ $$ {D}_{disc}=L+436.6-126.4\times {H}_{disc}/H+243.7\times {\left({H}_{disc}/H\right)}^2-167.5\times {\left({H}_{disc}/H\right)}^3 $$ 4. 
The above steps allow prediction of wood densities of logs for an average tree in the stand. To simulate a realistic variation about this average, Forecaster uses a stochastic approach, generating a normally distributed random variate with mean one and coefficient of variation 6.9% for each stem, and multiplying all predicted log densities for the stem by this variate. An illustrative numerical sketch of steps 1–4 is given below.

Kimberley, M.O., McKinley, R.B., Cown, D.J. et al. Modelling the variation in wood density of New Zealand-grown Douglas-fir. N.Z. J. of For. Sci. 47, 15 (2017). doi:10.1186/s40490-017-0096-0

Keywords: Wood density; Wood quality; Site effects; Silviculture
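To show how steps 1–4 of the appendix fit together, the short script below evaluates the equations from steps 1–3 and the stochastic multiplier from step 4 for a single hypothetical stand. The input values (mean annual temperature, soil C:N ratio, tree height), the function and variable names, and the use of Python are illustrative choices made here, not part of the Forecaster implementation; in particular, the breast-height disc density is approximated by the ring-29 value purely for brevity rather than by the area-weighted average of ring densities described in step 2.

```python
import math
import random

def d40ow_from_environment(mat, soil_cn=22.0, island="South"):
    """Stand mean breast-height outerwood density at age 40 from the environmental model.
    mat: mean annual temperature (deg C); soil_cn defaults to 22 as assumed in the appendix."""
    if island == "South":
        return 197.2 + 25.4 * mat + (soil_cn - 22.0) * 3.36
    return 259.2 + 19.2 * mat + (soil_cn - 22.0) * 3.36  # North Island, post-1969 stands

def d40ow_from_core(d_ow, age):
    """Adjust a measured outerwood density (age != 40) to age 40 (step 1 alternative)."""
    return d_ow - 331.0 * (math.exp(-0.0731 * 40) - math.exp(-0.0731 * age))

def ring_density(ring, d40ow):
    """Breast-height density for ring number `ring` from the pith (step 2)."""
    L = (d40ow - 432.4) / 0.9993
    return (432.4
            + (1.22 - ring) / (0.0235 + 1.25 * math.exp(0.221 * ring))
            + (1.0 - 0.814 * math.exp(-0.251 * ring)) * L)

def disc_density(d_bh, h_disc, h_total):
    """Disc density at height h_disc for a tree of total height h_total (step 3)."""
    def profile(rel_h):
        return 436.6 - 126.4 * rel_h + 243.7 * rel_h ** 2 - 167.5 * rel_h ** 3
    L = d_bh - profile(1.4 / h_total)
    return L + profile(h_disc / h_total)

# Hypothetical stand: MAT 10 deg C, soil C:N ratio 25, South Island, 40 m tall trees.
d40 = d40ow_from_environment(mat=10.0, soil_cn=25.0, island="South")
rings = {r: ring_density(r, d40) for r in (5, 15, 29)}
# Use the ring-29 value as a stand-in for breast-height disc density (illustration only).
discs = {h: disc_density(rings[29], h, 40.0) for h in (1.4, 10.0, 20.0, 30.0)}
# Step 4: stochastic between-stem variation, CV = 6.9% about a mean of one.
stem_factor = random.gauss(1.0, 0.069)
print(d40, rings, discs, stem_factor)
```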
Quantum supremacy using a programmable superconducting processor

Frank Arute1, Kunal Arya1, Ryan Babbush1, Dave Bacon1, Joseph C. Bardin1,2, Rami Barends1, Rupak Biswas3, Sergio Boixo1, Fernando G. S. L. Brandao1,4, David A. Buell1, Brian Burkett1, Yu Chen1, Zijun Chen1, Ben Chiaro5, Roberto Collins1, William Courtney1, Andrew Dunsworth1, Edward Farhi1, Brooks Foxen1,5, Austin Fowler1, Craig Gidney1, Marissa Giustina1, Rob Graff1, Keith Guerin1, Steve Habegger1, Matthew P. Harrigan1, Michael J. Hartmann1,6, Alan Ho1, Markus Hoffmann1, Trent Huang1, Travis S. Humble7, Sergei V. Isakov1, Evan Jeffrey1, Zhang Jiang1, Dvir Kafri1, Kostyantyn Kechedzhi1, Julian Kelly1, Paul V. Klimov1, Sergey Knysh1, Alexander Korotkov1,8, Fedor Kostritsa1, David Landhuis1, Mike Lindmark1, Erik Lucero1, Dmitry Lyakh9, Salvatore Mandrà3,10, Jarrod R. McClean1, Matthew McEwen5, Anthony Megrant1, Xiao Mi1, Kristel Michielsen11,12, Masoud Mohseni1, Josh Mutus1, Ofer Naaman1, Matthew Neeley1, Charles Neill1, Murphy Yuezhen Niu1, Eric Ostby1, Andre Petukhov1, John C. Platt1, Chris Quintana1, Eleanor G. Rieffel3, Pedram Roushan1, Nicholas C. Rubin1, Daniel Sank1, Kevin J. Satzinger1, Vadim Smelyanskiy1, Kevin J. Sung1,13, Matthew D. Trevithick1, Amit Vainsencher1, Benjamin Villalonga1,14, Theodore White1, Z. Jamie Yao1, Ping Yeh1, Adam Zalcman1, Hartmut Neven1 & John M. Martinis1,5

Nature volume 574, pages 505–510 (2019)

The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor1. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits2,3,4,5,6,7 to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy8,9,10,11,12,13,14 for this specific computational task, heralding a much-anticipated computing paradigm.
In the early 1980s, Richard Feynman proposed that a quantum computer would be an effective tool with which to solve problems in physics and chemistry, given that it is exponentially costly to simulate large quantum systems with classical computers1. Realizing Feynman's vision poses substantial experimental and theoretical challenges. First, can a quantum system be engineered to perform a computation in a large enough computational (Hilbert) space and with a low enough error rate to provide a quantum speedup? Second, can we formulate a problem that is hard for a classical computer but easy for a quantum computer? By computing such a benchmark task on our superconducting qubit processor, we tackle both questions. Our experiment achieves quantum supremacy, a milestone on the path to full-scale quantum computing8,9,10,11,12,13,14. In reaching this milestone, we show that quantum speedup is achievable in a real-world system and is not precluded by any hidden physical laws. Quantum supremacy also heralds the era of noisy intermediate-scale quantum (NISQ) technologies15. The benchmark task we demonstrate has an immediate application in generating certifiable random numbers (S. Aaronson, manuscript in preparation); other initial uses for this new computational capability may include optimization16,17, machine learning18,19,20,21, materials science and chemistry22,23,24. However, realizing the full promise of quantum computing (using Shor's algorithm for factoring, for example) still requires technical leaps to engineer fault-tolerant logical qubits25,26,27,28,29. To achieve quantum supremacy, we made a number of technical advances which also pave the way towards error correction. We developed fast, high-fidelity gates that can be executed simultaneously across a two-dimensional qubit array. We calibrated and benchmarked the processor at both the component and system level using a powerful new tool: cross-entropy benchmarking11. Finally, we used component-level fidelities to accurately predict the performance of the whole system, further showing that quantum information behaves as expected when scaling to large systems. A suitable computational task To demonstrate quantum supremacy, we compare our quantum processor against state-of-the-art classical computers in the task of sampling the output of a pseudo-random quantum circuit11,13,14. Random circuits are a suitable choice for benchmarking because they do not possess structure and therefore allow for limited guarantees of computational hardness10,11,12. We design the circuits to entangle a set of quantum bits (qubits) by repeated application of single-qubit and two-qubit logical operations. Sampling the quantum circuit's output produces a set of bitstrings, for example {0000101, 1011100, …}. Owing to quantum interference, the probability distribution of the bitstrings resembles a speckled intensity pattern produced by light interference in laser scatter, such that some bitstrings are much more likely to occur than others. Classically computing this probability distribution becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grow. We verify that the quantum processor is working properly using a method called cross-entropy benchmarking11,12,14, which compares how often each bitstring is observed experimentally with its corresponding ideal probability computed via simulation on a classical computer. 
For a given circuit, we collect the measured bitstrings {xi} and compute the linear cross-entropy benchmarking fidelity11,13,14 (see also Supplementary Information), which is the mean of the simulated probabilities of the bitstrings we measured: $${ {\mathcal F} }_{{\rm{XEB}}}={2}^{n}{\langle P({x}_{i})\rangle }_{i}-1$$ where n is the number of qubits, P(xi) is the probability of bitstring xi computed for the ideal quantum circuit, and the average is over the observed bitstrings. Intuitively, \({ {\mathcal F} }_{{\rm{XEB}}}\) is correlated with how often we sample high-probability bitstrings. When there are no errors in the quantum circuit, the distribution of probabilities is exponential (see Supplementary Information), and sampling from this distribution will produce \({{\mathscr{F}}}_{{\rm{X}}{\rm{E}}{\rm{B}}}=1\). On the other hand, sampling from the uniform distribution will give ⟨P(xi)⟩i = 1/2n and produce \({{\mathscr{F}}}_{{\rm{X}}{\rm{E}}{\rm{B}}}=0\). Values of \({ {\mathcal F} }_{{\rm{XEB}}}\) between 0 and 1 correspond to the probability that no error has occurred while running the circuit. The probabilities P(xi) must be obtained from classically simulating the quantum circuit, and thus computing \({ {\mathcal F} }_{{\rm{XEB}}}\) is intractable in the regime of quantum supremacy. However, with certain circuit simplifications, we can obtain quantitative fidelity estimates of a fully operating processor running wide and deep quantum circuits. Our goal is to achieve a high enough \({ {\mathcal F} }_{{\rm{XEB}}}\) for a circuit with sufficient width and depth such that the classical computing cost is prohibitively large. This is a difficult task because our logic gates are imperfect and the quantum states we intend to create are sensitive to errors. A single bit or phase flip over the course of the algorithm will completely shuffle the speckle pattern and result in close to zero fidelity11 (see also Supplementary Information). Therefore, in order to claim quantum supremacy we need a quantum processor that executes the program with sufficiently low error rates. Building a high-fidelity processor We designed a quantum processor named 'Sycamore' which consists of a two-dimensional array of 54 transmon qubits, where each qubit is tunably coupled to four nearest neighbours, in a rectangular lattice. The connectivity was chosen to be forward-compatible with error correction using the surface code26. A key systems engineering advance of this device is achieving high-fidelity single- and two-qubit operations, not just in isolation but also while performing a realistic computation with simultaneous gate operations on many qubits. We discuss the highlights below; see also the Supplementary Information. In a superconducting circuit, conduction electrons condense into a macroscopic quantum state, such that currents and voltages behave quantum mechanically2,30. Our processor uses transmon qubits6, which can be thought of as nonlinear superconducting resonators at 5–7 GHz. The qubit is encoded as the two lowest quantum eigenstates of the resonant circuit. Each transmon has two controls: a microwave drive to excite the qubit, and a magnetic flux control to tune the frequency. Each qubit is connected to a linear resonator used to read out the qubit state5. As shown in Fig. 1, each qubit is also connected to its neighbouring qubits using a new adjustable coupler31,32. Our coupler design allows us to quickly tune the qubit–qubit coupling from completely off to 40 MHz. 
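Before continuing with the processor description, the fidelity measure defined at the start of this section can be made concrete. The sketch below computes the linear cross-entropy fidelity F_XEB = 2^n ⟨P(x_i)⟩_i − 1 from a list of observed bitstrings and their ideal probabilities; the tiny four-state example and the dictionary interface for ideal probabilities are assumptions made for illustration, not the analysis pipeline used in the experiment.

```python
from typing import Dict, Iterable

def linear_xeb_fidelity(n_qubits: int,
                        measured_bitstrings: Iterable[str],
                        ideal_probs: Dict[str, float]) -> float:
    """F_XEB = 2^n * <P(x_i)>_i - 1, averaged over the observed bitstrings.
    ideal_probs maps each bitstring to its ideal probability P(x) for the circuit."""
    samples = list(measured_bitstrings)
    mean_p = sum(ideal_probs[x] for x in samples) / len(samples)
    return (2 ** n_qubits) * mean_p - 1.0

# Toy 2-qubit "circuit" with an uneven ideal output distribution.
ideal = {"00": 0.45, "01": 0.30, "10": 0.20, "11": 0.05}
# Observing the high-probability strings more often gives a positive F_XEB
# (it approaches 1 for the exponential distribution described in the text).
print(linear_xeb_fidelity(2, ["00", "00", "01", "10", "00"], ideal))
# Sampling uniformly gives <P(x_i)> = 1/2^n and hence F_XEB = 0.
print(linear_xeb_fidelity(2, ["00", "01", "10", "11"], ideal))
```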
One qubit did not function properly, so the device uses 53 qubits and 86 couplers. Fig. 1: The Sycamore processor. a, Layout of processor, showing a rectangular array of 54 qubits (grey), each connected to its four nearest neighbours with couplers (blue). The inoperable qubit is outlined. b, Photograph of the Sycamore chip. The processor is fabricated using aluminium for metallization and Josephson junctions, and indium for bump-bonds between two silicon wafers. The chip is wire-bonded to a superconducting circuit board and cooled to below 20 mK in a dilution refrigerator to reduce ambient thermal energy to well below the qubit energy. The processor is connected through filters and attenuators to room-temperature electronics, which synthesize the control signals. The state of all qubits can be read simultaneously by using a frequency-multiplexing technique33,34. We use two stages of cryogenic amplifiers to boost the signal, which is digitized (8 bits at 1 GHz) and demultiplexed digitally at room temperature. In total, we orchestrate 277 digital-to-analog converters (14 bits at 1 GHz) for complete control of the quantum processor. We execute single-qubit gates by driving 25-ns microwave pulses resonant with the qubit frequency while the qubit–qubit coupling is turned off. The pulses are shaped to minimize transitions to higher transmon states35. Gate performance varies strongly with frequency owing to two-level-system defects36,37, stray microwave modes, coupling to control lines and the readout resonator, residual stray coupling between qubits, flux noise and pulse distortions. We therefore optimize the single-qubit operation frequencies to mitigate these error mechanisms. We benchmark single-qubit gate performance by using the cross-entropy benchmarking protocol described above, reduced to the single-qubit level (n = 1), to measure the probability of an error occurring during a single-qubit gate. On each qubit, we apply a variable number m of randomly selected gates and measure \({ {\mathcal F} }_{{\rm{XEB}}}\) averaged over many sequences; as m increases, errors accumulate and average \({ {\mathcal F} }_{{\rm{XEB}}}\) decays. We model this decay by [1 − e1/(1 − 1/D2)]m where e1 is the Pauli error probability. The state (Hilbert) space dimension term, D = 2n, which equals 2 for this case, corrects for the depolarizing model where states with errors partially overlap with the ideal state. This procedure is similar to the more typical technique of randomized benchmarking27,38,39, but supports non-Clifford-gate sets40 and can separate out decoherence error from coherent control error. We then repeat the experiment with all qubits executing single-qubit gates simultaneously (Fig. 2), which shows only a small increase in the error probabilities, demonstrating that our device has low microwave crosstalk. Fig. 2: System-wide Pauli and measurement errors. a, Integrated histogram (empirical cumulative distribution function, ECDF) of Pauli errors (black, green, blue) and readout errors (orange), measured on qubits in isolation (dotted lines) and when operating all qubits simultaneously (solid). The median of each distribution occurs at 0.50 on the vertical axis. Average (mean) values are shown below. b, Heat map showing single- and two-qubit Pauli errors e1 (crosses) and e2 (bars) positioned in the layout of the processor. Values are shown for all qubits operating simultaneously. 
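The single-qubit Pauli error described above is extracted from the decay of the average fidelity with sequence length, F_XEB ≈ [1 − e1/(1 − 1/D^2)]^m with D = 2^n = 2 for one qubit. The following minimal curve-fitting sketch illustrates that extraction; the synthetic data, the chosen "true" error rate and the use of SciPy are assumptions for illustration, not the calibration code run on the processor.

```python
import numpy as np
from scipy.optimize import curve_fit

D = 2  # single-qubit Hilbert-space dimension, D = 2^n with n = 1

def xeb_decay(m, e1):
    """Depolarizing-model decay of F_XEB with the number of gates m."""
    return (1.0 - e1 / (1.0 - 1.0 / D ** 2)) ** m

# Synthetic "measured" fidelities for a qubit with an assumed true Pauli error of 0.15%,
# plus a little measurement noise.
rng = np.random.default_rng(0)
m_values = np.arange(10, 500, 20)
measured = xeb_decay(m_values, 0.0015) + rng.normal(0.0, 0.005, m_values.size)

(e1_fit,), _ = curve_fit(xeb_decay, m_values, measured, p0=[0.001])
print(f"fitted Pauli error e1 = {e1_fit:.4%}")
```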
We perform two-qubit iSWAP-like entangling gates by bringing neighbouring qubits on-resonance and turning on a 20-MHz coupling for 12 ns, which allows the qubits to swap excitations. During this time, the qubits also experience a controlled-phase (CZ) interaction, which originates from the higher levels of the transmon. The two-qubit gate frequency trajectories of each pair of qubits are optimized to mitigate the same error mechanisms considered in optimizing single-qubit operation frequencies. To characterize and benchmark the two-qubit gates, we run two-qubit circuits with m cycles, where each cycle contains a randomly chosen single-qubit gate on each of the two qubits followed by a fixed two-qubit gate. We learn the parameters of the two-qubit unitary (such as the amount of iSWAP and CZ interaction) by using \({ {\mathcal F} }_{{\rm{XEB}}}\) as a cost function. After this optimization, we extract the per-cycle error e2c from the decay of \({ {\mathcal F} }_{{\rm{XEB}}}\) with m, and isolate the two-qubit error e2 by subtracting the two single-qubit errors e1. We find an average e2 of 0.36%. Additionally, we repeat the same procedure while simultaneously running two-qubit circuits for the entire array. After updating the unitary parameters to account for effects such as dispersive shifts and crosstalk, we find an average e2 of 0.62%. For the full experiment, we generate quantum circuits using the two-qubit unitaries measured for each pair during simultaneous operation, rather than a standard gate for all pairs. The typical two-qubit gate is a full iSWAP with 1/6th of a full CZ. Using individually calibrated gates in no way limits the universality of the demonstration. One can compose, for example, controlled-NOT (CNOT) gates from 1-qubit gates and two of the unique 2-qubit gates of any given pair. The implementation of high-fidelity 'textbook gates' natively, such as CZ or \(\sqrt{{\rm{iSWAP}}}\), is work in progress. Finally, we benchmark qubit readout using standard dispersive measurement41. Measurement errors averaged over the 0 and 1 states are shown in Fig. 2a. We have also measured the error when operating all qubits simultaneously, by randomly preparing each qubit in the 0 or 1 state and then measuring all qubits for the probability of the correct result. We find that simultaneous readout incurs only a modest increase in per-qubit measurement errors. Having found the error rates of the individual gates and readout, we can model the fidelity of a quantum circuit as the product of the probabilities of error-free operation of all gates and measurements. Our largest random quantum circuits have 53 qubits, 1,113 single-qubit gates, 430 two-qubit gates, and a measurement on each qubit, for which we predict a total fidelity of 0.2%. This fidelity should be resolvable with a few million measurements, since the uncertainty on \({ {\mathcal F} }_{{\rm{XEB}}}\) is \(1/\sqrt{{N}_{{\rm{s}}}}\), where Ns is the number of samples. Our model assumes that entangling larger and larger systems does not introduce additional error sources beyond the errors we measure at the single- and two-qubit level. In the next section we will see how well this hypothesis holds up. Fidelity estimation in the supremacy regime The gate sequence for our pseudo-random quantum circuit generation is shown in Fig. 3. One cycle of the algorithm consists of applying single-qubit gates chosen randomly from \(\{\sqrt{X},\sqrt{Y},\sqrt{W}\}\) on all qubits, followed by two-qubit gates on pairs of qubits. 
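Stepping back briefly to the product model for circuit fidelity described above: multiplying the no-error probabilities of every gate and measurement gives a quick estimate of the expected F_XEB for the largest circuits. The per-operation error rates below are round numbers assumed for illustration (only the simultaneous two-qubit figure of 0.62% is quoted directly in the text), so the result is merely of the same order as the 0.2% prediction reported by the authors.

```python
# Product model: every gate and every measurement must succeed for an error-free run.
n_qubits = 53
n_single_qubit_gates = 1113
n_two_qubit_gates = 430

# Illustrative per-operation error rates (assumed magnitudes for simultaneous operation).
e1, e2, e_meas = 0.0016, 0.0062, 0.038

predicted_fidelity = ((1 - e1) ** n_single_qubit_gates
                      * (1 - e2) ** n_two_qubit_gates
                      * (1 - e_meas) ** n_qubits)
print(f"predicted F_XEB ~ {predicted_fidelity:.2%}")  # roughly 0.15%, same order as the quoted 0.2%
```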
The sequences of gates which form the 'supremacy circuits' are designed to minimize the circuit depth required to create a highly entangled state, which is needed for computational complexity and classical hardness. Fig. 3: Control operations for the quantum supremacy circuits. a, Example quantum circuit instance used in our experiment. Every cycle includes a layer each of single- and two-qubit gates. The single-qubit gates are chosen randomly from \(\{\sqrt{X},\sqrt{Y},\sqrt{W}\}\), where \(W=(X+Y)/\sqrt{2}\) and gates do not repeat sequentially. The sequence of two-qubit gates is chosen according to a tiling pattern, coupling each qubit sequentially to its four nearest-neighbour qubits. The couplers are divided into four subsets (ABCD), each of which is executed simultaneously across the entire array corresponding to shaded colours. Here we show an intractable sequence (repeat ABCDCDAB); we also use different coupler subsets along with a simplifiable sequence (repeat EFGHEFGH, not shown) that can be simulated on a classical computer. b, Waveform of control signals for single- and two-qubit gates. Although we cannot compute \({ {\mathcal F} }_{{\rm{XEB}}}\) in the supremacy regime, we can estimate it using three variations to reduce the complexity of the circuits. In 'patch circuits', we remove a slice of two-qubit gates (a small fraction of the total number of two-qubit gates), splitting the circuit into two spatially isolated, non-interacting patches of qubits. We then compute the total fidelity as the product of the patch fidelities, each of which can be easily calculated. In 'elided circuits', we remove only a fraction of the initial two-qubit gates along the slice, allowing for entanglement between patches, which more closely mimics the full experiment while still maintaining simulation feasibility. Finally, we can also run full 'verification circuits', with the same gate counts as our supremacy circuits, but with a different pattern for the sequence of two-qubit gates, which is much easier to simulate classically (see also Supplementary Information). Comparison between these three variations allows us to track the system fidelity as we approach the supremacy regime. We first check that the patch and elided versions of the verification circuits produce the same fidelity as the full verification circuits up to 53 qubits, as shown in Fig. 4a. For each data point, we typically collect Ns = 5 × 106 total samples over ten circuit instances, where instances differ only in the choices of single-qubit gates in each cycle. We also show predicted \({ {\mathcal F} }_{{\rm{XEB}}}\) values, computed by multiplying the no-error probabilities of single- and two-qubit gates and measurement (see also Supplementary Information). The predicted, patch and elided fidelities all show good agreement with the fidelities of the corresponding full circuits, despite the vast differences in computational complexity and entanglement. This gives us confidence that elided circuits can be used to accurately estimate the fidelity of more-complex circuits. Fig. 4: Demonstrating quantum supremacy. a, Verification of benchmarking methods. \({ {\mathcal F} }_{{\rm{XEB}}}\) values for patch, elided and full verification circuits are calculated from measured bitstrings and the corresponding probabilities predicted by classical simulation. Here, the two-qubit gates are applied in a simplifiable tiling and sequence such that the full circuits can be simulated out to n = 53, m = 14 in a reasonable amount of time. 
Each data point is an average over ten distinct quantum circuit instances that differ in their single-qubit gates (for n = 39, 42 and 43 only two instances were simulated). For each n, each instance is sampled with Ns of 0.5–2.5 million. The black line shows the predicted \({ {\mathcal F} }_{{\rm{XEB}}}\) based on single- and two-qubit gate and measurement errors. The close correspondence between all four curves, despite their vast differences in complexity, justifies the use of elided circuits to estimate fidelity in the supremacy regime. b, Estimating \({ {\mathcal F} }_{{\rm{XEB}}}\) in the quantum supremacy regime. Here, the two-qubit gates are applied in a non-simplifiable tiling and sequence for which it is much harder to simulate. For the largest elided data (n = 53, m = 20, total Ns = 30 million), we find an average \({ {\mathcal F} }_{{\rm{XEB}}}\) > 0.1% with 5σ confidence, where σ includes both systematic and statistical uncertainties. The corresponding full circuit data, not simulated but archived, is expected to show similarly statistically significant fidelity. For m = 20, obtaining a million samples on the quantum processor takes 200 seconds, whereas an equal-fidelity classical sampling would take 10,000 years on a million cores, and verifying the fidelity would take millions of years. The largest circuits for which the fidelity can still be directly verified have 53 qubits and a simplified gate arrangement. Performing random circuit sampling on these at 0.8% fidelity takes one million cores 130 seconds, corresponding to a million-fold speedup of the quantum processor relative to a single core. We proceed now to benchmark our computationally most difficult circuits, which are simply a rearrangement of the two-qubit gates. In Fig. 4b, we show the measured \({ {\mathcal F} }_{{\rm{XEB}}}\) for 53-qubit patch and elided versions of the full supremacy circuits with increasing depth. For the largest circuit with 53 qubits and 20 cycles, we collected Ns = 30 × 106 samples over ten circuit instances, obtaining \({ {\mathcal F} }_{{\rm{XEB}}}=(2.24\pm 0.21)\times {10}^{-3}\) for the elided circuits. With 5σ confidence, we assert that the average fidelity of running these circuits on the quantum processor is greater than at least 0.1%. We expect that the full data for Fig. 4b should have similar fidelities, but since the simulation times (red numbers) take too long to check, we have archived the data (see 'Data availability' section). The data is thus in the quantum supremacy regime. The classical computational cost We simulate the quantum circuits used in the experiment on classical computers for two purposes: (1) verifying our quantum processor and benchmarking methods by computing \({ {\mathcal F} }_{{\rm{XEB}}}\) where possible using simplifiable circuits (Fig. 4a), and (2) estimating \({ {\mathcal F} }_{{\rm{XEB}}}\) as well as the classical cost of sampling our hardest circuits (Fig. 4b). Up to 43 qubits, we use a Schrödinger algorithm, which simulates the evolution of the full quantum state; the Jülich supercomputer (with 100,000 cores, 250 terabytes) runs the largest cases. Above this size, there is not enough random access memory (RAM) to store the quantum state42. For larger qubit numbers, we use a hybrid Schrödinger–Feynman algorithm43 running on Google data centres to compute the amplitudes of individual bitstrings. 
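The memory argument above can be made explicit: a full Schrödinger simulation must hold 2^n complex amplitudes, so its footprint doubles with every added qubit. The byte counts below assume double-precision complex amplitudes (16 bytes each), which is an illustrative choice rather than a statement about the specific simulators used in the study.

```python
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number (assumed precision)

def state_vector_bytes(n_qubits: int) -> float:
    """Memory needed to store the full 2^n-amplitude quantum state."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (43, 48, 53):
    tb = state_vector_bytes(n) / 1e12
    print(f"{n} qubits: {tb:,.0f} TB")
# 43 qubits needs ~141 TB, within the ~250 TB available at the Jülich supercomputer;
# 53 qubits needs ~144,000 TB, far beyond any machine's RAM, hence the switch to the
# hybrid Schrödinger-Feynman approach for larger circuits.
```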
This algorithm breaks the circuit up into two patches of qubits and efficiently simulates each patch using a Schrödinger method, before connecting them using an approach reminiscent of the Feynman path-integral. Although it is more memory-efficient, the Schrödinger–Feynman algorithm becomes exponentially more computationally expensive with increasing circuit depth owing to the exponential growth of paths with the number of gates connecting the patches. To estimate the classical computational cost of the supremacy circuits (grey numbers in Fig. 4b), we ran portions of the quantum circuit simulation on both the Summit supercomputer as well as on Google clusters and extrapolated to the full cost. In this extrapolation, we account for the computation cost of sampling by scaling the verification cost with \({ {\mathcal F} }_{{\rm{XEB}}}\), for example43,44, a 0.1% fidelity decreases the cost by about 1,000. On the Summit supercomputer, which is currently the most powerful in the world, we used a method inspired by Feynman path-integrals that is most efficient at low depth44,45,46,47. At m = 20 the tensors do not reasonably fit into node memory, so we can only measure runtimes up to m = 14, for which we estimate that sampling three million bitstrings with 1% fidelity would require a year. On Google Cloud servers, we estimate that performing the same task for m = 20 with 0.1% fidelity using the Schrödinger–Feynman algorithm would cost 50 trillion core-hours and consume one petawatt hour of energy. To put this in perspective, it took 600 seconds to sample the circuit on the quantum processor three million times, where sampling time is limited by control hardware communications; in fact, the net quantum processor time is only about 30 seconds. The bitstring samples from all circuits have been archived online (see 'Data availability' section) to encourage development and testing of more advanced verification algorithms. One may wonder to what extent algorithmic innovation can enhance classical simulations. Our assumption, based on insights from complexity theory11,12,13, is that the cost of this algorithmic task is exponential in circuit size. Indeed, simulation methods have improved steadily over the past few years42,43,44,45,46,47,48,49,50. We expect that lower simulation costs than reported here will eventually be achieved, but we also expect that they will be consistently outpaced by hardware improvements on larger quantum processors. Verifying the digital error model A key assumption underlying the theory of quantum error correction is that quantum state errors may be considered digitized and localized38,51. Under such a digital model, all errors in the evolving quantum state may be characterized by a set of localized Pauli errors (bit-flips or phase-flips) interspersed into the circuit. Since continuous amplitudes are fundamental to quantum mechanics, it needs to be tested whether errors in a quantum system could be treated as discrete and probabilistic. Indeed, our experimental observations support the validity of this model for our processor. Our system fidelity is well predicted by a simple model in which the individually characterized fidelities of each gate are multiplied together (Fig. 4). To be successfully described by a digitized error model, a system should be low in correlated errors. 
We achieve this in our experiment by choosing circuits that randomize and decorrelate errors, by optimizing control to minimize systematic errors and leakage, and by designing gates that operate much faster than correlated noise sources, such as 1/f flux noise37. Demonstrating a predictive uncorrelated error model up to a Hilbert space of size 253 shows that we can build a system where quantum resources, such as entanglement, are not prohibitively fragile. Quantum processors based on superconducting qubits can now perform computations in a Hilbert space of dimension 253 ≈ 9 × 1015, beyond the reach of the fastest classical supercomputers available today. To our knowledge, this experiment marks the first computation that can be performed only on a quantum processor. Quantum processors have thus reached the regime of quantum supremacy. We expect that their computational power will continue to grow at a double-exponential rate: the classical cost of simulating a quantum circuit increases exponentially with computational volume, and hardware improvements will probably follow a quantum-processor equivalent of Moore's law52,53, doubling this computational volume every few years. To sustain the double-exponential growth rate and to eventually offer the computational volume needed to run well known quantum algorithms, such as the Shor or Grover algorithms25,54, the engineering of quantum error correction will need to become a focus of attention. The extended Church–Turing thesis formulated by Bernstein and Vazirani55 asserts that any 'reasonable' model of computation can be efficiently simulated by a Turing machine. Our experiment suggests that a model of computation may now be available that violates this assertion. We have performed random quantum circuit sampling in polynomial time using a physically realizable quantum processor (with sufficiently low error rates), yet no efficient method is known to exist for classical computing machinery. As a result of these developments, quantum computing is transitioning from a research topic to a technology that unlocks new computational capabilities. We are only one creative algorithm away from valuable near-term applications. The datasets generated and analysed for this study are available at our public Dryad repository (https://doi.org/10.5061/dryad.k6t1rj8). Feynman, R. P. Simulating physics with computers. Int. J. Theor. Phys. 21, 467–488 (1982). Article MathSciNet Google Scholar Devoret, M. H., Martinis, J. M. & Clarke, J. Measurements of macroscopic quantum tunneling out of the zero-voltage state of a current-biased Josephson junction. Phys. Rev. Lett. 55, 1908 (1985). Article CAS ADS Google Scholar Nakamura, Y., Chen, C. D. & Tsai, J. S. Spectroscopy of energy-level splitting between two macroscopic quantum states of charge coherently superposed by Josephson coupling. Phys. Rev. Lett. 79, 2328 (1997). Mooij, J. et al. Josephson persistent-current qubit. Science 285, 1036–1039 (1999). Wallraff, A. et al. Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. Nature 431, 162–167 (2004). Koch, J. et al. Charge-insensitive qubit design derived from the Cooper pair box. Phys. Rev. A 76, 042319 (2007). Article ADS Google Scholar You, J. Q. & Nori, F. Atomic physics and quantum optics using superconducting circuits. Nature 474, 589–597 (2011). Preskill, J. Quantum computing and the entanglement frontier. 
Rapporteur Talk at the 25th Solvay Conference on Physics, Brussels https://doi.org/10.1142/8674 (World Scientific, 2012). Aaronson, S. & Arkhipov, A. The computational complexity of linear optics. In Proc. 43rd Ann. Symp. on Theory of Computing https://doi.org/10.1145/1993636.1993682 (ACM, 2011). Bremner, M. J., Montanaro, A. & Shepherd, D. J. Average-case complexity versus approximate simulation of commuting quantum computations. Phys. Rev. Lett. 117, 080501 (2016). Article ADS MathSciNet Google Scholar Boixo, S. et al. Characterizing quantum supremacy in near-term devices. Nat. Phys. 14, 595 (2018). Bouland, A., Fefferman, B., Nirkhe, C. & Vazirani, U. On the complexity and verification of quantum random circuit sampling. Nat. Phys. 15, 159 (2019). Aaronson, S. & Chen, L. Complexity-theoretic foundations of quantum supremacy experiments. In 32nd Computational Complexity Conf. https://doi.org/10.4230/LIPIcs.CCC.2017.22 (Schloss Dagstuhl–Leibniz Zentrum für Informatik, 2017). Neill, C. et al. A blueprint for demonstrating quantum supremacy with superconducting qubits. Science 360, 195–199 (2018). Article CAS ADS MathSciNet Google Scholar Preskill, J. Quantum computing in the NISQ era and beyond. Quantum 2, 79 (2018). Kechedzhi, K. et al. Efficient population transfer via non-ergodic extended states in quantum spin glass. In 13th Conf. on the Theory of Quantum Computation, Communication and Cryptography http://drops.dagstuhl.de/opus/volltexte/2018/9256/pdf/LIPIcs-TQC-2018-9.pdf (Schloss Dagstuhl–Leibniz Zentrum für Informatik, 2018). Somma, R. D., Boixo, S., Barnum, H. & Knill, E. Quantum simulations of classical annealing processes. Phys. Rev. Lett. 101, 130504 (2008). Farhi, E. & Neven, H. Classification with quantum neural networks on near term processors. Preprint at https://arxiv.org/abs/1802.06002 (2018). McClean, J. R., Boixo, S., Smelyanskiy, V. N., Babbush, R. & Neven, H. Barren plateaus in quantum neural network training landscapes. Nat. Commun. 9, 4812 (2018). Cong, I., Choi, S. & Lukin, M. D. Quantum convolutional neural networks. Nat. Phys. https://doi.org/10.1038/s41567-019-0648-8 (2019). Bravyi, S., Gosset, D. & König, R. Quantum advantage with shallow circuits. Science 362, 308–311 (2018). Aspuru-Guzik, A., Dutoi, A. D., Love, P. J. & Head-Gordon, M. Simulated quantum computation of molecular energies. Science 309, 1704–1707 (2005). Peruzzo, A. et al. A variational eigenvalue solver on a photonic quantum processor. Nat. Commun. 5, 4213 (2014). Hempel, C. et al. Quantum chemistry calculations on a trapped-ion quantum simulator. Phys. Rev. X 8, 031022 (2018). Shor, P. W. Algorithms for quantum computation: discrete logarithms and factoring proceedings. In Proc. 35th Ann. Symp. on Foundations of Computer Science https://doi.org/10.1109/SFCS.1994.365700 (IEEE, 1994). Fowler, A. G., Mariantoni, M., Martinis, J. M. & Cleland, A. N. Surface codes: towards practical large-scale quantum computation. Phys. Rev. A 86, 032324 (2012). Barends, R. et al. Superconducting quantum circuits at the surface code threshold for fault tolerance. Nature 508, 500–503 (2014). Córcoles, A. D. et al. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits. Nat. Commun. 6, 6979 (2015). Ofek, N. et al. Extending the lifetime of a quantum bit with error correction in superconducting circuits. Nature 536, 441 (2016). Vool, U. & Devoret, M. Introduction to quantum electromagnetic circuits. Int. J. Circuit Theory Appl. 45, 897–934 (2017). Chen, Y. et al. 
Qubit architecture with high coherence and fast tunable coupling circuits. Phys. Rev. Lett. 113, 220502 (2014). Yan, F. et al. A tunable coupling scheme for implementing high-fidelity two-qubit gates. Phys. Rev. Appl. 10, 054062 (2018). Schuster, D. I. et al. Resolving photon number states in a superconducting circuit. Nature 445, 515 (2007). Jeffrey, E. et al. Fast accurate state measurement with superconducting qubits. Phys. Rev. Lett. 112, 190504 (2014). Chen, Z. et al. Measuring and suppressing quantum state leakage in a superconducting qubit. Phys. Rev. Lett. 116, 020501 (2016). Klimov, P. V. et al. Fluctuations of energy-relaxation times in superconducting qubits. Phys. Rev. Lett. 121, 090502 (2018). Yan, F. et al. The flux qubit revisited to enhance coherence and reproducibility. Nat. Commun. 7, 12964 (2016). Knill, E. et al. Randomized benchmarking of quantum gates. Phys. Rev. A 77, 012307 (2008). Magesan, E., Gambetta, J. M. & Emerson, J. Scalable and robust randomized benchmarking of quantum processes. Phys. Rev. Lett. 106, 180504 (2011). Cross, A. W., Magesan, E., Bishop, L. S., Smolin, J. A. & Gambetta, J. M. Scalable randomised benchmarking of non-Clifford gates. npj Quant. Inform. 2, 16012 (2016). Wallraff, A. et al. Approaching unit visibility for control of a superconducting qubit with dispersive readout. Phys. Rev. Lett. 95, 060501 (2005). De Raedt, H. et al. Massively parallel quantum computer simulator, eleven years later. Comput. Phys. Commun. 237, 47–61 (2019). Markov, I. L., Fatima, A., Isakov, S. V. & Boixo, S. Quantum supremacy is both closer and farther than it appears. Preprint at https://arxiv.org/abs/1807.10749 (2018). Villalonga, B. et al. A flexible high-performance simulator for the verification and benchmarking of quantum circuits implemented on real hardware. npj Quant. Inform. (in the press); preprint at https://arxiv.org/abs/1811.09599 (2018). Boixo, S., Isakov, S. V., Smelyanskiy, V. N. & Neven, H. Simulation of low-depth quantum circuits as complex undirected graphical models. Preprint at https://arxiv.org/abs/1712.05384 (2017). Chen, J., Zhang, F., Huang, C., Newman, M. & Shi, Y. Classical simulation of intermediate-size quantum circuits. Preprint at https://arxiv.org/abs/1805.01450 (2018). Villalonga, B. et al. Establishing the quantum supremacy frontier with a 281 pflop/s simulation. Preprint at https://arxiv.org/abs/1905.00444 (2019). Pednault, E. et al. Breaking the 49-qubit barrier in the simulation of quantum circuits. Preprint at https://arxiv.org/abs/1710.05867 (2017). Chen, Z. Y. et al. 64-qubit quantum circuit simulation. Sci. Bull. 63, 964–971 (2018). Chen, M.-C. et al. Quantum-teleportation-inspired algorithm for sampling large random quantum circuits. Preprint at https://arxiv.org/abs/1901.05003 (2019). Shor, P. W. Scheme for reducing decoherence in quantum computer memory. Phys. Rev. A 52, R2493–R2496 (1995). Devoret, M. H. & Schoelkopf, R. J. Superconducting circuits for quantum information: an outlook. Science 339, 1169–1174 (2013). Mohseni, M. et al. Commercialize quantum technologies in five years. Nature 543, 171 (2017). Grover, L. K. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett. 79, 325 (1997). Bernstein, E. & Vazirani, U. Quantum complexity theory. In Proc. 25th Ann. Symp. on Theory of Computing https://doi.org/10.1145/167088.167097 (ACM, 1993). We are grateful to E. Schmidt, S. Brin, S. Pichai, J. Dean, J. Yagnik and J. 
Giannandrea for their executive sponsorship of the Google AI Quantum team, and for their continued engagement and support. We thank P. Norvig, J. Yagnik, U. Hölzle and S. Pichai for advice on the manuscript. We acknowledge K. Kissel, J. Raso, D. L. Yonge-Mallo, O. Martin and N. Sridhar for their help with simulations. We thank G. Bortoli and L. Laws for keeping our team organized. This research used resources from the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility (supported by contract DE-AC05-00OR22725). A portion of this work was performed in the UCSB Nanofabrication Facility, an open access laboratory. R.B., S.M., and E.G.R. appreciate support from the NASA Ames Research Center and from the Air Force Research (AFRL) Information Directorate (grant F4HBKC4162G001). T.S.H. is supported by the DOE Early Career Research Program. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of AFRL or the US government. Google AI Quantum, Mountain View, CA, USA Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Jarrod R. McClean, Anthony Megrant, Xiao Mi, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven & John M. Martinis Department of Electrical and Computer Engineering, University of Massachusetts Amherst, Amherst, MA, USA Joseph C. Bardin Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames Research Center, Moffett Field, CA, USA Rupak Biswas, Salvatore Mandrà & Eleanor G. Rieffel Institute for Quantum Information and Matter, Caltech, Pasadena, CA, USA Fernando G. S. L. Brandao Department of Physics, University of California, Santa Barbara, CA, USA Ben Chiaro, Brooks Foxen, Matthew McEwen & John M. Martinis Friedrich-Alexander University Erlangen-Nürnberg (FAU), Department of Physics, Erlangen, Germany Michael J. Hartmann Quantum Computing Institute, Oak Ridge National Laboratory, Oak Ridge, TN, USA Travis S. Humble Department of Electrical and Computer Engineering, University of California, Riverside, CA, USA Alexander Korotkov Scientific Computing, Oak Ridge Leadership Computing, Oak Ridge National Laboratory, Oak Ridge, TN, USA Dmitry Lyakh Stinger Ghaffarian Technologies Inc., Greenbelt, MD, USA Salvatore Mandrà Institute for Advanced Simulation, Jülich Supercomputing Centre, Forschungszentrum Jülich, Jülich, Germany Kristel Michielsen RWTH Aachen University, Aachen, Germany Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA Kevin J. 
Sung Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL, USA Benjamin Villalonga

The Google AI Quantum team conceived the experiment. The applications and algorithms team provided the theoretical foundation and the specifics of the algorithm. The hardware team carried out the experiment and collected the data. The data analysis was done jointly with outside collaborators. All authors wrote and revised the manuscript and the Supplementary Information.

Correspondence to John M. Martinis.

Peer review information Nature thanks Scott Aaronson, Keisuke Fujii and William Oliver for their contribution to the peer review of this work.

This file contains Supplementary Information I–XI, which contains supplementary figures S1–S44 and Supplementary Tables I–X.

Arute, F., Arya, K., Babbush, R. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019). https://doi.org/10.1038/s41586-019-1666-5 Issue Date: 24 October 2019