Analysis and modelling of performances of the HL (Hyperloop) transport system

Kees van Goeverden (ORCID: orcid.org/0000-0002-1880-4731), Dimitris Milakis, Milan Janic & Rob Konings

Hyperloop (HL) is presented as an efficient alternative to the HSR (High Speed Rail) and APT (Air Passenger Transport) systems for long-distance passenger transport. This paper explores the performances of HL and compares them to those of HSR and APT. The following performances of the HL system are analytically modelled and compared to HSR and APT: (i) operational performance; (ii) financial performance; (iii) social/environmental performance. The main operational result is that the capacity of HL is low, which implies a low utilization of the infrastructure. Because the infrastructure costs dominate the total costs, the costs per passenger-km are high compared to those for HSR and APT. The HL performs very well regarding the social/environmental aspects because of low energy use, no GHG emissions, and hardly any noise. The safety performance needs further consideration. The HL system is promising for relieving the environmental pressure of long-distance travelling, but has disadvantages regarding the operational and financial performances.

The competition between contemporary transport modes has been rather constant over the past decades. However, this has not applied to European long-distance passenger transport, where the airlines have increased their market share substantively. Van Goeverden et al. [1] have estimated that air travel increased by about 45% between 2001 and 2013, while usage of the alternative modes has been rather stable (car, train) or declining (bus). The increasing dominance of air transport has enlarged the environmental impacts of long-distance transport, and this trend is expected to continue in the next decades [2, 3]. The high speed of aircraft in combination with comparatively low fares, particularly those offered by low-cost carriers, has made travellers' requirements increasingly demanding, putting pressure on all modes to offer high service quality, particularly in terms of shorter travel times and low fares. In addition, the environmental impact of transport has gained increasing interest, implying a growing concern about the further dominance of air transport and a demand for more environmentally friendly competitive transport alternatives. This is particularly the case since the current transport modes have been trying to adapt their operational, commercial, environmental, and social performances while being bounded by their technologies. For these technologies, marginal but not radical improvements have been made continuously. Radical new technologies, which could offer significantly better performances, are still rare and so far have not been able to enter the transport market successfully.

The HL (Hyperloop) system is a new transport technology at the conceptual stage that is claimed to provide superior performances to the HSR (High Speed Rail) and APT (Air Passenger Transport) systems, particularly regarding travel time, transport costs, energy consumption, and transport safety [4]. So far, studies on HL have focused on enabling technologies of the system such as the electromagnetic levitation [5], the dynamics of the HL vehicle and the infrastructure [6,7,8,9], the implications of the HL for bridge dynamics [10] and the impact of earthquake forces on the HL vehicle [11]. Decker et al.
[12] explored the feasibility of the HL system focusing on trade-offs between technical/design aspects and the associated cost. Finally, Janić [13, 14] analysed multiple performances (e.g. operational, economic, social, and environmental) of high-speed rail and compared them to competing modes, though without including HL in his analysis. Existing studies have not yet systematically explored the HL system's performances as compared to other transport modes. This paper aims at filling this gap in the literature by exploring the operational, financial, and social/environmental performances of the HL system and comparing them with those of the HSR and APT systems. The results of such a comparison are intended to underpin the discussion about the overall feasibility of the HL system. In addition to this introductory section, the paper consists of four other sections. Section 2 provides a brief description of the considered HS (High Speed) transport systems - the already fully operational HSR and APT, and the HL system, which is still at the conceptual stage. Section 3 deals with an analysis and analytical modelling of the above-mentioned performances of the three systems. Section 4 gives a comparison of the HL system's performances with those of the HSR and APT. For that purpose, the inputs for estimating indicators of performances of the latter two systems (HSR and APT) are extracted from existing secondary sources (references). The final Section (5) summarizes the main conclusions regarding the prospective advantages and disadvantages of the HL system and provides some perspectives on its market opportunities. The HS (high speed) transport systems In this section, we present an overview of the deployment and main technical characteristics of the three high speed transport systems considered in this study: HSR, APT and HL. The HSR (high speed rail) system The HSR systems have been developing worldwide (Europe, Far East-Asia, and the USA - United States of America) as a truly innovative system within the railway transport mode, particularly as compared to its conventional passenger counterparts. The system has had different definitions in particular world regions. For example: In Japan, the HSR system is called 'Shinkansen' (i.e., 'new trunk line') whose trains can run at the speed of at least 200 km/h. The system's network has been built with specific technical standards (i.e., dedicated tracks without level crossings and the standardized and special loading gauge). In Europe, the HSR system has included infrastructure specially built and/or upgraded for HS (High Speed) travel and considered to be a part of the Trans-European rail transport system/network. Regarding the maximum speed, the HSR lines have been categorized as Category I (for speeds equal to or greater than 250 km/h), Category II (those specially upgraded for speeds of about 200 km/h), and Category III (those upgraded with particular features resulting from the topographical relief or town-planning constraints). In China, according to Order No. 34, 2013 from the country's Ministry of Railways, the HSR system has been considered to be the newly built passenger-dedicated lines with (actual or reserved) speed equal to and/or greater than 250 km/h along these lines and 200 km/h along the mixed (passenger and freight) lines.
In the USA, the HSR system has mainly been considered as that providing the frequent express services between the major population centres on the distances from 200 to 600 mi (mile) with a few or no intermediate stops, at the speeds of at least 150 mph (mi/h) on the completely grade-separated, dedicated rights-of way lines (1 mi = 1.609 km) [14]. Table 1 gives an example of developing the HSR networks round the world. Table 1 Development of the HSR network around the world [14] In addition, Fig. 1 shows the development of the passenger transportation in the European HSR network. Development of the volumes of passenger transportation in the European HSR network over time (Period: 1990–2015) [34] As can be seen, the volumes of transportation in terms of p-km have continuously been growing over the specified period of time, which has been possible thanks to expanding the HSR network in particular European countries. The APT (air passenger transport) system The APT has been permanently growing thanks to improving the 'aircraft capabilities', the 'airline strategy' and 'governmental regulation' (Boeing, 1998). The 'aircraft capabilities' has related to increasing speed, payload, and take-off-weight. Both the speed and payload have contributed to an enormous increase in the aircraft productivity, for more than 100 times during the last forty years. In particular, increase in the speed has been noticeable over the last six decades as shown on Fig. 2. Development of the aircraft speed over time [35] During the same period, the aircraft seat capacity has increased from 21 to 32 at the aircraft DC3 to almost 600 at Airbus A380. In addition, the 'airline strategy' have permanently deployed bigger, faster, safer, and more fuel-efficient aircraft equipped with lower emission and less-noise engines. As well, the aircraft of various sizes have been progressively engaged to efficiently match markets in the different network configurations, route length, and demand density. The 'governmental regulation' has mainly been leading towards liberalization of the national and partially international markets. That in USA (1978) and EU (European Union) (1997) are some of the earliest cases. Consequently, the APT system has been growing over time as shown by the examples on Fig. 3. Development of air passenger transport in EU 28 (European Union) and USA (United States of America) over time [34] As can be seen, in both areas the volumes of air passenger transportation have been generally growing in the long term, with some fluctuations. As well, the volumes of the world's air passenger transportation have been growing at an annual rate of 4–6% up to about 7 trillion p-km in the year 2015 [15]. The HL (Hyperloop) system Historically, several pneumatic and maglev trains similar to HL have been proposed at conceptual level primarily aiming to substantially reduce travel time compared to existing modes, and therefore being adopted in the transport system. For example, in 1910 Robert Goddard designed a floating train on magnets inside a vacuumed tunnel that could reach 250 miles/hour covering the distance between Boston and New York in 10 min. In 1972, RAND suggested that a very high speed transit (VHST) system operating in underground evacuated tubes propelled by electromagnetic waves would be technically feasible to travel coast-to-coast in the US in as low as 21 min [16]. Yet, this report recognized that political feasibility of such project would be very low. 
For the purpose of this analysis and modelling its performances, the HL system is assumed to consist of five main components: i) the line/tube including at least two parallel tubes and the stations along them, which enable operations of the HL vehicles in both directions without interfering with each other and embarking and disembarking of passengers, respectively; ii) the fleet of HL vehicles, which can consist of a single and/or few coupled capsules (these are operated by means of a magnetic linear accelerator positioned at the stations, which would accelerate the vehicles/capsules with the support of rotors attached to each of them); iii) the vacuum pumps maintaining the vacuum conditions within the tubes and at the specified parts of the stations; iv) the vehicle control system while operating along the line(s)/tube(s); and v) the maintenance systems for all previous components. The tubes will be based on elevated pillars except for tunnel sections, while the solar panels above the tubes will provide the system with energy. An ultra-high vacuum approximately at the level of 10^−8 Torr (British and German standards; Torr = Torricelli) would be maintained in the tube (the atmospheric pressure is variable but standardised at the level of 760 Torr or 1.013·10^5 Pa (Pascal)). Each station of the HL system is to be generally integrated within the tube. It would consist of three modules. The first one is the chamber as a part of the vacuum tube handling the arriving HL vehicle (ultimately 'arriving' chamber). After the vehicle enters, de-vacuuming of the chamber is carried out, and the vehicle proceeds to the second module with the normal atmospheric pressure where passengers embark and disembark the vehicle(s). After that, the vehicle(s) passes to the third chamber where at that moment normal atmospheric pressure prevails (ultimately 'departing' chamber). Then it waits until the chamber has been vacuumed (evacuated), leaves it, and proceeds along the line/tube. This vehicle handling process takes place at each station of the line. The chambers are separated by hermetic doors enabling establishing and maintaining the required air pressure in the above-mentioned order. The capsules would operate in the above-mentioned low-pressure tube(s) on a 0.5–1.3 mm layer of air featuring the pressurized air and the aerodynamic lift as shown on Fig. 4. Under such conditions, they would be able to reach the maximum speed of vmax = 1220 km/h [4]; the maximum inertial acceleration would be a+ = 0.5 g; g = 9.81 m/s2. Conceptual design and subsystems of the HL system (source: [4]) The vacuum pumps are installed to initially evacuate and later maintain the required level of vacuum inside the tubes and in the stations' first and third chambers. In particular, creating vacuum within the tube implies an initially large-scale evacuation of air and later on removal of the smaller molecules near the tubes' walls using heating techniques. These pumps would consume a rather substantial amount of energy. At the initial stage, they would operate until achieving the above-mentioned required level of tube vacuum, then be automatically stopped, and the vacuum-lock isolation gates opened. In cases of air leakage in some section(s), the corresponding gates will be closed and the pumps activated again. The pumps would be located along the tube(s) in the required number depending on the volumes of air to be evacuated, available time, and their evacuation capacity.
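The required pumping effort for the initial evacuation of the tube can be roughly illustrated with the ideal pump-down relation that is later applied to the station chambers (Eq. 5). The sketch below is a minimal illustration under assumed values: a 1-km tube section of the stated 3.3 m diameter, a pump bank with a combined capacity of 500 m3/min (the per-pump figure adopted later for the stations), and the quoted target pressure of about 10^−8 Torr; real pump-down times would be considerably longer, since pump capacity drops sharply at very low pressures.

```python
import math

def pump_down_time_min(volume_m3, pump_capacity_m3_per_min, p_start_pa, p_end_pa):
    """Ideal pump-down time t = (V / Q) * ln(P_start / P_end).
    Same relation as applied to the lock chambers in Eq. 5; it is optimistic
    at very low pressures (outgassing and falling pump speed are ignored)."""
    return (volume_m3 / pump_capacity_m3_per_min) * math.log(p_start_pa / p_end_pa)

# Assumed illustrative values (not design figures from the paper):
tube_volume = math.pi * (3.3 / 2) ** 2 * 1000.0   # ~8550 m3 per km of 3.3 m tube
q_pumps = 500.0                                   # combined pump capacity, m3/min
p_atm = 1.013e5                                   # Pa
p_target = 1.3e-6                                 # Pa, roughly 10^-8 Torr as quoted

t = pump_down_time_min(tube_volume, q_pumps, p_atm, p_target)
print(f"Idealized evacuation time for 1 km of tube: ~{t:.0f} min (~{t/60:.1f} h)")
```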
As far as de-vacuuming and vacuuming of chambers at the stations is concerned, the required number of vacuum pumps will operate accordingly. Regarding the characteristics of operations within the tube and at the station(s), the control of safe and efficient movement of vehicles and maintaining the vacuum inside the tube and at the station(s) would be provided by the convenient traffic control and management system. Given the above-mentioned technical features, the HL is envisioned to be a transport mode for the medium- to long-distance travelling. As such, if operating along the routes without substantive physical barriers, it seems to be a good alternative to APT. At present, the HL technology is being tested in practice on the short test tracks with prototype (capsule) models. Musk [4] considers two variants of the HL system: exclusively for passenger only and mixed for both passenger and freight. The latter has larger dimensions for both the tube and the capsules. For example, the diameter of the tube for the exclusive passenger variant is 2.23 m and for the mixed passenger and freight variant is 3.3 m. The capsules for passengers only are of the standardized seating capacity of S = 28 seats/unit, the mixed passenger and freight variant gives room for 14 passengers and 3 full size automobiles per unit. In particular, those intended exclusively to passengers allow them only to sit since the lack of space for walking through the vehicle(s). In addition, the capsules of both variants lack the toilets, which diminishes the flexibility and applicability of the system because the vehicles would need to stop every 30–60 min for a longer time for toilet visits. The capsules of the passenger and freight variant seem to be sufficiently large to enable walking through the vehicle and to visit a toilet when this is built-in. Their frontal area is supposed to be 4.0 m2 and the height about 1.9 m. The larger dimensions make the system more expensive, but they are essential for its functionality for long-distance travelling. Therefore, our analysis regards the passenger+freight variant with the larger dimensions. Unlike Musk [4], we assume that it is fully utilized for passenger transport and that the seating capacity is equal to the capacity of the small dimensioned passenger only variant: 28 seats per unit. Room for the toilet can be gained by reducing the luggage compartment. The larger dimensions of the vehicles imply that a larger volume for luggage is available per m2 area, and that the seating compartment has more room for storing luggage. Modelling performances of the HL, HSR, and APT system The development and adoption of transport innovations can be influenced by multiple factors. According to [17] development and adoption of transport innovations is a function of techno-economic, social and political feasibility. If any of those three minimum criteria is not met then the transport innovation will not be adopted. Janic [13] suggests that the performance of a new transport system can be considered in different ways and from the perspectives of different stakeholders involved, i.e., the users/customers, the transport operator, the governmental authorities at different institutional levels, and the society. If the different interests of stakeholders are not successfully balanced, they may block the implementation of a new transport system. In this study, we explore the operational, financial and social/environmental performances of HL that reflect its all feasibility dimensions (see Fig. 5). 
The performances of its counterparts HSR and APT are considered for the comparative purposes. The considered HL, HSR, and APT performances explored in this study The operational performance of the HL system generally includes the system capacity and the quality of services. The former is mainly relevant for the operators, and the latter for the users/customers. Similarly to its counterparts, the HSR and APT, the HL system is characterized by its traffic and transport 'ultimate' capacity. The 'ultimate' capacity is the capacity in the case that everything functions perfectly. In practice, this condition is not met, and the 'practical' capacity will be somewhat lower than the 'ultimate' capacity. Traffic capacity The traffic 'ultimate' capacity is defined by the maximum number of vehicles, which can pass through the "reference location" for their counting in one direction during a given period of time under conditions of constant demand for service. In case of the HL system, this is actually the capacity of the infrastructure, i.e., stations, segments between the stations, and the line/tube as the whole. Station(s) The 'ultimate' capacity of the station (i) of a given HL line/tube can be 'static' and 'dynamic'. The 'static' capacity can be defined by the number of tracks/places at the station. The static capacity that is needed to handle the vehicles of guided transport systems during the given period (T) under conditions of constant demand for service can generally be estimated as follows: $$ {n}_{s/i}={\mu}_{i-1}(T)\cdot {\tau}_{s/i} $$ μ i-1 (T) : is the capacity on the (i-1) segment of the line/tube in terms of the maximum transport service frequency during the time period (T) (veh/min or h); and τ s/i : is the average time of occupying a track/place at a station (i) by the Hyperloop vehicle (min, h/track). In the case of the HL system the relation between occupation time and capacity is more complex. The vehicles pass through the three above mentioned chambers, an arriving chamber, a chamber for disembarking and embarking passengers, and a departure chamber. The arriving and departure chambers function as locks. The static capacity is not related to the sum of these occupation times by a vehicle of the three chambers (τs/i), because a) occupation times can overlap (e.g. vehicle 1 can enter the arriving chamber while vehicle 2 is still occupying the platform in chamber 2) –this enlarges the capacity– and b) the arriving and departure chambers are for some time occupied while they are empty (adapting the air pressure for the next vehicle) –this lowers the capacity–. The static capacity of HL can be estimated as: $$ {n}_{s/i}={\mu}_{i-1}(T)\cdot \max \left({\tau}_{ca/i};{\tau}_{p/i};{\tau}_{cd/i}\right) $$ τ ca/i : is the average occupation time of the arriving chamber of station i for one vehicle (min) τ p/i : is the average occupation time of the platform of a station (i) by one vehicle (min). τ cd/i : is the average occupation time of the departing chamber of station i for one vehicle (min) Assuming that the occupation times of the two chambers that function as a lock are equal (τca/i = τcd/i), the equation can be rewritten as: $$ {n}_{s/i}={\mu}_{i-1}(T)\cdot \max \left({\tau}_{c/i};{\tau}_{p/i}\right) $$ τ c/i : is the average occupation time of a lock chamber of station i for one vehicle (min) The meaning of the eqs. 1 to 3 is, that the number of tracks will be sufficient to handle the vehicles when they run at a frequency that equals the segment capacity. 
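As a minimal numerical illustration of Eqs. 2-3 (with assumed occupation times, since the actual values follow only from the chamber and platform models introduced below), the required number of station tracks can be computed as follows:

```python
import math

# Illustrative assumptions (not figures from the paper): segment capacity of
# 12 veh/h, 4 min per lock chamber and 5 min of platform occupation per vehicle.
mu_segment_per_min = 12 / 60.0     # veh/min
tau_chamber_min = 4.0              # occupation time of one lock chamber (min)
tau_platform_min = 5.0             # platform occupation time (min)

# Eq. 3: tracks needed so the station keeps up with the segment capacity
n_tracks = mu_segment_per_min * max(tau_chamber_min, tau_platform_min)
tracks_built = math.ceil(n_tracks)
print(f"Required tracks: {n_tracks:.2f} -> build {tracks_built}")

# Inverse of Eq. 3: dynamic capacity offered by the built tracks (veh/h)
mu_station_per_h = tracks_built / max(tau_chamber_min, tau_platform_min) * 60
print(f"Station dynamic capacity: {mu_station_per_h:.0f} veh/h")
```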
This implies that the station capacity will not be critical for the traffic capacity. However, there may spatial or financial limitations for the number of tracks. One should note that a track includes three chambers and each additional track means building three additional chambers. In that case the station capacity can be critical. The 'dynamic' capacity of the station (i) can be defined by the maximum number of HL vehicles, which can be handled at the given number of tracks/spaces at the station (i) during the given period (T) under conditions of the constant demand for service. Based on Eq. 3, this can be estimated as follows: $$ {\mu}_{s/i}(T)={n}_{s/i}/\max \left({\tau}_{c/i};{\tau}_{p/i}\right) $$ where all symbols are analogous to those in Eq. 3. The occupation time of one lock chamber (τc/i) in Eq. 4 can be estimated as follows: $$ {\tau}_{c/i}=2\cdot \frac{\left({V}_i-{k}_c\cdot {V}_c\right)\cdot \ln \left({P}_{1i}/{P}_{2i}\right)}{n_i\cdot {C}_i}+{\tau}_{0i} $$ V i : is the spatial volume of the first ('arriving') and the third ('departing') chamber at the station (i) of the line/tube (i = 1,2,.., N) (ft3 or m3); k c : is a binary variable taking the value "1" if a vehicle is in the chamber and the value "0" is the chamber is empty; during the vacuuming and de-vacuuming cycle, both values apply one time; V c : is the spatial volume of a vehicle (ft3 or m3); P 1i , P 2i : is the initial and final pressure during de-vacuuming and vacuuming the first ('arriving') and the third ('departing') chamber of the station (i), respectively (mmHg or Pa (Pascals)); n i : is the number of vacuum pumps at a lock chamber of the station (i); C i : is the capacity of a vacuum pump at the first and the third chamber of the station (i) (ft3/min or m3/min); and τ 0i : is the average time of opening and closing the lock gates and disembarking and embarking the HL vehicle in the chamber of the station (i) (min). Under an assumption that the first and the third chamber at each station are of the same volume, this volume of either of them at the station (i) in Eq. 3 can be estimated as follows: $$ {V}_i={A}_i\cdot \left(m\cdot d+B\right)=\pi \cdot {\left(\Delta /2\right)}^2\cdot \left(m\cdot d+B\right) $$ A i : is the area of the vertical profile of the first ('arriving') and the third ('departing') chamber (m2); is the maximum number of capsules constituting the HL vehicle per single departure; is the length of the HL capsule (m); B : is the 'buffer' distance between the ends of the HL vehicle and the entry and exit door, respectively, of the first and second chamber (m); Δ : is the diameter of the chamber (m). The volume of the vehicle can be estimated as: $$ {V}_c={A}_c\cdot m\cdot d $$ A c : is the frontal area of the capsule (m2). The occupation time of the platform can be described as; $$ {\tau}_{p/i}={\tau}_{ab/i}+{\tau}_{e/i} $$ τ ab/i : is the average time that a vehicle stays at the platform for boarding and alighting; τ e/i : is the minimum time between leaving a vehicle and entering the next at the platform ii) Line segment(s) The 'ultimate' capacity of the segment (i-1) in front of the station (i) of a given HL line/tube during the period (T) in Eq. 
1 can be estimated as follows: $$ {\mu}_{i-1}(T)=1/{\tau}_{\min /i-1}={a}_{\max /i-1}^{-}/{v}_{\max /i-1}\quad for\; i\in N $$ τ min/i-1 : is the minimum time interval between dispatching successive Hyperloop vehicles along the (i)-th segment of the line/tube in the single direction (min or h); v max/i-1 : is the maximum operating speed of a Hyperloop vehicle on the (i)-th segment of the line/tube (km/h); and \( {a}_{\max /i-1}^{-} \) : is the maximum safe deceleration rate of the Hyperloop vehicle on the (i)-th segment of the line/tube (m/s2). Equation 9 assumes that, for safety reasons, in each pair of successive HL vehicles moving in the same direction the leading vehicle needs to be separated from the following vehicle by at least the latter's minimum braking distance. iii) Line/tube The line/tube capacity is the traffic capacity of the HL system and is defined as the lowest of the station and segment capacities. From Eqs. 4 and 9, the 'ultimate' capacity of a given HL line/tube in the single direction can be estimated as follows: $$ \mu (T)=\min \left[{\mu}_{s/i}(T);{\mu}_{i-1}(T)\right]\quad for\; i\in N $$ where all symbols are analogous to those in the previous Eqs. Eq. 10 indicates that the 'ultimate' capacity of a given HL line/tube is determined by the minimum 'ultimate' capacity of its ("critical") segment(s) and/or station(s). The 'ultimate' capacity is higher than the 'practical' capacity. The latter can be described as: $$ \mu {(T)}^{\ast }=\mu (T)\cdot {U}_i $$ where: μ(T)* : is the practical traffic capacity; U i : is the utilisation rate of the ultimate traffic capacity Transport capacity The transport 'ultimate' capacity of a given HL line/tube can be expressed by the maximum number of offered seats in the single direction during the specified period of time (T). Based on Eq. 10, it can be estimated as follows: $$ C(T)=\mu (T)\cdot m\cdot S $$ S : is the number of seats per capsule (seats/capsule). The other symbols are analogous to those in Eqs. 6 and 10. The practical transport capacity can be described as: $$ C{(T)}^{\ast }=\mu {(T)}^{\ast}\cdot m\cdot S\cdot \theta $$ C(T)* : is the practical transport capacity; θ : is the average load factor of the vehicles (the 'utilisation rate' of the ultimate vehicle capacity) For practical applications, the actual transport service frequency instead of the 'ultimate' capacity of a HL line/tube in Eq. 10 needs to be considered. This frequency generally depends on the volumes of demand, the HL vehicle's average seating capacity per departure, and the average preferred load factor as follows: $$ f\left(T,Q\right)=\min \left[\mu {(T)}^{\ast };\frac{Q(T)}{m\cdot S\cdot \theta}\right] $$ Q(T) : is the user/passenger demand during the period (T) in single direction (pass/h or pass/day); The other symbols are analogous to those in Eq. 13. The meaning of Eq. 14 is that if the frequency is set equal to the practical traffic capacity (μ(T)*), the transport capacity can exceed the demand. That could be a reason to provide services with a lower frequency. In that case, the (scheduled) service frequency can in some cases depend on a policy regarding a 'decent' (minimum acceptable) transport service frequency. Technical productivity Multiplied by the average vehicle operating speed along the line/tube, the transport capacity gives an estimate of the technical productivity of a HL system under given conditions. From Eqs.
10 and 14, this maximum technical productivity is equal to: $$ TP(T)=\min \left[\mu (T);f\left(T,Q\right)\right]\cdot \overline{v} $$ \( \overline{v} \) : is the average speed of the HL vehicle(s) along the line/tube in the single direction (km/h). The other symbols are analogous to those in the previous Eqs. One can conclude from Eqs. 11 and 14 that f(T,Q) can never exceed μ(T). Eq. 15 can then be rewritten as: $$ TP(T)=f\left(T,Q\right)\cdot \overline{v} $$ The average speed (\( \overline{v} \)) of the HL vehicle(s) in Eq. 16 can be estimated as follows: $$ \overline{v}=2\cdot L/\tau $$ L : is the length of a given HL line/tube (km); and τ : is the average turnaround time of the vehicles/capsules (min) The other symbols are analogous to those in previous Eqs. (\( L=\sum \limits_{i=1}^{N-1}{l}_i \)). The technical productivity in Eq. 16 can also be estimated analogously. Fleet size Based on Eq. 3, the total time which the HL vehicle(s) would spend at all stations along the line while moving in the same direction is estimated as: $$ {\tau}_s={\tau}_{s/1}+\sum \limits_{i=2}^{N-1}{\tau}_{s/i}+{\tau}_{s/N} $$ N : is the number of stations along the line/tube including the begin and end station (terminuses); and τ s/1 , τ s/N : is the average time, which the HL vehicle spends at the begin and the end station (terminus), respectively (min/veh). τs/i is the passing time of a vehicle through the station (see also Eq. 1). This time is equal to: $$ {\tau}_{s/i}=2\cdot {\tau}_{c/i}+{\tau}_{p/i} $$ The other symbols are analogous to those in Eq. 3. The running time of the HL vehicle(s) along the line/tube in the single direction is estimated as follows: $$ {t}_L=\sum \limits_{i=1}^{N-1}\left(\frac{1}{2}\frac{v_{\max /i}}{a_i^{+}}+\frac{l_i}{v_{\max /i}}+\frac{1}{2}\frac{v_{\max /i}}{a_i^{-}}\right) $$ v max/i : is the maximum operating speed of the vehicle along the (i)-th segment of the line (km/h); and \( {a}_i^{-},{a}_i^{+} \) : is the maximum safe deceleration and acceleration rate, respectively, of the HL vehicle(s) on the (i)-th segment of the line (m/s2). The total turnaround time of the HL vehicle along the line can be estimated based on Eqs. 18 and 20 as follows: $$ \tau =2\cdot \left({\tau}_s+{t}_L\right) $$ Given the traffic 'ultimate' capacity of a given line/tube (μ(T)) in Eq. 10 or the transport service frequency in Eq. 14, and the average turnaround time per vehicle (τ) in Eq. 21, the required size of the HL fleet (total number of capsules) can be estimated as follows: $$ M(T)=\min \left[\mu (T);f\Big(T,Q\Big)\right]\cdot \tau \cdot m $$ Quality of services The quality of services influences (in addition to fares) the attractiveness of the HL system services and as such indicates its relative advantage/disadvantage over the competing modes such as HSR and APT. The relative advantage can be seen as the degree to which an innovation is perceived as better than the product it replaces or competes with [18]. The relative advantage has been considered to be one of the strongest predictors of the outcome of the decision on whether or not to adopt the innovation. In general, a new transport system does not need to perform better on all aspects, but overall - taking all the relevant characteristics of the service into account - it should offer some added value, i.e., benefits to its users/passengers.
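Before turning to the individual service-quality attributes, the turnaround-time and fleet-size relations above (Eqs. 18-22) can be illustrated with a small numerical sketch. All input values below are assumptions chosen for illustration only (the same station times as in the earlier sketch, a single 600 km segment, and operation somewhat below the 1220 km/h ceiling).

```python
# Sketch of Eqs. 19-22 for a simple two-station line; all inputs are assumptions.
L_km = 600.0                 # line length, one segment between two terminuses
v_max_kmh = 1000.0           # assumed operating speed, below the 1220 km/h ceiling
a_ms2 = 0.5 * 9.81           # max. acceleration/deceleration (0.5 g, as quoted)

t_speed_change_h = (v_max_kmh / 3.6 / a_ms2) / 3600.0
t_run_h = 0.5 * t_speed_change_h + L_km / v_max_kmh + 0.5 * t_speed_change_h  # Eq. 20

tau_station_h = (2 * 4.0 + 5.0) / 60.0       # Eq. 19: two lock chambers + platform
tau_turnaround_h = 2 * (2 * tau_station_h + t_run_h)                          # Eqs. 18, 21

f_per_h, m_capsules = 12, 1
fleet = f_per_h * tau_turnaround_h * m_capsules                               # Eq. 22
print(f"Turnaround: {tau_turnaround_h*60:.0f} min, fleet needed: {fleet:.0f} capsules")
```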
In the given context, the attributes of quality service of the HL system such as a) door-to-door travel time; b) transport service frequency; and c) reliability of services are considered relevant for eventual mode/system choice. Door-to-door travel time The door-to-door travel time consists of the access and egress time, schedule delay (including possible time for luggage checking) at the boarding and alighting stations, in-vehicle time, and the interchange time between different HL vehicles and their particular services at intermediate and end stations. The access and egress time The access and egress time depends on the interconnectivity between the HL system and the pre- and post-haulage systems, the density of the HL stations, and the speed of the pre- and post-haulage systems (from the users' doors to the HL station, and vice versa). The access and egress time generally varies at particular HL stations depending on the local spatial and traffic conditions. The waiting time The waiting time depends on the frequency of accessible HL services. If there is no limitation on the accessibility, the waiting time is determined by the schedule delay. Based on Eq. 14, the schedule delay can be estimated as follows: $$ SD(T)=\frac{1}{2}\cdot \frac{T}{f\left(T,Q\right)} $$ where all symbols are as in the previous Eqs. In the case of full accessibility and frequent and punctual services, the waiting time will be equal to the schedule delay. If the frequency is lower than 6/h, the average waiting time at the station will tend to be smaller than the schedule delay [19], but then there will be some 'hidden' waiting time at the departure location. If seat reservation is obligatory, which is common for long-distance modes, the passengers can use only the service for which they reserved a seat; in that case, the frequency of accessible services is just 1. Particularly in the case of low frequencies, the timetables of connecting scheduled systems as well as risk aversion of travellers for missing the intended service can affect the waiting time. Low punctuality increases waiting time. The punctuality of the HL system correlates with the homogeneity of successive services (regarding to destination/routing, intermediate stops) and the scheduled buffer times. In the case of a network where some passengers also make interchanges, the policy on whether/how long to wait for the delayed connecting services can additionally affect the punctuality and waiting time. In-vehicle time and interchange time The in-vehicle time of the HL system depends on the travel distance, the average speed, and the stopping time at the particular stations. If the HL system is set up as the network where some travellers also make interchanges within it, the interchange time will depend on the frequency of services, matching the timetables, punctuality, and the policy on waiting for delayed connecting services. The in-vehicle time and the interchange time correlate with the door-to-door distance. The access/egress and waiting times are 'fixed 'times to this respect. The relative values of the latter two time components will decrease when the travel distances increase. iv) The need to make interchanges generally diminishes the overall quality of service because these may extend travel times and make trips less convenient. In the long-distance travel markets, which the HL system is supposed to penetrate, the users/passengers usually have luggage with them. 
They generally will have to make at least two interchanges (between the access mode and the HL, and between the HL and the egress mode). In some cases they have to make interchanges within the HL system. The opportunity of interchanges in the access and egress trips is related to the density of HL stations. The opportunity of interchanges within the HL system is related to the design of the HL network. Transport service frequency The relevance of the service frequency as perceived by the traveller will depend on the envisaged business plan of HL: either as a 'walk up' service (i.e. direct access without reservation in advance) or through an advanced obligatory seat reservation. In a scenario with the advanced seat reservation, on the one hand a lower frequency (i.e. 3–4 dep/h) would be well acceptable, while on the other hand the offered service frequency at the time of booking will be lower than the scheduled frequency in the case services are fully booked. In case of 'walk up' services the service frequency can also be lower than the scheduled frequency, i.e. when the demand exceeds temporally the offered capacity; then the imbalance between demand and supply will increase waiting times [20]. Service reliability The HL system has two major characteristics that enable a potential high reliability of its services. This is a completely automated system, which as such, per definition, excludes delays due to the human errors. In addition, HL system operates in a closed environment which makes it resilient to the weather conditions. Of course, like any other transport system, the reliability of the HL transport services will depend on the technical reliability of all parts of the system (i.e., capsules, infrastructure, and control system). Table 2 gives the very preliminary estimates of the above-mentioned indicators of operational performances for three considered systems - HL, HSR, and APT using the above-mentioned analytical models. Mode specific assumptions are presented below the table in the form of notes. Table 2 Some estimates of the indicators of operational performances of the HL system and its counterparts - HSR and APT As can be seen, based on the technical characteristics of the HL, its transport service frequency is estimated to be 12 dep/h, which is comparable to that of HSR. In addition, under given conditions, the HL system would perform better than its HSR and APT system counterpart only in terms of the indicator - the total station-station travel time. The station capacity depends on the choices of pumping capacity and number of tracks. Based on Eq. 5, the pumping capacity for one lock chamber that makes the capacity of the chamber equal to the segment capacity can be calculated. Assuming one track, the required capacity can be described as: $$ {n}_i\cdot {C}_i=2\cdot \frac{\left({V}_i-{k}_c\cdot {V}_c\right)\cdot \ln \left({P}_{1i}/{P}_{2i}\right)}{\tau_{c/i}\cdot Ui-{\tau}_{0i}}=2\cdot \frac{\left(\pi \cdot {\left(\Delta /2\right)}^2\cdot \left(m\cdot d+B\right)-{k}_c\cdot {A}_c\cdot m\cdot d\right)\cdot \ln \left({P}_{1i}/{P}_{2i}\right)}{60/{f}^{\ast }{U}_i-{\tau}_{0i}} $$ Assuming that the chamber diameter is equal to the tube diameter (Δ = 3.3 m), and that m = 1, d = 30 m, B = 3 m, kc = 0.5 (average of 0 and 1), Ac = 4.0 m2, P1i = 0.74·1.013·105 Pa (Equivalent to the altitude of 2500 m MSL (Middle-Sea-Level)), P2i = 1·10–10 Pa (Ultra High Vacuum), f = 12/h, Ui = 0.8 and τ0i = 1 min, the required pumping capacity is about 5000 m3/min, e.g. 
10 pumps (ni = 10) that produce 500 m3/min each (Ci = 500). If more tracks are built, the calculated pumping capacity should be divided by the number of tracks. Similarly as at the other transport modes and their systems, the financial performance of the HL system is defined by its revenues, costs, and profits as the difference between the former two. Consequently, the zero profitability achieved by the competitive prices given the costs could guarantee the bottom line for a stable economic viability of the HL system. The costs consist of capital costs, operational costs, and overhead costs. The capital costs are the costs for building the infrastructure (tracks, stations), and the costs for purchasing the vehicles. The operational costs regard the cost of maintenance of infrastructure and vehicles, and the costs related to the operation of the vehicles and stations. The overhead costs comprise the capital and maintenance cost of real estate, and the staff costs. The estimation of the costs of a still not existing system is a rather complex task. Therefore, in the given context, these costs are estimated based on published figures regarding the actual costs of the Maglev-system that are – to a certain extent – comparable to that of the HL system [21]. The cost level is defined by the cost value, currency, and time. One Euro in 2010 reflects a different cost level than either one US Dollar in 2010 or one Euro in 2015. For the sake of comparability, we will convert the figures to Euros of 2015. Capital cost for building tracks The capital cost for building 1 km of line/tube is likely to depend largely on the local conditions. Building in an empty area on flat sandy soil will be cheaper than building in a highly urbanized area, in moorland, or in mountains. Crossing wide rivers or the need to build tunnels will increase the costs. Musk [4] has estimated the costs of tubes on pylons and tubes in tunnels amounted €10.3 million/km and €34.0 million/km, respectively, for the passenger + freight variant (converted into 2015€). For the purpose of comparison, there is the example of a high-speed Maglev connection between Shanghai Pudong airport and the outskirts of the city in the form of a dual track of the length of 30 km and two stations (begin and end). Published costs are $1.2 billion and $1.33 billion [22, 23] (2002$US). A possible explanation for the difference is the exclusion/inclusion of the two stations. Both amounts included the purchase cost of the vehicles. Excluding station costs and vehicle costs, the investment costs would have been about €41 million per km track (€2015). Cost estimates for an extension of the line to Shanghai Hongqiao Airport were just the half: about €20 million €/km [24]. A reported reason for the lower costs has been using all-concrete modular design that would reduce the cost by 30%. A second possible reason for the lower cost has been a more solid soil. The current track has been built in an area with seismic activity and weak alluvial soil. This has required the construction on piles, which raised the costs. Another cost estimate of 34 million AU$/km (2008) or 26 million €/km (2015) relates to the proposed Maglev line in the Melbourne area [25]. This estimate is somewhat higher than that for the Shanghai extension. 
Considering that the cost estimates generally are too low and therefore the Melbourne estimate might be more realistic than the Shanghai estimate, it is assumed that the costs of the Maglev track are in the order of 25 million €/km under favourable conditions. The costs of 1 km of the HL line/tube will likely be somewhat higher than the cost of Maglev because the latter system does not have the costs for tube construction and the costs for vacuum pumps. On the other hand, the HL does not need the concrete guideway unlike the Maglev. Consequently, it is assumed that the construction costs of the two systems are similar and therefore adopted to be 25 million €/km for the HL system built on solid soil This appears more than double the costs that were estimated by Musk [4] . Assuming that the actual cost of 40 million €/km for the current Maglev track built on weak soil could have been reduced to about 35 million €/km by using a modular design, the latter figure is adopted for the HL system as well. The estimated costs for building tunnels at the HL system of 34.0 million €/km [4] can only be compared with the corresponding costs of the railway or road tunnels – the Gotthard base tunnel consisting of two single-track tunnels: 200 million €/km [26]; the Chuo Shinkansen railway line in Japan between Tokyo and Nagoya where 60% of the line goes through tunnels: 160 million €/km [27]; the Channel tunnel between France and Britain of the length of 50.5 km: 4.65 billion £/km (1990) or 190 million €/km (2015) [28]. These figures indicate that the tunnel costs for a double track railway line are in the order of 200 million €/km. This is likely considerably higher than the corresponding costs at the HL system. One of the main reasons is a much smaller diameter of the HL tube – for example, the two single-track Gotthard tunnels with diameters of about 9 m vs a HL tube of 3.3 m. The tunnel construction costs for two HL tubes might then even be somewhat lower than the costs for one single-track rail tunnel. If it is assumed that the costs were underestimated by about a factor 2, just like the argued underestimation for the tube on pylons, the real costs for two parallel tubes would be in the order of 70 million €/km. Capital cost for building stations/terminals The building costs for a station/terminal were estimated to be about 125 million $US (116 million €). The costs for the two stations of the current Maglev line near Shanghai could be 130 million US$ for two stations (i.e., 77 million €/station), which is significantly lower than the above-mentioned amount [4]. However, the HL system's stations are more complex than that of the Maglev system because they should give access to vehicles in the evacuated tubes as mentioned above. Therefore, it is assumed that the cost per station of the HL system of €116 million is a fairly good estimate. Stations at nodes of the network where several lines inter-connect will likely be more expensive. Costs of vehicles The costs for purchase of a vehicle (capsule) were estimated to be about €1.42 million [4]. These are the costs of a vehicle without toilets. Adding a toilet is supposed to increase the costs to about €1.52 million. For the purpose of comparison, the cost of one carriage of a Maglev train with the capacity of 90 seats are €12.5–15 million (compared to the capacity of the HL capsule of 28 seats) [25]. The average unit cost per seat which might be rather comparable are €0.14–0.17 million for the Maglev and €0.054 million estimated for the HL. 
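The per-seat figures quoted above follow directly from the stated vehicle prices and seating capacities; a quick check using only numbers already given in the text:

```python
# Per-seat vehicle cost check (all input figures are taken from the text above).
maglev_carriage_eur = (12.5e6, 15.0e6)   # cost range of a 90-seat Maglev carriage
hl_capsule_eur = 1.52e6                  # Musk-based HL capsule estimate incl. toilet, 28 seats

low, high = (c / 90 / 1e6 for c in maglev_carriage_eur)
print(f"Maglev: {low:.2f}-{high:.2f} million EUR per seat")                      # ~0.14-0.17
print(f"HL (Musk-based): {hl_capsule_eur / 28 / 1e6:.3f} million EUR per seat")  # ~0.054
```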
In the present case, it is assumed that the average unit cost for the HL capsule is 0.17 million €/seat, which is more than three times the above-mentioned estimate by Musk [4]. The assumed cost of a capsule is then €4.8 million. The annual costs The capital costs discussed above as incidental costs can be converted into annual costs (depreciation and interest) as follows: $$ {C}_{a(e)}=\frac{C_{b(e)}-{R}_e}{L_{t(e)}}+\frac{C_{b(e)}+{R}_e}{2}\cdot {I}_t $$ C a(e) : is the annual capital cost of the cost element e (€/year per track, station, and/or vehicle); C b(e) : is the incidental (investment) capital cost of the cost element e (€); R e : is the residual value of cost element e (€); L t(e) : is the life span of infrastructure element e (years); and I t : is the interest rate (%/year). Table 3 gives an overview of the incidental investment and the annual costs for the HL system. In all cases, no residual value is assumed (Re = 0) for any cost element, the interest rate is It = 4%/year, and the life spans are the averages used in the EU countries for rail and road infrastructure and rolling stock [29]. Table 3 Investment and annual capital cost for the HL system infrastructure and vehicles Maintenance costs of infrastructure and rolling stock For the maintenance costs of the HL lines/tubes, stations, and rolling stock, a fixed ratio to the capital costs is assumed. The World Bank [30] states that the variable component of rail infrastructure cost can vary from just a few percent to about 30% depending on the intensity of use. The HL system is assumed to be heavily used, leading to relatively high maintenance cost, but the ratio to the capital cost will be smaller than for rail because of the lack of physical contact between the vehicles and the infrastructure. Consequently, a ratio of 10% is assumed for both infrastructure and vehicles, setting the annual maintenance costs at 10% of the annual capital costs. The operating costs consist of the costs for staff in the vehicles and at the stations, and the traffic management costs. Generally the energy costs for moving the vehicles are also part of the operating costs, but the HL is a special case because it is assumed to take energy from the solar panels at the top of the tube. Some estimates indicate that the energy produced in this way exceeds the energy consumption by the vehicles [4]. The capital and maintenance costs of the solar panels and the transmission of energy to the vehicles are then the only energy costs. The costs for employees in the vehicles and stations depend on the organization, i.e., the number of employees in the vehicles, and the manpower needed for ticket sales and control. In the present context, it is assumed that in each capsule one employee is present checking the seat belts, helping in the case of problems, and possibly providing some food and drink. The staff at stations would include two employees per station controlling and possibly selling tickets, and helping and guiding passengers. Assuming that the average operation time of a capsule is 15 h/day, that stations are open for 18 h/day, and that the average working time of an employee is 7 h/day (including holiday and sickness absence), the number of full-time employees for a single capsule is 2.14 and for a station 5.14. Assuming an average annual wage of €35,000, the annual operation cost for one capsule would be €75,000 and for a station €180,000. These costs appear to be relatively small compared to the capital cost.
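The annualisation in Eq. 25 can be made explicit in a few lines. The life span below is an assumed placeholder (the paper takes EU-average life spans from [29], reported only in Table 3), so the outputs are illustrative rather than the Table 3 values.

```python
# Sketch of Eq. 25: annual capital cost = linear depreciation + interest on the
# average bound capital. The 30-year life span is an assumed placeholder.
def annual_capital_cost(investment_eur, residual_eur=0.0, life_years=30, interest=0.04):
    depreciation = (investment_eur - residual_eur) / life_years
    interest_cost = (investment_eur + residual_eur) / 2.0 * interest
    return depreciation + interest_cost

track_km = annual_capital_cost(25e6)    # tube on solid soil, 25 million EUR/km
station = annual_capital_cost(116e6)    # one station, 116 million EUR
capsule = annual_capital_cost(4.8e6)    # one 28-seat capsule, 4.8 million EUR
print(f"Track: {track_km/1e6:.2f} M EUR/km/yr, station: {station/1e6:.2f} M EUR/yr, "
      f"capsule: {capsule/1e6:.2f} M EUR/yr")
```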
The traffic management costs depend on the intensity of use and the complexity of the network. It is assumed that these costs are equal to the wage of one employee for each 1000 km of 'double tube'. Assuming an operation time of 18 h per day, 2.57 full-time employees are needed per 1000 km of the line/tube. The related annual costs would be €90,000/1000 km, or €90/km.
Overhead costs
The overhead costs include the capital and maintenance cost of real estate, and the staff costs. In the present context, it is assumed that the real estate costs are marginal compared to the capital and maintenance costs of the HL infrastructure. As such they are neglected. As for the staff costs, one overhead employee is assumed per ten employees needed for operation; these costs are included by increasing the costs of operational staff by 10%.
Overview of the costs
Table 4 gives an overview of the annual unit costs of the HL system. For vehicles, the costs are also expressed per seat and per seat-km, which makes them comparable to those of other transport systems. For the calculation of the numbers per seat-km, we assume 28 seats per capsule, 15 operating hours per day per capsule, and an average distance travelled of 600 km per hour in the operating period.
Table 4 Estimated annual costs of the HL system (€2015)
The vehicle cost per seat-km is very low compared to the vehicle costs of other systems. Earlier calculations indicated that these costs ranged from 0.022–0.058 €/s-km (1993€) at different public transport systems in the Netherlands [31]. These costs would be even higher when expressed in 2015€, but public transport provision has become more cost-efficient since. An interesting finding in that study was that the costs are negatively correlated with the speed of a system. The explanation is the fact that most cost components are time related, like the salary of the staff, which lowers the cost per km when the speed increases. Very low costs for the extremely fast HL system could then be expected. An additional explanation is that the energy costs – the only cost component where the per-km cost increases with distance – are not included in the HL vehicle costs.
Revenues/prices
For an economically viable transport system, as the HL system intends to be, the average unit price that the users/passengers pay should at least cover the corresponding total average unit cost. These costs depend on the local conditions and the configuration of the system, including:
- Soil conditions and natural barriers; the impact is illustrated in Table 4.
- Average station spacing.
- Connectivity; together with the station spacing, this defines the number of stations per km of track that has to be built; in the case of just one line connecting two stations, the number of stations per km at a given station spacing is about twice the number in a large network.
- Frequency of the services; when the service frequency increases, the infrastructure costs are divided among more services and will be lower per ride.
- Load factor of the vehicles; this is inversely linearly correlated with the costs per passenger; because of the low transport capacity of the HL (see Table 2) and the high market potential due to the very high speed (even higher than the airplane), a high load factor might generally be expected [32].
The costs for the track infrastructure make up the major part of the costs. This implies a strong relation between service frequency and costs per person-km. Figure 6 shows this relation for three types of local conditions, starting from the costs in Table 4; a sketch of how such a relation can be computed is given below.
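The inverse relation between service frequency and infrastructure cost per passenger-km behind Figure 6 can be sketched in a few lines of Haskell. The annual infrastructure cost per km of double tube (1.0 million €/km/year), the number of seats and the load factor used below are assumed, illustrative inputs; the sketch shows the shape of the relation, not the exact values of the figure.

```haskell
-- Infrastructure cost per passenger-km as a function of service frequency.
-- annualCost: assumed annual infrastructure cost per km of double tube (EUR/km/year)
-- freq: departures per day per direction; seats: seats per capsule; lf: load factor
infraCostPerPkm :: Double -> Double -> Double -> Double -> Double
infraCostPerPkm annualCost freq seats lf =
  annualCost / (freq * 2 * seats * lf * 365) -- two directions, 365 days per year

main :: IO ()
main = mapM_ print
  [ (f, infraCostPerPkm 1.0e6 f 28 0.8) | f <- [15, 54, 108, 216] ]
```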
Assumptions are a station spacing of 500 km, a high network connectivity, and a load factor of 80%. The indicated frequencies range from 15 to 216 dep/day/direction, the latter being the maximum capacity based on the traffic capacity of μ = 12 dep/h (Table 2) under conditions of operation during 18 h/day.
Fig. 6 Relation between the average unit costs and transport service frequency
The lowest costs (and the cost-meeting fares) in the figure, in the case of 216 services per day, are €0.30 in the case of solid soil along the whole route, €0.36 in the case of 50% solid soil and 50% weak soil, and €0.40 in the case of 50% solid soil, 40% weak soil, and 10% tunnel. A comparison with the fares of alternative modes is difficult because there is generally a wide range of fares for the same route because of revenue management. Based on the experiences of De Decker [33], the most common fares for high-speed trains would be between €0.15 and €0.25 per km, and those for conventional long-distance trains about half the HSR fares. The fares for low-cost airlines are also significantly lower than the HSR fares. We remind the reader that the HL cost calculations concern the system with the larger dimensions, which enables passengers to walk through the vehicles and visit the toilet. These costs could be 20–30% higher than those for a smaller system where passengers can only sit (according to Musk [4], who calculated figures for the two systems). A small-dimensioned system would still be expensive compared to the other long-distance travel modes. Cost calculations for a transport system that does not yet exist are inevitably highly uncertain. However, the cost calculations produce one strong result: the costs are predominantly determined by the costs for the track infrastructure, and therefore a high service frequency combined with a high load factor is needed for cost-meeting fares that are more or less competitive with those of the alternative transport modes.
Social/environmental performance
The indicators of the social/environmental performance of the HL system include noise, safety, energy consumption and related emissions of GHGs (greenhouse gases), and land use. Noise produced by transport vehicles can cause annoyance and harmful effects to people living and/or working close to transport routes. Moreover, traffic noise limits the possible use of space along the routes, and hence may cause an opportunity cost regarding land use. The impact of noise depends on the noise levels at the sources, the number of people exposed to them, and the duration of the noise exposure. This implies that this performance of the considered systems mainly depends on the routing of the transport lines and on the speed and number of passing vehicles. The HL is supposed to produce hardly any external noise affecting the nearby population. This is due to the fact that the HL is not in contact with the tube and therefore there is no transfer of vibration. Any noise from the capsule itself will not be heard outside the tube, and the low air pressure inside the tube prevents the transmission of noise from the moving capsule. The only potential source of noise could be the vacuum pumps, but these are assumed to produce negligible noise [21]. In evaluating the performance of a transport system, a distinction is usually made between internal and external safety. Internal safety relates to the risk and damage caused by incidents to the users/operator of the transport system itself.
External safety reflects the possible risk and damage of accidents/incidents to people and their living/working environment outside the system. The HL system is a dedicated and closed transport system, excluding any kind of interaction with other transport modes and its direct environment. Hence there are no external safety concerns, seemingly giving the HL system an advantage over its prospective counterparts – APT and HSR. Internal safety benefits are expected because the HL system is a completely automated system and hence excludes the possibility of human errors. In addition, the HL system is supposed to be designed according to the fail-safe principle: in case of danger (e.g., a rapid depressurization in the capsule or tunnel), the "clever" systems will stop the capsule and, if needed, will provide means of individual rescue (e.g., oxygen masks for passengers). However, many safety issues still need further consideration, elaboration and testing, such as, for example, the evacuation of people, stranded capsules, the incorporation of emergency exits, etc. [20].
Energy consumption and emissions of GHGs (greenhouse gases)
The HL system is expected to be less energy-demanding than the HSR, mainly due to lower friction with the track(s) and low air resistance resulting from the low pressure in the tube. Some preliminary estimates have suggested that the HL system can be about 2–3 times more energy-efficient than the HSR and, depending on transport distances, about 3–6 times more energy-efficient than APT [20]. This is mainly because the HL system is intended to be completely propelled by the electrical energy obtained from the solar panels on top of the tube(s). These are claimed to be able to generate more than the energy needed to operate the system. This also takes into account that sufficient energy can be stored (e.g., in the battery packs on board the vehicles) to operate the system at night, in periods of cloudy weather, and in tunnels [4]. In general, emissions of GHGs are directly related to the energy consumption. If only the emissions of GHGs from operations are considered, given the above-mentioned primary energy source, the HL system will not produce any. However, the indirect emissions from building the infrastructure (lines and stations/terminals), rolling stock (capsules), and other equipment should be taken into account when dealing with the system's life-cycle emissions of GHGs. In general, land used to facilitate transport systems cannot, except for underground transportation, be used for other purposes, hence creating an opportunity cost. The valuation of land occupied by the HL system will be a function of the space that is needed (width and length of the infrastructure) and the value of the land. The latter will depend heavily on the specific routing of the line. On the one hand, the HL system is planned to be elevated on pillars, so the effective land occupation on the ground (net area of land needed) can be limited. On the other hand, it remains to be seen if the space between the pillars can be used meaningfully. Moreover, the elevated construction may bring along visual pollution. In general, the total amount of land (gross area of land) required for new transport infrastructure can be minimized by keeping the route as close as possible to the existing transport infrastructure. The HL system's tubes will be mounted side by side on elevated pillars.
For the small tubes – designed for passenger transport only – the pillar that carries two tubes is about 3.5 m wide [21]. The tubes for mixed traffic (passengers and freight) are larger and hence the pillars for these tubes are also expected to be larger, i.e., about 5.2 m wide. Since the pillars will be spaced on average 30 m apart and the possibilities to use the space on the ground in between effectively are limited, the net area needed for 1 km of the HL system's line will be about 0.5 ha. The average gross area of land taken by the line is estimated to be about 1.0 ha/km. Despite the more efficient land use of the HL system, it is likely to have higher land costs than, for instance, its HSR counterpart. This is because the HL system is less flexible than the HSR system in routing the line, particularly in terms of accommodating sharp turns.
Overview of the indicators of performance
An overview of the estimated indicators of the performance of the HL, HSR, and APT systems according to the methodology presented in Sections 3.1 to 3.3 is summarized in Table 5. For these estimations, the HL, HSR, and APT systems are considered as potential competitors in medium-distance passenger transport market(s) over a length of 600 km. Mode-specific assumptions are presented below Table 5 in the form of notes.
Table 5 Indicators of performances of the HL system and its APT and HSR counterparts
As can be seen, some indicators are expressed in quantitative and others in qualitative terms, the latter based on relative performance, i.e., a relative ranking of the transport systems at three levels – low, moderate, and high. Table 5 demonstrates that the most striking differences between the HL mode and the two other modes regard the vehicle capacity, the vehicle costs per seat-km, and the GHG emissions. In all cases, the values are low for HL. The low vehicle capacity results from the assumed short vehicle length. Coupling vehicles would increase the capacity, but moving coupled vehicles through curves might be a technical challenge. Moreover, the chamber lengths would have to be increased. The low vehicle costs per seat-km can be explained by the extremely high speed (most vehicle costs are correlated with time and not with distance) and the absence of energy costs: the costs for the solar cells are part of the line infrastructure costs. The GHG emissions are low (even zero) because it is assumed that the solar cells provide all energy.
Hyperloop (HL) is a new mode of transport that claims to be a competitive and sustainable alternative to long-distance rail transport (HSR, High Speed Rail) and the medium-distance APT (Air Passenger Transport) system (distances less than or equal to 1,500 km). Taking into account that the performance of the HL system can be considered in different ways and from the perspectives of different stakeholders (i.e., passengers, transport operators, government authorities, and society), the operational, financial, and social/environmental performances of the HL system have been investigated and evaluated. In comparing the HL with the HSR and APT systems, it has been found that the HL system has relatively positive social/environmental performances, particularly in terms of energy consumption, emissions of GHGs, and noise. The HL system can potentially be a very safe mode, but both HSR and APT also have a very good safety track record.
A major weak point of the HL system technology appears to be its rather low transport capacity, mainly due to the low seating capacity of the individual vehicles/capsules, which affects both the operational and the financial performance. Consequently, the investment costs of the HL infrastructure make up a large part of the total costs per seat-kilometre, raising the latter to a higher level than those of its counterparts – HSR and APT. Hence, the break-even fares would also be higher, even if the load factor is relatively high. This finding suggests that HL applications may be limited to the premium passenger transport market, in which there is a 'willingness to pay' for the strongest feature of the HL system: service carried out at a very high average speed. So far, the HL technology is in its infancy and there are still many uncertainties around the system that need further exploration. From an operational perspective, an important research issue is if and how the transport capacity of the HL system could be increased, for instance by increasing the number of seats or by coupling several capsules into a single vehicle ('train'), and to what extent such a change in capacity would influence the other operational, financial and social/environmental performances of the system. An initial study explored the relationship between HL vehicle capacity and total energy consumption and found the latter to be rather insensitive to the former (Decker et al., 2017). From a financial perspective, further research is needed to more accurately estimate the costs associated with HL development, especially with respect to infrastructure (i.e., tracks, stations) and vehicles, which form the larger part of the total costs per seat-kilometre. Apparently, further specification of these costs requires more research on the technological aspects of the system. Finally, from a social/environmental perspective, further research is required to explore the total life-cycle energy consumption and GHG emissions of the system, including infrastructure development (lines and stations/terminals), rolling stock (capsules), and the operation of sub-systems such as the vacuum pumps. Moreover, the estimation of the social performance of the system would be improved by further research on the possible implications of HL for social welfare, such as accessibility to life-enhancing opportunities and the creation of jobs (direct and indirect).
References
1. Van Goeverden CD, Van Arem B, Van Nes R (2016) Volume and GHG emissions of long-distance travelling by western Europeans. Transp Res D 45:28–47. https://doi.org/10.1016/j.trd.2015.08.009
2. Lee DS, Fahey DW, Forster PM, Newton PJ, Wit RCN, Lim LL, Owen B, Sausen R (2009) Aviation and global climate change in the 21st century. Atmos Environ 43:3520–3537. https://doi.org/10.1016/j.atmosenv.2009.04.024
3. European Commission (2014) EU Energy, Transport and GHG Emissions, Trends to 2050, Reference Scenario 2013. Publications Office of the European Union, Luxembourg
4. Musk E (2013) Hyperloop Alpha. SpaceX, Texas. http://www.spacex.com/sites/spacex/files/hyperloop_alpha-20130812.pdf
5. Abdelrahman AS, Sayeed J, Youssef MZ (2018) Hyperloop transportation system: analysis, design, control, and implementation. IEEE Trans Ind Electron 65(9):7427–7436. https://doi.org/10.1109/TIE.2017.2777412
6. Braun J, Sousa J, Pekardan C (2017) Aerodynamic design and analysis of the hyperloop. AIAA Journal 55(12):4053–4060. https://doi.org/10.2514/1.J055634
7. Chin JC, Gray JS, Jones SM, Berton JJ (2015) Open-source conceptual sizing models for the hyperloop passenger pod. 56th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Kissimmee, Florida. https://doi.org/10.2514/6.2015-1587
8. Janzen R (2017) TransPod ultra-high-speed tube transportation: dynamics of vehicles and infrastructure. Procedia Engineering 199:8–17. https://doi.org/10.1016/j.proeng.2017.09.142
9. Yang Y, Wang H, Benedict M, Coleman D (2017) Aerodynamic simulation of high-speed capsules in the Hyperloop system. In: 35th AIAA Applied Aerodynamics Conference. American Institute of Aeronautics and Astronautics, Reston
10. Alexander NA, Kashani MM (2018) Exploring bridge dynamics for ultra-high-speed, Hyperloop, trains. Structures 14:69–74. https://doi.org/10.1016/j.istruc.2018.02.006
11. Heaton TH (2017) Inertial forces from earthquakes on a Hyperloop pod. Bull Seismol Soc Am 107(5):2521–2524. https://doi.org/10.1785/0120170054
12. Decker K, Chin J, Peng A, Summers C, Nguyen G, Oberlander A, Sakib G, Sharifrazi N, Heath C, Gray JS, Falck RD (2017) Conceptual sizing and feasibility study for a magnetic plane concept. In: 55th AIAA Aerospace Sciences Meeting. American Institute of Aeronautics and Astronautics, Reston
13. Janić M (2003) Multicriteria evaluation of high-speed rail, Transrapid maglev and air passenger transport in Europe. Transp Plan Technol 26(6):491–512. https://doi.org/10.1080/0308106032000167373
14. Janić M (2016) A multidimensional examination of performances of HSR (high-speed rail) systems. Journal of Modern Transportation 24(1):1–21. https://doi.org/10.1007/s40534-015-0094-y
15. AIRBUS (2017) Growing Horizons 2017/2036: Global Market Forecast. AIRBUS S.A.S, Blagnac Cedex, France
16. Salter RM (1972) The Very High Speed Transit System. RAND Corporation, Santa Monica
17. Feitelson E, Salomon I (2004) The political economy of transport innovations. In: Beuthe M, Himanen V, Reggiani A, Zamparini L (eds) Transport development and innovations in an evolving world. Springer, Berlin, pp 11–26
18. Tidd J, Bessant J, Pavitt K (2001) Managing innovation: integrating technological, market, and organizational change. John Wiley and Sons, Chichester
19. Weber W (1966) Die Reisezeit der Fahrgäste öffentlicher Verkehrsmittel in Abhängigkeit von Bahnart und Raumlage. Forschungsarbeiten des Verkehrswissenschaftlichen Instituts an der Technischen Hochschule Stuttgart, Stuttgart
20. Taylor CT, Hyde DJ, Barr LC (2016) Hyperloop commercial feasibility analysis: high level overview. Volpe (US Department of Transport), Cambridge
21. Wilkinson J (2016) A comparison of Hyperloop performances against high speed rail and air passenger transport using multi-criteria analysis: case study of the San Francisco-Los Angeles corridor. Minor thesis, Delft University of Technology, Delft
22. Antlauf W, Bernardeau FG, Coates KC (2004) Fast track. Civil Engineering, The Magazine of the American Society of Civil Engineers 74(11):37–43
23. Wikipedia: https://en.wikipedia.org/wiki/Transrapid, accessed 31 October 2016
24. Wikipedia: https://en.wikipedia.org/wiki/Maglev, accessed 31 October 2016
25. TKTA: ThyssenKrupp Transrapid Australia (2008) ThyssenKrupp Transrapid Australia submission in response to East West link needs assessment report "Investing in Transport". ThyssenKrupp Transrapid GmbH, Kassel
26. Wikipedia: https://en.wikipedia.org/wiki/Gotthard_Base_Tunnel, accessed 31 October 2016
27. Quora: https://www.quora.com/What-are-the-cost-differences-between-Maglev-and-conventional-high-speed-rail, accessed 31 October 2016
28. Wikipedia: https://en.wikipedia.org/wiki/Channel_Tunnel, accessed 31 October 2016
29. Ecorys Transport, CE Delft (2005) Infrastructure expenditures and costs: practical guidelines to calculate total infrastructure costs for five modes of transport, final report. Rotterdam. https://ec.europa.eu/transport/sites/transport/files/themes/infrastructure/studies/doc/2005_11_30_guidelines_infrastructure_report_en.pdf, accessed 31 October 2016
30. The World Bank (2011) Railway Reform: Toolkit for Improving Rail Sector Performance. The World Bank, Washington
31. Van Goeverden CD, Peeters PM (2006) Suspending subsidies for public transport: its impacts on the public transport system in the Netherlands. 85th Annual Meeting of the Transportation Research Board, Washington
32. Van Goeverden CD, Janić M, Milakis D (2018) Is Hyperloop helpful in relieving the environmental burden of long-distance travel? An explorative analysis for Europe. Proceedings of the International Conference on Transport, Climate Change and Clean Air, Gif-sur-Yvette. https://www.iip.kit.edu/downloads/WCTR_SIGf2_2018_BookOfAbstracts.pdf, accessed 1 July 2018
33. De Decker K (2013) High speed trains are killing the European railway network. Low-tech Magazine 15. http://www.lowtechmagazine.com/2013/12/high-speed-trains-are-killing-the-european-railway-network.html, accessed 31 October 2016
34. EC (2017) EU Transport in Figures. Statistical Pocketbook 2017. European Commission, Publications Office of the European Union, Luxembourg, p 164
35. Janić M (2007) The sustainability of air transportation: quantitative analysis and assessment. Ashgate Publishing Company, Farnham
36. EEC (2015) Standard Inputs for EUROCONTROL Cost-Benefit Analyses. Edition Number: 7.0, Edition Date: November 2015. European Organisation for the Safety of Air Navigation (EUROCONTROL), Brussels
37. UIC (2010) High speed rail: fast track to sustainability. International Union of Railways, Paris, France. http://www.hsr.ca.gov/docs/about/business_plans/BPlan_2012LibraryCh7FastTrack.pdf, accessed 1 April 2017
38. ICAO (2016) Carbon Emissions Calculator. https://www.icao.int/environmental-protection/CarbonOffset/Pages/default.aspx, accessed 1 April 2017
A previous version of this paper was presented at the BIVEC/GIBET Transport Research Days 2017 in Liege, Belgium.
Transport & Planning Department, Delft University of Technology, P.O. Box 5048, 2600 GA, Delft, the Netherlands: Kees van Goeverden, Dimitris Milakis & Milan Janic
OTB, Research for the Built Environment, Delft University of Technology, Julianalaan 134, 2628 BL, Delft, the Netherlands: Rob Konings
DM conceived the research concept and wrote the theoretical and historical background to the subject and the directions for future research. DM, MJ, KVG and RK designed the study. MJ described the HSR and APT systems and analysed the operational performance of HL. KVG analysed the financial performance of HL and contributed to the analysis of HL capacity. RK analysed the social and environmental performance of HL. All authors read and approved the final manuscript.
Correspondence to Kees van Goeverden.
van Goeverden, K., Milakis, D., Janic, M. et al. Analysis and modelling of performances of the HL (Hyperloop) transport system. Eur. Transp. Res. Rev. 10, 41 (2018).
https://doi.org/10.1186/s12544-018-0312-x
Received: 15 December 2017
Keywords: HL (Hyperloop) system; Long-distance transport
Spatial and temporal resolution of geographic information: an observation-based theory
Auriol Degbelo (ORCID: orcid.org/0000-0001-5087-8776) & Werner Kuhn
Open Geospatial Data, Software and Standards, volume 3, Article number: 12 (2018)
After a review of previous work on resolution in geographic information science (GIScience), this article presents a theory of spatial and temporal resolution of sensor observations. Resolution of single observations is computed based on the characteristics of the receptors involved in the observation process, and resolution of observation collections is assessed based on the portion of the study area (or study period) that has been observed by the observations in the collection. The theory is formalized using Haskell. The concepts suggested for the description of the resolution of observations and observation collections are turned into ontology design patterns, which can be used for the annotation of current observations with their spatial and temporal resolution.
Resolution is a key notion in the field of geographic information science (GIScience): it is critical in determining a data set's fitness for a given use (see [1]), and it influences the patterns that can be observed during an analysis process (see [2]). In addition, as Goodchild [3] pointed out, resolution determines the volume of data which is generated and therefore the processing costs and storage volume. Finally, resolution is necessarily present in any data collection process because the world is too complex to be studied in its full detail (see for instance [1] and [4]). The literature on geographic information science and related fields contains various definitions and understandings of resolution. In an attempt to provide conceptual clarity, Degbelo and Kuhn [5] discussed some of these notions, and presented a framework to reconcile various connotations of the term. The framework consists of definitions of resolution, proxy measures for resolution, and notions related to resolution. Definitions of resolution refer to possible ways of defining the term; proxy measures for resolution denote different measures that can be used to characterize resolution; and notions related to resolution denote notions closely related to resolution, but in fact different from it. Examples of notions related to resolution include scale, granularity and accuracy, while examples of proxy measures include the step size of a sensor and the mean spacing of samples. In line with [5], resolution is defined in this article as the amount of detail (or level of detail, or degree of detail) in a representation. Resolution applies to data (i.e., representations), whereas granularity applies to conceptual models (see [4, 5]). Resolution is only one of many components of scale, with other components including extent, grain, lag, support and cartographic ratio (see [6, 7]). The transition to the digital age and the rise of Volunteered Geographic Information (VGI, [8]) call for a rethinking of traditional criteria to describe the resolution of data in GIScience. The current work explores the idea of an observation-based characterization of resolution. At least four reasons motivate this. First, observations are key to the geo-sciences. For example, Frank [9] asserts that "all we know about the world is based on observation". Janowicz [10] indicates that observations have been proposed as the foundation of geo-ontologies.
Adams and Janowicz [11] point out that the geosciences rely on observations, models, and simulations to answer complex scientific questions such as the impact of global change. Stasch et al. [12] point out that observations form the basis of the empirical and physical sciences. The information-based ontological system outlined in [13] has, at its core, the notion of observation. Second, observations are a central concept of the digital age (which relies on information) and of VGI (where humans act as sensors to produce geographic information). Describing resolution in the era of VGI is a complex, underexplored, and important issue. As Goodchild [14] indicated, metrics of spatial resolution are strongly affected by the analog-to-digital transition. In addition (and as pointed out in [15]), mechanisms to describe the quality of human observations are needed, and are missing. Describing the quality of these observations, in turn, is important to effectively assess their suitability for a given task. Third, Frank [16] indicated that "Data quality research needs a quantitative, theory based approach. The theory must relate to the physical characteristics of the observation process, where the imperfections in the data originate" (emphasis added). An ontology design pattern for the spatial data quality of observations was proposed in [17]. Yet, more specific, observation-based treatments of how spatial data quality components (e.g., accuracy, resolution, completeness and lineage) may be accounted for are still needed. The ultimate usefulness of an observation-based characterization of resolution is the provision of a conceptual apparatus which helps to understand semantic differences with respect to resolution between two geographic datasets. Fourth, a full 'science of scale' (as envisioned in [18]) requires progress in the understanding of resolution. As described in [18], a science of scale needs to tackle five main issues: invariants of scale, the ability to change scale, measures of the impact of scale, scale as a parameter in process models, and the implementation of multiscale approaches. Since resolution is one component of scale, measuring the impact of scale on spatial analysis requires a better understanding of resolution. Frank [19] pointed out that scale (and hence resolution) is introduced into spatial datasets by observation processes. Therefore, a first step towards measuring the impact of resolution on spatial analysis is a greater understanding of how resolution is introduced in observation processes. The current article attempts to provide an answer to this question by elaborating an observation-based characterization of resolution. The main contributions of this article are as follows:
- a brief review of previous work on resolution in GIScience;
- a formal theory of spatial and temporal resolution of observations underlying geographic information. The theory has a dual importance: (i) at the theoretical level, it is to be taken as a small and necessary piece of the science of scale; (ii) at the practical level, the axioms of the theory (or parts of them) can be implemented, and serve the purposes of reasoning over datasets at different spatial and temporal resolutions;
- a critical analysis of existing criteria for the description of the spatial and temporal resolution of observation collections;
- ontology design patterns extending the SSN Ontology [20].
These ontology design patterns can be used for the annotation of observations and observation collections of the Sensor Web with their spatial and temporal resolution. Since GIScience has investigated the resolution of geographic information for many years, Section Resolution in GIScience: a brief review briefly reviews previous work, pointing out what is still missing. Ontology is used as a method to elaborate the theory, and Section Method introduces the different steps followed during the development. An observation-based theory of resolution for single observations is expounded in Section Spatial and temporal resolution of a single observation, and Section Spatial and temporal resolution of an observation collection discusses the resolution of observation collections. Since the spatial and temporal dimensions of geographic information are currently better understood than the thematic dimension, the theory focuses on these two as a first step (deferring a theory of observation resolution applicable to all three dimensions of geographic information to future work). Section Applications presents some examples of use of the ideas discussed. Section Comparison with previous work discusses the current work in relation to previous work. Section Limitations points at limitations before Section Conclusion and future work concludes the paper.
Resolution in GIScience: a brief review
Despite many discussions of the broader notion of 'scale' from different viewpoints (see for example a discussion from a hydrology perspective in [21], a discussion from a geostatistics perspective in [6, 22], and a discussion from a GIS perspective in [23]), discussions of the more specific notion of 'resolution' have been very few. Progress on resolution has been made over the years, but the ideas are scattered across articles. This section presents some of the previous work – in GIScience – addressing four areas: the optimum resolution, the influence of resolution on other variables, the integration of multi-resolution features and multi-resolution databases, and previous formal accounts of resolution. The optimum resolution: Lam and Quattrochi [24] commented on the issues of scale, resolution, and fractal analysis in the mapping sciences and pointed out one important research question in this context, namely 'what is the optimum resolution for a study or does an optimum really exist?'. On that subject, Marceau et al. [25] proposed and tested a method to identify the optimal resolution for a study. They concluded that (i) the concept of optimal spatial resolution is relevant and meaningful for the field of remote sensing, and (ii) there is a need to select the appropriate resolution in any study involving the manipulation of geographical data. Though the study was conducted in the field of remote sensing, its results are included in this literature review because they are relevant for GIScience. Influence of resolution on other variables: Gao [26] explored the correlations between spatial resolution and root mean square error (RMSE), spatial resolution and accuracy, as well as spatial resolution and mean gradient in the context of digital elevation models (DEMs). He concluded that (i) the RMSE of a gridded DEM increases linearly with its spatial resolution from 10 m to 60 m, (ii) the accuracy of representing a terrain with a gridded DEM decreases as the resolution decreases from 10 m to 60 m, and (iii) resolution has a minimal impact on mean gradient.
Deng et al. [27] used correlation and regression analysis to assess the effect of DEM resolution on calculated terrain attributes such as slope, plan curvature, profile curvature, north–south slope orientation, east–west slope orientation, and topographic wetness index. Their work indicated that terrain attributes respond to resolution change in different ways. Among the different terrain attributes studied, plan and profile curvatures were found to be the most sensitive attributes, and slope was the least sensitive attribute to changes in resolution. The findings are valid only for landscapes found in the Santa Monica Mountains. The experiments reported in [28] revealed that there is a logarithmic relationship between DEM resolution and mean slope. Jantz and Goetz [29] examined the ability of the urban land-use-change model SLEUTH (slope, land use, exclusion, urban extent, transportation, hillshade) to capture urban growth patterns across varying spatial resolutions (i.e., cell sizes). The authors reported that, during their experiments, the amount of growth that could be produced through spontaneous growth at a resolution of 360 m was more than five times the amount at a resolution of 45 m. That is, the resolution of the input data impacts the overall performance of an urban land-use-change model. A similar conclusion was reached by Kim [30], whose study indicated that variations in spatial and temporal resolution can generate substantial differences in the outcomes of a land-use change simulation. Pontius Jr and Cheuck [31] proposed a method which helps to examine the sensitivity of statistical results to changes in resolution. The method was designed to facilitate multi-resolution analysis during the comparison of maps that display a shared categorical variable. Csillag et al. [32] studied the impact of spatial resolution on the classification of areas into taxonomic attributes. 'Classification' here means that a measurement is made at a point in space and, based on the measurement value, one would like to assign a (predefined) class to the point at which the measurement is made. Csillag et al. [32] used two examples during their study: (i) vegetation is sampled at given locations and classified according to species and/or associations; (ii) soil properties are measured at given locations, and soil types are assigned to the locations based on the value of the measured property. They pointed out that changes in spatial resolution lead to changes in the accuracy of class identification, and concluded that there may not be a single best resolution for environmental data. Finally, Lechner and Rhodes [33] recently presented a review of the effects of spatial and thematic resolution on ecological analysis. They indicated that spatial resolution affects statistical analysis outcomes such as inferences about population mean, variation and statistical significance. In addition, changing spatial and thematic resolution affects the characterisation of landscapes and ecological analyses (e.g., measuring land cover proportions, landscape metrics and change detection). Lechner and Rhodes [33] also pointed out that spatial and thematic resolution not only affect ecological variables, but also mutually influence one another. Integration of multi-resolution features and multi-resolution databases: Du et al. [34] suggested an approach to check the directional consistency between representations of features at different resolutions.
Examples of direction relations include east (of), west (of), south (of), north (of), southeast (of), southwest (of), northeast (of), and northwest (of); directional consistency is evaluated by checking whether direction relations between pairs of spatial regions at different resolutions are similar. Balley et al. [35] proposed an approach to build a unified database from source databases, i.e., databases which contain the same features represented at different levels of spatial and thematic detail. Formalisms for resolution: A formal framework for multi-resolution spatial data handling was suggested in [36]. The framework has five main components: map, map space, granularity lattice, stratified map space, and sheaf of stratified map spaces. It can be used to assess the correctness of generalization algorithms and the integration of geometrically and semantically heterogeneous spatial datasets. Skogan [37] suggested another framework to deal with multi-resolution objects and multi-resolution databases. The framework consists of four components: the federated multi-resolution database management system, the resolution space, the multi-resolution type, and methods for aggregating resolution. Worboys [38] proposed a formal account of multi-resolution geographic spaces using ideas related to fuzzy logic and rough set theory. Other formalisms for resolution, focusing on sensor observations and processes, can be found in [16] and [39] respectively. Frank [16] suggested modelling (formally) the effect of resolution on the final sensor observation using a convolution with a Gaussian kernel. Weiser and Frank [39] proposed a formalism to represent multiple levels of detail (i.e., resolution) in discrete processes (e.g., a train ride). Finally, Bruegger [40] suggested a theory for the integration of spatial data presenting differences in spatial resolution and representation format (i.e., raster and vector). Summary: In sum, there is a need to select the appropriate resolution in any study involving the manipulation of geographical data. The literature has also documented correlations between resolution and other parameters (e.g., error, accuracy, slope). This stresses the importance of choosing the appropriate resolution, and of documenting the resolution at which inferences are drawn during an analysis. In addition, different formalisms have been suggested to model the resolution of geographic data. Yet, there is no observation-based theory of resolution. The aim of the next section is to outline one. The theory is proposed as an ontology, and this has two main benefits: (i) conceptual clarification, and (ii) implementability and processability by machines (when encoded in ontology languages such as the Web Ontology Language). The latter benefit (i.e., processability by machines) is one of the advantages of the new theory over previous formalisms for resolution (and what makes the theory applicable to the Sensor Web).
Method
The steps followed in this work involve a design stage and an implementation stage. In line with [41], the design stage includes the identification of a motivating scenario, the identification of terms useful to describe the resolution of datasets, and the formal specification of these terms. The design stage results in a logical theory. The implementation stage derives a computational artifact from the design stage, which can be used for practical tasks such as query disambiguation and query expansion.
Both the motivating scenario and the terms used to describe the observation process are presented in the following subsections. The terms used to describe the resolution of datasets and the computational artifacts derived from the design stage (i.e., ontology design patterns) are introduced in Sections Spatial and temporal resolution of a single observation and Spatial and temporal resolution of an observation collection. Applications of the ontology design patterns are discussed in Section Applications. In keeping with [42], different languages were used for the different phases of the theory development. Haskell was used for the design phase (presented in Sections Spatial and temporal resolution of a single observation and Spatial and temporal resolution of an observation collection), while the Web Ontology Language was used during the implementation phase (whose results are introduced in Section Applications). The use of different languages at different stages helps to better accommodate the requirements of each of the phases of ontology development. As Bittner et al. [43] put it: "[o]nce one has developed a highly expressive theory, less expressive logics with better computational properties can be used to implement certain portions of the full theory for specific purposes".
Motivating scenario
A collection of sensors has been deployed in a city to measure the concentration of carbon monoxide (CO) in the air. The concentration of CO is taken at different moments of the day, by different carbon monoxide analyzers (COAs) placed at different locations in the city. A group of scientists is interested in analyzing the quality of the air in the city. Using the Semantic Sensor Observation Service (SemSOS), the group is able to develop an application software which retrieves data generated by the COAs so that differences between sensors and observations regarding measurement procedures and measurement units are harmonized. The group is now interested in extending the semantic capabilities of the application so that the resolution of the observations is made explicit, and retrieval at different resolutions, with minimal human intervention, is made possible. In particular, the group would like to know the spatial and temporal resolution of one observation (Q1), and the spatial and temporal resolution of the observation collection produced by the COAs (Q2). Making an application software understand what 'resolution' of an observation (or an observation collection) means is only possible through a formal characterization of the concept. The scenario above presupposes the use of in-situ COAs, but remote COAs such as the MOPITT instrument introduced in [44] might also be used for data collection purposes. The theory proposed in this article takes into account both in-situ and remote sensors. For an introduction to SemSOS, see [45]. Q1 and Q2 are competency questions in the sense of [46].
Reuse of terms from existing observation ontologies
Observations have been analyzed from a variety of perspectives, yielding the observation ontologies in [20, 47–50]. The relevance of these analyses for the Sensor Web is at least twofold: to provide a shared conceptual basis for scientific discourse, and to provide (practical) means of representing observations generated by sensors in an information system. This section aims at selecting one of these observation ontologies as a starting point for the development of the ontology of resolution.
Three criteria are used to guide this choice:
- Remain neutral with respect to the distinction between field and object (C1): as mentioned in [51], the most widely accepted conceptual model for GIScience considers that geographic reality is represented either as fully definable entities (objects) or as smooth, continuous spatial variation (fields). An ontology of resolution which remains neutral to the distinction field vs object is therefore highly desirable, to ensure a wide applicability of the suggested terms in GIScience and the Sensor Web.
- Take into account humans as sensors (C2): Goodchild [8] defined Volunteered Geographic Information as the widespread engagement of private citizens in the creation of geographic information, and pointed out some valuable aspects of the information produced by volunteers: (i) the information can be timely; (ii) it is far cheaper than any alternative; and (iii) information produced by volunteers can tell about local activities in various geographic locations that go unnoticed by the world's media. Humans acting as sensors are at the heart of VGI; the ontology of resolution should therefore be developed using a notion of sensor encompassing both instruments and humans, to be usable for observations generated by humans as well as by technical devices. Only ontologies capable of processing both types of observations can help to take advantage of VGI's potential, namely, "the potential to be a significant source of geographers' understanding of the surface of the Earth" [8].
- Take into account observation as a result and observation as a process (C3): there have been two uses of 'observation' in the literature: observation as a process and observation as a result. An observation process is "an act associated with a discrete time instant or period through which a number, term or other symbol is assigned to a phenomenon" [52]. An observation result (or observation for short) is the outcome of an observation process. The ontology of resolution should be developed in such a way that justice is done to these two senses of 'observation'.
Table 1 presents the results of the application of these three criteria to the observation ontologies mentioned at the outset of this section. A detailed explanation of the results is provided in [53]. The table shows that the functional ontology of observation and measurement (or FOOM for short) is the only one which fulfills the three criteria outlined above. According to FOOM, four main entities are involved in the observation process: the particular (i.e., the entity to be observed), the stimulus (i.e., a detectable change in the environment), the observer or sensor (i.e., someone or something that provides a symbol for a property of the particular), and the observation result (i.e., a value). FOOM was formally specified using Haskell and aligned to the foundational ontology DOLCE (Descriptive Ontology for Linguistic and Cognitive Engineering, see [54]). For this reason, both Haskell and DOLCE are also used while extending FOOM with concepts of spatial and temporal resolution in the next sections. Figure 1 illustrates the observation process. Terms useful to specify the spatial and temporal resolution of sensor observations are highlighted in bold in the next sections.
Fig. 1 Observation process (reprinted from [55] with permission)
Table 1 Criteria C1, C2 and C3 applied to the observation ontologies
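Since FOOM was specified in Haskell, the four entities just listed can be illustrated with a minimal, self-contained Haskell sketch. The type names, constructors and dummy value below are illustrative assumptions and not the original FOOM specification.

```haskell
-- Illustrative sketch (assumed names, not the original FOOM specification):
-- the four entities involved in an observation process.
newtype Particular = Particular String -- the entity to be observed
newtype Stimulus   = Stimulus String   -- a detectable change in the environment
newtype Observer   = Observer String   -- someone or something providing a symbol
newtype Value      = Value Double      -- the observation result

-- The observation process maps an observer, a particular and a stimulus to a value.
observe :: Observer -> Particular -> Stimulus -> Value
observe _ _ _ = Value 0.4 -- dummy value, e.g. a CO concentration reported by an analyzer

main :: IO ()
main =
  let Value v = observe (Observer "GM901") (Particular "air in the city") (Stimulus "CO absorption")
  in print v
```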
The theories of observation-based resolution are expounded in this section. Section Spatial and temporal resolution of a single observation discusses the resolution of single observations, and Section Spatial and temporal resolution of an observation collection discusses observation collections.
Spatial and temporal resolution of a single observation
The first competency question (Q1, see Section Motivating scenario) is the focus of this section. As mentioned in Section Introduction, resolution is a property of a representation. On that account, two terms are introduced: spatial resolution and temporal resolution. The spatial resolution is the amount of spatial detail in an observation, and the temporal resolution is the amount of temporal detail in an observation. Previous work has proposed to model the spatial and temporal resolution of an observation using one of two approaches: a stimulus-centric approach and a property-centric approach. A stimulus-centric approach constrains the spatial/temporal resolution using the spatial/temporal extent of the stimulus participating in the observation process. It suffers from vagueness issues regarding the determination of the spatial extent of the stimulus, and strongly depends on one's adopted view (i.e., stimulus as a process or an event) for the determination of the temporal extent of the stimulus. A property-centric approach specifies resolution based on the spatial/temporal region over which the property of interest is considered homogeneous. It avoids vagueness issues, but needs to accommodate arbitrariness, since there might be various reasons for which a data provider considers the property of interest homogeneous for his/her data collection purposes. To cope with both issues, our work introduces a receptor-centric approach, where the spatial and temporal resolution of a single sensor observation are specified based on the physical properties of the observer. The three approaches are discussed in detail next.
The stimulus-centric approach
Stasch et al. [55] suggested constraining the spatial and temporal resolution of an observation by the spatial and temporal extent of the stimulus. A drawback of this approach is that there is no single way of defining the spatial and/or temporal extent of the stimulus involved in an observation process. For instance, in the case of a thermometer placed in a room of area 20 m2 and measuring the temperature, the stimulus is the heat flow of the amount of air in the room. It can be stated that the spatial extent of the stimulus is equal to the spatial footprint of the amount of air in the room (e.g., 20 m2), but there is no logical basis for preferring the value 20 m2 over smaller values of the amount of air in the room such as 15 m2, 10 m2 or 1 m2. In fact, every size of the amount of air in the room falling within the interval ]0, 20] has an equal right to be called the spatial extent of the stimulus participating in the observation process. Said another way, vagueness issues arise as to the determination of the spatial extent of the stimulus. As regards the temporal extent of the stimulus, its characterization is not straightforward because, as [48] pointed out, a detectable change can be viewed as a process (periodic or continuous) or an event (intermittent). The duration of the stimulus is therefore perspective-dependent.
The property-centric approach
Frank [16] indicates that a sensor always measures over an extended area and time (called ε), and reports a point-observation (i.e., an average value for an attribute) for this extended area and time.
The extended area or time was termed the support of the sensor. Frank [16] ascribes support to the sensor, but support has also been attributed in the literature to the observation. For instance, Atkinson and Tate [22] define support as "[t]he size, geometry, and orientation of the space on which the observation is defined" (emphasis added). Modelling support as an attribute of the observation rather than of the sensor is the standpoint adopted in this work, because ε need not be related to the characteristics of the sensing device. As Burrough and McDonnell ([56], page 101) pointed out, support is the technical name used in geostatistics for the area or volume of the physical sample on which the measurement is made. Measuring soil pH over a physical soil sample of 10 cm · 10 cm would imply a support of 100 cm2. That is, the support is determined independently of the sensor (i.e., the instrument measuring and reporting a value for the pH of the soil at a location). A general definition of support is "the largest time interval [T], area [L2] or volume [L3] for which the property of interest is considered homogeneous" [57]. The spatial resolution of an observation can be equated with its spatial support, and its temporal resolution with its temporal support. The downside of this approach is that no guidance is given regarding the way of estimating the area, volume or time interval for which the property of interest is considered homogeneous. The soil pH example above mentions only a size, but additional attributes such as shape and orientation are also defining characteristics of the support. Deciding whether the shape of the support should be rectangular, circular or irregular involves a certain degree of arbitrariness. Using support as a criterion to characterize the resolution of the observation therefore implies a certain degree of arbitrariness inherent in the resolution value. The next subsection attempts to improve this situation by proposing a method to characterize the resolution of the observation based on the physical characteristics of the observer.
The receptor-centric approach
As the previous two subsections show, existing criteria for observation resolution are wanting in some respects. Besides, Frank [16] pointed out in previous work that "quantitative descriptors of data quality must be justified by the properties of the observation process" (emphasis added). That is, in the context of resolution, quantitative descriptors should be traceable to the physical properties of the observation process. The introduction of a new criterion for observation resolution here aims at making progress towards fulfilling this desideratum. In line with [48], the observation process is conceptualized as consisting of four steps (the first two steps are required only once, to determine the observed phenomenon):
- Step 1: choose an observable;
- Step 2: find one or more stimuli that are causally linked to the observable;
- Step 3 (also called 'impression'): detect the stimuli, producing analog signals;
- Step 4 (also called 'expression'): convert the signals to observation values.
The entity which produces the analog signal upon detection of the stimulus (Step 3) is called here the receptor. Receptors are similar to the threshold devices introduced in [58], in that the production of the output (analog signal) does not happen immediately upon activation of the input (stimulus), but only after a short delay.
However (and contrary to [59]), receptors are not considered as the interface between the external world and the observer. In other words, receptors need not be located at the surface of the observer. It is suggested here to use the spatial region containing all the receptors stimulated during the observation process as the criterion to characterize the spatial resolution of the observation. The short delay required by the receptors to produce analog signals (upon detection of the stimulus) can be used as a criterion to specify the temporal resolution of the observation. Two new terms borrowed and adapted from neuroscience (see [60–62]) are also introduced at this point: the spatial receptive field (of the observer) and the temporal receptive window (of the observer). The spatial receptive field (SRF) is the spatial region of the observer which is stimulated during the observation process. This spatial region can be seen as two-dimensional (e.g., the palm of the hand) or three-dimensional (e.g., the whole hand) depending on the type of receptors participating in the observation process, hence the word 'field' in SRF to reflect this fact. The temporal receptive window (TRW) is the smallest interval of time required by the observer's receptors in order to produce analog signals. The definition of SRF above is compatible with the definition of receptive field in neuroscience as a "specific region of sensory space in which an appropriate stimulus can drive an electrical response in a sensory neuron" [60]. The definition of TRW paraphrases and generalizes to all sensor devices the definition proposed in [61, 62]. The spatial resolution of an observation can be approximated by the spatial receptive field of the observer, and its temporal resolution could be equated with the temporal receptive window of the observer participating in the observation process. There might be a chaining of different types of receptors in an observation process. In these cases, the relevant receptors for the computation of the spatial and temporal resolution are those that are stimulated by external stimuli. Figure 2 illustrates this point.
Fig. 2 Observer with several receptors. Note: only receptor R1 is relevant to the estimation of the spatial and temporal resolution of the observation because it is directly stimulated by external stimuli.
An example of an observation process where several receptors are chained is the hearing process as described in [93]. The process can be summarized as follows: eardrums (R1) collect sound waves and vibrate; after them, hair cells (R2) convert the mechanical vibrations to electrical signals. These electrical signals are then carried to the auditory cortex, i.e., the part of the brain involved in perceiving sound. In the auditory cortex, there are specialist neurons (R3) which specialize in different combinations of tone (e.g., some are sensitive to pure tones, such as those produced by a flute, and some to complex sounds like those made by a violin).
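The receptor-centric definitions of SRF and TRW lend themselves to a direct computational reading. The following is a minimal Haskell sketch under simplifying assumptions (summed receptor areas, ignoring overlaps; the smallest signal delay taken as the TRW); it is not the paper's original formal specification, and the example values anticipate the carbon monoxide analyzer discussed below.

```haskell
-- Illustrative sketch (assumed names, not the paper's original Haskell specification):
-- the receptors stimulated during an observation determine its resolution.
data Receptor = Receptor
  { receptorArea :: Double -- size of one receptor, e.g. in square centimetres
  , signalDelay  :: Double -- delay to produce an analog signal, e.g. in seconds
  }

-- SRF: region of the observer stimulated during the observation
-- (here simply the summed area of the stimulated receptors, ignoring overlaps).
srf :: [Receptor] -> Double
srf = sum . map receptorArea

-- TRW: smallest time needed by the stimulated receptors to produce analog signals.
trw :: [Receptor] -> Double
trw = minimum . map signalDelay

main :: IO ()
main = do
  let stimulated = [Receptor 707 5.0] -- e.g., the measuring probe of a carbon monoxide analyzer
  print (srf stimulated, trw stimulated)
```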
Finally, there are other neurons (R4) which can combine information from the specialist neurons to recognize a word or an instrument.
Examples of SRF and TRW for a single observation
With the approach introduced in Section The receptor-centric approach, the computation of the spatial and temporal resolution of a single sensor observation involves three steps:
Step 1: identify the type of receptor involved in the observation process;
Step 2: find the duration needed for the production of the analog signal upon detection of the stimulus (relevant to the estimation of the TRW);
Step 3: find the size of the receptors and the number of receptors stimulated during the observation process (relevant to the estimation of the SRF).
The approach hinges on the availability of information about the receptors which participate in an observation process. This information can be found in technical documentation (for sensor devices) and in research outcomes from the field of neuroscience (for human observers). The next paragraphs provide some examples of receptor, spatial receptive field and temporal receptive window for human and technical observers. As stated in Section The receptor-centric approach, the production of an observation involves two stages: impression and expression. Strictly speaking, the TRW is the time interval required for the impression operation. However, most information about sensors (or observations) currently available provides only hints about the duration of the whole observation process (i.e., impression + expression). More work will be needed in the future to tease the impression's duration and the expression's duration apart. For the time being, the examples of temporal receptive window that follow are based on the assumption that the time needed for the expression operation is negligible compared to the time needed for the impression operation. That is, for now, TRW is approximated using the duration of the whole observation process. EXAMPLE 1: A Carbon Monoxide Analyzer of type GM901 (see [63]) returns the concentration of carbon monoxide (Observation) in a gas. The receptor of this sensing device is the measuring probe. The spatial receptive field is equal to the size of the opening of the measuring probe, and the temporal receptive window is equal to the response time. The value of the temporal receptive window lies between 5 and 360 seconds. The diameter of the opening of the measuring probe varies between 300 and 500 millimeters, and this suggests a spatial receptive field between 707 and 1963 square centimeters. EXAMPLE 2: A digital camera returns an image (Observation), with a spatial receptive field equal to the size of the aperture and a temporal receptive window equal to the shutter speed. The aperture is "the size of the adjustable opening inside the lens, which determines how much light passes through the lens to strike the image sensor" [64], and the shutter speed is "the amount of time the digital camera's shutter remains open when capturing a photograph" [65]. The receptor of the camera is the image sensor, but the size of the aperture determines the actual portion of the image sensor that is stimulated during the production of an image. The shutter speed determines the duration of the image sensor's exposure to light. It is acknowledged here that resolution has been defined in the literature (for example, [66]) as a function of imaging aperture and the wavelength of the light. This is the optical resolution (which has sometimes also been called spatial resolution).
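To make Example 1 concrete, the spatial receptive field figures quoted above can be reproduced by treating the probe opening as a circle. The following Haskell fragment is a minimal sketch under that assumption; the function names are illustrative and are not part of the GM901 documentation:

-- Spatial receptive field of a circular probe opening, from its diameter.
probeSRFcm2 :: Double -> Double
probeSRFcm2 diameterMm = pi * r * r
  where r = diameterMm / 10 / 2   -- millimetres to centimetres, then radius

main :: IO ()
main = mapM_ (print . probeSRFcm2) [300, 500]  -- approx. 706.9 and 1963.5 cm^2

The documented response time (5 to 360 seconds) would be used directly as the temporal receptive window.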
Optical resolution is inversely correlated with aperture, and measures the shortest distance between two points on an image that can still be distinguished by the observer. In line with previous work [5, 67], the term 'discrimination' is reserved here for this shortest distinguishable distance (i.e., the optical resolution). Approximating spatial resolution (as defined in this work, see Section Introduction) with the spatial receptive field is intended to convey a different notion, namely the portion of the observer which was stimulated while producing the observation. It also reflects intuition, namely: the larger the aperture, the smaller the optical resolution (the smallest details the lens can resolve), and consequently the larger the amount of spatial detail in the final image (spatial resolution). Further examples of receptors for technical observers include thermistors (for medical digital thermometers), bulbs (for clinical mercury thermometers), telescopes (for laser altimeters), aneroid capsules (for pressure altimeters), and bulbs (for psychrometers), to name a few. EXAMPLE 3: A human observer reports on a scene at a temporal receptive window of about 14 milliseconds (ms) using the sentence 'there is an apple here' (Observation). The value of the TRW is assigned based on the results from [68], where the authors investigated the mechanisms involved in object recognition by monkeys' and humans' visual systems. Keysers et al. [68] studied visual responses to very rapid image sequences composed of "color photographs of faces, everyday objects familiar and unfamiliar to the subjects, and naturalistic images taken from image archives" and reported a rate of 14 ms per image for human perception and memory. EXAMPLE 4: The previous example is illustrative of the temporal receptive window of an observation sentence as defined in [59, 69], in that the observer assigns unreflectively, on the spot, a value to external stimuli. Lederman [70] indicates that, in the context of purposive exploration of the world, it typically takes 1 to 2 seconds to identify common objects such as a spoon. Therefore, the temporal receptive window for the observation 'spoon' in the context of a purposive exploration task using human hands (of blind subjects) varies between 1 and 2 seconds. The temporal receptive windows of observations produced by human observers will depend on the observer, the type of task, and the stimulus. EXAMPLE 5: The spatial receptive field of human observations is equal to the size of the surface stimulated during the observation process. This surface might be calculated using the product N · S, where N is the number of receptors which have participated in the observation process, and S is the size of one receptor (if the receptors overlap, the size of the overlap should be subtracted from the product). As a starting point for the computation, the knowledge presented in Table 2 can be used. More exact knowledge of the receptors which have participated in an observation process will become available as neuroscience evolves. For example, Krulwich [71] pointed out that it was only in 2002 that the existence of a fifth taste (umami) became widely accepted, in addition to the four admitted for many centuries (bitter, salty, sour, sweet). This fifth taste is detected by a specific type of receptor (receptors for L-glutamate on the tongue).
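Example 5 above amounts to a simple computation. The Haskell lines below sketch it; the function name, the overlap handling and the sample numbers are assumptions made for illustration only (actual receptor sizes would be taken from sources such as Table 2):

-- SRF of a human observation as N * S, minus any overlapping portions.
humanSRF :: Double -> Double -> Double -> Double
humanSRF n sizeOfReceptor overlap = n * sizeOfReceptor - overlap

main :: IO ()
main = print (humanSRF 1000 0.002 0)  -- made-up figures: 1000 receptors of 0.002 cm^2, no overlap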
Table 2 Examples of receptors for a human observer
As this section illustrates, a receptor-centric approach to characterize the spatial and temporal resolution of a sensor observation is applicable to both in-situ (e.g., tongue) and remote (e.g., eye) sensors, and to both human and technical observers. Information given about the Carbon Monoxide Analyzer of type GM901 (technical observer) was extracted from the technical documentation of the product (see [63]). Table 2 illustrates that the (neuroscience) literature is a useful source to gather the information necessary to estimate the spatial resolution of observations produced by human observers. [68], cited previously, is an example showing that the literature on neuroscience is also a useful source to collect information for the computation of the temporal resolution of observations generated by human observers.
Alignment to DOLCE: resolution of an observation
'Spatial resolution', 'temporal resolution', 'spatial receptive field', and 'temporal receptive window' as characteristics of an entity (the observation or the observer) correspond to the notion of quality in DOLCE. A quality can be defined as "any aspect of an entity (but not a part of it), which cannot exist without that entity" [72]. Spatial receptive field and temporal receptive window inhere in the observer, and are therefore physical qualities. Spatial receptive field and temporal receptive window are also examples of referential qualities, i.e., "qualities of an entity taken with reference to other entities" [73]. Both SRF and TRW are qualities of the observer taken with reference to the stimulus. Spatial resolution and temporal resolution inhere in the observation (i.e., a social object), and hence belong to DOLCE's class abstract quality. Finally, DOLCE proposes a general distinction between agentive physical objects (i.e., endurants with unity to which we ascribe intentions, beliefs and desires) and non-agentive physical objects (which are endurants that constitute these agentive physical objects). The receptor, being an element of the observer, is a non-agentive physical object.
Formal specification: resolution of a sensor observation
The case of a carbon monoxide analyzer (COA) of type GM901 reporting a value of the concentration of carbon monoxide (CO) (see Example 1, Section Examples of SRF and TRW for a single observation) is taken as the running example for the formal specification presented in this section. The section walks the reader through the definition of the concepts involved in observation, as well as a step-by-step account of how spatial and temporal resolution are introduced in observation processes. The specification of resolution presented next builds upon the specification for observations provided at https://git.io/f3TuI (last accessed: June 19, 2018), and described in [48]. Listing 1 introduces three relevant datatypes for the scenario: Magnitude (to represent the magnitude of a quality), Quale (the entity evoked in a cognitive agent's mind when observing a quality), and ObsValue (to represent observation values). For a detailed discussion of these notions, see [74]. The amount of air surrounding the COA is modelled as containing a certain amount (i.e., magnitude) of carbon monoxide. A receptor has an id, a size, a processing time for incoming stimuli and a certain role. The receptor involved in the observation of the CO concentration in the city is the measuring probe (see Section Examples of SRF and TRW for a single observation).
It has a size and a processing time set provisionally to 1500 cm2 and 60 seconds respectively, and the role of detecting CO molecules. The size of the receptor is set here to the size of the opening of the measuring probe (the opening of the measuring probe determines the actual portion of the measuring probe that is stimulated by external stimuli). The receptor's role is modelled here as a description in natural language. An observer has an id and a number of receptors of a certain type. It carries a quale and an observation value. The measurement unit used below for observation values is "ppm" standing for parts per million. For simplicity, it is assumed here that all receptors (with a similar function) have the same size, and there is no malfunction during the observation process (i.e., either all the receptors detecting the stimulus are stimulated or none of them). The assumption that all receptors have the same size is in line with Quine [59] who states: "The subject's sensory receptors are fixed in position, limited in number, and substantially alike". A COA has one measuring probe. Listing 2 presents the alignment of the terms 'observer' and 'receptor' to DOLCE. During the perception of the observed quality (i.e., the carbon monoxide of the amount of air), the observer produces a quale. The perception of the observed quality involves inherently a loss of spatial and temporal detail, and this leads to a spatial and temporal resolution for the quale. The spatial resolution of the quale is modelled in the current work as being equal to the spatial receptive field of the observer involved in the perception operation. The temporal resolution of the quale is equal to the temporal receptive window of the observer which participated in the perception of the observed quality. The function magnitudeToQuale establishes a mapping from a certain magnitude to the corresponding quale, and more details about it are provided below. Based on the quale, the observer produces an observation value. The function qualeToMeasure introduced below establishes a mapping between a quale and an observation value (resulting from a measurement process). The spatial resolution and the temporal resolution of the observation value are now equated with the spatial resolution and temporal resolution of the quale respectivelyFootnote 1. Spatial receptive field is now specified as the size of the spatial region containing all receptors stimulated during the observation process. Temporal receptive window is the processing time of the receptors stimulated during the observation process. The last stage of this formal specification is the definition of the functions magnitudeToQuale and qualeToMeasure. These two functions are introduced to reflect the idea (already present in [74]) that an observation process is the approximation of the absolute magnitude of a certain quality. Probst [74] indicated two types of approximations: qualia approximate absolute magnitude (this happens during the perception or impression process), and observation values approximate qualia (this happens during the expression process). As a general requirement, the composition of magnitudeToQuale and qualeToMeasure is a monotonic function. In the context of the current scenario, these two functions will be given a simple definition, assuming an approximation factor of the magnitude amounting to 0.9 during the mapping magnitudeToQuale, and another approximation factor of 0.9 during the mapping qualeToMeasure. 
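The listings themselves are not reproduced here; the fragment below is only a compressed, illustrative sketch of the ideas just described, not the authors' actual Listing 1 or 2. The datatype and field names are assumptions, and the 0.9 approximation factors are those stated above:

type Magnitude = Double
type Quale     = Double
data ObsValue  = ObsValue { value :: Double, unit :: String } deriving Show

data Receptor = Receptor
  { rId       :: String
  , rSize     :: Double   -- cm^2
  , rProcTime :: Double   -- seconds
  , rRole     :: String
  } deriving Show

data Observer = Observer { oId :: String, receptors :: [Receptor] } deriving Show

-- Impression: qualia approximate magnitudes (approximation factor 0.9 assumed).
magnitudeToQuale :: Magnitude -> Quale
magnitudeToQuale m = 0.9 * m

-- Expression: observation values approximate qualia (approximation factor 0.9 assumed).
qualeToMeasure :: Quale -> ObsValue
qualeToMeasure q = ObsValue (0.9 * q) "ppm"

-- SRF: size of the region containing the stimulated receptors (no overlap assumed);
-- TRW: processing time of the stimulated receptors (a single probe in this scenario).
srf, trw :: Observer -> Double
srf o = sum     (map rSize     (receptors o))
trw o = maximum (map rProcTime (receptors o))

main :: IO ()
main = do
  let probe = Receptor "probe-1" 1500 60 "detect CO molecules"
      coa   = Observer "GM901" [probe]
  print (qualeToMeasure (magnitudeToQuale 10.0))  -- a magnitude of 10 maps to a value of 8.1 ppm
  print (srf coa, trw coa)                        -- (1500.0,60.0)

Note that the composition of the two mappings remains monotonic, as required above.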
The Haskell specification presented above is available at https://doi.org/10.5281/zenodo.1293285. As argued in Winter and Nittel [75], a running Haskell specification guarantees the consistency (i.e., internal consistency between the concepts in the specification), correctness (i.e., the developer has said what he intended to say), and completeness (i.e., appropriate coverage of questions within a domain) of the specification. The theory expounded above is thus consistent, correct, and complete with respect to the question Q1 (how to specify the resolution of an observation using the characteristics of the observed entity, the stimulus, and the sensor?). All concepts suggested to model the resolution of single observations have also been implemented as an ontology design pattern (ODP) in the Web Ontology Language (OWL). The ODP is an extension to the SSN Ontology [20] and offers concepts needed to annotate single observations with their resolution. The ODP for the resolution of single observations is shown in Fig. 3, and can be downloaded at: https://doi.org/10.5281/zenodo.1293285.
ODP for the resolution of a single observation (Fig. 3)
Spatial and temporal resolution of an observation collection
The second competency question (Q2, see Section Motivating scenario) is put under scrutiny in this section. 'Spatial resolution' and 'temporal resolution' denote the amount of spatial detail in the observation collection and the amount of temporal detail in the observation collection, respectively. An observation collection is a collection of single observations (or 'observations' for short). Wood and Galton [76] presented a review of existing ontologies (including DOLCE and the Basic Formal Ontology) for the representation of collectives ('collective' from [76] is equivalent to 'collection' in this article), and proposed a taxonomy allowing the classification of around 1800 distinct types of collectives. Adapting their reflections to the specific case of collections of observations leads to the following statements:
An observation collection is a concrete particular, not a type, nor an abstract entity;
An observation collection is a continuant, that is, it is to be thought of as enduring over a period of time, existing as a whole at each moment during that period, and possibly undergoing various types of change over that period;
An observation collection has multiple observations (and only observations) as members.
In line with [77], the member-collection relationship is a more specific kind of part-of relation. Winston et al. [77] also point out that membership in a collection is determined based on one of two factors: spatial proximity or social connection. As regards observation collections, membership in an observation collection is determined based on social connection (not spatial proximity). In addition, the current work adopts the standpoint that an entity is either a single observation or an observation collection. It cannot be both. Put differently, an observation collection has n members, where n is a natural number greater than one. A would-be observation collection with only one observation is simply a single observation. In that sense, one remote sensing image is not an observation collection, but two consecutive pictures of an area (are already enough to) form an observation collection. Observation collections and observations are social objects in the sense of the DOLCE Ultra Light (DUL) upper ontology (Footnote 2).
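The membership constraint just stated (n > 1) can be captured with a smart constructor. The following Haskell lines are a minimal sketch with assumed names, not part of the published specification:

data Observation   = Observation   { obsId   :: String }        deriving Show
data ObsCollection = ObsCollection { members :: [Observation] } deriving Show

-- An observation collection needs at least two member observations;
-- a would-be collection of one is just a single observation.
mkCollection :: [Observation] -> Either String ObsCollection
mkCollection obs
  | length obs > 1 = Right (ObsCollection obs)
  | otherwise      = Left "an observation collection has n > 1 members"

main :: IO ()
main = do
  print (mkCollection [Observation "img-1"])                       -- Left ...
  print (mkCollection [Observation "img-1", Observation "img-2"])  -- Right ...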
There is however one important difference between the two which relates to their process of generation: an observation is generated by observing the physical realityFootnote 3; an observation collection is produced by gathering other social objects (i.e., observations). In terms of DUL, an observation collection can be viewed as a DUL:Configuration ('A collection whose members are organized according to a certain schema that can be represented by a Description') while an observation may be regarded as a DUL:Situation ('A relational context created by an observer on the basis of a Description'). Figure 4 shows four examples of observation collections. Two criteria suggested in previous work - spacing and coverage - can be used to characterize the spatial resolution of the observation collections. These two criteria are critically discussed in the next two paragraphs. The arguments brought forward for the spatial resolution hold, mutatis mutandis, for the temporal resolution of the observation collections. Some examples of observation collections. Note: Each point on the figure represents the spatial location of a single observation, the dotted box in black represents the spatial extent of the study area for which the observations have been generated. a shows two collections with different amount of spatial detail, but similar total spacing; b shows two collections with distinct amount of spatial detail, but similar mean spacing. The observed study area (i.e., the portion of the study area that has been observed) reflects differences in amount of spatial detail where both total- and mean spacing fail Spacing: Goodchild and Proctor [1] mentioned the spacing of the points (i.e., observations) as a criterion to characterize the spatial resolution of observation collections. The estimation of spacing necessitates some information about the spatial location of each observation. Spacing can be calculated in (at least) four ways: the maximum spacing, the minimum spacing, the total spacing and the mean spacing. All four have some disadvantages. For example, the maximum spacing and the minimum spacing say nothing about how the observation collection is spatially detailed. They rather tell that, within the current observation collection, the closest locations are within a distance equal to the minimum spacing, the farthest within a distance equal to the maximum spacing. Regarding the total spacing, one disadvantage is the need to specify a spatial ordering for the observation collection. As discussed in [53], this choice might involve some arbitrariness. The ultimate implication of the use of total spacing as criterion, is that a decision-maker will be provided with different values of spatial resolution for an observation collection, with no means to decide which one to choose for his or her purpose. In addition, there are cases such as the one from Fig. 4a where the total spacing fails to capture the fact that two observation collections have different amounts of spatial detail. It is indeed arguable that (under the assumption that the size of the points is negligible) the two observation collections from Fig. 4a have the same spacing S. The use of the mean spacing has the advantage that it is no longer necessary to define what observation is the first, and what is the next. However, a serious drawback of this criterion is that, when applied to the observation collections from Fig. 4b, it gives the same value. In other words, this criterion fails to capture the fact, as far as Fig. 
4b is concerned, the observation collection on the right is spatially more detailed than the observation collection on the left. Coverage: coverage, proposed in [78], is another criterion that can be used to characterize the spatial resolution of observation collections. The value C of this criterion for the observation collections presented in Fig. 4 is: $$C =\frac{N \cdot A}{E}$$ where N is the number of observations, A is the area covered by each observation, and E is the extent of the study area. This criterion will yield different values for the spatial resolutions of the observation collections from Fig. 4a-b, capturing the fact that these observation collections have different amounts of spatial detail. There is also no need to face the arbitrariness which comes with the specification of a spatial ordering for an observation collection, and C gives an immediate impression of the portion of the study area which has been observed. A drawback of this criterion is that it leads to a dimensionless value, and this fails to account for the intuition (reflected in expressions such as '10 meters resolution', '20 meters resolution', and so on) that resolution is a property to which humans associate a dimension of length. As the previous paragraphs show, spacing and coverage as criteria to characterize the spatial/temporal resolution of observation collections are wanting in some respects. As a general requirement for the Sensor Web and GIScience, proxy measures for the spatial/temporal resolution of observation collections should: (i) avoid the arbitrariness that ensues from the necessity to define a spatial ordering for the observation collection; (ii) have the dimension of length/time (Footnote 4); and (iii) mirror the fact that a perfect sampling strategy covers the whole study area/period. This motivates the introduction of the following two terms for the description of the resolution of observation collections: observed study area and observed study period. The observed study area is the portion of the study area that has been observed. The observed study period is the portion of the study period that has been observed. The study area is the spatial extent of the analysis, and the study period is the temporal extent of the analysis.
Observed study area and observed study period
The observed study area of an observation collection can be obtained by summing up the observed areas of each of the observations of the collection. The observed area of an observation is the spatial region of the phenomenon of interest that has been observed. Let RSum be defined after [79] as the sum of two spatial regions. RSum is similar to the operator union used in set theory, in that the RSum of two regions A and B is a region C such that all the elements belonging to C belong either to A, to B, or to both A and B. The RSum of two regions is itself a region (see [79] for the formalization). The following equation holds: $$ObservedStudyArea = RSum_{i=1}^{n} \left[a_{i}\right] $$ where \(a_{i}\) denotes the observed area of each observation, and n is the number of observations in the observation collection. Likewise, the observed study period of an observation collection can be obtained by summing up the observed periods of each of the observations in the observation collection. The observed period of a single observation is the temporal region of the phenomenon of interest that has been observed.
$$ObservedStudyPeriod = RSum_{i=1}^{n} \left[w_{i}\right] $$ where \(w_{i}\) designates the observed period of each observation, and n is the number of observations in the observation collection.
Modelling the resolution of an observation collection
The spatial resolution and temporal resolution of the observation collection (Q2) can be equated with the observed study area and the observed study period, respectively. The observed study area provides the decision-maker with a value which reflects how much of the study area has effectively been observed (or sampled). Its value is independent of the ordering of the observations, and also independent of the type of sampling strategy (i.e., regular vs irregular). The observed areas of the individual observations in the collection need not be alike (some might be greater or smaller than others). The observed study area has a dimension of length squared, but a linear measure can be obtained by taking the square root. For a given study area, the equation \(ObservedStudyArea = RSum_{i=1}^{n} [a_{i}] \) will approximate the study area if n tends to infinity (and under the sufficient condition that the \(a_{i}\) are disjoint). The observed study area and the observed study period are more suitable than spacing and coverage to characterize the spatial and temporal resolution of observation collections. They fulfill the three requirements for proxy measures for resolution listed above, thereby addressing shortcomings of criteria suggested in previous work. In addition, decision-makers are free to compute the proportion of the study area/study period that has effectively been observed through the ratios \(\frac {ObservedStudyArea}{StudyArea}\) or \(\frac {ObservedStudyPeriod}{StudyPeriod}\). If the observed area is defined as the spatial receptive field, and the observed period as the temporal receptive window, a computation of the observed study area and the observed study period based on the receptors involved in the observation process becomes possible. An example of computation of observed study area and observed study period based on the spatial receptive field and the temporal receptive window is provided in [53] (Chapter 5). Specifying observed area and observed period based on the spatial receptive field and the temporal receptive window, respectively, leads to a definition of the observed study area and the observed study period based on the properties of the observation process, as Frank's [16] desideratum (see Section The receptor-centric approach) expressed. In the absence of information about the spatial receptive field and temporal receptive window, the spatial and temporal supports may be used as criteria to characterize the observed area and observed period and to compute the observed study area and observed study period (though one should be aware of the drawbacks of support discussed in Section The property-centric approach). A minor drawback of these two criteria is that their full significance unfolds only when the extent of the whole study area/period is known. For example, stating that the observed study period of an observation collection is one hour says nothing about the actual quality of the observation collection, unless the whole temporal extent under consideration (e.g., one day or one month) is also made explicit. The extent of the study area/period will also be required for a meaningful comparison of two observation collections with respect to their spatial and temporal resolution. Even so, this drawback is not intrinsic to the criteria suggested.
It is rather a consequence of the general fact that values need some context if their significance is to be assessed. Another minor drawback of these two criteria is that they are new, and thus yet to be adopted by the practice of metadata documentation. However, the lack of criteria in the literature which fulfill the three characteristics mentioned in Section Spatial and temporal resolution of an observation collection suggests that the Sensor Web and GIScience should come up with new criteria to describe the resolution of observation collections (rather than conforming to existing ones).
Alignment to DOLCE: resolution of an observation collection
In line with [80], an 'observation collection' is viewed as a social object. A social object is an object that exists only within a process of social communication, in which at least one PhysicalObject participates. 'Spatial resolution', 'temporal resolution', 'observed study area', 'observed study period', 'observed area' and 'observed period' are all qualities that inhere in a social object, and therefore abstract qualities. Figure 5 shows the ODP for the resolution of observation collections, which summarizes all terms introduced in this section. The ODP can be downloaded at https://doi.org/10.5281/zenodo.1293285.
ODP for the resolution of an observation collection (Fig. 5)
Applications
This section presents the practical usefulness of some of the ideas presented in this work. As discussed in [41], practical usefulness is an evaluation criterion of the implementation stage of ontology development, and is demonstrated through one or more applications which use the ontology. The ontology design patterns introduced earlier in Sections Formal specification: resolution of a sensor observation and Alignment to DOLCE: resolution of an observation collection are particularly relevant in this context. As the name suggests, they are relevant for the design stage of ontology development. If, in addition, they are encoded in an ontology implementation language (e.g., OWL), they become useful for practical tasks (e.g., information retrieval). In short, ODPs act here as a bridge between the design stage and the implementation stage during ontology development, and provide a nexus between the theoretical investigations and their practical complements. Section Resolution of single observations: Retrieval of Flickr data at a certain temporal resolution shows how the ODP for the resolution of single sensor observations can be used to annotate and retrieve Flickr data with their temporal resolution. The purpose of the section is to illustrate how information from a real dataset could be accommodated through the ODP. Section Resolution of single observations: Expressing resolution qualitatively demonstrates how translation rules in SPARQL can be specified to account for qualitative values of resolution in the context of query expansion. The two sections thus illustrate the use of the ODP for information retrieval and query expansion, respectively. Since the principles are similar, information retrieval and query expansion using the ODP for observation collections are not presented further. Instead, Section Resolution of observation collections: Cross-comparison of average values for air quality in Europe focuses on demonstrating the practical usefulness of the concepts of observed study area/period for policy making (and in particular for the cross-comparison of average values for air quality in Europe). The implementation described here was done using the Java Programming Language, with Eclipse as the development tool.
The software code can be accessed at https://doi.org/10.5281/zenodo.1293285.
Resolution of single observations: Retrieval of Flickr data at a certain temporal resolution
This subsection illustrates how the ODP, which was presented above to characterize the resolution of single observations, can be used to retrieve Flickr data satisfying some (temporal) resolution constraints. Flickr is an online platform for the sharing of photographs. Flickr photographs are associated with a great variety of themes, but they can be organized into albums or galleries with a limited thematic scope. The Lava shots gallery (Footnote 5), for example, groups photos capturing "volcanic activity and areas, featuring Sicily's Mt. Etna and Hawaii's national parks". The ODP for the resolution of single observations can be used to annotate and infer the temporal resolutions of these images, based on the physical properties (i.e., the shutter speeds) of the cameras which produced them. Figure 6 shows the ids of the photographs from the Lava shots gallery which have a temporal resolution of 0.4 seconds or less.
Photographs of the Lava shots gallery (Flickr) with a temporal resolution less than or equal to 0.4 seconds (Fig. 6)
The different steps followed to get the results displayed are:
Step 1: Retrieve the pictures contained in the Lava shots gallery using the method flickr.galleries.getPhotos from the Flickr API;
Step 2: Get the Exif (Exchangeable Image File Format) data about each picture, as well as the shutter speed (if available) of the camera which produced the picture, through the flickr.photos.getExif method of the Flickr API;
Step 3: Populate the ODP with pictures (for which the shutter speed has been explicitly documented) using the OWL API [81, 82];
Step 4: Infer the temporal resolution of these pictures using the Pellet Reasoner [83, 84];
Step 5: Retrieve pictures at a given temporal resolution using SPARQL.
Resolution of single observations: Expressing resolution qualitatively
The examples introduced so far in this article have given only quantitative values for the spatial and temporal resolution of observations (or observation collections). Even so, spatial and temporal resolution can also be expressed qualitatively. One could envision the following information needs where resolution is expressed qualitatively:
retrieve all the remote sensing imageries (observation) in the knowledge base which have a high spatial resolution;
return the census data (observation collection) from last year, at the county level;
provide daily data (observation collection) about the level of the Danube river;
retrieve the air quality observations in the database which have a low temporal resolution.
To account for such queries, one must specify translation rules establishing correspondences between quantitative and qualitative values of resolution. As an example illustrating how the translation could be done, Listing 3 presents a SPARQL query to retrieve the Flickr photographs from the Lava shots gallery with both their qualitative and quantitative temporal resolution. The translation rule is specified in the query through "BIND(IF(?quantitativeTres <= 0.4, 'high', 'low') AS ?qualitativeTres)", which states that pictures with a temporal resolution less than or equal to 0.4 seconds have a 'high' temporal resolution, and those with a temporal resolution greater than 0.4 seconds have a 'low' temporal resolution. Figure 7 displays the results of the query.
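The same threshold rule can also be stated outside SPARQL. The following Haskell lines are a minimal sketch of the correspondence assumed in Listing 3; the function name is illustrative, and the 0.4-second threshold is the one used in the query:

-- Map a quantitative temporal resolution (in seconds) to a qualitative label.
qualitativeTres :: Double -> String
qualitativeTres t
  | t <= 0.4  = "high"
  | otherwise = "low"

main :: IO ()
main = mapM_ (putStrLn . qualitativeTres) [1 / 125, 0.5]  -- "high", then "low"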
Results of Q3 (Fig. 7)
Resolution of observation collections: Cross-comparison of average values for air quality in Europe
In 2008, the European Commission introduced the Directive 2008/50/EC on ambient air quality and cleaner air for Europe. The following quote is taken from this directive: "In order to ensure that the information collected on air pollution is sufficiently representative and comparable across the Community, it is important that standardised measurement techniques and common criteria for the number and location of measuring stations are used for the assessment of ambient air quality" [85]. It is argued here that the observed study area and the observed study period of observation collections should be taken into consideration if average values are to be "sufficiently representative and comparable across the Community", as Directive 2008/50/EC requires. To give an example, Table 3 shows three European Member States with their respective numbers of monitoring stations measuring ozone levels. The numbers of monitoring stations are taken from [86], a recent report on air pollution by ozone across Europe. It is assumed, for the purposes of the illustration, that each of the monitoring stations in these countries has an observed area of 100 m2 (the report did not provide information to derive the observed area, and the validity of the arguments presented in the next paragraph is not influenced by the value of the observed areas chosen: 100 m2 or others).
Table 3 Number of monitoring stations for the ozone level in three European countries
Only average values from France and Germany over an observed study area of 8,300 m2 can be used for a consistent comparison of the average ozone levels in France, Germany and the United Kingdom. Likewise, only average values from France over an observed study area of 26,000 m2 are pertinent for an adequate comparison of average ozone levels in France and Germany. The report presented in [86] remained silent about this aspect. For instance, the occurrence of exceedances in each European country (henceforth called 'occurrences per country') was defined as "the average number of exceedances observed per station in a country" (emphasis added), and the report informed about the occurrences per country (see page 11 of the report). The occurrences per country were later summed up and averaged, to give an average value of occurrences in Europe of 1.5 (and this without mention of the spatial areas for which the occurrences per country are valid). This approach bears the risk of producing meaningless results. Indeed, average values over 83 stations cannot be compared with average values over 260 stations, in the same way as average values over a day cannot be compared with average values over a month (observed areas and observed periods being equal). A similar observed study area or observed study period is a prerequisite for an appropriate comparison of average values of observations belonging to observation collections. In the absence of this information, the meaningfulness of the values provided in the report for a cross-comparison of occurrences per country in Europe may be questioned. In sum, observed study area and observed study period should always be documented when manipulating average values. The general rule that a comparison of average values requires similar observed study areas/periods is an axiom, which can be used to check the consistency of information stored in (sensor web or) geographic information systems.
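That consistency check is easy to mechanize. The Haskell sketch below uses the station counts mentioned in the text (83 for the United Kingdom, 260 for Germany) and the article's working assumption of 100 m2 of observed area per station; all function names are illustrative:

-- Observed study area of a collection of identical stations (disjoint areas assumed).
observedStudyArea :: Int -> Double -> Double
observedStudyArea nStations areaPerStation = fromIntegral nStations * areaPerStation

-- Axiom sketch: average values are only comparable over equal observed study areas
-- (exact equality is used here for simplicity).
comparableAverages :: Double -> Double -> Bool
comparableAverages = (==)

main :: IO ()
main = do
  let uk      = observedStudyArea  83 100   --  8,300 m^2
      germany = observedStudyArea 260 100   -- 26,000 m^2
  print (uk, germany)
  print (comparableAverages uk germany)     -- False: the averages are not directly comparable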
This work has provided the basis for assessing the observed study area and observed study period of observation collections. Both criteria are derived from the observed areas and observed periods, respectively (see Section Spatial and temporal resolution of an observation collection). The observed areas and observed periods can be estimated using the spatial receptive fields and temporal receptive windows of the observers - in this case the monitoring stations in Europe - which produced the observations.
Comparison with previous work
With regard to previous work, the discussion from [16] is most closely related to the work presented here. The main difference between the two lies in the nature of the investigation: Frank [16] essentially discussed the effects of observations' limited resolution on the size of the objects that could be formed based on these observations; this work analyzed the relationship between the characteristics of entities participating in an observation process and the resolution of the final observations. Table 4 recapitulates the similarities and differences between the two works ('previous work' denotes the work done in [16] and 'current work' refers to this article).
Table 4 Comparison with previous work
In addition, the work has shown that a receptor-centric approach is applicable to both technical and human sensors. The receptor-based approach thus looks like a promising way to cope with resolution in the VGI age. Since none of the previous formalisms reviewed in Section Resolution in GIScience: a brief review has explicitly considered VGI as a possible use case, the work contributes to advancing the state of the art regarding that aspect. It is worth mentioning that there are two cases of VGI to which the theory suggested would apply, namely: humans going around using sensors to collect values (e.g., about noise) and reporting them; and humans directly reporting qualitative values about the environment (e.g., saying via Twitter: "there is an apple here" or "it's now very cold in Las Vegas"). In the first case, the resolution of the VGI can be traced back to the properties of the instruments used by people during the data collection activity. As to the second case, the work has proposed that the resolution of VGI should be traced back to the properties of human sensing (which are unveiled by research from neuroscience). In both cases, resolution could be specified using the spatial/temporal receptive field, or the observed study area/period, depending on whether one is talking about one VGI observation or a collection of VGI observations. A case of VGI not yet supported by the theory is one where a group of people collaboratively creates a map of an affected region after a disaster (as in, e.g., [87]). The rationale for this is that the scope of applicability of the work is limited to tiers 0 and 1 of [9, 88] (see Table 4). For this latter case of VGI, approaching the resolution problem of VGI as the specification of the resolution of a vector dataset seems logical (though specifying the resolution of vector data in the digital age is still an open issue, see [23]). The motivation for this work has been to explore whether a specification of resolution based on physical properties of the observation process is feasible at all (given the current dearth of approaches looking at this). As mentioned earlier, "data quality research needs a quantitative, theory-based approach" [16].
It appears that spatial receptive fields and temporal receptive windows are good candidates for such a quantitative, theory-based approach with respect to the spatial data quality criterion 'resolution'. The receptor-based approach can also cope with both technical and human observers, and this is a major advance compared to current approaches to model resolution, which mainly address technical sensors (Footnote 6). Some examples were provided to illustrate the practical usefulness of the theory in Section Applications. Nonetheless, much still needs to be done to make it readily usable to annotate datasets with their resolution. There are two main obstacles (already mentioned, but briefly recapped here): documentation practice (having the sensor industry provide metadata that can help compute both the spatial receptive field and the temporal receptive window), and the pace at which neuroscience advances the understanding of the human observation process. Both obstacles are acknowledged here, but it is also argued that they are not insurmountable on the road towards more quantitative, objective approaches to observation resolution. The merit of this work has been to lay down a foundation upon which future work can build, as better knowledge about human sensing processes and better sensor documentation practices become available. Finally, geographic information has three components: space, time and theme (or attribute). Though the interdependence of these three dimensions is acknowledged, the work has deliberately chosen to focus on the spatial and temporal dimensions of geographic information. The main rationale for this is that space and time are more specific and better-understood cases of the (more varied) attribute dimension. It appeared reasonable to start with these two to make the investigation more manageable, as theories pertaining to the thematic dimension have proven challenging to establish. For instance, though spatial and temporal reference systems have been around (and used) for years, the semantic reference systems suggested 15 years ago [89] are yet to be produced. The ideas proposed in this article can be used as a starting point for the formulation of a more general, receptor-based theory of resolution which applies to all three dimensions of geographic information. Regarding thematic resolution, Veregin [90] suggested a distinction between two types: thematic resolution for quantitative data and thematic resolution for categorical data. The former refers to the degree to which small differences in the quantitative attribute can be discerned (e.g., 10.03 mA and 10.0251 mA (Footnote 7) indicate two different thematic resolutions for an observation reporting about the amount of electric current in an electrical circuit); the latter denotes the fineness of category definition (e.g., a classification of entities as being either 'anthropogenic' or 'natural', as opposed to a classification of the same entities as belonging to the classes 'Agriculture', 'Grass and Riparian and Dense Urban vegetation', 'Desert' or 'Urban' (Footnote 8)). The best setting for reuse of the ideas presented in this work is a theory of thematic resolution of quantitative data. In particular, interesting questions to investigate are the extent to which a receptor-based approach is applicable to the thematic resolution of observations, and the interplay between the thematic resolution of an observation (say, an image) and the discrimination of the sensor (e.g., a satellite) which produced the observation.
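One possible, and admittedly simplified, reading of Veregin's thematic resolution for quantitative data is the smallest difference discernible from the number of decimal places in the reported value. The Haskell lines below sketch that reading; the interpretation and the function name are assumptions made here, not part of [90]:

-- Smallest discernible step implied by the decimal places of a reported value.
smallestDiscernibleStep :: String -> Double
smallestDiscernibleStep reported =
  case break (== '.') reported of
    (_, [])         -> 1                                           -- no decimal part reported
    (_, _:decimals) -> 10 ** negate (fromIntegral (length decimals))

main :: IO ()
main = mapM_ (print . smallestDiscernibleStep) ["10.03", "10.0251"]  -- 1.0e-2 and 1.0e-4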
These questions have not been discussed in this work and could be taken up by future studies.
Conclusion and future work
Resolution is one of several components of scale, and a science of scale requires progress in its understanding. Observations are central to Geographic Information Science (GIScience) because "all we know about the world is based on observation" [9]. Previous work has proposed different formalisms for the resolution of geographic data, yet offered no observation-based theory of resolution. This paper has expounded one, and suggested modelling the resolution of observations based on the receptors of the observer which participated in the observation process. The theory was specified using Haskell, and the different concepts suggested were implemented as an Ontology Design Pattern, which can be used while annotating sensor observations with their resolution. The article also discussed criteria proposed in the literature to characterize the resolution of observation collections, pointed out their limitations, and suggested that the resolution of observation collections may be better described by the observed study area and the observed study period. Both the transition to the digital age and the rise of Volunteered Geographic Information (VGI) call for a rethinking of traditional criteria to describe the resolution of data in GIScience. The ideas presented in this paper have shown one way to redefine the resolution of observations and observation collections in order to accommodate both technical and human sensors. The article gave examples illustrating the applicability of a receptor-centric approach to the description of observation resolution. An immediate direction for future work is to extend the theory's applicability to account for the thematic resolution of observations and observation collections. In addition, since current metadata documentation practices limit themselves to the documentation of the characteristics of observation values, further tests of the applicability of the theory can only be done as the practice of metadata documentation evolves towards a more explicit documentation of the quale's contribution to the observation process. Finally, it became clear during the course of this work that a better understanding of the notion of quale (and especially its relationship with the observation value) would help advance observation ontology. In fact, the following inequalities hold: spatialResolution(observation) ≤ spatialResolution(quale); temporalResolution(observation) ≤ temporalResolution(quale); thematicResolution(observation) ≤ thematicResolution(quale), since the transformation of the quale into an observation value (through the expression operation mentioned in Section The receptor-centric approach) might involve another loss of spatial/temporal/thematic detail. The example introduced here assumes no loss of spatial/temporal detail during the expression operation, and equates the spatial/temporal resolution of the observation with the spatial/temporal resolution of the quale. A thorough investigation of the interplay between the resolution of the quale and the resolution of the observation value (for the spatial, temporal and thematic dimensions) is deferred to future work.
Notes
http://www.ontologydesignpatterns.org/ont/dul/DUL.owl (last accessed: December 05, 2017). A DUL:SocialObject is an object that is created in the process of social communication. This idea can be found in [10, 20, 47, 48, 55]. Length/time is mentioned here in opposition to dimensionless values.
Area/time or volume/time are also suitable dimensions for proxy measures of resolution. This requirement of a length dimension for values of resolution was already brought forth in [1]. https://www.flickr.com/photos/flickr/galleries/72157645265344193/, last accessed: December 05, 2017. An exception is Scheider and Stasch [91], who recently suggested the use of attention as a metaphor to interpret sensor observations, proposing the time/location of the focus of measurement as a proxy measure for resolution. Nonetheless, exploring the translation of this idea into a quantitative, computational resolution theory (as in this work) is still ongoing work. mA is an abbreviation for milliampere. This second example is based on the illustration of map reclassification rules from [92].
Abbreviations
DOLCE: Descriptive Ontology for Linguistic and Cognitive Engineering
DUL: DOLCE Ultra Light
FOOM: Functional Ontology of Observation and Measurement
GIScience: Geographic Information Science
SemSOS: Semantic Sensor Observation Service
ODP: Ontology Design Pattern
OWL: Web Ontology Language
VGI: Volunteered Geographic Information
References
Goodchild M, Proctor J. Scale in a digital geographic world. Geogr Environ Model. 1997; 1(1):5–23. Gibson CC, Ostrom E, Ahn TK. The concept of scale and the human dimensions of global change: a survey. Ecol Econ. 2000; 32(2):217–39. https://doi.org/10.1016/S0921-8009(99)00092-0. Goodchild M. Accuracy and spatial resolution: critical dimensions for geoprocessing In: Douglas DH, Boyle AR, editors. Cartography and Geographic Information Processing: Hope and Realism. Ottawa: Canadian Cartographic Association: 1982. p. 87–90. Degbelo A, Kuhn W. Five general properties of resolution In: Krzysztof J, Adams B, McKenzie G, Kauppinen T, editors. CEUR Workshop Proceedings. Vienna: CEUR-WS.org: 2014. p. 40–7. Degbelo A, Kuhn W. A conceptual analysis of resolution In: Bogorny V, Namikawa L, editors. XIII Brazilian Symposium on Geoinformatics. Campos do Jordão: MCT/INPE: 2012. p. 11–22. https://doi.org/ISSN2179-4847. Dungan JL, Perry JN, Dale MRT, Legendre P, Citron-Pousty S, Fortin MJ, Jakomulska A, Miriti M, Rosenberg MS. A balanced view of scale in spatial statistical analysis. Ecography. 2002:626–40. https://doi.org/10.1034/j.1600-0587.2002.250510.x. Wu J, Li H. Concepts of scale and scaling In: Wu J, Jones B, Li H, Loucks O, editors. Scaling and Uncertainty Analysis in Ecology: Methods and Applications. Dordrecht: Springer: 2006. p. 3–16. https://doi.org/10.1007/1-4020-4663-4_1. Goodchild M. Citizens as sensors: the world of volunteered geography. GeoJournal. 2007; 69(4):211–221. https://doi.org/10.1007/s10708-007-9111-y. Frank A. Ontology for spatio-temporal databases In: Sellis T, Koubarakis M, Frank AU, Grumbach S, Güting RH, Jensen CS, Lorentzos N, Manolopoulos Y, Nardelli E, Pernici B, Theodoulidis B, Tryfona N, Schek H, Scholl M, editors. Spatio-Temporal Databases: The CHOROCHRONOS Approach. Berlin Heidelberg: Springer: 2003. p. 9–77. Chap. 2. https://doi.org/10.1007/978-3-540-45081-8_2. Janowicz K. Observation-driven geo-ontology engineering. Trans GIS. 2012; 16(3):351–74. https://doi.org/10.1111/j.1467-9671.2012.01342.x. Adams B, Janowicz K. Constructing geo-ontologies by reification of observation data In: Agrawal D, Cruz I, Jensen C, Ofek E, Tanin E, editors. Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. Chicago: ACM: 2011. p. 309–18. https://doi.org/10.1145/2093973.2094015. Stasch C, Scheider S, Pebesma E, Kuhn W. Meaningful spatial prediction and aggregation.
We would like to thank the anonymous reviewers for their many insightful comments and suggestions. The work has been partially funded by the German Academic Exchange Service (DAAD A/10/98506) and the European Union (FP7-249170), and was conducted within the International Research Training Group on Semantic Integration of Geospatial Information (DFG GRK 1498). We also acknowledge support from the Open Access Fund of the University of Muenster.
The Haskell code ("Results" section), the OWL files for the Ontology Design Patterns for Resolution ("Results" section), and the software described in the article ("Applications" section) are all available for download at https://doi.org/10.5281/zenodo.1293285. Institute for Geoinformatics, University of Muenster, Muenster, Germany Auriol Degbelo Department of Geography, University of California, Santa Barbara, USA Werner Kuhn The idea of an observation-based theory of resolution was jointly developed by AD and WK. AD implemented the software and the ontology design patterns. WK contributed to getting the Haskell formalization sound. AD primarily wrote AD primarily wrote Resolution in GIScience: a review, Methods, Results, Applications, Comparison with previous work, and Limitations. WK and AD jointly wrote the Introduction and the Conclusion. Both authors read and approved the final manuscript. Correspondence to Auriol Degbelo. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Degbelo, A., Kuhn, W. Spatial and temporal resolution of geographic information: an observation-based theory. Open geospatial data, softw. stand. 3, 12 (2018). https://doi.org/10.1186/s40965-018-0053-8 Observation collection Spatial information theory
Asian Pacific Journal of Cancer Prevention. Asian Pacific Organization for Cancer Prevention (아시아태평양암예방학회).
The Asian Pacific Journal of Cancer Prevention is a monthly electronic journal publishing papers in all areas of cancer control. It is indexed in PubMed (impact factor for 2014: 2.514) and its scope is wide-ranging, including descriptive, analytical and molecular epidemiology; experimental and clinical histopathology/biology of preneoplasias and early neoplasias; assessment of risk and beneficial factors; experimental and clinical trials of primary preventive measures/agents; screening approaches and secondary prevention; clinical epidemiology; and all aspects of cancer prevention education. All of the papers published are freely available as pdf files downloadable from www.apjcpcontrol.org, directly or through PubMed, or obtainable from the first authors. The APJCP is financially supported by the UICC Asian Regional Office and the National Cancer Center of Korea, where the Editorial Office is housed.

Volume 17, Issue sup3

Review of the Molecular Pathogenesis of Osteosarcoma. He, Jin-Peng;Hao, Yun;Wang, Xiao-Lin;Yang, Xiao-Jin;Shao, Jing-Fan;Guo, Feng-Jin;Feng, Jie-Xiong. 5967. https://doi.org/10.7314/APJCP.2014.15.15.5967
Treating osteosarcoma (OSA) remains a challenge. Current strategies focus on the primary tumor and have limited efficacy for metastatic OSA. A better understanding of OSA pathogenesis may provide a rational basis for innovative treatment strategies, especially for metastases. The aim of this review is to give an overview of the molecular mechanisms of OSA tumorigenesis, OSA cell proliferation, apoptosis, migration, and chemotherapy resistance, and how improved understanding might contribute to designing better treatment targets for OSA.

TRAIL Mediated Signaling in Pancreatic Cancer. Nogueira, Daniele Rubert;Yaylim, Ilhan;Aamir, Qurratulain;Kahraman, Ozlem Timirci;Fayyaz, Sundas;Naqvi, Syed Kamran-Ul-Hassan;Farooqi, Ammad Ahmad. 5977
Research over the years has progressively shown substantial broadening of the tumor necrosis factor alpha-related apoptosis-inducing ligand (TRAIL)-mediated signaling landscape. It is increasingly being realized that pancreatic cancer is a multifaceted and genomically complex disease. Suppression of tumor suppressors, overexpression of oncogenes, epigenetic silencing, and loss of apoptosis are some of the extensively studied underlying mechanisms. Rapidly accumulating in vitro and in vivo evidence has started to shed light on the resistance mechanisms in pancreatic cancer cells. More interestingly, recent research has opened new horizons of miRNA regulation by DR5 in pancreatic cancer cells. It has been shown that DR5 interacts with the core microprocessor components Drosha and DGCR8, thus impairing processing of primary let-7. Xenografting DR5-silenced pancreatic cancer cells in SCID mice indicated that there was notable suppression of tumor growth. There is a paradigm shift in our current understanding of TRAIL-mediated signaling in pancreatic cancer cells, which is adding new layers of concepts to the existing scientific evidence. In this review we have attempted to provide an overview of recent advances in TRAIL-mediated signaling in pancreatic cancer as evidenced by findings of in vitro and in vivo analyses.
Furthermore, we discuss nanotechnological advances, with emphasis on PEG-TRAIL and four-arm PEG cross-linked hyaluronic acid (HA) hydrogels, to improve availability of TRAIL at target sites.

Emerging and Established Global Life-Style Risk Factors for Cancer of the Upper Aero-Digestive Tract. Gupta, Bhawna;Johnson, Newell W. 5983
Introduction: Upper aero-digestive tract cancer is a multidimensional problem, international trends showing complex rises and falls in incidence and mortality across the globe, with variation across different cultural and socio-economic groups. This paper seeks some explanations and identifies some research and policy needs. Methodological Approach: The literature illustrates the multifactorial nature of carcinogenesis. At the cellular level, it is viewed as a multistep process involving multiple mutations and selection for cells with progressively increasing capacity for proliferation, survival, invasion, and metastasis. Established and emerging risk factors, in addition to changes in incidence and prevalence of cancers of the upper aero-digestive tract, were identified. Risk Factors: Exposure to tobacco and alcohol, as well as diets inadequate in fresh fruits and vegetables, remain the major risk factors, with persistent infection by particular so-called "high risk" genotypes of human papillomavirus increasingly recognised as also playing an important role in a subset of cases, particularly for the oropharynx. Chronic trauma to oral mucosa from poor restorations and prostheses, in addition to poor oral hygiene with a consequent heavy microbial load in the mouth, are also emerging as significant risk factors. Conclusions: Understanding and quantifying the impact of individual risk factors for these cancers is vital for health decision-making, planning and prevention. National policies and programmes should be designed and implemented to control exposure to environmental risks, by legislation if necessary, and to raise awareness so that people are provided with the information and support they need to adopt healthy lifestyles.

Functional Roles of Long Non-coding RNA in Human Breast Cancer. Ye, Ni;Wang, Bin;Quan, Zi-Fang;Cao, San-Jie;Wen, Xin-Tian;Huang, Yong;Huang, Xiao-Bo;Wu, Rui;Ma, Xiao-Ping;Yan, Qi-Gui. 5993
The discovery of long noncoding RNA (LncRNA) changes our view of transcriptional and posttranscriptional regulation of gene expression. With the application of new research techniques such as high-throughput sequencing, the biological functions of LncRNAs are gradually becoming understood. Multiple studies have shown that LncRNAs serve as carcinogenic factors or tumor suppressors in breast cancer with abnormal expression, prompting the question of whether they have potential value in predicting the stages and survival rate of breast cancer patients, and also as therapeutic targets. Focusing on the latest research data, this review mainly summarizes the tumorigenic mechanisms of certain LncRNAs in breast cancer, in order to provide a theoretical basis for finding safer, more effective treatment of breast cancer at the LncRNA molecular level.

Critical Review on the Carcinogenic Potential of Pesticides Used in Korea. Choi, Sangjun. 5999
Pesticides used in Korea are grouped by four classes of hazard (extremely, highly, moderately and slightly hazardous) based on acute oral and dermal toxicity in the rat. However, there is little information on carcinogenic effects.
The aim of this study was to evaluate the potential carcinogenicity of active ingredients of pesticides used in Korea. A total of 1,283 pesticide items were registered under the Pesticide Control Act, of which 987 were commercially available. Of these 987 items, 360 non-duplicated active ingredients were evaluated for carcinogenicity using the carcinogen list established by the US Environmental Protection Agency (EPA). Some 25 out of 360 ingredients were classified as likely to be carcinogenic (probable) to humans and 52 had suggestive evidence of carcinogenic potential (suspected) based on the US EPA classification. Some 31% of the 987 items contained probable or suspected human carcinogenic ingredients. Carcinogenic pesticides accounted for 24% (5,856/24,795 tons) of the total volume of consumption in Korea. Interestingly, pesticides with lower acute toxicity were found to have higher carcinogenic potential. Based on these findings, the study suggests that it is important to provide information on long-term toxicity to farmers, in addition to acute toxicity data.

Comparison of Recurrence Rates with Contour-Loop Excision of the Transformation Zone (C-LETZ) and Large Loop Excision of the Transformation Zone (LLETZ) for CIN. Boonlikit, Sathone;Srichongchai, Hemwadee. 6005
Aim: To compare recurrence rates of large loop excision of the transformation zone (LLETZ) with those of contour-loop excision of the transformation zone (C-LETZ) in the management of cervical intraepithelial neoplasia (CIN). Materials and Methods: The medical records of 177 patients treated consecutively by LLETZ and C-LETZ for CIN at Rajavithi Hospital between 2006 and 2009 were retrospectively reviewed. Results: Of the 87 women in the C-LETZ group, 2 cases (2.30%) had recurrence compared with 13 cases (14.4%) of the 90 women in the LLETZ group, the higher recurrence rate in the latter being statistically significant (p<0.05). Median times of follow-up in the C-LETZ and LLETZ groups were 12 months and 14 months respectively (p>0.05). The C-LETZ group showed less intraoperative bleeding compared to the LLETZ group, but the rates of achievement of single specimens and of positive margins were similar in the two groups. Conclusions: The present study demonstrated the superiority of C-LETZ over LLETZ in terms of efficacy; C-LETZ is associated with a lower recurrence rate and also carries a smaller risk of intraoperative bleeding than LLETZ. The rotating technique still has a potential role in treating precancerous lesions of the cervix.

Impact of Cellular Immune Function on Prognosis of Lung Cancer Patients after Cytokine-induced Killer Cell Therapy. Jin, Congguo;Li, Jia;Wang, Yeying;Chen, Xiaoqun;Che, Yanhua;Liu, Xin;Wang, Xicai;Sriplung, Hutcha. 6009
Aims: To investigate changes in cellular immune function of patients with lung cancer before and after cytokine-induced killer (CIK) cell therapy and to identify effects of such variation on overall survival (OS) and progression-free survival (PFS). Materials and Methods: A total of 943 lung cancer patients with immune dysfunction were recruited from January 2002 to January 2010, 532 being allocated to conventional therapy and 411 to CIK therapy after a standard treatment according to the NCCN Clinical Practice Guidelines. All the patients were investigated for cellular immune function before and after therapy every three months, and clinical prognostic outcomes were analyzed.
Results: After six courses of treatment, immune function was much improved in patients receiving CIK cell therapy as compared to controls. The percentages of recurrence and/or metastases for patients undergoing CIK cell therapy were 56.2% and 49.1% respectively, but 78.6% and 70.3% among controls (p<0.001). The median OS times for the CIK cell therapy and control groups were 48 and 36 months respectively. The OS rates at 12, 36, 60 and 84 months in CIK-treated patients were 97.8%, 66.9%, 27.7%, and 4.1%, while they were 92.3%, 44.5%, 9.2%, and 1.5% in controls. OS and PFS were significantly different by log-rank test between the two groups and across the three immune improvement classes. Conclusions: The immune function of lung cancer patients was improved by CIK cell therapy, associated with an increase in the OS rate and extension of the time to recurrence and/or metastasis.

Prognostic Role of Circulating Tumor Cells in Patients with Pancreatic Cancer: a Meta-analysis. Ma, Xue-Lei;Li, Yan-Yan;Zhang, Jing;Huang, Jing-Wen;Jia, Hong-Yuan;Liu, Lei;Li, Ping. 6015
Background: Isolation and characterization of circulating tumor cells (CTCs) in patients suffering from a variety of different cancers have become hot biomarker topics. In this study, we evaluated the prognostic value of CTCs in pancreatic cancer. Materials and Methods: Initial literature was identified using Medline and EMBASE. The primary data were hazard ratios (HRs) with 95% confidence intervals (CIs) for survival outcomes, including overall survival (OS) and progression-free survival/recurrence-free survival (PFS/RFS). Results: A total of 9 eligible studies were included in this meta-analysis, published between 2002 and 2013. The estimated pooled HR for OS across all studies was 1.64 (95%CI 1.39-1.94, p<0.00001) and the pooled HR for RFS/DFS was 2.36 (95%CI 1.41-3.96, p<0.00001). The HRs for OS and RFS/DFS in patients before treatment were 1.93 (95%CI 1.26-2.96, p=0.003) and 1.82 (95%CI 1.22-2.72, p=0.003), respectively. In patients receiving treatment, the HRs for OS and RFS/DFS were 1.37 (95%CI 1.00-1.86, p=0.05) and 1.89 (95%CI 1.01-3.51, p=0.05), respectively. Moreover, the pooled HR for OS in the post-treatment group was 2.20 (95%CI 0.80-6.02, p=0.13) and the pooled HR for RFS/DFS was 8.36 (95%CI 3.22-21.67, p<0.0001). Conclusions: The meta-analysis provided strong evidence supporting the proposition that CTCs detected in peripheral blood have a predictive role in pancreatic cancer patients, especially at the post-treatment time point.

Upregulation of STK15 in Esophageal Squamous Cell Carcinomas in a Mongolian Population. Chen, Guang-Lie;Hou, Gai-Ling;Sun, Fei;Jiang, Hong-Li;Xue, Jin-Feng;Li, Xiu-Shen;Xu, En-Hui;Gao, Wei-Shi;Cao, Jian-Ping. 6021
Background: The STK15 gene located on chromosome 20q13.2 encodes a centrosome-associated kinase critical for regulated chromosome segregation and cytokinesis. Recent studies have demonstrated STK15 to be significantly associated with many tumors, with aberrant expression observed in many human malignancies. The purpose of this study was to investigate expression of STK15 in esophageal squamous cell carcinomas (ESCCs) in a Mongolian population. Methods: Two non-synonymous single nucleotide polymorphisms in the coding region of STK15, rs2273535 (Phe31Ile) and rs1047972 (Val57Ile), were assessed in 380 ESCC patients and 380 healthy controls.
We also detected STK15 mRNA expression in 39 esophageal squamous cell carcinomas and corresponding adjacent tissues by real-time PCR. Results: rs2273535 showed a significant association with ESCC in our Mongolian population (P allele = 0.0447, OR (95%CI) = 1.259 (1.005-1.578)). Real-time PCR analysis of ESCC tissues showed that expression of STK15 mRNA in cancer tissues was higher than in normal tissues (p = 0.013). Conclusions: Our study showed that functional SNPs in the STK15 gene are associated with ESCC in a Mongolian population and that up-regulation of STK15 mRNA occurs in ESCC tumors compared with adjacent normal tissues. STK15 may thus have an important role in the prognosis of ESCC and be a potential therapeutic target.

Survival Analysis of Biliary Tract Cancer Cases in Turkey. Akca, Zeki;Mutlu, Hasan;Erden, Abdulsamet;Buyukcelik, Abdullah;Cihan, Yasemin Benderli;Goksu, Sema Sezgin;Aslan, Tuncay;Sezer, Emel Yaman;Inal, Ali. 6025
Background: Because of the relative rarity of biliary tract cancers (BTCs), defining long-term survival results is difficult. In the present study, we aimed to evaluate the survival of a series of cases in Turkey. Materials and Methods: A total of 47 patients with biliary tract cancer from Mersin Government Hospital, Acibadem Kayseri Hospital and Kayseri Training and Research Hospital were analyzed retrospectively using hospital records between 2006 and 2012. Results: The median overall survival was 19.3 ± 3.9 months for all patients. The median disease-free and overall survivals were 24.3 ± 5.3 and 44.1 ± 12.9 months in patients in whom radical surgery was performed, but in those with inoperable disease they were only 5.3 ± 1.5 and 10.7 ± 3.2 months, respectively. Conclusions: BTCs have a poor prognosis. Surgery with a microscopic negative margin is still the only curative treatment.

Comparison of Survival Rates between Chinese and Thai Patients with Breast Cancer. Che, Yanhua;You, Jing;Zhou, Shaojiang;Li, Li;Wang, Yeying;Yang, Yue;Guo, Xuejun;Ma, Sijia;Sriplung, Hutcha. 6029
The burden and severity of a cancer can be reflected by patterns of survival. Breast cancer prognosis between two countries with different socioeconomic status and cultural beliefs may exhibit wide variation. This study aimed to describe survival in patients with breast cancer in China and Thailand in relation to demographic and clinical prognostic information. Materials and Methods: We compared the survival of 1,504 Chinese women in Yunnan province and 929 Thai women in Songkhla with breast cancer diagnosed from 2006 to 2010. Descriptive prognostic comparisons between the Chinese and Thai women were performed by relative survival analysis. A Cox regression model was used to calculate the hazard ratios of death, taking into account age, disease stage, period of diagnosis and country. Results: The overall 5-year survival proportion for patients diagnosed with breast cancer in Yunnan province (0.72) appeared slightly better than in Songkhla (0.70), without statistical significance. Thai women diagnosed with distant and regional breast cancer had poorer survival than Chinese women. Disease stage was the most important determinant of survival in the Cox regression model. Conclusions: Breast cancer patients in Kunming had a slightly greater five-year survival rate than patients in Songkhla.
Both Chinese and Thai women need improvement in prognosis, which could conceivably be attained through increased public education and awareness regarding early detection and compliance with treatment protocols.

Parathyroid Hormone Gene rs6256 and Calcium Sensing Receptor Gene rs1801725 Variants are not Associated with Susceptibility to Colorectal Cancer in Iran. Mahmoudi, Touraj;Karimi, Khatoon;Arkani, Maral;Farahani, Hamid;Nobakht, Hossein;Dabiri, Reza;Asadi, Asadollah;Zali, Mohammad Reza. 6035
Background: Substantial evidence from epidemiological studies has suggested that increased levels of calcium may play a protective role against colorectal cancer (CRC). Given the vital role of the calcium sensing receptor (CaSR) and parathyroid hormone (PTH) in the maintenance of calcium homeostasis, we explored whether the rs1801725 (A986S) variant located in exon 7 of the CaSR gene and the rs6256 variant located in exon 3 of the PTH gene might be associated with CRC risk. Materials and Methods: In this study 860 subjects, including 350 cases with CRC and 510 controls, were enrolled and genotyped using PCR-RFLP methods. Results: We observed no significant difference in genotype or allele frequencies between the cases with CRC and controls for either the CaSR or the PTH gene, before or after adjustment for confounding factors including age, BMI, sex, smoking status, and family history of CRC. Furthermore, no evidence for effect modification by BMI, sex, or tumor site of any association of the rs1801725 and rs6256 variants with CRC was observed. In addition, there was no significant difference in genotype and allele frequencies between the normal-weight (BMI < 25 kg/m^2) cases and overweight/obese (BMI >= 25 kg/m^2) cases for the two SNPs. Conclusions: These data indicate that the CaSR gene A986S variant is not a genetic contributor to CRC risk in the Iranian population. Furthermore, our results suggest for the first time that the PTH gene variant does not affect CRC risk. Nonetheless, further studies with larger sample sizes are needed to validate these findings.

Computed Tomography Manifestations of Histologic Subtypes of Retroperitoneal Liposarcoma. Lu, Jing;Qin, Qin;Zhan, Liang-Liang;Yang, Xi;Xu, Qing;Yu, Jing;Dou, Li-Na;Zhang, Hao;Yang, Yan;Chen, Xiao-Chen;Yang, Yue-Hua;Cheng, Hong-Yan;Sun, Xin-Chen. 6041
Objective: Liposarcoma (LPS) is the most common soft tissue sarcoma and accounts for approximately 20% of all mesenchymal malignancies, often occurring in the deep soft tissue of the retroperitoneal space. Accurate preoperative diagnosis is therefore necessary. We explored whether computed tomography (CT) could be used to differentiate between the various types of retroperitoneal liposarcoma (RPLS). Methods: Forty-seven cases of RPLS, diagnosed surgically and histologically, were analyzed retrospectively. CT features were correlated with postoperative pathological appearance. Results: The study radiologist identified 29, 11, 2, 2 and 3 RPLS as atypical lipomatous tumor/well-differentiated liposarcoma (ALT/WDL), dedifferentiated liposarcoma (DDL), myxoid/round cell liposarcoma (ML/RCL), pleomorphic liposarcoma (PL) and mixed-type liposarcoma, respectively. Analysis of CT scans revealed the following typical findings for the different subtypes of RPLS: ALT/WDL was mainly visible as a well-delineated fatty hypodense tumor with uniform density and an intact margin; DDL was marked by the combination of focal nodular density and hypervascularity.
ML/RCL, PL and mixed-type liposarcoma showed malignant biological behaviour, and their CT findings require further study. Conclusions: CT scanning can reveal important details including internal components, margins and surrounding tissues. Based on CT findings, tumor type can be roughly evaluated and biopsy location and therapeutic scheme guided.

Weight Loss Correlates with Macrophage Inhibitory Cytokine-1 Expression and Might Influence Outcome in Patients with Advanced Esophageal Squamous Cell Carcinoma. Lu, Zhi-Hao;Yang, Li;Yu, Jing-Wei;Lu, Ming;Li, Jian;Zhou, Jun;Wang, Xi-Cheng;Gong, Ji-Fang;Gao, Jing;Zhang, Xiao-Tian;Li, Jie;Li, Yan;Shen, Lin. 6047
Background: Weight loss during chemotherapy has not been extensively investigated. Macrophage inhibitory cytokine-1 (MIC-1) might play a role in its etiology. Here, we investigated the prognostic value of weight loss before chemotherapy and its relationship with MIC-1 concentration and with weight loss during chemotherapy in patients with advanced esophageal squamous cell carcinoma (ESCC). Materials and Methods: We analyzed 157 inoperable locally advanced or metastatic ESCC patients receiving first-line chemotherapy. Serum MIC-1 concentrations were assessed before chemotherapy. Patients were assigned into two groups according to their weight loss before or during chemotherapy: a >5% weight loss group and a <=5% weight loss group. Results: Patients with weight loss >5% before chemotherapy had a shorter progression-free survival period (5.8 months vs. 8.7 months; p=0.027) and overall survival (10.8 months vs. 20.0 months; p=0.010). Patients with weight loss >5% during chemotherapy tended to have shorter progression-free survival (6.0 months vs. 8.1 months; p=0.062) and overall survival (8.6 months vs. 18.0 months; p=0.022), and if weight loss was reversed during chemotherapy, survival rates improved. Furthermore, serum MIC-1 concentration was closely related to weight loss before chemotherapy (p=0.001). Conclusions: Weight loss both before and during chemotherapy predicted poor outcome in advanced ESCC patients, and MIC-1 might be involved in the development of weight loss in such patients.

Utility of Frozen Section Pathology with Endometrial Pre-Malignant Lesions. Oz, Murat;Ozgu, Emre;Korkmaz, Elmas;Bayramoglu, Hatice;Erkaya, Salim;Gungor, Tayfun. 6053
Aim: To determine the utility of frozen section (FS) examination in the operative management of endometrial pre-malignant lesions. Materials and Methods: We retrospectively analyzed patients who underwent abdominal hysterectomy with a preoperative diagnosis of complex atypical endometrial hyperplasia (CAEH) or simple endometrial hyperplasia (SEH) between May 2007 and December 2013. Frozen and paraffin section (PS) results were compared. Sensitivity, specificity, the positive predictive value (PPV), the negative predictive value (NPV) and the accuracy in predicting EC on FS were evaluated, with 95% confidence intervals (CIs) for each parameter. The correlation between FS and PS was calculated as a kappa coefficient. Results: Among 143 preoperatively diagnosed CAEH cases, 60 (42%) were malignant and 83 (58%) were benign on PS; and among the 60 malignant cases diagnosed on PS, 43 (71%) were "malignant" on FS. Sensitivity, specificity, PPV and NPV for FS were 76%, 100%, 100% and 87.5%, respectively. Conclusions: We found that FS is reliable and applicable in the management of endometrial hyperplasias.
It is important that the pathologist be experienced, because FS for endometrial pre-malignant lesions has significant inter-observer variability. The other conclusion is that patients with a diagnosis of EH, especially those who are postmenopausal, should undergo surgery where FS investigation is available.

Descriptive Epidemiology of Colorectal Cancer in University Malaya Medical Centre, 2001 to 2010. Magaji, Bello Arkilla;Moy, Foong Ming;Roslani, April Camilla;Law, Chee Wei. 6059
Background: Colorectal cancer is the second most frequent cancer in Malaysia. Nevertheless, there is little information on treatment and outcomes nationally. We aimed to determine the demographic, clinical and treatment characteristics of colorectal cancer patients treated at the University Malaya Medical Centre (UMMC) as part of a larger project on survival and quality of life outcomes. Materials and Methods: Medical records of 1,212 patients undergoing treatment in UMMC between January 2001 and December 2010 were reviewed. A retrospective-prospective cohort study design was used. Research tools included the National Cancer Patient Registration form. Statistical analysis included means, standard deviations (SD), proportions, chi-square tests and t-test/ANOVA. P-value significance was set at 0.05. Results: The male:female ratio was 1.2:1. The mean age was 62.1 (SD 12.4) years. Patients were predominantly Chinese (67%), then Malays (18%), Indians (13%) and others (2%). Malays were younger than Chinese and Indians (mean age 57 versus 62 versus 62 years, p<0.001). More females (56%) had colon cancers compared to males (44%) (p=0.022). Malays (57%) had more rectal cancer compared to Chinese (45%) and Indians (49%) (p=0.004). Dukes' stage data were available in 67%, with Dukes' C and D accounting for 64%. Stage was not affected by age, gender, ethnicity or tumor site. Treatment modalities included surgery alone (40%), surgery and chemo/radiotherapy (32%), chemo- and radiotherapy (8%) and others (20%). Conclusions: Significant ethnic differences in age and site distribution, if verified in population-based settings, would support implementation of preventive measures targeting those with the greatest need, at the right age.

SLC35B2 Expression is Associated with a Poor Prognosis of Invasive Ductal Breast Carcinoma. Chim-ong, Anongruk;Thawornkuno, Charin;Chavalitshewinkoon-Petmitr, Porntip;Punyarit, Phaibul;Petmitr, Songsak. 6065
Background: Breast cancer is the most common malignancy in women worldwide, including Thailand, and is a major cause of mortality and morbidity, despite advances in diagnosis and treatment. Novel gene expression in breast cancer is a focus in searches for prognostic biomarkers and new therapeutic targets. Materials and Methods: The mRNA expression of the novel B4GALT4, SLC35B2, and WDHD1 genes in breast cancer was examined in invasive ductal breast carcinoma (IDC) patients using quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR). Results: Among these genes, increased expression of SLC35B2 mRNA was significantly associated with TNM stage III + IV of IDC (p<0.001). Hence, up-regulation of SLC35B2 may serve as a biomarker of poor prognosis, and is also a potential therapeutic target in breast cancer.
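Several studies in this issue, such as the frozen-section evaluation above and the Microlux/DL screening study that follows, summarize diagnostic performance as sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). As a minimal sketch of how such figures follow from a 2x2 table, the Python snippet below uses invented counts (not taken from any of the abstracts) and simple normal-approximation confidence intervals; published values may instead use exact (Clopper-Pearson) intervals.

```python
import math

def diagnostic_accuracy(tp, fp, fn, tn, z=1.96):
    """Sensitivity, specificity, PPV and NPV with normal-approximation
    95% confidence intervals, computed from a 2x2 confusion table."""
    def prop_ci(k, n):
        p = k / n
        se = math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - z * se), min(1.0, p + z * se)
    return {
        "sensitivity": prop_ci(tp, tp + fn),  # positives detected among the diseased
        "specificity": prop_ci(tn, tn + fp),  # negatives among the disease-free
        "ppv":         prop_ci(tp, tp + fp),  # diseased among test-positives
        "npv":         prop_ci(tn, tn + fn),  # disease-free among test-negatives
    }

# Invented counts for illustration only
for name, (est, lo, hi) in diagnostic_accuracy(tp=43, fp=5, fn=14, tn=90).items():
    print(f"{name}: {est:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```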
Association of Rs11615 (C>T) in the Excision Repair Cross-complementing Group 1 Gene with Ovarian but not Gynecological Cancer Susceptibility: a Meta-analysis. Ma, Yong-Jun;Feng, Sheng-Chun;Hu, Shao-Long;Zhuang, Shun-Hong;Fu, Guan-Hua. 6071
Background: Evidence suggests that the rs11615 (C>T) polymorphism in the ERCC1 gene may be a risk factor for gynecological tumors. However, results have not been consistent. Therefore we performed this meta-analysis. Methods: Eligible studies were identified by searches of PubMed, MEDLINE and the Chinese National Knowledge Infrastructure (CNKI). Odds ratios (ORs) and 95% confidence intervals (CIs) were applied to assess associations between rs11615 (C>T) and gynecological tumor risk. Heterogeneity among studies was tested and sensitivity analysis was applied. Results: A total of 6 studies were identified, with 1,766 cases and 2,073 controls. No significant association was found overall between the rs11615 (C>T) polymorphism and gynecological tumor susceptibility in any genetic model. In further analysis stratified by cancer type, significantly elevated ovarian cancer risk was observed in the homozygote and recessive model comparisons (TT vs. CC: OR=1.69, 95% CI=1.03-2.77, heterogeneity=0.876; TT vs. CT/CC: OR=1.72, 95% CI=1.07-2.77, heterogeneity=0.995). Conclusion: The results of the present meta-analysis suggest that there is no significant association between the rs11615 (C>T) polymorphism and gynecological tumor risk overall, but an increased risk was found for ovarian cancer.

Distinct Pro-Apoptotic Properties of Zhejiang Saffron against Human Lung Cancer Via a Caspase-8-9-3 Cascade. Liu, Dan-Dan;Ye, Yi-Lu;Zhang, Jing;Xu, Jia-Ni;Qian, Xiao-Dong;Zhang, Qi. 6075
Lung cancer is the leading cause of cancer-related death worldwide. Here we investigated the antitumor effect and mechanism of Zhejiang (Huzhou and Jiande) saffron against the lung cancer cell lines A549 and H446. Using high performance liquid chromatography (HPLC), the contents of crocin I and II were determined. In vitro, MTT assay and annexin-V FITC/PI staining showed cell proliferation activity and apoptosis to be changed in a dose- and time-dependent manner. The inhibitory effect of Jiande saffron was the strongest. In vivo, when mice were orally administered saffron extracts at a dose of 100 mg/kg/d for 28 days, xenograft tumor size was reduced, and ELISA and Western blotting analysis of caspase-3, -8 and -9 showed stronger expression and activity than in the control. In summary, saffron from Zhejiang has significant antitumor effects in vitro and in vivo through caspase-8-caspase-9-caspase-3 mediated cell apoptosis. It thus appears to have potential as a therapeutic agent.

Effectiveness of the Microlux/DL Chemiluminescence Device in Screening of Potentially Malignant and Malignant Oral Lesions. Ibrahim, Suzan Seif;Al-Attas, Safia Ali;Darwish, Zeinab Elsayed;Amer, Hala Abbas;Hassan, Mona Hassan. 6081
Background: To evaluate the effectiveness of Microlux/DL with and without toluidine blue in screening for potentially malignant and malignant oral lesions. Materials and Methods: In this diagnostic clinical trial, clinical examination was carried out by two teams: 1) two oral medicine consultants, and 2) two general dentists. Participants were randomly and blindly allocated to each examining team. A total of 599 tobacco users were assessed through conventional oral examination (COE); the examination was then repeated using the Microlux/DL device and toluidine blue. Biopsy of suspicious lesions was performed.
Clinicians' opinions regarding the two tools were also obtained. Results: The sensitivity, specificity and positive predictive value (PPV) of Microlux/DL for visualization of suspicious premalignant lesions, considering COE as the gold standard (i.e. as a screening device), were 94.3%, 99.6% and 96.2% respectively, while they were 100%, 32.4% and 17.9% when considering biopsy as the gold standard. Moreover, Microlux/DL enhanced detection of lesions and uncovered new lesions compared to COE, whereas it did not alter the provisional clinical diagnosis or the biopsy site. On the other hand, adding toluidine blue dye did not improve the effectiveness of the Microlux/DL system. Conclusions: The Microlux/DL seems to be a promising adjunctive screening device.

Impact of Prognostic Factors on Survival Rates in Patients with Ovarian Carcinoma. Arikan, Sevim Kalsen;Kasap, Burcu;Yetimalar, Hakan;Yildiz, Askin;Sakarya, Derya Kilic;Tatar, Sumeyra. 6087
Purpose: The aim of the present study was to investigate the impact of significant clinico-pathological prognostic factors on survival rates and to identify factors predictive of poor outcome in patients with ovarian carcinoma. Materials and Methods: A retrospective chart review of 74 women with pathologically proven ovarian carcinoma who were treated between January 2006 and April 2011 was performed. Patients were investigated with respect to survival to find the possible effects of age, gravidity, parity, menstrual status, pre-operative CA-125, treatment period, cytologic washings, presence of ascites, tumor histology, stage and grade, maximal tumor diameter, adjuvant chemotherapy and cytoreductive success. In addition, 55 ovarian carcinoma patients were investigated with respect to prognostic factors for early 2-year survival. Results: The two-year survival rate was 69% and the 5-year survival rate was 25.5% for the whole study population. Significant factors for 2-year survival were preoperative CA-125 level, malignant cytology and FIGO clinical stage. Significant factors for 5-year survival were age, preoperative CA-125 level, residual tumor, lymph node metastases, histologic type of tumor, malignant cytology and FIGO clinical stage. Logistic regression revealed that independent prognostic factors of 5-year survival were patient age, lymph node metastasis and malignant cytology. Conclusions: We consider quality registries with prospectively collected data to be one important tool in monitoring treatment effects in population-based cancer research.

Perception and Practices on Screening and Vaccination for Carcinoma Cervix among Female Healthcare Professionals in Tertiary Care Hospitals in Bangalore, India. Swapnajaswanth, M.;Suman, G.;Suryanarayana, S.P.;Murthy, N.S. 6095
Background: Cervical cancer is potentially the most preventable and treatable cancer. Despite the known efficacy of cervical screening, a significant number of women do not avail themselves of the procedure due to lack of awareness. Objectives: This study was conducted to elicit information on the knowledge, attitude and practice (KAP) regarding screening (Pap test) and vaccination for carcinoma cervix among female doctors and nurses in a tertiary care hospital in Bangalore, and to assess barriers to acceptance of the Pap test. Materials and Methods: A cross-sectional, descriptive study was conducted with a semi-structured, self-administered questionnaire among female health professionals. The study subjects were interviewed for KAP regarding risk factors for cancer cervix, the Pap test and HPV vaccination for protection against carcinoma cervix. Results: A higher proportion of doctors, 45 (78.9%), had very good knowledge about risk factors for cancer cervix and the Pap test, compared to only 13 (13.3%) of the nurses (p=0.001). As many as 138 (89.6%) of the study subjects had a favorable attitude towards the Pap test and vaccination, but 114 (73.6%) had never had a Pap test, and the most common reason for not practicing, given by 35 (31%), was absence of disease symptoms. Conclusions: In spite of good knowledge of and attitudes towards cancer cervix and the Pap test, practice remained low among the study subjects, and the most common reason for not undergoing a Pap test was absence of disease symptoms. The independent predictors of ever having had a Pap test were found to be occupation and duration of married life above 9 years. Hence there is a strong need to improve uptake of the Pap test by health professionals by demystifying the barriers.

Elevated Serum Ferritin Levels in Patients with Hematologic Malignancies. Zhang, Xue-Zhong;Su, Ai-Ling;Hu, Ming-Qiu;Zhang, Xiu-Qun;Xu, Yan-Li. 6099
Purpose: To retrospectively analyze the variability and clinical significance of serum ferritin levels in Chinese patients with hematologic malignancies. Materials and Methods: Serum ferritin levels were measured by radioimmunoassay, using a kit produced by the Beijing Institute of Atomic Energy. Patients with hematologic malignancies who were treated in the Department of Hematology of Nanjing First Hospital and fulfilled the study criteria were recruited. Results: Of 473 patients with hematologic malignancies, 262 were diagnosed with acute leukemia, 131 with lymphoma and 80 with multiple myeloma. Serum ferritin levels of newly diagnosed and recurrent patients were significantly higher than those of patients entering the complete remission stage or of the control group (p<0.001). Conclusions: Serum ferritin levels in patients with hematologic malignancies at the early and recurrent stages are significantly increased, so that detection and surveillance of changes in serum ferritin could be helpful in assessing the condition and prognosis of this patient cohort.

Prognostic Significance of Beta-Catenin Expression in Patients with Esophageal Carcinoma: a Meta-analysis. Zeng, Rong;Duan, Lei;Kong, Yu-Ke;Wu, Xiao-Lu;Wang, Ya;Xin, Gang;Yang, Ke-Hu. 6103
Many studies have reported β-catenin involvement in the development of esophageal carcinoma (EC), but its prognostic significance for EC patients remains controversial. Therefore, we conducted this meta-analysis to explore the issue in detail. After searching PubMed, EMBASE, Web of Science, and the Chinese Biomedical Literature Database, we included a total of ten relevant studies. We pooled the overall survival (OS) data using RevMan 5.2 software. The results showed that aberrant expression of β-catenin was associated with a significant increase in mortality risk (hazard ratio 1.71, 95%CI 1.46-2.01; p<0.00001). Subgroup analyses further suggested that aberrant expression of β-catenin resulted in poor OS of EC patients regardless of histological type of EC, study location or criteria for aberrant expression of β-catenin, and the sensitivity analyses revealed that the result was robust. The meta-analysis revealed that aberrant expression of β-catenin could be a predictive factor of poor prognosis for EC patients.
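Several of the abstracts in this issue (the circulating tumor cell study, the rs11615 analysis, the β-catenin meta-analysis above, and the GST meta-analysis below) pool study-level hazard or odds ratios and report heterogeneity, typically with RevMan or similar software. As a rough sketch of the underlying arithmetic only, the following Python code performs fixed-effect inverse-variance pooling of log ratio measures and computes Cochran's Q and the I² statistic; the input values are invented for illustration and are not data from any study cited here.

```python
import math

def pool_fixed_effect(ratios_and_cis, z=1.96):
    """Inverse-variance fixed-effect pooling of ratio measures (HR or OR).

    Each input is (estimate, lower 95% CI, upper 95% CI); the standard error
    of the log ratio is recovered from the width of the confidence interval."""
    logs, weights = [], []
    for est, lo, hi in ratios_and_cis:
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        logs.append(math.log(est))
        weights.append(1.0 / se ** 2)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    # Cochran's Q and I^2 quantify between-study heterogeneity
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
    df = len(logs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled),
            q, i2)

# Invented study-level hazard ratios with 95% CIs (illustration only)
hr, lo, hi, q, i2 = pool_fixed_effect([(1.8, 1.2, 2.7), (1.5, 1.0, 2.3), (1.9, 1.1, 3.3)])
print(f"Pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), Q={q:.2f}, I2={i2:.1f}%")
```

A random-effects (DerSimonian-Laird) model would additionally estimate a between-study variance from Q before re-weighting the studies.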
Clinicopathological Features of Indonesian Breast Cancers with Different Molecular Subtypes. Widodo, Irianiwati;Dwianingsih, Ery Kus;Triningsih, Ediati;Utoro, Totok;Soeripto, Soeripto. 6109
Background: Breast cancer is a heterogeneous disease with molecular subtypes that have biological distinctness and different behavior. They are classified into luminal A, luminal B, Her-2 and triple negative/basal-like molecular subtypes. Most breast cancers reported in Indonesia are already of large size, high grade or late stage, but the clinicopathological features of the different molecular subtypes are still unclear. They need to be better clarified to determine proper treatment and prognosis. Aim: To elaborate the clinicopathological features of molecular subtypes of breast cancers in Indonesian women. Materials and Methods: A retrospective cross-sectional study of 84 paraffin-embedded breast cancer tissue samples from Dr. Sardjito General Hospital in Central Java, Indonesia was performed. Expression of ER, PR, Her-2 and Ki-67 was analyzed to classify molecular subtypes of breast cancer by immunohistochemistry. The relation of clinicopathological features of breast cancers with the luminal A, luminal B, Her-2 and triple negative/basal-like molecular subtypes was analyzed using Pearson's chi-square test. A p-value of <0.05 was considered statistically significant. Results: Case frequencies of the luminal A, luminal B, Her-2+ and triple negative/basal-like subtypes were 38.1%, 16.7%, 20.2% and 25%, respectively. Significant differences were found among breast cancer molecular subtypes with regard to age, histological grade, lymph node status and staging; however, no significant difference was seen with regard to tumor size. The luminal A subtype was commonly found in women over 50 years of age (p=0.028), low-grade cancer (p=0.09), negative lymph node metastasis (p=0.034) and stage III (p=0.017). Even though the difference was not significant, the luminal A subtype was mostly found in small-size breast cancers (p=0.129). The Her-2+ subtype was more commonly diagnosed with large size, positive lymph node metastasis and poor grade. Triple negative/basal-like cancer was mostly diagnosed among women under 50 years of age. Conclusions: This study suggests that immunohistochemistry-based subtyping is essential to classify breast carcinoma into subtypes that vary in clinicopathological features, implying different therapeutic options and prognosis for each subtype.

Comparisons between the KKU-Model and Conventional Rectal Tubes as Markers for Checking Rectal Doses during Intracavitary Brachytherapy of Cervical Cancer. Padoongcharoen, Prawat;Krusun, Srichai;Palusuk, Voranipit;Pesee, Montien;Supaadirek, Chunsri;Thamronganantasakul, Komsan. 6115
Background: To compare the KKU-model rectal tube (KKU-tube) and the conventional rectal tube (CRT) for checking rectal doses during high-dose-rate intracavitary brachytherapy (HDR-ICBT) of cervical cancer. Materials and Methods: Between February 2010 and January 2011, thirty-two patients with cervical cancer were enrolled and treated with external beam radiotherapy (EBRT) and intracavitary brachytherapy (ICBT). The KKU-tube and the CRT were applied intrarectally in the same patients at alternate sessions as references for calculation of rectal doses during ICBT. The gold-standard reference anatomical markers of the rectum, most proximal to the radiation sources, were the anterior rectal walls (ARW) adjacent to the uterine cervix, demonstrated by barium sulfate suspension enema.
The calculated rectal doses derived from the actual anterior rectal walls, the CRT and the anterior surfaces of the KKU-tubes were compared using the paired t-test. The pain caused by insertion of each type of rectal tube was assessed by visual analogue scale (VAS). Results: The mean dose of the CRT was lower than the mean dose of the ARW (Dmean0 - Dmean1) by 80.55 ± 47.33 cGy (p-value <0.05). The mean dose of the KKU-tube was lower than the mean dose of the ARW (Dmean0 - Dmean2) by 30.82 ± 24.20 cGy (p-value <0.05). The mean dose difference [(Dmean0 - Dmean1) - (Dmean0 - Dmean2)] was 49.72 ± 51.60 cGy, which was statistically significant, lying between 42.32 cGy and 57.13 cGy, with a t-value of 13.24 (p-value <0.05). The maximum rectal dose using the CRT was higher than with the KKU-tube by as much as 75.26 cGy, statistically significant with a t-score of 7.55 (p-value <0.05). The mean doses at the anterior rectal wall while using the CRTs and the KKU-tubes were not significantly different (p-value=0.09). The mean pain score during insertion of the CRT was significantly higher than for the KKU-tube, with a t-score of 6.15 (p-value <0.05). Conclusions: The KKU-model rectal tube was found to be an easily producible, applicable and reliable instrument as a reference for evaluating the rectal dose during ICBT of cervical cancer, without negative effects on the patients.

Prostate-Specific Antigen Levels in Relation to Background Factors: Are there Links to Endocrine Disrupting Chemicals and AhR Expression? Bidgoli, Sepideh Arbabi;Jabari, Nasim;Zavarhei, Mansour Djamali. 6121
Background: Prostate-specific antigen (PSA) is a potential biomarker for early detection of prostate cancer (PCa), but its level is known to be affected by many background factors, and the roles of ubiquitous toxicants have not been determined. Endocrine disrupting chemicals (EDCs) are ubiquitous reproductive toxicants used in consumer products, which promote tumor formation in some reproductive model systems by binding to AhR, but human data on AhR expression in prostate cancer as well as its association with PSA levels are not clear. This study aimed to evaluate the expression levels of AhR and their association with serological levels of PSA, and to detect possible effects of background factors and EDC exposure history on PSA levels in PCa cases. Materials and Methods: A cross-sectional study was conducted on the tissue levels of AhR and serum levels of PSA in 53 PCa cases from 2008-2011, and associations between each and background and lifestyle-related factors were determined. Results: Although AhR was overexpressed in PCa and correlated with the age of patients, it did not correlate with PSA levels. Of nutritional factors, increased intake of polyunsaturated fats and fish in the routine regimen of PCa cases increased PSA levels significantly. Conclusions: AhR overexpression in PCa points to roles of EDCs in PCa, but without any direct association with PSA levels. However, PSA levels are affected by exposure to possible toxicants in foods, which need to be assessed as possible risk factors of PCa in future studies.

Allogeneic Hemopoietic Stem Cell Transplants for the Treatment of B Cell Acute Lymphocytic Leukemia. Dong, Wei-Min;Cao, Xiang-Shan;Wang, Biao;Lin, Yun;Hua, Xiao-Ying;Qiu, Guo-Qiang;Gu, Wei-Ying;Xie, Xiao-Bao. 6127
Objective: To explore the feasibility of allogeneic hemopoietic stem cell transplants in treating patients with B cell acute lymphocytic leukemia.
Methods: Between September 2006 and February 2011, fifteen patients with B cell acute lymphocytic leukemia (ALL) were treated by allogeneic hemopoietic stem cell transplantation (HSCT). The stem cell source was peripheral blood. Six patients were conditioned with busulfan (BU) and cyclophosphamide (CY), and nine patients were conditioned with TBI and cyclophosphamide (CY). The graft-versus-host disease (GVHD) prophylaxis regimen consisted of cyclosporine A (CSA), methotrexate (MTX) and mycophenolate mofetil (MMF). Results: Patients received a median of 7.98 × 10^8/kg (5.36-12.30 × 10^8/kg) mononuclear cells (MNC). The median time to ANC > 0.5 × 10^9/L was day 12 (10-15), and to PLT > 20.0 × 10^9/L was day 13 (11-16). Extensive acute GVHD occurred in 6 (40.0%) patients, and extensive chronic GVHD was recorded in 6 (40.0%) patients. Nine patients were alive after 2.5-65 months of follow-up. Conclusion: Allogeneic stem cell transplantation could be effective in treating patients with B cell acute lymphocytic leukemia.

Lack of Any Association of GST Genetic Polymorphisms with Susceptibility to Ovarian Cancer - a Meta-analysis. Han, Li-Yuan;Liu, Kui;Lin, Xia-Lu;Zou, Bao-Bo;Zhao, Jin-Shun. 6131
Objective: Epidemiological studies have reported conflicting results on the associations between glutathione S-transferase Mu-1 (GSTM1), glutathione S-transferase theta-1 (GSTT1) and glutathione S-transferase pi-1 (GSTP1) and ovarian cancer (OC) susceptibility. In this study, an updated meta-analysis was applied to determine whether deletion of GSTM1, GSTT1 and GSTP1 has an influence on OC susceptibility. Methods: A published-literature search was performed through the PubMed, Embase, Cochrane Library, and Science Citation Index Expanded databases for articles published in English. Pooled odds ratios (ORs) and 95% confidence intervals (95%CIs) were calculated using random or fixed effects models. Heterogeneity between studies was assessed using Cochran's Q test and the I² statistic. Sub-group analysis was conducted to explore the sources of heterogeneity. Sensitivity analysis was employed to evaluate the influence of each study on the overall estimate. Results: In total, 10 published studies were included in the final analysis. The combined analysis revealed that there was no significant association between the GSTM1 null genotype and OC risk (OR=1.01, 95%CI: 0.91-1.12). Additionally, there was no significant association between GSTT1 genetic polymorphisms and OC risk (OR=0.98, 95% CI: 0.85-1.13). Similarly, no significant associations were found concerning the GSTP1 rs1695 locus and OC risk. Meanwhile, subgroup analysis did not show a significant increase in eligible studies with low heterogeneity. However, sensitivity analysis, publication bias assessment and cumulative analysis demonstrated the reliability and stability of the current meta-analysis. Conclusions: These findings suggest that GST genetic polymorphisms may not contribute to OC susceptibility. Large epidemiological studies combining GSTM1 null, GSTT1 null and GSTP1 Ile105Val polymorphisms and more specific histological subtypes of OC are needed to confirm these findings.

Serum Adiponectin but not Leptin at Diagnosis as a Predictor of Breast Cancer Survival. Lee, Sang-Ah;Sung, Hyuna;Han, Wonshik;Noh, Dong-Young;Ahn, Sei-Hyun;Kang, Daehee. 6137
Limited numbers of epidemiological studies have examined the relationship between adipokines and breast cancer survival.
Preoperative serum levels of obesity-related adipokines (leptin and adiponectin) were here measured in 370 breast cancer patients, recruited from two hospitals in Korea. We examined the association between those adipokines and disease-free survival (DFS). The TNM stage, ER status and histological grade were also assessed in relation to breast cancer survival. Elevated adiponectin levels were associated with reduced DFS of breast cancer ($P_{trend}=0.03$) among patients with normal body weight, predominantly in postmenopausal women. There was no association of leptin with breast cancer survival. In conclusion, our study suggests that high levels of adiponectin at diagnosis are associated with breast cancer survival among women with normal body weight. Expression of Toll-like Receptor 9 Increases with Progression of Cervical Neoplasia in Tunisian Women - A Comparative Analysis of Condyloma, Cervical Intraepithelial Neoplasia and Invasive Carcinoma Fehri, Emna;Ennaifer, Emna;Ardhaoui, Monia;Ouerhani, Kaouther;Laassili, Thalja;Rhouma, Rahima Bel Haj;Guizani, Ikram;Boubaker, Samir 6145 Toll-like receptors (TLRs) are expressed in immune and tumor cells and recognize pathogen-associated molecular patterns. Cervical cancer (CC) is directly linked to a persistent infection with high risk human papillomaviruses (HR-HPVs) and could be associated with alteration of TLR expression. TLR9 plays a key role in the recognition of DNA viruses and better understanding of this signaling pathway in CC could lead to the development of novel immunotherapeutic approaches. The present study was undertaken to determine the level of TLR9 expression in cervical neoplasias from Tunisian women with 53 formalin-fixed and paraffin-embedded specimens, including 22 samples of invasive cervical carcinoma (ICC), 18 of cervical intraepithelial neoplasia (CIN), 7 of condyloma and 6 normal cervical tissues as control cases. Quantification of TLR9 expression was based on scoring four degrees of extent and intensity of immunostaining in squamous epithelial cells. TLR9 expression gradually increased from CIN1 (80% weak intensity) to CIN2 (83.3% moderate), CIN3 (57.1% strong) and ICC (100% very strong). It was absent in normal cervical tissue and weak in 71.4% of condylomas. The mean scores of TLR9 expression were compared using the Kruskal-Wallis test, and differences were statistically significant between normal tissue and condyloma as well as between condyloma, CINs and ICC. These results suggest that TLR9 may play a role in progression of cervical neoplasia in Tunisian patients and could represent a useful biomarker for malignant transformation of cervical squamous cells. Prevalence and Genotype Distribution of HPV among Women Attending a Cervical Cancer Screening Mobile Unit in Lampang, Thailand Paengchit, Kannika;Kietpeerakool, Chumnan;Lalitwongsa, Somkiet 6151 A growing body of literature provides evidence that identifying subtypes of high-risk human papillomavirus (HR-HPV) has an impact on various steps of cervical cancer prevention. Thus, it is mandatory to determine the background prevalence and distribution of HPV subtypes for designing and implementing area-specific management. The present study was conducted to evaluate the prevalence and distribution of HPV subtypes among women aged 30-70 years living in Lampang, an area with a high incidence of cervical cancer, through use of a mobile screening unit. Of 2,000 women recruited in this study, 108 (5.40%, 95%CI: 4.45-6.48) were found to have HR-HPV infection.
Risk was significantly correlated with age and number of partners. Singly or in combination, the most common genotype was HPV 52 (17.6%), followed by HPV 16 (14.81%), HPV 58 (13.89%), HPV 33 (11.11%), HPV 51 (11.11%), and HPV 56 (9.26%). HPV 18 was found in only 5.6% of cases. Together, HPV 16/18 were noted in approximately 20.4% of cases. Eighteen(16.67%) women were positive with multiple subtypes of HR-HPV. Co-infection most frequently involved HPV 16 or HPV 58. These findings have obvious implications for vaccine policy. Diffusion-Weighted Imaging for the Left Hepatic Lobe has Higher Diagnostic Accuracy for Malignant Focal Liver Lesions Han, Xue;Dong, Yin;Xiu, Jian-Jun;Zhang, Jie;Huang, Zhao-Qin;Cai, Shi-Feng;Yuan, Xian-Shun;Liu, Qing-Wei 6155 Background: This study was conducted to investigate whether apparent diffusion coefficient (ADC) measurements by dividing the liver into left and right hepatic lobes may be utilized to improve the accuracy of differential diagnosis of benign and malignant focal liver lesions. Materials and Methods: A total of 269 consecutive patients with 429 focal liver lesions were examined by 3-T magnetic resonance imaging that included diffusion-weighted imaging. For 58 patients with focal liver lesions of the same etiology in left and right hepatic lobes, ADCs of normal liver parenchyma and focal liver lesions were calculated and compared using the paired t-test. For all 269 patients, ADC cutoffs for focal liver lesions and diagnostic accuracy in the left hepatic lobe, right hepatic lobe and whole liver were evaluated by receiver operating characteristic curve analysis. Results: For the group of 58 patients, mean ADCs of normal liver parenchyma and focal liver lesions in the left hepatic lobe were significantly higher than those in the right hepatic lobe. For differentiating malignant lesions from benign lesions in all patients, the sensitivity and specificity were 92.6% and 92.0% in the left hepatic lobe, 94.4% and 94.4% in the right hepatic lobe, and 90.4% and 94.7% in the whole liver, respectively. The area under the curve of the right hepatic lobe, but not the left hepatic lobe, was higher than that of the whole liver. Conclusions: ADCs of normal liver parenchyma and focal liver lesions in the left hepatic lobe were significantly higher than those in the right hepatic lobe. Optimal ADC cutoff for focal liver lesions in the right hepatic lobe, but not in the left hepatic lobe, had higher diagnostic accuracy compared with that in the whole liver. Radiation Induces Phosphorylation of STAT3 in a Dose- and Time-dependent Manner Gao, Ling;Li, Feng-Sheng;Chen, Xiao-Hua;Liu, Qiao-Wei;Feng, Jiang-Bin;Liu, Qing-Jie;Su, Xu 6161 Background: We have reported the radiation could activate STAT3, which subsequently promotes the invasion of A549 cells. We here explored the dose- and time-response of STAT3 to radiation and the effect of radiation on upstream signaling molecules. Materials and Methods: A549 cells were irradiated with different doses of ${\gamma}$-rays. The expression of and nucleus translocation of p-STAT3 in A549 cells were detected by immunoblotting and immunofluorescence, respectively. The level of phosphorylated EGFR was also assessed by immunoblotting, and IL-6 expression was detected by real time PCR and ELISA. Results: Radiation promoted the phosphorylation of STAT3 at Y705 in a dose- and time-dependent manner and nuclear translocation. The level of phosphorylated EGFR in A549 cells increased after radiation. 
In addition, the mRNA and protein levels of IL-6 in A549 cells were up-regulated by radiation. Conclusions: STAT3 is activated by radiation in a dose- and time-dependent manner, probably due to radiation-induced activation of EGFR or secretion of IL-6 in A549 cells. Intra-Peritoneal Cisplatin Combined with Intravenous Paclitaxel in Optimally Debulked Stage 3 Ovarian Cancer Patients: An Izmir Oncology Group Study Unal, Olcun Umit;Yilmaz, Ahmet Ugur;Yavuzsen, Tugba;Akman, Tulay;Ellidokuz, Hulya 6165 Background: The advantage of intra-peritoneal (IP) chemotherapy (CT) in the initial management of ovarian cancer after cytoreductive surgery is well known. The feasibility and toxicity of a treatment regimen with an IP + intravenous CT (IPIVCT) for optimally debulked stage III ovarian cancer were here evaluated retrospectively. Materials and Methods: A total of 30 patients were treated in our institution between October 2006 and February 2011. Patients received IV paclitaxel $175mg/m^2$ over 3 hours followed by IP cisplatin $75mg/m^2$ on day 1; they also received IP paclitaxel $60mg/m^2$ on day 8. They were also scheduled to receive 6 courses of CT every 21 days. Results: The median age of the patients was 55 years (35-77), and the majority had papillary serous ovarian cancer (63.3%). The patients completed a total of 146 cycles of IPIVCT. Twenty-eight were able to receive at least three cycles of IPIVCT and 18 (60%) completed the scheduled 6 cycles. Two patients discontinued the IPIVCT because of toxicity of chemotherapy agents and 6 had to stop treatment due to intolerable abdominal pain during IP drug administration, obstruction and impaired access. Grade 3/4 toxicities included neutropenia (6 patients; 20%), anemia (2 patients; 6.7%) and nausea-vomiting (2 patients; 6.7%). Doses were delayed in 12 cycles (8%) for neutropenia (n=6), thrombocytopenia (n=3) and elevated creatinine (n=3). Drug doses were not reduced. The median duration of progression-free survival (PFS) was 47.7 months (95%CI, 38.98-56.44) and overall survival (OS) was 51.7 months (95%CI, 44.13-59.29). Two- and five-year overall survival rates were 75.6% and 64.8%, respectively. Conclusions: IPIVCT is feasible and well-tolerated in this setting. Its clinically proven advantages should be taken into consideration and more efforts should be made to administer IPIVCT to suitable patients. In Whom Do Cancer Survivors Trust Online and Offline? Shahrokni, Armin;Mahmoudzadeh, Sanam;Lu, Bryan Tran 6171 Background: In order to design effective educational interventions for cancer survivors, it is necessary to identify the most-trusted sources for health-related information and the amount of attention paid to each source. Objective: The objective of our study was to explore the sources of health information used by cancer survivors according to their access to the internet and levels of trust in and attention to those information sources. Materials and Methods: We analyzed sources of health information among cancer survivors using selected questions adapted from the 2012 Health Information National Trends Survey (HINTS). Results: Of 357 participants, 239 (67%) had internet access (online survivors) while 118 (33%) did not (offline survivors). Online survivors were younger (p<0.001), more educated (p<0.001), more often non-Hispanic whites (p<0.001), had higher income (p<0.001), had more populated households (p<0.001) and better quality of life (p<0.001) compared to offline survivors.
Prevalence of some disabilities was higher among offline survivors including serious difficulties with walking or climbing stairs (p<0.001), being blind or having severe visual impairment (p=0.001), problems with making decisions (p<0.001), doing errands alone (p=0.001) and dressing or bathing (p=0.001). After adjusting for socio-demographic status, cancer survivors who were non-Hispanic whites (OR= 3.49, p<0.01), younger (OR=4.10, p<0.01), more educated (OR= 2.29, p=0.02), with greater income (OR=4.43, p<0.01), and with very good to excellent quality of life (OR=2.60, p=0.01) had higher probability of having access to the internet, while those living in Midwest were less likely to have access (OR= 0.177, p<0.01). Doctors (95.5%) were the most and radio (27.8%) was the least trusted health related information source among all cancer survivors. Online survivors trusted internet much more compared to those without access (p<0.001) while offline cancer survivors trusted health-related information from religious groups and radio more than those with internet access (p<0.001 and p=0.008). Cancer survivors paid the most attention to health information on newsletters (63.8%) and internet (60.2%) and the least to radio (19.6%). More online survivors paid attention to internet than those without access (68.5% vs 39.1%, p<0.001) while more offline survivors paid attention to radio compared to those with access (26.8% vs 16.5%, p=0.03). Conclusions: Our findings emphasize the importance of improving the access and empowering the different sources of information. Considering that the internet and web technologies are continuing to develop, more attention should be paid to improve access to the internet, provide guidance and maintain the quality of accredited health information websites. Those without internet access should continue to receive health-related information via their most trusted sources. HPV Vaccination for Cervical Cancer Prevention is not Cost-Effective in Japan Isshiki, Takahiro 6177 Background: Our study objectives were to evaluate the medical economics of cervical cancer prevention and thereby contribute to cancer care policy decisions in Japan. Methods: Model creation: we created presence-absence models for prevention by designating human papillomavirus (HPV) vaccination for primary prevention of cervical cancer. Cost classification and cost estimates: we divided the costs of cancer care into seven categories (prevention, mass-screening, curative treatment, palliative care, indirect, non-medical, and psychosocial cost) and estimated costs for each model. Cost-benefit analyses: we performed cost-benefit analyses for Japan as a whole. Results: HPV vaccination was estimated to cost $291.5 million, cervical cancer screening $76.0 million and curative treatment $12.0 million. The loss due to death was $251.0 million and the net benefit was -$128.5 million (negative). Conclusion: Cervical cancer prevention was not found to be cost-effective in Japan. While few cost-benefit analyses have been reported in the field of cancer care, these would be essential for Japanese policy determination. 
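As a point of arithmetic, the net benefit quoted in the Japanese cost-benefit abstract above appears to follow directly from the figures given there, assuming the loss due to death is the only benefit set against the three cost items: $251.0 - (291.5 + 76.0 + 12.0) = -128.5$ million US dollars. The negative sign is what underlies the conclusion that vaccination-based cervical cancer prevention was not cost-effective under those estimates.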
Lack of Associations of the COMT Val158Met Polymorphism with Risk of Endometrial and Ovarian Cancer: a Pooled Analysis of Case-control Studies Liu, Jin-Xin;Luo, Rong-Cheng;Li, Rong;Li, Xia;Guo, Yu-Wu;Ding, Da-Peng;Chen, Yi-Zhi 6181 This meta-analysis was conducted to examine whether the genotype status of Val158Met polymorphism in catechol-O-methyltransferase (COMT) is associated with endometrial and ovarian cancer risk. Eligible studies were identified by searching several databases for relevant reports published before January 1, 2014. Pooled odds ratios (ORs) were appropriately derived from fixed-effects or random-effects models. In total, 15 studies (1,293 cases and 2,647 controls for ovarian cancer and 2,174 cases and 2,699 controls for endometrial cancer) were included in the present meta-analysis. When all studies were pooled into the meta-analysis, there was no evidence for significant association between COMT Val158Met polymorphism and ovarian cancer risk (Val/Met versus Val/Val: OR=0.91, 95% CI=0.76-1.08; Met/Met versus Val/Val: OR=0.90, 95% CI=0.73-1.10; dominant model: OR=0.90, 95% CI=0.77-1.06; recessive model: OR=0.95, 95% CI=0.80-1.13). Similarly, no associations were found in all comparisons for endometrial cancer (Val/Met versus Val/Val: OR 0.97, 95% CI=0.77-1.21; Met/Met versus Val/Val: OR=1.02, 95% CI=0.73-1.42; dominant model: OR=0.98, 95% CI=0.77-1.25; recessive model: OR=1.02, 95% CI=0.87-1.20). In the subgroup analyses by source of control and ethnicity, no significant associations were found in any subgroup of population. This meta-analysis strongly suggests that COMT Val158Met polymorphism is not associated with increased endometrial and ovarian cancer risk. Knowledge, Perceptions and Acceptability of HPV Vaccination among Medical Students in Chongqing, China Fu, Chun-Jing;Pan, Xiong-Fei;Zhao, Zhi-Mei;Saheb-Kashaf, Michael;Chen, Feng;Wen, Ying;Yang, Chun-Xia;Zhong, Xiao-Ni 6187 Objectives: To evaluate medical students' knowledge of HPV and HPV related diseases and assess their attitudes towards HPV vaccination. Methods: A total of 605 medical undergraduates from Chongqing Medical University in China were surveyed using a structured and pretested questionnaire on HPV related knowledge. Results: Some 68.9% of the medical students were females, and mean age was 21.6 (${\pm}1.00$) years. Only 10.6% correctly answered more than 11 out of 14 questions on HPV related knowledge, 71.8% being willing to receive/advise on HPV vaccination. Female students (OR: 2.69; 95% CI: 1.53-4.72) and students desiring more HPV education (OR: 4.24; 95% CI: 1.67-10.8) were more willing to accept HPV vaccination. HPV vaccination acceptability was observed to show a positive association with HPV related knowledge. Conclusions: Our survey found low levels of HPV related knowledge and HPV vaccination acceptability among participating medical students. HPV education should be systematically incorporated into medical education to increase awareness of HPV vaccination. Pattern of Tobacco Use and its Correlates among Older Adults in India Mini, G.K.;Sarma, P.S.;Thankappan, K.R. 6195 Purpose: We examined tobacco use pattern and its correlates among older adults. Materials and Methods: We used data of 9,852 older adults (${\geq}60$ years) (men 47% mean age 68 years) collected by the United Nations Population Fund on Ageing from seven Indian states. Logistic regression analysis was used to assess the correlates of tobacco use. 
Results: Current use of any form of tobacco was reported by 27.8% (men 37.9%, women 18.8%); 9.2% reported only smoking tobacco, 16.9% smokeless tobacco only and 1.7% used both forms. Alcohol users (OR:5.20, 95% CI:4.06-6.66), men (OR:2.92, CI :2.71-3.47), those reporting lower income (OR:2.74, CI:2.16-3.46), rural residents (OR 1.34, CI 1.17-1.54) and lower castes (OR:1.29, CI:1.13-1.47) were more likely to use any form of tobacco compared to their counterparts. Conclusions: Tobacco cessation interventions are warranted in this population focusing on alcohol users, men, those from lower income, rural residents and those belonging to a lower caste. The MMP-2 -735 C Allele is a Risk Factor for Susceptibility to Breast Cancer Yari, Kheirollah;Rahimi, Ziba;Moradi, Mohamad Taher;Rahimi, Zohreh 6199 Background: The expression of MMP genes has been demonstrated to be associated with tumor invasion, metastasis and survival rate for a variety of cancers. The functional promoter polymorphism MMP-2 C-735T is associated with decreased expression of the MMP-2 gene. The aim of present study was to detect any association between MMP-2 C-735T and susceptibility to breast cancer. Materials and Methods: The MMP-2 C-735T polymorphism was studied in 233 women (98 with breast cancer and 135 healthy controls). All studied women were from Kermanshah and Ilam provinces of Western Iran. The MMP-2 C-735T polymorphism was detected using a polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. Results: The frequencies of MMP-2 CC, CT and TT genotypes in healthy individuals were 59.3, 38.5 and 2.2%, respectively. However, in breast cancer patients, only CC (71.4%) and CT (28.6%) genotypes were observed (p=0.077). In patients the frequency of the MMP-2 C allele was significantly higher (85.7%) compared to that in controls (78.5 %, p=0.048). The presence of C allele of MMP-2 increased the risk of breast cancer by 1.64-fold [OR=1.64 (95%CI 1.01-2.7, p=0.049)]. The frequency of MMP-2 C allele was also higher in patients ${\leq}40$ years (88.9%) than those aged ${\geq}41$ years (67.5%, p=0.07). In addition, the frequency of MMP-2 C allele tended to be higher in patients with a family history of cancer in first-degree relatives (76.6%) compared to that without a family history of cancer (67.3%, p=0.31). Conclusions: Our findings indicate that the C allele of MMP-2 C-735T polymorphism is associated with increased risk of breast cancer. Also, the MMP-2 C allele might increase the risk of young onset breast cancer in our population. Iranian Cancer Patient Perceptions of Prognosis and the Relationship to Hope Seyedrasooli, Alehe;Rahmani, Azad;Howard, Fuchsia;Zamanzadeh, Vahid;Mohammadpoorasl, Asghar;Aliashrafi, Raha;Pakpour, Vahid 6205 Background: The aim of this study was to investigate Iranian cancer patient perceptions of their prognosis, factors that influence perceptions of prognosis and the effect this has on patient level of hope. Materials and Methods: Iranian cancer patients (n=200) completed self-report measures of their perceptions of their prognosis and level of hope, in order to assess the relationship between the two and identify factors predictive of perceptions by multiple linear regression analysis. Results: Cancer patients perceived of their prognosis positively (mean 11.4 out of 15), believed their disease to be curable, and reported high levels of hope (mean 40.4 out of 48.0). 
Multiple linear regression analyses demonstrated that participants who were younger, perceived they had greater family support, and had higher levels of hope reported more positive perceptions of their cancer prognosis. Conclusions: Positive perceptions of prognosis and its positive correlation with hope in Iranian cancer patients highlights the importance of cultural issues in the disclosure of cancer related information. Associations between AT-rich Interactive Domain 5B gene Polymorphisms and Risk of Childhood Acute Lymphoblastic Leukemia: a Meta-analysis Zeng, Hui;Wang, Xue-Bin;Cui, Ning-Hua;Nam, Seungyoon;Zeng, Tuo;Long, Xinghua 6211 Previous genome-wide association studies (GWAS) have implicated several single nucleotide polymorphisms (SNPs) in the AT-rich interactive domain 5B (ARID5B) gene with childhood acute lymphoblastic leukemia (ALL). However, replicated studies reported some inconsistent results in different populations. Using meta-analysis, we here aimed to clarify the nature of the genetic risks contributed by the two polymorphisms (rs10994982, rs7089424) for developing childhood ALL. Through searches of PubMed, EMBASE, and manually searching relevant references, a total of 14 articles with 16 independent studies were included. Odds ratios (ORs) with 95% confidence intervals (95%CI) were calculated to assess the associations. Both SNPs rs10994982 and rs7089424 showed significant associations with childhood ALL risk in all genetic models after Bonferroni correction. Furthermore, subtype analyses of B-lineage ALL provided strong evidence that SNP rs10994982 is highly associated with the risk of developing B-hyperdiploid ALL. These results indicate that SNPs rs10994982 and rs7089424 are indeed significantly associated with increased risk of childhood ALL. Anti-metastasis Activity of Black Rice Anthocyanins Against Breast Cancer: Analyses Using an ErbB2 Positive Breast Cancer Cell Line and Tumoral Xenograft Model Luo, Li-Ping;Han, Bin;Yu, Xiao-Ping;Chen, Xiang-Yan;Zhou, Jie;Chen, Wei;Zhu, Yan-Feng;Peng, Xiao-Li;Zou, Qiang;Li, Sui-Yan 6219 Background: Increasing evidence from animal, epidemiological and clinical investigations suggest that dietary anthocyanins have potential to prevent chronic diseases, including cancers. It is also noteworthy that human epidermal growth factor receptor 2 (ErbB2) protein overexpression or ErbB2 gene amplification has been included as an indicator for metastasis and higher risk of recurrence for breast cancer. Materials and Methods: The present experiments investigated the anti-metastasis effects of black rice anthocyanins (BRACs) on ErbB2 positive breast cancer cells in vivo and in vitro. Results: Oral administration of BRACs (150 mg/kg/day) reduced transplanted tumor growth, inhibited pulmonary metastasis, and decreased lung tumor nodules in BALB/c nude mice bearing ErbB2 positive breast cancer cell MDA-MB-453 xenografts. The capacity for migration, adhesion, motility and invasion was also inhibited by BRACs in MDA-MB-453 cells in a concentration dependent manner, accompanied by decreased activity of a transfer promoting factor, urokinase-type plasminogen activator (u-PA). Conclusions: Together, our results indicated that BRACs possess anti-metastasis potential against ErbB2 positive human breast cancer cells in vivo and in vitro through inhibition of metastasis promoting molecules. 
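Several of the pooled analyses listed above (the GST, COMT Val158Met and ARID5B reports) combine per-study odds ratios under fixed- or random-effects models. Purely as an illustrative sketch of the fixed-effect (inverse-variance) pooling step, using hypothetical odds ratios and confidence limits rather than data from any study above, the calculation could be written in Python as:

import math

# Hypothetical per-study odds ratios with 95% confidence limits (illustrative only).
studies = [(1.10, 0.85, 1.42), (0.95, 0.70, 1.29), (1.05, 0.88, 1.25)]

weights, weighted_logs = [], []
for odds_ratio, ci_low, ci_high in studies:
    log_or = math.log(odds_ratio)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)  # SE of log(OR) recovered from the 95% CI
    w = 1.0 / se ** 2                                         # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print("Pooled OR = %.2f (95%% CI %.2f-%.2f)" % (
    math.exp(pooled_log),
    math.exp(pooled_log - 1.96 * pooled_se),
    math.exp(pooled_log + 1.96 * pooled_se)))

A random-effects pooling would additionally add a between-study variance component (for example, the DerSimonian-Laird tau-squared) to each study's variance before weighting, which is what the heterogeneity checks mentioned in those abstracts (Cochrane Q, $I^2$) help decide.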
Novel DOX-MTX Nanoparticles Improve Oral SCC Clinical Outcome by Down Regulation of Lymph Dissemination Factor VEGF-C Expression in vivo: Oral and IV Modalities Abbasi, Mehran Mesgari;Monfaredan, Amir;Hamishehkar, Hamed;Seidi, Khaled;Jahanban-Esfahlan, Rana 6227 Background: Oral squamous cell carcinoma (OSCC) remains one of the most difficult malignancies to control because of its high propensity for local invasion and cervical lymph node dissemination. The aim of the present study was to evaluate the efficacy of novel pH and temperature sensitive doxorubicin-methotrexate-loaded nanoparticles (DOX-MTX NP) in terms of their potential to change the VEGF-C expression profile in a rat OSCC model. Materials and Methods: 120 male rats were divided into 8 groups of 15 animals administered 4-nitroquinoline-1-oxide to induce OSCCs. Newly formulated doxorubicin-methotrexate-loaded nanoparticles (DOX-MTX NP) and free doxorubicin were administered IV and orally. Results: Both oral and IV forms of DOX-MTX-nanoparticle complexes caused a significant decrease in the mRNA level of VEGF-C compared to untreated cancerous rats (p<0.05). Surprisingly, VEGF-C mRNA was not affected by free DOX in either the IV or oral modality (p>0.05). Furthermore, in the DOX-MTX NP treated group, fewer tumors were characterized by advanced stage, and the VEGF-C mRNA level paralleled the improved clinical outcome (p<0.05). In addition, compared to untreated healthy rats, VEGF-C expression was not affected in healthy groups that were treated with IV and oral dosages of the nanodrug (p>0.05). Conclusions: VEGF-C is one of the main prognosticators for lymph node metastasis in OSCC. Down-regulation of this lymph-angiogenesis promoting factor is a new feature acquired in the group treated with dual action DOX-MTX-NPs. Besides the synergistic apoptotic properties of concomitant use of DOX and MTX on OSCC, DOX-MTX NPs possessed anti-angiogenesis properties which were related to the improved clinical outcome in treated rats. Taken together, we conclude that our multifunctional doxorubicin-methotrexate complex exerts specific potent apoptotic and anti-angiogenesis properties that could ameliorate the clinical outcome, presumably via down-regulating expression of the dissemination factor VEGF-C in a rat OSCC model. Roles for Paraoxonase but not Ceruloplasmin in Peritoneal Washing Fluid in Differential Diagnosis of Gynecologic Pathologies Yildirim, Mustafa;Demirpence, Ozlem;Kaya, Vildan;Suren, Dinc;Karaca, Mehmet;Evliyaoglu, Osman;Yilmaz, Necat;Gunduz, Seyda 6233 Background: Intraperitoneal spread of gynecologic cancers is a major cause of mortality and morbidity and often presents with malignant ascites. Microscopic tumor spread can be demonstrated by peritoneal wash cytology and help assess the prognosis of the disease. In our study, the roles of paraoxonase and ceruloplasmin, measured in the peritoneal washing fluid of patients operated on for gynecologic pathologies, in differential diagnosis were investigated. Materials and Methods: Patients operated on for malignant or benign gynecologic pathologies in Antalya Education and Research Hospital Gynecology Clinic between 2010-2012 were included in the study. Samples were obtained during surgery. Results: A statistically significant difference was detected between patients with benign and malignant diseases with regard to PON1 levels measured in peritoneal washing fluid (p:0.044), the average values being $64.2{\pm}30.8$ (Range 10.8-187.2) and $41.4{\pm}21.4$ (Range 10.4-95.5), respectively.
No significant variation was evident for ceruloplasmin. Conclusions: Paraoxonase levels measured in peritoneal washing fluid may contribute to the differentiation of malignant and benign diseases in gynecologic pathologies. Comparison of Neutrophil/Lymphocyte and Platelet/Lymphocyte Ratios for Predicting Malignant Potential of Suspicious Ovarian Masses in Gynecology Practice Topcu, Hasan Onur;Guzel, Ali Irfan;Ozer, Irfan;Kokanali, Mahmut Kuntay;Gokturk, Umut;Muftuoglu, Kamil Hakan;Doganay, Melike 6239 Purpose: To compare the diagnostic accuracy of the neutrophil/lymphocyte ratio (NLR) with the platelet/lymphocyte ratio (PLR) in predicting malignancy of pelvic masses which are pre-operatively suspicious for malignancy. Materials and Methods: In this retrospective study we evaluated the clinical features of patients with ovarian masses which had pre-operatively been considered suspicious for malignancy. The patients whose intraoperative frozen sections were malignant were classified as the study group, while those who had benign masses were the control group. Data recorded were age of the patient, diameter of the mass, pre-operative serum Ca 125 levels, platelet count, neutrophil/lymphocyte ratio and platelet/lymphocyte ratio. Results: There was a statistically significant difference between the groups in terms of age, diameter of the mass, serum Ca 125 levels, platelet number and platelet/lymphocyte ratio. Mean neutrophil/lymphocyte ratios showed no difference between the groups. ROC curve analysis showed that age, serum Ca 125 levels, platelet number and PLR were discriminative markers in predicting malignancy in adnexal masses. Conclusions: According to the current study, serum Ca 125 levels, pre-operative platelet number and PLR may be good prognostic factors, while NLR is an ineffective marker in predicting the malignant characteristics of a pelvic mass. miRNA-1297 Induces Cell Proliferation by Targeting Phosphatase and Tensin Homolog in Testicular Germ Cell Tumor Cells Yang, Nian-Qin;Zhang, Jian;Tang, Qun-Ye;Guo, Jian-Ming;Wang, Guo-Min 6243 To investigate the role of miR-1297 and the tumor suppressor gene PTEN in cell proliferation of testicular germ cell tumors (TGCT). MTT assays were used to test the effect of miR-1297 on proliferation of the NCCIT testicular germ cell tumor cell line. In NCCIT cells, the expression of PTEN was further assessed by Western blotting. In order to confirm the target association between miR-1297 and the 3'-UTR of PTEN, a luciferase reporter activity assay was employed. Moreover, roles of PTEN in proliferation of NCCIT cells were evaluated by transfection of PTEN siRNA. Proliferation of NCCIT cells was promoted by miR-1297 in a concentration-dependent manner. In addition, miR-1297 could bind to the 3'-UTR of PTEN based on the luciferase reporter activity assay, and reduced expression of PTEN at the protein level was found. Proliferation of NCCIT cells was significantly enhanced after knockdown of PTEN by siRNA. miR-1297 as a potential oncogene could induce cell proliferation by targeting PTEN in NCCIT cells. Emodin Inhibits Breast Cancer Cell Proliferation through the ERα-MAPK/Akt-Cyclin D1/Bcl-2 Signaling Pathway Sui, Jia-Qi;Xie, Kun-Peng;Zou, Wei;Xie, Ming-Jie 6247 Background: The aim of the present study was to investigate the involvement of emodin in the growth of human breast cancer MCF-7 and MDA-MB-231 cells and the estrogen (E2) signal pathway in vitro.
Materials and Methods: MTT assays were used to detect the effects of emodin on E2-induced proliferation of MCF-7 and MDA-MB-231 cells. Flow cytometry (FCM) was applied to determine the effect of emodin on E2-induced apoptosis of MCF-7 cells. Western blotting allowed detection of the effects of emodin on the expression of estrogen receptor ${\alpha}$, cyclin D1 and B-cell lymphoma-2 (Bcl-2), mitogen-activated protein kinases (MAPK) and phosphatidylinositol 3-kinases (PI3K). Luciferase assays were employed to assess the transcriptional activity of $ER{\alpha}$. Results: Emodin could inhibit E2-induced MCF-7 cell proliferation and anti-apoptosis effects, and arrest the cell cycle in G0/G1 phase, further blocking the effect of E2 on the expression and transcriptional activity of $ER{\alpha}$. Moreover, emodin influenced the $ER{\alpha}$ genomic pathway via downregulation of cyclin D1 and Bcl-2 protein expression, and influenced the non-genomic pathway via decreased PI3K/Akt protein expression. Conclusions: These findings indicate that emodin exerts inhibitory effects on MCF-7 cell proliferation via inhibiting both non-genomic and genomic pathways. Survival of Colorectal Cancer Patients in the Presence of Competing-Risk Baghestani, Ahmad Reza;Daneshvar, Tahoura;Pourhoseingholi, Mohamad Amin;Asadzade, Hamid 6253 Background: Colorectal cancer (CRC) is considered to be a main cause of malignancy-related death in the world, being commonly diagnosed in both men and women. It is the third leading cause of cancer-dependent death in the world and there are one million new cases diagnosed per year. In Iran, the incidence of colorectal cancer has increased during the last 25 years and it is the fifth cause of cancer in men and the third in women. Materials and Methods: In this article we analyzed the survival of 475 colorectal cancer patients of Taleghani hospital in Tehran with a semi-parametric competing-risks model. Results: 55% of cases were male, and at the time of diagnosis most of the patients were between 48 and 67 years old. The probability of a patient dying from colorectal cancer with survival of more than 25 years was about 0.4. Body mass index, height, tumour site and gender had no influence. Conclusions: According to these data and by using the semi-parametric competing-risks method, we found that only age at diagnosis had a significant effect on patient survival time. Development of In-House Multiplex Real Time PCR for Human Papillomavirus Genotyping in Iranian Women with Cervical Cancer and Cervical Intraepithelial Neoplasia Sohrabi, Amir;Mirab-Samiee, Siamak;Modarressi, Mohammad Hossein;Izadimood, Narge;Azadmanesh, Kayhan;Rahnamaye-Farzami, Marjan 6257 Background: HPV-related cervical cancer is one of the most common cancers among women in developing countries. Regarding the accessibility of commercial vaccines, any long- or short-term modality for integrating preventive immunization against HPV in a national program needs comprehensive information about HPV prevalence and its genotypes. The important role of selecting the most accurate diagnostic technologies for obtaining relevant data is underlined by the different assays proposed in the literature. The main objective of the present study was to introduce an in-house HPV typing assay using multiplex real time PCR with reliable results and affordable cost for molecular epidemiology surveys and diagnosis.
Materials and Methods: A total of 112 samples of formalin-fixed, paraffin-embedded tissues and liquid-based cytology specimens from patients with known different grades of cervical dysplasia and invasive cancer were examined by this method, and the results were verified by the WHO HPV LabNet proficiency program in 2013. Results: HPV was detected in 105 (93.7%) out of 112 samples. The dominant types were HPV 18 (61.6%) and HPV 16 (42.9%). Among the mixed genotypes, HPV 16 and 18 in combination were seen in 12.4% of specimens. Conclusions: Given its acceptable performance, easy access to primers, probes and other consumables, and affordable cost per test, this method can be used as a diagnostic assay in molecular laboratories and for further planning of cervical carcinoma prevention programs. Timing of Thoracic Radiotherapy in Limited Stage Small Cell Lung Cancer: Results of Early Versus Late Irradiation from a Single Institution in Turkey Bayman, Evrim;Etiz, Durmus;Akcay, Melek;Ak, Guntulu 6263 Background: It is standard treatment to combine chemotherapy (CT) and thoracic radiotherapy (TRT) in treating patients with limited stage small cell lung cancer (LS-SCLC). However, the optimal timing of TRT is unclear. We here evaluated the survival impact of early versus late TRT in patients with LS-SCLC. Materials and Methods: Follow-up was retrospectively analyzed for seventy consecutive LS-SCLC patients who had successfully completed chemo-TRT between January 2006 and January 2012. Patients received TRT after either 1 to 2 cycles of CT (early TRT) or after 3 to 6 cycles of CT (late TRT). Survival and response rates were evaluated using the Kaplan-Meier method and comparisons were made using the multivariate Cox regression test. Results: Median follow-up was 24 (5 to 57) months. Carboplatin+etoposide was the most frequent induction CT (59%). Median overall, disease free, and metastasis free survivals in all patients were 15 (5 to 57), 5 (0 to 48) and 11 (3 to 57) months, respectively. The late TRT group was superior to the early TRT group in terms of response rate (p=0.05). Three-year overall survival (OS) rates in the late versus early TRT groups were 31% versus 17%, respectively (p=0.03). Early TRT (p=0.03) and incomplete response to TRT (p=0.004) were negative predictors of OS. Significant positive prognostic factors for distant metastasis free survival were late TRT (p=0.03) and use of PCI (p=0.01). Use of carboplatin versus cisplatin for induction CT had no significant impact on OS (p=0.634), DFS (p=0.727), and MFS (p=0.309). Conclusions: Late TRT appeared to be superior to early TRT in LS-SCLC treatment in terms of complete response, OS and DMFS. Carboplatin or cisplatin can be combined with etoposide in the induction CT owing to similar survival outcomes. MiR-150-5p Suppresses Colorectal Cancer Cell Migration and Invasion through Targeting MUC4 Wang, Wei-Hua;Chen, Jie;Zhao, Feng;Zhang, Bu-Rong;Yu, Hong-Sheng;Jin, Hai-Ying;Dai, Jin-Hua 6269 Growing evidence suggests that miR-150-5p has an important role in regulating the genesis of various types of cancer. However, the roles and the underlying mechanisms of miR-150-5p in the development of colorectal cancer (CRC) remain largely unknown. Transwell chambers were used to analyze effects on cell migration and invasion by miR-150-5p. Quantitative real-time PCR (qRT-PCR), Western blotting and dual-luciferase 3' UTR reporter assay were carried out to identify the target genes of miR-150-5p.
In our research, miR-150-5p suppressed CRC cell migration and invasion, and MUC4 was identified as a direct target gene. Its effects were partly blocked by re-expression of MUC4. In conclusion, miR-150-5p may suppress CRC metastasis through directly targeting MUC4, highlighting its potential as a novel agent for the treatment of CRC metastasis. Efficacy and Tolerance of Pegaspargase-Based Chemotherapy in Patients with Nasal-Type Extranodal NK/T-Cell Lymphoma: a Pilot Study Wen, Jing-Yun;Li, Mai;Li, Xing;Chen, Jie;Lin, Qu;Ma, Xiao-Kun;Dong, Min;Wei, Li;Chen, Zhan-Hong;Wu, Xiang-Yuan 6275 Nasal-type extranodal natural killer (NK)/T-cell lymphoma (ENKL) is a highly invasive cancer with a poor prognosis. More effective and safer treatment regimens for ENKL are needed. Pegaspargase (PEG-Asp) has a similar mechanism of action to L-asparaginase (L-Asp), but presents lower antigenicity. The aim of the present research was to evaluate the safety profile and the potential efficacy of a PEG-Asp-based treatment regimen in patients with ENKL. Data collected from 20 patients with histologically confirmed ENKL, admitted to the Third Affiliated Hospital of Sun Yat-Sen University from January 2009 to August 2013, were included in the study. All patients received $2500IU/m^2$ IM PEG-Asp on day 1 of every 21-day treatment cycle. Patients received combination chemotherapy with CHOP (n=5), EPOCH (n=7), GEMOX (n=7) or CHOP with bleomycin (n=1). After 2-5 treatment cycles (median, 4 cycles) of PEG-Asp-based chemotherapy, five patients (25%) showed a complete response (CR), and the overall response rate (ORR) was 60%. Grade 3/4 neutropenia occurred in fourteen patients (70%). Grade 3 alanine aminotransferase (ALT) elevation was observed in two. Grade 1-2 non-hematological toxicity consisted of activated partial thromboplastin time (APTT) prolongation (n=9), hypofibrinogenemia (n=6), hypoproteinemia (n=17), hyperglycemia (n=3), and nausea (n=6). No allergic reactions were detected. No treatment related death was reported. Our results suggested that PEG-Asp-based chemotherapy presented acceptable tolerance and promising short-term outcomes in patients with nasal-type ENKL. Identification of Patients with Microscopic Hematuria who are at Greater Risk for the Presence of Bladder Tumors Using a Dedicated Questionnaire and Point of Care Urine Test - A Study by the Members of Association of Urooncology, Turkey Turkeri, Levent;Mangir, Naside;Gunlusoy, Bulent;Yildirim, Asif;Baltaci, Sumer;Kaplan, Mustafa;Bozlu, Murat;Mungan, Aydin 6283 In patients with microscopic hematuria there is a need for better identification of those who are at greater risk of harbouring bladder tumors. The RisikoCheck(C) questionnaire has a strong correlation with the presence of urothelial carcinoma (UC) of the bladder and in combination with other available tests may help identify patients who require detailed clinical investigations due to increased risk of presence of bladder tumors. This study aimed to evaluate the efficacy of the RisikoCheck(C) questionnaire together with NMP-22(R) (BladderChek(R)) as a point-of-care urine test in predicting the presence of bladder tumors in patients presenting with microscopic hematuria as the sole finding.
In this multi-institutional prospective evaluation of 303 consecutive patients without a history of urothelial carcinoma (UC), RisikoCheck(C) risk group assessment, urinary tract imaging and cystourethroscopy as well as urine cytology and Nuclear Matrix Protein-22 (NMP-22 BladderChek) testing were performed where available. The sensitivity, specificity, negative predictive value (NPV), and positive predictive values (PPV) for the risk adapted approach were calculated. All patients underwent cystoscopy, and tumors were detected in 18 (5.9%). Urine cytology and NMP-22 was positive for malignancy in 9 (3.2%) and 12 (7.5%) of patients, respectively. A total of 43 (14%) patients were in the high risk group according to the RisikoCheck(C) questionnaire. The sensitivity and specificity of the questionnaire in detecting a bladder tumor was 61.5 % and 84.0 % in the high risk group. In patients with either a positive NMP-22 test or high risk category RisikoCheck(C), 23.6% had bladder tumors with a corresponding sensitivity of 54.2% and specificity of 88.6%. If both tests were negative only 3.3% of the patients had bladder tumors. The results of our study suggest that the efficacy of diagnostic evaluation of patients with microscopic hematuria may be further enhanced by combining RisikoCheck(C) questionnaire with NMP-22. Effect of Lymphangiogenesis and Lymphovascular Invasion on the Survival Pattern of Breast Cancer Patients Sahoo, Pradyumna Kumar;Jana, Debarshi;Mandal, Palash Kumar;Basak, Samindranath 6287 Background: Invasion of breast cancer cells into blood and lymphatic vessels is one of the most important steps for metastasis. In this study the prognostic relevance of lymphangiogenesis and lymphovascular invasion (LVI) in breast cancer patients was evaluated in terms of survival. Materials and Methods: This retrospective study concerned 518 breast cancer patients who were treated at Department of Surgical Oncology, Saroj Gupta Cancer Centre and Research Institute, Kolkata-700063, West Bengal, India, a reputed cancer centre and research institute of eastern India between January 2006 and December 2007. Results: The median overall survival and disease free survival of the patients were 60 months and 54 months respectively. As per Log-rank test, poor overall as well as disease free survival pattern was observed for LVI positive patients as compared with LVI negative patients (p<0.01). Also poor overall as well as disease free survival pattern was observed for perineural invasion (PNI) positive patients as compared to PNI negative patients (p<0.01). Conclusions: From this study it is evident that LVI and PNI are strongly associated with outcome in terms of disease free as well as overall survival in breast cancer patients. Thus LVI and PNI constitute potential targets for treatment of breast cancer patients. We advocate incorporating their status into breast cancer staging systems. Combined Detection of CEA, CA 19-9, CA 242 and CA 50 in the Diagnosis and Prognosis of Resectable Gastric Cancer Tian, Shu-Bo;Yu, Jian-Chun;Kang, Wei-Ming;Ma, Zhi-Qiang;Ye, Xin;Cao, Zhan-Jiang;Yan, Chao 6295 Our aim was to investigate the value of combined detection of serum carcinoembryonic antigen (CEA), carbohydrate antigen (CA) 19-9, CA 242 and CA 50 in diagnosis and assessment of prognosis in consecutive gastric cancer patients. Clinical data including preoperative serum CEA, CA 19-9, CA 242, and CA 50 values and information on clinical pathological factors were collected and analyzed retrospectively. 
Univariate and multivariate survival analyses were used to explore the relationship between tumor markers and survival. Positive rates of tumor markers CEA, CA 19-9, CA 242 and CA 50 in the diagnosis of gastric cancer were 17.7, 17.1, 20.4 and 13.8%, respectively, and the positive rate for all four markers combined was 36.6%. Patients with elevated preoperative serum concentrations of CEA, CA 19-9, CA 242 and CA 50, had late clinical tumor stage and significantly poorer overall survival. Five-year survival rates in patients with elevated CEA, CA 19-9, CA 242 and CA 50 were 28.1, 25.8, 27.0 and 24.1%, respectively, compared with 55.0, 55.4, 56.4 and 54.5% in patients with these markers at normal levels (p<0.01). In multivariate Cox proportional hazards analyses, an elevated CA 242 level was determined to be an independent prognostic marker in gastric cancer patients. Combined detection of four tumor markers increased the positive rate for gastric cancer diagnosis. CA 242 showed higher diagnostic value and CA 50 showed lower diagnostic value. In resectable gastric carcinoma, preoperative CA 242 level was associated with disease stage, and was found to be a significant independent prognostic marker in gastric cancer patients. Preliminary Evaluation of the in vitro Efficacy of 1, 2-di (Quinazolin-4-yl) Diselane against SiHa Cervical Cancer Cells Huang, Yin-Jiu;Zhang, Yu-Yuan;Liu, Gang;Tang, Jie;Hu, Jian-Guo;Feng, Zhen-Zhong;Liu, Fang;Wang, Qi-Yi;Li, Dan 6301 Cervical cancer is one the most common malignancies among females. In recent years, its incidence rate has shown a rising trend in some countries so that development of anticancer drugs for cervical cancer is an urgent priority. In our recent anticancer drug discovery screen, 1, 2-di (quinazolin-4-yl)diselane (LG003) was found to possess wide spectrum anticancer efficacy. In the present work, the in vitro anticancer activity of LG003 was evaluated in the SiHa cervical cancer cell line. Compared with commercial anticancer drugs 10-hydroxycamptothecin, epirubicin hydrochloride, taxol and oxaliplatin, LG003 showed better anticancer activity. Furthermore, inhibition effects were time- and dose-dependent. Morphological observation exhibited LG003 treatment results in apoptosis like shrinking and blebbing, and cell membrane damage. Lactate dehydrogenase release assay revealed that LG003 exerts such effects in SiHa cells through a physiology pathway rather than cytotoxicity, which suggests that title compound LG003 can be a potential candidate agent for cervical cancer. Are Women in Kuwait Aware of Breast Cancer and Its Diagnostic Procedures? Saeed, Raed Saeed;Bakir, Yousif Yacoub;Ali, Layla Mohammed 6307 The aim of this study was to examine the knowledge and awareness of women in Kuwait with regard to risk factors, symptoms and diagnostic procedures of breast cancer. A total of 521 questionnaires were distributed among women in Kuwait. Results showed that 72% of respondents linked breast cancer factors to family history, while 69.7% scored abnormal breast enlargement as the most detectable symptom of the disease. Some 84% of participants had heard about self-examination, but knowledge about mammograms was limited to 48.6% and only 22.2% were familiar with diagnostic procedures. Some 22.9% of respondents identified the age over 40 years as the reasonable age to start mammogram screening. 
Risk factor awareness was independent of age group (p>0.05), but both high education and family history increased the likelihood of positive answers; the majority knew about a few factors such as aging, pregnancy after age 30, breast feeding for a short time, menopause after the age of 50, early puberty, and poor personal hygiene. In conclusion, 43.1% of participants had an overall good knowledge of breast cancer with regard to symptoms, risk factors and breast examination. Very highly significant associations (p<0.005) were evident for all groups except for respondents distributed by nationality (p=0.444). Early breast screening campaigns should be recommended to eliminate confusion and wrong perceptions about malignant mammary disease. Expression of Neuronal Markers, NFP and GFAP, in Malignant Astrocytoma Hashemi, Forough;Naderian, Majid;Kadivar, Maryam;Nilipour, Yalda;Gheytanchi, Elmira 6315 Background: Immunohistochemical markers are considered important factors in the diagnosis of malignant astrocytomas. The aim of the current study was to investigate the frequency of the immunohistochemical markers neurofilament protein (NFP) and glial fibrillary acidic protein (GFAP) in malignant astrocytoma tumors in Firoozgar and Rasool-Akram hospitals from 2005 to 2010. Materials and Methods: In this cross-sectional study, immunohistochemical analysis of NFP and GFAP was performed on 79 tissue samples of patients with a diagnosis of anaplastic and glioblastoma multiforme (GBM) astrocytomas. Results: The obtained results demonstrated that all patients were positive for GFAP and only 3.8% were positive for NFP. There was no significant association between these markers and clinical, demographic, and prognostic features of patients (p>0.05). Conclusions: NFP was expressed only in GBMs and not in anaplastic astrocytomas. It would be crucial to confirm the present findings in a larger number of tumors, especially in high grade gliomas. Activation of JNK/p38 Pathway is Responsible for α-Methyl-n-butylshikonin Induced Mitochondria-Dependent Apoptosis in SW620 Human Colorectal Cancer Cells Wang, Hai-Bing;Ma, Xiao-Qiong 6321 ${\alpha}$-Methyl-n-butylshikonin (MBS), one of the active components in the root extracts of Lithospermum erythrorhizon, possesses antitumor activity. In this study, we assess the molecular mechanisms of MBS in causing apoptosis of SW620 cells. MBS reduced the cell viability of SW620 cells in a dose- and time-dependent manner and induced cell apoptosis. Treatment of SW620 cells with MBS down-regulated the expression of Bcl-2 and up-regulated the expression of Bak and caused the loss of mitochondrial membrane potential. Additionally, MBS treatment led to activation of caspase-9, caspase-8 and caspase-3, and cleavage of PARP, which was abolished by pretreatment with the pan-caspase inhibitor Z-VAD-FMK. MBS also induced significant elevation in the phosphorylation of JNK and p38. Pretreatment of SW620 cells with specific inhibitors of JNK (SP600125) and p38 (SB203580) abrogated MBS-induced apoptosis. Our results demonstrated that MBS inhibited growth of colorectal cancer SW620 cells by inducing the JNK and p38 signaling pathways, and provided a clue for preclinical and clinical evaluation of MBS for colorectal cancer therapy.
Patterns and Trends with Cancer Incidence and Mortality Rates Reported by the China National Cancer Registry Chen, Peng-Lai;Zhao, Ting;Feng, Rui;Chai, Jing;Tong, Gui-Xian;Wang, De-Bin 6327 National cancer registration reports provide a huge potential for identifying patterns and trends of important policy, research, prevention and treatment significance. As summary reports written on an annual basis, the China Cancer Registry Annual Reports (CCRARs) fall short from fully addressing their potential. This paper attempts to explore part of the patterns and trends hidden behind published CCRARs. It extracted data for cancer incidence rates (IRs) and mortality rates (MRs) for 2004, 2006 and 2009 from relevant CCRARs and portrayed 4 kinds of indicators in line graphs. The study showed that: a) all of the line graphs of age-specific IRs and MRs characterized typical "growth curves or histogram"; b) graphs of IRs and MRs for males and urban areas had higher peaks than that for females and rural regions; c) most of the line graphs of IR/MR ratios comprised a starting peak, a secondary peak and a decreasing tail and the secondary peaks for females and urban areas were higher than those for males and rural areas; d) most of the urban versus rural IR ratios valued above one, but most the urban versus rural MR ratios, below one; e) the accumulative IRs and MRs showed a stable increasing trend from 2004 to 2009 for urban areas, but mixed for rural regions. Lack of Association between the MTHFR C677T Polymorphism and Lung Cancer in a Turkish Population Yilmaz, Meral;Kacan, Turgut;Sari, Ismail;Kilickap, Saadettin 6333 Background: In this case-control study, we aimed to investigate the relationship between the methylenetetrahydrofolate reductase (MTHFR) C677T polymorphism and lung cancer. Materials and Methods: Total 200 individuals including 100 patients with lung cancer and 100 controls were analyzed. Genotyping of MTHFR C677T was performed using PCR and RFLP methods. Results: The majority of the patients were men and 90% were smokers. We found that the risk ratio for development of LC was 13-times higher in smokers compared with non-smokers between patient and control groups in our study (OR:13.5, 95%CI:6.27-29.04, p:0.0001). Besides, the risk ratio for development of LC was nine times higher in individuals with cancer history in their family than those without cancer history (OR:9.65, 95%CI: 2.79-33.36; p:0.0001). When genotype distributions and allele frequencies were analyzed in the study groups, no significant difference was apparent (${\chi}^2$:0.53, p=0.76). In addition, no correlation between genotypes of MTHFRC677T polymorphism and histological type of LC was found (${\chi}^2$:0.99, p=0.60). Conclusions: These results suggest that there was no association between the MTHFR C677T polymorphism and lung cancer in the Turkish population. Outcome of Rectal Cancer in Patients Aged 30 Years or Less in the Pakistani Population Akbar, Ali;Bhatti, Abu Bakar Hafeez;Khattak, Shahid;Syed, Aamir Ali;Kazmi, Ather Saeed;Jamshed, Aarif 6339 Background: The incidence of rectal cancer is increasing in younger age groups. Limited data is available regarding survival outcome in younger patients with conflicting results from western world. The goal of this study was to determine survival in patients with rectal cancer <30 years of age and compare it with their older counterparts in the Pakistani population. 
Materials and Methods: A retrospective chart review of patients operated for rectal adenocarcinoma between January 2005 and December 2010 was performed. Patients were divided into two groups, Group 1 aged ${\leq}30years$ and Group 2 aged >30years. Patient characteristics, surgical procedure, histopathological details and number of loco-regional and distant failures were compared. Expected 5 year survival was calculated using Kaplan Meier curves and significance was determined using the Log rank test. Results: There were 38 patients in group 1 and 144 in group 2. A significantly high number of younger patients presented with poorly differentiated histology (44.7% vs 9.7%) (p=0.0001) and advanced pathological stage (63.1% vs 38.1%) (p=0.04). Predicted overall 5 year survival was 38% versus 57% in groups I and II, respectively (p=0.05). Disease free survival was 37% versus 52% and was significantly different (p=0.007). Conclusions: Early onset rectal cancer is associated with poor pathological features and a worse outcome in Pakistani population. Preparation and Antitumor Activity of a Tamibarotene-Furoxan Derivative Wang, Xue-Jian;Duan, Yu;Li, Zong-Tao;Feng, Jin-Hong;Pan, Xiang-Po;Zhang, Xiu-Rong;Shi, Li-Hong;Zhang, Tao 6343 Multi-target drug design, in which drugs are designed as single molecules to simultaneously modulate multiple physiological targets, is an important strategy in the field of drug discovery. QT-011, a tamibarotene-furoxan derivative, was here prepared and proposed to exert synergistic effects on antileukemia by releasing nitric oxide and tamibarotene. Compared with tamibarotene itself, QT-011 displayed stronger antiproliferative effects on U937 and HL-60 cells and was more effective evaluated in a nude mice U937 xenograft model in vivo. In addition, QT-011 could release nitric oxide which might contribute to the antiproliferative activity. Autodocking assays showed that QT-011 fits well with the hydrophobic pocket of retinoic acid receptors. Taken together, these results suggest that QT-011 might be a highly effective derivative of tamibarotene and a potential candidate compound as antileukemia agent. Insulin Promotes Proliferation and Migration of Breast Cancer Cells through the Extracellular Regulated Kinase Pathway Pan, Feng;Hong, Li-Quan 6349 The present study was undertaken to determine the roles of insulin in the growth of transplanted breast cancer in nude mice, and the proliferation and migration of MCF-7 human breast cancer cells and assess its influence on downstream signaling pathways. In a xenograft mouse model with injection of MCF-7 human breast cancer cells, tumor size was measured every other day. The insulin level and insulin receptor (IR) were increased in the breast cancer patient tissues. Insulin injected subcutaneously around the tumor site in mice caused increase in the size and weight of tumor masses, and promoted proliferation and migration of MCF-7 cells. The effects of insulin on the increase in the proliferation and migration of MCF-7 human breast cancer cells were abolished by pretreatment with the extracellular regulated kinase (ERK) inhibitor PD98059. Insulin increased the phosphorylation of ERK in the MCF-7 cells. These results indicate that insulin promotes the growth of breast cancer in nude mice, and increases the proliferation and migration of MCF-7 human breast cancer cells via the ERK pathway. 
Risk Factors of Lymph Node Metastases with Endometrial Carcinoma Cetinkaya, Kadir;Atalay, Funda;Bacinoglu, Ahmet 6353 Background: The purpose of this study was to investigate and evaluate risk factors for lymph node metastases (LNM) in cases of endometrial cancer (EC). Materials and Methods: A retrospective single institution analysis of patients surgically staged for EC at Ankara Oncology Education and Research Hospital from 1996 to 2010 was performed. Roles of prognostic factors, such as age, histological type, grade, depth of myometrial invasion, cervical involvement, peritoneal cytology, and tumor size, in the prediction of LNM were evaluated. Fisher's exact test and logistic regression analysis were used to assess the effects of various factors on LNM. Results: LNM was observed in 22 out of 247 patients (8.9%) and was significantly more common in the presence of tumors of higher grade, deep myometrial invasion (DMI), cervical involvement, size >2cm, and with positive peritoneal cytology. Logistic regression analysis revealed that DMI remained the only independent risk factor for LNM. NPV, PPV, sensitivity, and specificity for satisfying LNM risk were 98.0, 19.5, 86.3, and 65.3%, respectively for DMI. Conclusions: The incidence of LNM is influenced independently by DMI. If data support a conclusion of DMI, LND should be seriously considered. Cancers of the Young Population in Brunei Darussalam Mohammad, Ibnu Ayyub;Bujang, Mas Rina Wati;Telisinghe, Pemasari Upali;Abdullah, Muhd Syafiq;Chong, Chee Fui;Chong, Vui Heng 6357 Background: Globally, the overall incidence of cancer is increasing as a result of ageing populations and changing lifestyles. Cancer is one of the leading causes of death, especially in the developed nations. Cancers affecting the young population are generally considered uncommon. This study assessed the demography and trends of cancers of the young in Brunei Darussalam, a small and developing Southeast Asia nation. Materials and Methods: All patients diagnosed with cancers between 2000 and 2012 were identified from the cancer registry maintained by the State Histopathology Laboratory. Cancers of the young was defined as any cancers diagnosed under the age of 40 years. Demographic data and the type of cancers were collected and analysed using SPSS Statistics 17.0. Results: Among the 6,460 patients diagnosed with cancer over the study period, 18.7% (n=1,205) were categorized as young with an overall decline in the proportion from 26.6% in 2000 to 18.8% in 2012 (p<0.001 for trend). Among all cancers of the young, the most common systems affected were gynecological (24.1%), hematological/lymphatic (15.8%), subcutaneous/dermatological/ musculoskeletal (10.5%), breast (10.5%) and gastrointestinal (9.9%). Overall, among the different systems, neurological (54.9%) had the highest proportion of cancers of the young followed by gynecological/reproductive (30.6%), hematological/lymphatic (39.9%), endocrine (38.7%), subcutaneous/dermatological/ musculoskeletal (22.3%) and the head and neck region (20.1%). There was a female predominance (66.9%) and the incidence was significantly higher among the Malays (20.1%) and expatriates (25.1%) groups compared to the Chinese (10.7%) and indigenous (16.8%) groups (p<0.001 for trend). Conclusions: Cancers of the young (<40 years) accounted for almost a fifth of all cancers in Brunei Darussalam with certain organ systems more strongly affected. There was a female preponderance in all racial groups. 
Over the years, there has been a decline in the overall proportion of cancers of the young. Selective screening programs should nevertheless be considered. Preventive Effect of Actinidia Valvata Dunn Extract on N-methyl-N'-nitro-N-nitrosoguanidine-induced Gastrointestinal Cancer in Rats Wang, Xia;Liu, Hao;Wang, Xin;Zeng, Zhi;Xie, Li-Qun;Sun, Zhi-Guang;Wei, Mu-Xin 6363 Purpose: This study was conducted to assess the preventive effect of Actinidia valvata Dunn (AVD) extract on an animal model of gastrointestinal carcinogenesis on the basis of changes in tumor incidence, cell proliferation, and apoptosis. Materials and Methods: Seventy-five male Wistar rats were divided into five different treatment groups with 15 rats in each group. Group I was given normal feed, whereas Groups II to IV were treated with 10% sodium chloride in the first six weeks and 100 µg/mL of N-methyl-N'-nitro-N-nitrosoguanidine (MNNG) in drinking water for 24 weeks. Group II was then given normal feed, whereas Group III was given AVD extract (0.24 g/kg/day) for 12 weeks. Group IV was given AVD extract from the first week to the 36th week, whereas Group V was treated with AVD extract alone for 36 weeks. All rats were sacrificed at the end of the 36-week experiment and assessed for the presence of gastrointestinal tumors. The occurrence of cancer was evaluated by histology. Bax, Bcl-2, Caspase-3, and cyclinD1 were determined by immunohistochemical staining and Western blotting. Results: The incidences of gastric cancer were 0% in Group I, 73.3% in Group II, 33.3% in Group III, 26.7% in Group IV, and 0% in Group V. Bcl-2 and cyclinD1 expression was decreased in AVD extract treated groups, whereas Bax and Caspase-3 expression was increased. Comparison with Group II revealed significant differences (p<0.01). Conclusions: AVD extract exhibits an obvious preventive effect on gastrointestinal carcinogenesis induced by MNNG in rats through the regulation of cell proliferation and apoptosis. Circulating Tumor Cells are Associated with Bone Metastasis of Lung Cancer Cheng, Min;Liu, Lin;Yang, Hai-Shan;Liu, Gui-Feng 6369 Lung cancer (LC) is the leading cause of cancer mortality worldwide, predominantly due to the difficulty of early diagnosis and its high metastatic potential. Recently, increasing evidence suggests that circulating tumour cells (CTCs) are responsible for cancer metastatic relapse, and CTCs have attracted interest in cancer metastasis detection and quantification. In the present study, we collected blood samples from 67 patients with bone metastasis, and 30 patients without such metastasis, and searched for CTCs. Then the association of CTC numbers with bone metastasis and other clinicopathological variables was analyzed. Results demonstrated that when 5 or 1 was taken as a threshold for the CTC number, there was significantly higher positivity of CTCs in the bone metastasis group than in the non-metastasis group. While the increase in CTC number was not significantly associated with any other clinicopathological factor, including age, gender, pathological type, intrapulmonary metastasis and lymph node metastasis, the CTC number in patients positive for the last two above-mentioned variables was clearly higher than in patients negative for these two variables. Taken together, the CTC number appears to be significantly associated with bone metastasis from lung cancer. Is the Neutrophil-Lymphocyte Ratio an Indicator of Progression in Patients with Benign Prostatic Hyperplasia? 
Tanik, Serhat;Albayrak, Sebahattin;Zengin, Kursad;Borekci, Hasan;Bakirtas, Hasan;Imamoglu, M. Abdurrahim;Gurdal, Mesut 6375 Purpose: The aim of this study was to evaluate inflammation parameters and assess the utility of the neutrophil-lymphocyte ratio (NLR) as a simple and readily available predictor for clinical disease activity in patients with benign prostatic hyperplasia (BPH). We also aimed to investigate the relationship between inflammatory parameters and ${\alpha}$-blocker therapy response, and evaluate the potential association between NLR and the progression of BPH. Materials and Methods: We examined 320 consecutive patients (July 2013-December 2013) admitted to our outpatient clinic with symptoms of the lower urinary tract at Bozok University. The mean age was 60 (range, 51-75) years. Complete blood count (CBC), prostate-specific antigen (PSA), erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) were assessed. Correlations between PSA, CRP, ESR, prostate volume, International Prostate Symptom Score (IPSS), maximum urinary flow rate (Qmax), and NLR were assessed statistically. Patients were divided into two groups: high and low risk of progression. Results: NLR was positively correlated with IPSS (p=0.001, r=0.265), PSA (p=0.001, r=0.194), and negatively correlated with Qmax (p<0.001, r=-0.236). High-risk patients had a higher NLR compared with low-risk patients, based on IPSS (p<0.001), PSA (p=0.013), and Qmax (p<0.001); however, there were no significant differences between the groups in terms of age (p>0.05) and prostate volume (p>0.05). Conclusions: NLR can predict BPH progression. We propose that increased inflammation is negatively associated with clinical status in BPH patients and suggest that NLR, together with LUTS severity, may be used as a readily accessible marker for patient follow-up. Descriptive Report on Pattern of Variation in Cancer Cases within Selected Ethnic Groups in Kamrup Urban District of Assam, 2009-2011 Sharma, Jagannath Dev;Kalita, Manoj;Barbhuiya, Jamil Ahmed;Lahon, Ranjan;Sharma, Arpita;Barman, Debanjana;Kataki, Amal Chandra;Roy, Barsha Deka 6381 Background: The global burden of cancer is continuously increasing. According to a recent report of the National Cancer Registry Programme (NCRP) on time trends, it is estimated that the future burden of cancer cases for India in 2020 will be 1,320,928. It is well known that knowledge of the incidence of cancer is a fundamental requirement of rational planning and monitoring of cancer control programs. It would help health planners to formulate public health policy if relevant ethnic groups were considered. North East-India alone contains over 160 Scheduled Tribes and 400 other sub-tribal communities and groups, whose cancer incidence rates are high compared to mainland India. Since no previous study had focused on ethnicity, the present investigation was performed. Materials and Methods: In this paper PBCR-Guwahati data on all cancer registrations from January 2009 to December 2011 for residents of the Kamrup Urban District, comprising an area of 261.8 sq. km with a total population of 900,518, including individual records with information on sex, age, ethnicity and cancer site, are provided. Descriptive statistics including age adjusted rates (AARs) were taken as provided by NCRP. For comparison of proportional incidence ratios (PIR) the Student's t test was used, with p<0.05 considered as statistically significant. 
Results and Conclusions: Differences in leading sites in Kamrup Urban District since the beginning of the PBCR-Guwahati were revealed among different ethnic groups by this study. The results should help policy makers to formulate different strategies to control the level of burden as well as for treatment planning. This study also suggests that age is an important factor in cancer among different ethnic populations as well as for the overall population of Kamrup District of Assam. Prognostic Significance of the Peripheral Blood Absolute Monocyte Count in Patients with Locally Advanced or Metastatic Hepatocellular Carcinoma Receiving Systemic Chemotherapy Lin, Gui-Nan;Jiang, Xiao-Mei;Peng, Jie-Wen;Xiao, Jian-Jun;Liu, Dong-Ying;Xia, Zhong-Jun 6387 Background: The prognostic significance of the circulating absolute monocyte count (AMC) in patients with locally advanced hepatocellular carcinoma (HCC) is uncertain. This study was designed to assess the association of circulating AMC with survival outcomes in patients diagnosed with locally advanced or metastatic HCC receiving systemic chemotherapy. Materials and Methods: Between January 1, 2005 and December 30, 2012, locally advanced or metastatic HCC patients who had Child-Pugh stage A or B disease and received systemic chemotherapy were retrospectively enrolled. Patient features including gender, age, extrahepatic metastasis, Child-Pugh stage, serum alpha-fetoprotein (AFP) level and AMC were collected to investigate their prognostic impact on overall survival (OS). Results: A total of 216 patients were eligible for the study. The optimal cut-off value of AMC for OS analysis was $0.38{\times}10^9/L$. Median OS was 5.84 months in the low-AMC group (95% confidence interval [CI], 5.23 to 6.45), and 5.21 months in the high-AMC group (95% CI, 4.37 to 6.04; p=0.003). In Cox multivariate analysis, elevated AMC remained an independent prognostic factor for worse OS (HR, 1.578; 95% CI, 1.120 to 2.223, p=0.009). Conclusions: Our results indicate that circulating AMC is an independent prognostic factor for OS in patients with locally advanced or metastatic HCC receiving systemic chemotherapy. Aquaporin 8 Involvement in Human Cervical Cancer SiHa Migration via the EGFR-Erk1/2 Pathway Shi, Yong-Hua;Tuokan, Talaf;Lin, Chen;Chang, Heng 6391 Overexpression of aquaporins (AQPs) has been reported in several human cancers. Epidermal growth factor receptor (EGFR)-extracellular signal-regulated kinases 1/2 (Erk1/2) are associated with tumorigenesis and cancer progression and may upregulate AQP expression. In this study, we demonstrated that EGF (epidermal growth factor) induces SiHa cell migration and AQP8 expression. Wound healing results showed that cell migration was increased by 2.79- and 1.50-fold at 24 h and 48 h, respectively, after EGF treatment. AQP8 expression was significantly increased (3.33-fold) at 48 h after EGF treatment in SiHa cells. An EGFR kinase inhibitor, PD153035, blocked EGF-induced AQP8 expression and cell migration, and AQP8 expression was decreased from 1.59-fold (EGF-treated) to 0.43-fold (PD153035-treated) in SiHa cells. Furthermore, the MEK (MAPK/Erk kinase) inhibitor U0126 also inhibited EGF-induced AQP8 expression and cell migration; AQP8 expression was decreased from 1.21-fold (EGF-treated) to 0.43-fold (U0126-treated). Immunofluorescence microscopy further confirmed these results. 
Collectively, our findings show that EGF induces AQP8 expression and cell migration in human cervical cancer SiHa cells via the EGFR/Erk1/2 signal transduction pathway. Loss of Expression and Aberrant Methylation of the CDH1 (E-cadherin) Gene in Breast Cancer Patients from Kashmir Asiaf, Asia;Ahmad, Shiekh Tanveer;Aziz, Sheikh Aejaz;Malik, Ajaz Ahmad;Rasool, Zubaida;Masood, Akbar;Zargar, Mohammad Afzal 6397 Background: Aberrant promoter hypermethylation has been recognized in human breast carcinogenesis as a frequent molecular alteration associated with the loss of expression of a number of key regulatory genes and may serve as a biomarker. The E-cadherin gene (CDH1), mapping at chromosome 16q22, is an intercellular adhesion molecule in epithelial cells, which plays an important role in establishing and maintaining intercellular connections. The aim of our study was to assess the methylation pattern of CDH1 and to correlate it with the expression of E-cadherin, clinicopathological parameters and hormone receptor status in breast cancer patients of Kashmir. Materials and Methods: Methylation specific PCR (MSP) was used to determine the methylation status of CDH1 in 128 invasive ductal carcinomas (IDCs) paired with the corresponding normal tissue samples. Immunohistochemistry was used to study the expression of E-cadherin, ER and PR. Results: CDH1 hypermethylation was detected in 57.8% of cases and 14.8% of normal adjacent controls. Reduced levels of E-cadherin protein were observed in 71.9% of our samples. Loss of E-cadherin expression was significantly associated with the CDH1 promoter region methylation (p<0.05, OR=3.48, CI: 1.55-7.79). Hypermethylation of CDH1 was significantly associated with age at diagnosis (p=0.030), tumor size (p=0.008), tumor grade (p=0.024) and rate of node positivity or metastasis (p=0.043). Conclusions: Our preliminary findings suggest that abnormal CDH1 methylation occurs in high frequencies in infiltrating breast cancers associated with a decrease in E-cadherin expression. We found significant differences in tumor-related CDH1 gene methylation patterns relevant to tumor grade, tumor size, nodal involvement and age at diagnosis of breast tumors, which could be extended in future to provide diagnostic and prognostic information. Prevalence of Abnormal Anal Cytology in HIV-Infected Women: a Hospital-Based Study Pittyanont, Sirida;Yuthavisuthi, Prapap;Sananpanichkul, Panya;Thawonwong, Nutchanok;Techapornroong, Malee;Suwannarurk, Komsun;Bhamarapravatana, Kornkarn 6405 Background: To study the prevalence of abnormal anal cytology by Papanicolaou (Pap) technique in HIV-infected women who attended a HIV clinic at Prapokklao Hospital, Chanthaburi, Thailand. Materials and Methods: HIV-infected women who attended a HIV clinic at Prapokklao Hospital from March 2013 to February 2014 were recruited for anal Pap smears. Participants who had abnormal results of equally or over "abnormal squamous/glandular cells of undetermined significance" (ASC-US) were classified as abnormal anal cytology. Results: A total of 590 anal Pap smears were performed at HIV clinic of Prapokklao Hospital during the study period. There were only 13 patients who had abnormal Pap tests, which were: 11 ASC-US and 2 HSIL (high grade squamous intraepithelial lesion). The prevalence of abnormal anal Pap smears in HIV-infected women who attended HIV clinic at Prapokklao Hospital was 2.2 percent. Percentage of high risk HPV in patients who had abnormal Pap test was 88.9 (8/9). 
Conclusions: The prevalence of abnormal anal Papanicolaou smears in HIV-infected women who attended the HIV clinic at Prapokklao hospital was quite low in comparison to the earlier literature. Analysis of Mammographic Breast Density in a Group of Screening Chinese Women and Breast Cancer Patients Liu, Jing;Liu, Pei-Fang;Li, Jun-Nan;Qing, Chun;Ji, Yu;Hao, Xi-Shan;Zhang, Xue-Ning 6411 Background: A dense breast not only reduces the sensitivity of mammography but also is a moderate independent risk factor for breast cancer. The percentage of Western women with fat breast tissue is higher aged 40 years or older. To a certain extent, mammography as a first choice of screening imaging method for Western women of this group is reasonable. Hitherto, the frequency and age distribution of mammographic breast density patterns among Chinese women had not been characterized. The purpose of this study was to investigate the frequency and age distribution of mammographic breast density patterns among a group of Chinese screening women and breast cancer patients in order to provide useful information for age-specific guidelines for breast cancer screening in Chinese women. Methods: A retrospective review of a total of 3,394 screening women between August and December 2009 and 2,527 breast cancer patients between July 2011 and June 2012 was conducted. Descriptive analyses were used to examine the association between age and breast density. The significance of differences of breast density between the screening women and the breast cancer patients was examined using nonparametric tests. Results: There was a significant inverse relationship between age and breast density overall (r=-0.37, p< 0.01). Breast density of the breast cancer patients in the subgroups of 40-49 years old was greater compared with that of the screening women, the same in those aged 50-54 years and in those 55 years old or older, less than in the screening group. Conclusions: With regard to the Chinese women younger than 55 years old, the diagnostic efficiency of breast cancer screening imaging examinations may be potentially improved by combining screening mammography with ultrasound. Relationship Between the SER Treatment Period and Prognosis of Patients with Small Cell Lung Cancer Xiao, Xiao-Guang;Wang, Shu-Jing;Hu, Li-Ya;Chu, Qian;Wei, Yao;Li, Yang;Mei, Qi;Chen, Yuan 6415 Purpose: To explore the relationship between SER (time between the start of any treatment and the end of radiation therapy) and the survival of patients with limited-stage small cell lung cancer. Materials and Methods: Between 2008 and 2013, 135 cases of limited-stage small cell lung cancer (LS-SCLC) treated with consecutively curative chemoradiotherapy were included in this retrospective analysis. In terms of SER, patients were divided into early radiotherapy group (SER<30 days, n=76) and late radiotherapy group ($SER{\geq}30$ days, n=59) with a cut-off of SER 30 days. Outcomes of the two groups were compared for overall survival. Results: For all analyzable patients, median follow-up time was 23.8 months and median overall survival time was 16.8 months. Although there was no significant differences in distant metastasis free survival between the two groups, patients in early radiotherapy group had a significantly better PFS (p=0.003) and OS (p=0.000). Conclusions: A short SER may be a good prognostic factor for LD-SCLC patients treated with concurrent chemoradiotherapy. 
Mean Platelet Volume as a Prognostic Marker in Metastatic Colorectal Cancer Patients Treated with Bevacizumab-Combined Chemotherapy Tunce, Tolga;Ozgun, Alpaslan;Emirzeoglu, Levent;Celik, Serkan;Bilgi, Oguz;Karagoz, Bulent 6421 Background: Recent studies have revealed a prognostic impact of the MPV (mean platelet volume)/platelet count ratio in terms of survival in advanced non-small cell lung cancer. However, there has been no direct analysis of the survival impact of MPV in patients with mCRC. The aim of the study is to evaluate the pretreatment MPV of patients with metastatic and non-metastatic colorectal cancer (non-mCRC) and also the prognostic significance of pretreatment MPV to progression in mCRC patients treated with bevacizumab-combined chemotherapy. Materials and Methods: Fifty-three metastatic and ninety-five non-metastatic colorectal cancer patients were included into the study. Data on sex, age, lymph node status, MPV, platelet and platecrit (PCT) levels were obtained retrospectively from the patient medical records. Results: The MPV was significantly higher in the patients with mCRC compared to those with non-mCRC ($7.895{\pm}1.060$ versus $7.322{\pm}1.136$, p=0.013). The benefit of bevacizumab on PFS was significantly greater among the patients with low MPV than those with high MPV. The hazard ratio (HR) of disease progression was 0.41 (95%CI, 0.174-0.986; p=0.04). In conclusion, despite the retrospective design and small sample size, MPV can be considered a prognostic factor for mCRC patients treated with bevacizumab-combined chemotherapy. Overexpression of HER-2/neu in Patients with Prostatic Adenocarcinoma Zahir, Shokouh Taghipour;Tafti, Hamid Fallah;Rahmani, Koorosh 6425 Background: Prostatic adenocarcinoma is one of the main causes of cancer death, and its timely diagnosis and preventing its progression dramatically helps improve life indexes. Given the high disease recurrence rate, today, research is more inclined toward exploring causes of recurrence and development, and innovation of modern treatment methods. Several studies have explored over-expression of human epidermal growth factor receptor 2 (HER-2/neu) in prostatic cancer so far, with different results. Thus, it was decided to investigate HER-2/neu overexpression in patients with prostatic adenocarcinoma in Iran. Materials and Methods: A sample size of 40 patients with prostate cancer entered the study, using a cross-sectional, non-randomized sampling method. Parameters studied included patient age at surgery, Gleason score, serum prostatic specific antigen (PSA) before surgery, and positive sample rate after immunohistochemical staining to investigate HER-2/neu overexpression. Results: In terms of HER-2/neu receptor staining rate, of 40 slides, 16 (40%) scored 0, 13 (32.5%) 1+, 7 (17.5%) 2+, and 4 (10%) 3+. In total 27.5% of slides showed HER-2/neu overexpression. In terms of age, an inverse correlation was found (-0.181), but without significance (p=0.263). In terms of serum PSA, the correlation coefficient was 0.449 (p=0.004). With respect to Gleason score, the coefficient was 0.190 (p=0.240). Conclusions: In this study, HER-2/neu overexpression occurred in 27.5% of prostate cancer cases, which is a relatively high figure, compared to similar studies elsewhere. While, we failed to reveal any relationship between HER-2/neu expression status with progression and prognosis of disease, it was demonstrated that the serum PSA level was significantly higher in cases with increased receptor expression. 
Dimethylnitrosamine-Induced Reduction in the Level of Poly-ADP-Ribosylation of Histone Proteins of Blood Lymphocytes - a Sensitive and Reliable Biomarker for Early Detection of Cancer Kma, Lakhan;Sharan, Rajeshwar Nath 6429 Poly-ADP-ribosylation (PAR) is a post-translational modification of mainly chromosomal proteins. It is known to be strongly involved in several molecular events, including nucleosome-remodelling and carcinogenesis. In this investigation, it was attempted to evaluate PAR level as a reliable biomarker for early detection of cancer in blood lymphocyte histones. PAR of isolated histone proteins was monitored in normal and dimethylnitrosamine (DMN)-exposed mice tissues using a novel ELISA-based immuno-probe assay developed in our laboratory. An inverse relationship was found between the level of PAR and period of DMN exposure in various histone proteins of blood lymphocytes and spleen cells. With the increase in the DMN exposure period, there was reduction in the PAR level of individual histones in both cases. It was also observed that the decrease in the level of PAR of histones resulted in progressive relaxation of genomic DNA, perhaps triggering activation of genes that are involved in initiation of transformation. The observed effect of carcinogen on the PAR of blood lymphocyte histones provided us with a handy tool for monitoring biochemical or physiological status of individuals exposed to carcinogens without obtaining biopsies of cancerous tissues, which involves several medical and ethical issues. Obtaining blood from any patient and separating blood lymphocytes are routine medical practices involving virtually no medical intervention, post-procedure medical care or trauma to a patient. Moreover, the immuno-probe assay is very simple, sensitive, reliable and cost-effective. Therefore, combined with the ease of preparation of blood lymphocytes and the simplicity of the technique, immuno-probe assay of PAR has the potential to be applied for mass screening of cancer. It appears to be a promising step in the ultimate goal of making cancer detection simple, sensitive and reliable in the near future. Downregulation of Cdk1 and CyclinB1 Expression Contributes to Oridonin-induced Cell Cycle Arrest at G2/M Phase and Growth Inhibition in SGC-7901 Gastric Cancer Cells Gao, Shi-Yong;Li, Jun;Qu, Xiao-Ying;Zhu, Nan;Ji, Yu-Bin 6437 Background: Oridonin isolated from Rabdosia rubescens, a plant used to treat cancer in Chinese folk medicine, is one of the most important antitumor active ingredients. Previous studies have shown that oridonin has antitumor activities in vivo and in vitro, but little is known about cell cycle effects of oridonin in gastric cancer. Materials and Methods: MTT assay was adopted to detect the proliferation inhibition of SGC-7901 cells, the cell cycle was assessed by flow cytometry and protein expression by Western blotting. Results: Oridonin could inhibit SGC-7901 cell proliferation, the $IC_{50}$ being $15.6{\mu}M$, and blocked SGC-7901 cell cycling in the $G_2/M$ phase. The agent also decreased the protein expression of cyclinB1 and CDK1. Conclusions: Oridonin may inhibit SGC-7901 growth and block the cells in the $G_2/M$ phase by decreasing Cdk1 and cyclinB1 proteins. 
Baseline Stimulated Thyroglobulin Level as a Good Predictor of Successful Ablation after Adjuvant Radioiodine Treatment for Differentiated Thyroid Cancers Fatima, Nosheen;uz Zaman, Maseeh;Ikram, Mubashir;Akhtar, Jaweed;Islam, Najmul;Masood, Qamar;Zaman, Unaiza;Zaman, Areeba 6443 Background: To determine the predictive value of the baseline stimulated thyroglobulin (STg) level for ablation outcome in patients undergoing adjuvant remnant radioiodine ablation (RRA) for differentiated thyroid carcinoma (DTC). Materials and Methods: This retrospective study accrued 64 patients (23 male and 41 female; mean age of $40{\pm}14$ years) who had total thyroidectomy followed by RRA for DTC from January 2012 till April 2014. Patients with positive anti-Tg antibodies and distant metastasis on post-ablative whole body iodine scans (TWBIS) were excluded. Baseline STg was used to predict successful ablation (follow-up STg <2 ng/ml, negative diagnostic WBIS and negative ultrasound neck) at 7-12 months follow-up. Results: Overall, successful ablation was noted in 37 (58%) patients while ablation failed in 27 (42%). Using the ROC curve, a cut-off level of baseline STg level of ${\leq}14.5ng/ml$ was found to be most sensitive and specific for predicting successful ablation. Successful ablation was thus noted in 25/28 (89%) of patients with baseline STg ${\leq}14.5ng/ml$ and 12/36 (33%) patients with baseline STg >14.5 ng/ml ((p value <0.05). Age >40 years, female gender, PTS >2 cm, papillary histopathology, positive cervical nodes and positive TWBIS were significant predictors of ablation failure. Conclusions: We conclude that in patients with total thyroidectomy followed by I-131 ablation for DTC, the baseline STg level is a good predictor of successful ablation based on a stringent triple negative criteria (i.e. follow-up STg < 2 ng/ml, a negative DWBIS and a negative US neck). Albumin-globulin Ratio for Prediction of Long-term Mortality in Lung Adenocarcinoma Patients Duran, Ayse Ocak;Inanc, Mevlude;Karaca, Halit;Dogan, Imran;Berk, Veli;Bozkurt, Oktay;Ozaslan, Ersin;Ucar, Mahmut;Eroglu, Celalettin;Ozkan, Metin 6449 Background: Prior studies showed a relationship between serum albumin and the albumin to globulin ratio with different types of cancer. We aimed to evaluate the predictive value of the albumin-globulin ratio (AGR) for survival of patients with lung adenocarcinoma. Materials and Methods: This retrospective study included 240 lung adenocarcinoma patients. Biochemical parameters before chemotherapy were collected and survival status was obtained from the hospital registry. The AGR was calculated using the equation AGR=albumin/(total protein-albumin) and ranked from lowest to highest, the total number of patients being divided into three equal tertiles according to the AGR values. Furthermore, AGR was divided into two groups (low and high tertiles) for ROC curve analysis. Cox model analysis was used to evaluate the prognostic value of AGR and AGR tertiles. Results: The mean survival time for each tertile was: for the $1^{st}$ 9.8 months (95%CI:7.765-11.848), $2^{nd}$ 15.4 months (95%CI:12.685-18.186), and $3^{rd}$ 19.9 months (95%CI:16.495-23.455) (p<0.001). Kaplan-Meier curves showed significantly higher survival rates with the third and high tertiles of AGR in comparison with the first and low tertiles, respectively. At multivariate analysis low levels of albumin and AGR, low tertile of AGR and high performance status remained an independent predictors of mortality. 
Conclusions: Low AGR was a significant predictor of long-term mortality in patients with lung adenocarcinoma. Serum albumin measurement and calculation of AGR are easily accessible and cheap to use for predicting mortality in patients with lung adenocarcinoma. Five-Year Survival and Median Survival Time of Nasopharyngeal Carcinoma in Hospital Universiti Sains Malaysia Siti-Azrin, Ab Hamid;Norsa'adah, Bachok;Naing, Nyi Nyi 6455 Background: Nasopharyngeal carcinoma (NPC) is the fourth most common cancer in Malaysia. The objective of this study was to determine the five-year survival rate and median survival time of NPC patients in Hospital Universiti Sains Malaysia (USM). Methods: One hundred and thirty four NPC cases confirmed by histopathology in Hospital USM between $1^{st}$ January 1998 and $31^{st}$ December 2007 that fulfilled the inclusion and exclusion criteria were retrospectively reviewed. Survival time of NPC patients were estimated by Kaplan-Meier survival analysis. Log-rank tests were performed to compare survival of cases among presenting symptoms, WHO type, TNM classification and treatment modalities. Results: The overall five-year survival rate of NPC patients was 38.0% (95% confidence interval (CI): 29.1, 46.9). The overall median survival time of NPC patients was 31.30 months (95%CI: 23.76, 38.84). The significant factors that altered the survival rate and time were age (p=0.041), cranial nerve involvement (p=0.012), stage (p=0.002), metastases (p=0.008) and treatment (p<0.001). Conclusion: The median survival of NPC patients is significantly longer for age ${\leq}50$ years, no cranial nerve involvement, and early stage and is dependent on treatment modalities. Stathmin 1, a Therapeutic Target in Esophageal Carcinoma? Machado-Neto, Joao Agostinho 6461
CommonCrawl
eBook 'geometry and the absolute point' v0.1 Published July 8, 2011 by lievenlb In preparing for next year's 'seminar noncommutative geometry' I've converted about 30 posts to LaTeX, centering loosely around the topics students have asked me to cover : noncommutative geometry, the absolute point (aka the field with one element), and their relation to the Riemann hypothesis. The idea being to edit these posts thoroughly, add much more detail (and proofs) and also add some extra sections on Borger's work and Witt rings (and possibly other stuff). For those of you who prefer to (re)read these posts on paper or on a tablet rather than perusing this blog, you can now download the very first version (minimally edited) of the eBook 'geometry and the absolute point'. All comments and suggestions are, of course, very welcome. I hope to post a more definite version by mid-September. I've used the thesis-documentclass to keep the same look-and-feel of my other course-notes, but I would appreciate advice about turning LaTeX-files into 'proper' eBooks. I am aware of the fact that the memoir-class has an ebook option, and that one can use the geometry-package to control paper-sizes and margins. Soon, I will be releasing a LaTeX-ed 'eBook' containing the Bourbaki-related posts. Later I might also try it on the games- and groups-related posts… Art and the absolute point (3) Published May 19, 2011 by lievenlb Previously, we have recalled comparisons between approaches to define a geometry over the absolute point and art-historical movements, first those due to Yuri I. Manin, subsequently some extra ones due to Javier Lopez Pena and Oliver Lorscheid. In these comparisons, the art trend appears to have been chosen more to illustrate a key feature of the approach or an appreciation of its importance, rather than giving a visual illustration of the varieties over $\mathbb{F}_1$ the approach proposes. Some time ago, we've had a couple of posts trying to depict noncommutative varieties, first the illustrations used by Shahn Majid and Matilde Marcolli, and next my own mental picture of it. In this post, we'll try to do something similar for affine varieties over the absolute point. To simplify things drastically, I'll divide the islands in the Lopez Pena-Lorscheid map of $\mathbb{F}_1$ land in two subsets : the former approaches (all but the $\Lambda$-schemes) and the current approach (the $\Lambda$-scheme approach due to James Borger). The former approaches : Francis Bacon "The Pope" (1953) The general consensus here was that in going from $\mathbb{Z}$ to $\mathbb{F}_1$ one loses the additive structure and retains only the multiplicative one. Hence, 'commutative algebras' over $\mathbb{F}_1$ are (commutative) monoids, and mimicking Grothendieck's functor of points approach to algebraic geometry, a scheme over $\mathbb{F}_1$ would then correspond to a functor $h_Z~:~\mathbf{monoids} \longrightarrow \mathbf{sets}$ Such functors are described largely by combinatorial data (see for example the recent blueprint-paper by Oliver Lorscheid), and, if the story stopped here, any Rothko painting could be used as illustration. Most of the former approaches add something though (buzzwords include 'Arakelov', 'completion at $\infty$', 'real place' etc.) in order to connect the virtual geometric object over $\mathbb{F}_1$ with existing real, complex or integral schemes. 
For example, one can make the virtual object visible via an evaluation map $h_Z \rightarrow h_X$ which is a natural transformation, where $X$ is a complex variety with its usual functor of points $h_X$ and to connect both we associate to a monoid $M$ its complex monoid-algebra $\mathbb{C} M$. An integral scheme $Y$ can then be said to be 'defined over $\mathbb{F}_1$', if $h_Z$ becomes a subfunctor of its usual functor of points $h_Y$ (again, assigning to a monoid its integral monoid algebra $\mathbb{Z} M$) and $Y$ is the 'best' integral scheme approximation of the complex evaluation map. To illustrate this, consider the painting Study after Velázquez's Portrait of Pope Innocent X by Francis Bacon (right-hand painting above) which is a distorted version of the left-hand painting Portrait of Innocent X by Diego Velázquez. Here, Velázquez' painting plays the role of the complex variety which makes the combinatorial gadget $h_Z$ visible, and Bacon's painting depicts the integral scheme, built up from this combinatorial data, which approximates the evaluation map best. All of the former approaches more or less give the same very small list of integral schemes defined over $\mathbb{F}_1$, none of them motivically interesting. The current approach : Jackson Pollock "No. 8" (1949) An entirely different approach was proposed by James Borger in $\Lambda$-rings and the field with one element. He proposes another definition for commutative $\mathbb{F}_1$-algebras, namely $\lambda$-rings (in the sense of Grothendieck's Riemann-Roch) and he argues that the $\lambda$-ring structure (which amounts in the sensible cases to a family of endomorphisms of the integral ring lifting the Frobenius morphisms) can be viewed as descent data from $\mathbb{Z}$ to $\mathbb{F}_1$. The list of integral schemes of finite type with a $\lambda$-structure coincides roughly with the list of integral schemes defined over $\mathbb{F}_1$ in the other approaches, but Borger's theory really shines in that it proposes long sought for mystery-objects such as $\mathbf{spec}(\mathbb{Z}) \times_{\mathbf{spec}(\mathbb{F}_1)} \mathbf{spec}(\mathbb{Z})$. If one accepts Borger's premise, then this object should be the geometric object corresponding to the Witt-ring $W(\mathbb{Z})$. Recall that the role of Witt-rings in $\mathbb{F}_1$-geometry was anticipated by Manin in Cyclotomy and analytic geometry over $\mathbb{F}_1$. But, Witt-rings and their associated Witt-spaces are huge objects, so one needs to extend arithmetic geometry drastically to include such 'integral schemes of infinite type'. Borger has made a couple of steps in this direction in The basic geometry of Witt vectors, II: Spaces. To depict these new infinite dimensional geometric objects I've chosen Jackson Pollock's painting No. 8. It is no coincidence that Pollock-paintings also appeared in the depiction of noncommutative spaces. In fact, Matilde Marcolli has made the connection between $\lambda$-rings and noncommutative geometry in Cyclotomy and endomotives by showing that the Bost-Connes endomotives are universal for $\lambda$-rings. Penrose tilings and noncommutative geometry Penrose tilings are aperiodic tilings of the plane, made from two sorts of tiles : kites and darts. It is well known (see for example the standard textbook tilings and patterns section 10.5) that one can describe a Penrose tiling around a given point in the plane as an infinite sequence of 0's and 1's, subject to the condition that no two consecutive 1's appear in the sequence. 
Conversely, any such sequence is the sequence of a Penrose tiling together with a point. Moreover, if two such sequences are eventually the same (that is, they only differ in the first so many terms) then these sequences belong to two points in the same tiling. Another remarkable feature of Penrose tilings is their local isomorphism : fix a finite region around a point in one tiling, then in any other Penrose tiling one can find a point having an isomorphic region around it. For this reason, the space of all Penrose tilings has horrible topological properties (all points lie in each other's closure) and is therefore a prime test-example for the techniques of noncommutative geometry. In his old testament, Noncommutative Geometry, Alain Connes associates to this space a $C^*$-algebra $Fib$ (because it is constructed from the Fibonacci series $F_0,F_1,F_2,…$) which is the direct limit of sums of two full matrix-algebras $S_n$, with connecting morphisms $S_n = M_{F_n}(\mathbb{C}) \oplus M_{F_{n-1}}(\mathbb{C}) \rightarrow S_{n+1} = M_{F_{n+1}}(\mathbb{C}) \oplus M_{F_n}(\mathbb{C}) \qquad (a,b) \mapsto ( \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}, a)$. As such $Fib$ is an AF-algebra (for approximately finite) and hence formally smooth. That is, $Fib$ would be the coordinate ring of a smooth variety in the noncommutative sense, if only $Fib$ were finitely generated. However, $Fib$ is far from finitely generated and has other undesirable properties (at least for a noncommutative algebraic geometer) such as being simple and hence in particular $Fib$ has no finite dimensional representations… A couple of weeks ago, Paul Smith discovered a surprising connection between the noncommutative space of Penrose tilings and an affine algebra in the paper The space of Penrose tilings and the non-commutative curve with homogeneous coordinate ring $\mathbb{C} \langle x,y \rangle/(y^2)$. Giving $x$ and $y$ degree 1, the algebra $P = \mathbb{C} \langle x,y \rangle/(y^2)$ is obviously graded and noncommutative projective algebraic geometers like to associate to such algebras their 'proj' which is the quotient category of the category of all graded modules in which two objects become isomorphic iff their 'tails' (that is forgetting the first few homogeneous components) are isomorphic. The first type of objects NAGers try to describe are the point modules, which correspond to graded modules in which every homogeneous component is 1-dimensional, that is, they are of the form $\mathbb{C} e_0 \oplus \mathbb{C} e_1 \oplus \mathbb{C} e_2 \oplus \cdots \oplus \mathbb{C} e_n \oplus \mathbb{C} e_{n+1} \oplus \cdots$ with $e_i$ an element of degree $i$. The reason for this is that point-modules correspond to the points of the (usual, commutative) projective variety when the affine graded algebra is commutative. Now, assume that a Penrose tiling has been given by a sequence of 0's and 1's, say $(z_0,z_1,z_2,\cdots)$, then it is easy to associate to it a graded vectorspace with action given by $x.e_i = e_{i+1}$ and $y.e_i = z_i e_{i+1}$. Because the sequence has no two consecutive ones, it is clear that this defines a graded module for the algebra $P$ and determines a point module in $\pmb{proj}(P)$. By the equivalence relation on Penrose sequences and the tails-equivalence on graded modules it follows that two sequences define the same Penrose tiling if and only if they determine the same point module in $\pmb{proj}(P)$. 
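Just to make the combinatorics tangible, here is a quick and dirty check (in Python, with my own naming and indexing conventions, so take it as nothing more than an illustration) of two facts used above : the number of admissible 0-1 sequences of a given length is a Fibonacci number, which is where the matrix sizes $F_n$ in Connes' algebra $Fib$ come from, and the defining relation $y^2=0$ of $P$ holds automatically on the corresponding point module precisely because no two consecutive 1's occur.

```python
from itertools import product

def admissible(n):
    """All 0-1 sequences of length n with no two consecutive 1's
    (finite pieces of Penrose sequences)."""
    return [s for s in product((0, 1), repeat=n)
            if all(a * b == 0 for a, b in zip(s, s[1:]))]

def fib(k):
    """Fibonacci numbers with fib(1) = fib(2) = 1."""
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

# the count of admissible sequences of length n is the Fibonacci number fib(n+2)
for n in range(1, 12):
    assert len(admissible(n)) == fib(n + 2)

# point module of a sequence z : x.e_i = e_{i+1}, y.e_i = z_i * e_{i+1},
# so the coefficient of e_{i+2} in y.(y.e_i) is z_i * z_{i+1}, which vanishes
# because z has no two consecutive 1's, i.e. y^2 acts as zero, as it should
def y_squared_coefficient(z, i):
    return z[i] * z[i + 1]

for z in admissible(10):
    assert all(y_squared_coefficient(z, i) == 0 for i in range(len(z) - 1))
```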
Phrased differently, the noncommutative space of Penrose tilings embeds in $\pmb{proj}(P)$ as a subset of the point-modules for $P$. The only such point-module invariant under the shift-functor is the one corresponding to the 0-sequence, that is, corresponds to the cartwheel tiling Another nice consequence is that we can now explain the local isomorphism property of Penrose tilings geometrically as a consequence of the fact that the $Ext^1$ between any two such point-modules is non-zero, that is, these noncommutative points lie 'infinitely close' to each other. This is the easy part of Paul's paper. The truly, truly amazing part is that he is able to recover Connes' AF-algebra $Fib$ from $\pmb{proj}(P)$ as the algebra of global sections! More precisely, he proves that there is an equivalence of categories between $\pmb{proj}(P)$ and the category of all $Fib$-modules $\pmb{mod}(Fib)$! In other words, the noncommutative projective scheme $\pmb{proj}(P)$ is actually isomorphic to an affine scheme and as its coordinate ring is formally smooth $\pmb{proj}(P)$ is a noncommutative smooth variety. It would be interesting to construct more such examples of interesting AF-algebras appearing as local rings of sections of proj-es of affine graded algebras. Who dreamed up the primes=knots analogy? One of the more surprising analogies around is that prime numbers can be viewed as knots in the 3-sphere $S^3$. The motivation behind it is that the (etale) fundamental group of $\pmb{spec}(\mathbb{Z}/(p))$ is equal to (the completion) of the fundamental group of a circle $S^1$ and that the embedding $\pmb{spec}(\mathbb{Z}/(p)) \subset \pmb{spec}(\mathbb{Z})$ embeds this circle as a knot in a 3-dimensional simply connected manifold which, after Perelman, has to be $S^3$. For more see the what is the knot associated to a prime?-post. In recent months new evidence has come to light allowing us to settle the genesis of this marvelous idea. 1. The former consensus Until now, the generally accepted view (see for example the 'Mazur-dictionary-post' or Morishita's expository paper) was that the analogy between knots and primes was first pointed out by Barry Mazur in the middle of the 1960's when preparing for his lectures at the Summer Conference on Algebraic Geometry, at Bowdoin, in 1966. The lecture notes where later published in 1973 in the Annales of the ENS as 'Notes on etale cohomology of number fields'. For further use in this series of posts, please note the acknowledgement at the bottom of the first page, reproduced below : "It gives me pleasure to thank J.-P. Serre for his vigorous editing and his suggestions and corrections, which led to this revised version." Independently, Yuri I. Manin spotted the same analogy at around the same time. However, this point of view was quickly forgotten in favor of the more classical one of viewing number fields as analogous to algebraic function fields of one variable. Subsequently, in the mid 1990's Mikhail Kapranov and Alexander Reznikov took up the analogy between number fields and 3-manifolds again, and called the resulting study arithmetic topology. 2. The new evidence On december 13th 2010, David Feldman posted a MathOverflow-question Mazur's unpublished manuscript on primes and knots?. He wrote : "The story of the analogy between knots and primes, which now has a literature, started with an unpublished note by Barry Mazur. 
I'm not absolutely sure this is the one I mean, but in his paper, Analogies between group actions on 3-manifolds and number fields, Adam Sikora cites B. Mazur, Remarks on the Alexander polynomial, unpublished notes." Two months later, on february 15th David Feldman suddenly found the missing preprint in his mail-box and made it available. The preprint is now also available from Barry Mazur's website. Mazur adds the following comment : "In 1963 or 1964 I wrote an article Remarks on the Alexander Polynomial [PDF] about the analogy between knots in the three-dimensional sphere and prime numbers (and, correspondingly, the relationship between the Alexander polynomial and Iwasawa Theory). I distributed some copies of my article but never published it, and I misplaced my own copy. In subsequent years I have had many requests for my article and would often try to search through my files to find it, but never did. A few weeks ago Minh-Tri Do asked me for my article, and when I said I had none, he very kindly went on the web and magically found a scanned copy of it. I'm extremely grateful to Minh-Tri Do for his efforts (and many thanks, too, to David Feldman who provided the lead)." The opening paragraph of this unpublished preprint contains a major surprise! Mazur points to David Mumford as the originator of the 'primes-are-knots' idea : "Mumford has suggested a most elegant model as a geometric interpretation of the above situation : $\pmb{spec}(\mathbb{Z}/p\mathbb{Z})$ is like a one-dimensional knot in $\pmb{spec}(\mathbb{Z})$ which is like a simply connected three-manifold." In a later post we will show that one can even pinpoint the time and place when and where this analogy was first dreamed-up to within a few days and a couple of miles. For the impatient among you, have a sneak preview of the cradle of birth of the primes=knots idea… Last time we did recall Manin's comparisons between some approaches to geometry over the absolute point $\pmb{spec}(\mathbb{F}_1)$ and trends in the history of art. In the comments to that post, Javier Lopez-Pena wrote that he and Oliver Lorscheid briefly contemplated the idea of extending Manin's artsy-dictionary to all approaches they did draw on their Map of $\mathbb{F}_1$-land. So this time, we will include here Javier's and Oliver's insights on the colored pieces below in their map : CC=Connes-Consani, Generalized torified schemes=Lopez Pena-Lorscheid, Generalized schemes with 0=Durov and, this time, $\Lambda$=Manin-Marcolli. Durov : romanticism In his 568 page long Ph.D. thesis New Approach to Arakelov Geometry Nikolai Durov introduces a vast generalization of classical algebraic geometry in which both Arakelov geometry and a more exotic geometry over $\mathbb{F}_1$ fit naturally. Because there were great hopes and expectations it would lead to a big extension of algebraic geometry, Javier and Oliver associate this approach to romantism. From wikipedia : "The modern sense of a romantic character may be expressed in Byronic ideals of a gifted, perhaps misunderstood loner, creatively following the dictates of his inspiration rather than the standard ways of contemporary society." Manin and Marcolli : impressionism Yuri I. Manin in Cyclotomy and analytic geometry over $\mathbb{F}_1$ and Matilde Marcolli in Cyclotomy and endomotives develop a theory of analytic geometry over $\mathbb{F}_1$ based on analytic functions 'leaking out of roots of unity'. 
Javier and Oliver depict such functions as 'thin, but visible brush strokes at roots of 1' and therefore associate this approach to impressionism. From wikipedia : 'Characteristics of Impressionist paintings include: relatively small, thin, yet visible brush strokes; open composition; emphasis on accurate depiction of light in its changing qualities (often accentuating the effects of the passage of time); common, ordinary subject matter; the inclusion of movement as a crucial element of human perception and experience; and unusual visual angles.' Connes and Consani : cubism In On the notion of geometry over $\mathbb{F}_1$ Alain Connes and Katia Consani develop their extension of Soule's approach. A while ago I've done a couple of posts on this here, here and here. Javier and Oliver associate this approach to cubism (a.o. Pablo Picasso and Georges Braque) because of the weird juxtapositions of the simple monoidal pieces in this approach. Lopez-Pena and Lorscheid : deconstructivism Torified varieties and schemes were introduced by Javier Lopez-Pena and Oliver Lorscheid in Torified varieties and their geometries over $\mathbb{F}_1$ to get lots of examples of varieties over the absolute point in the sense of both Soule and Connes-Consani. Because they were fragmenting schemes into their "fundamental pieces" they associate their approach to deconstructivism. Another time I'll sketch my own arty-farty take on all this.
CommonCrawl
Assessing the prices and affordability of oncology medicines for three common cancers within the private sector of South Africa Phyllis Ocran Mattila, Zaheer-Ud-Din Babar & Fatima Suleman (ORCID: orcid.org/0000-0002-8559-9168) Prices of cancer medicines are a major contributor to the cost of treatment for cancer patients, and these costs need to be compared and assessed. The objective was to assess the prices of cancer medicines for the three most common cancers (breast, prostate and colorectal) in the private healthcare sector of South Africa. The methodology was adapted from the World Health Organization (WHO)/Health Action International (HAI) methodology for measuring medicine prices. The Single Exit Price (SEP) variation between product types of the same medicine, between the highest- and lowest-priced product and between the Originator Brand (OB) and its Lowest Priced Generic (LPG) of the same medicine, was compared, as of March 2020. The affordability of those medicines, in relation to the daily wage of the unskilled Lowest-Paid Government Worker (LPGW), was also determined. Also, a comparison of the proportion of the population below the poverty line (PL) before (Ipre) and after (Ipost) procurement of the cancer medicines was determined. SEP price differences ranged from 25.46 to 97.33% between highest- and lowest-priced products, with a price variation of 72.09% more for the OB than the LPG medicine, except for one LPG that was more expensive than the OB. Affordability calculations showed that all OB treatments for the three cancers (breast, prostate and colorectal) cost more than 1 day's wage, except for paclitaxel 300 mg (0.2 days' wages) and fluorouracil (Fluroblastin) 500 mg (0.3 days' wages), with patients diagnosed with colorectal cancer needing 32.5 days' wages in order to afford a standard course of treatment for a month. There was a considerable variation in the price of different brands of cancer medicines available in the South African private sector. The global cancer burden is estimated to have risen to 18.1 million new cases and is responsible for an estimated 9.6 million deaths in 2018 [1]. Globally, about 1 in 6 deaths is due to cancer. Unless greater effort is made to alter the course of the disease, this number is expected to rise to close to 30 million new cases by 2040. About 70% of deaths from cancer occur in Low- and Middle-Income Countries (LMICs) [1]. In South Africa, the estimated number of cancer cases was 107,464 in 2018 and may increase to 177,773 in 2040 [2]. In South Africa, breast, prostate and colorectal cancer rank in the top 10 cancers, with 15,491, 13,152 and 7,354 cases respectively in 2020 [2, 3]. It is known that early detection and treatment may improve health outcomes associated with the disease for adults [1], and this would depend on equitable access to available and affordable low-cost, highly active cancer medicines. Cancer treatment is expensive and high prices of cancer medicines have a huge impact on access in LMICs. Most of the newer cancer medicines and new therapies such as immunotherapy, monoclonal antibodies, and targeted therapy are out of reach for the large populations with poor socio-economic conditions in LMICs, and even the older cytotoxic agents remain affordable only to a minority of patients. 
For example, according to a World Health Organization report, a course of standard treatment for early-stage human epidermal growth factor receptor 2 positive (HER2+) breast cancer (doxorubicin, cyclophosphamide, docetaxel, trastuzumab) would cost about 10 years of average annual wages in India and South Africa [4]. According to the World Bank (WB), South Africa is regarded as an Upper Middle-Income Country (UMIC) with a population of 59,308,690 (mid-year estimate) [5], in 2020 and a Gross National Income (GNI) per capita of US$ 6040 in 2019 [6]. The World Bank (WB) has defined the international Poverty Line (PL) as US $1.90 per person per day using data on purchasing power parities and an expanded set of household income and expenditure surveys in 2011 [7]. This defines the cost of basic needs in some of the poorest countries in the world and is the absolute minimum threshold for defining poverty. However, for an upper middle-income country like South Africa, the PL has been set at US $5.50 per person per day, thus defining the cost of basic needs in South Africa [8]. If the pre-payment income (income before purchasing the medicine) is above the US$5.50 poverty line and post-payment income (income left after purchasing the medicine) falls below this poverty line then the purchasing of that medicine has impoverished people in South Africa [9]. The World Health Statistics 2020 reported that 1.4% of South Africans spend less than 10% their total household expenditure or income on health [10]. Some of this expenditure is due to public sector–dependent (unemployed or low earning) uninsured persons, accessing health care, including medicines, in the private sector [11]. Government funded healthcare is offered to all South Africans for free, yet people can opt to purchase private insurance in order to be treated at private hospitals and health clinics. Patients in the private sector (generally the wealthy) can either pay for their health care needs via a medical aid scheme (insurance) or the patient is faced with an out-of-pocket expenditure. Out-of-pocket (OOP) expenditure could be related to co-payments for treatment; nutritional changes in diet; rehabilitation, and travel to appointments. The extent of OOP for cancer patients is unknown in South Africa. South Africa has implemented several important medicine pricing interventions in the post-apartheid era, informed by the 1996 National Drug Policy (NDP) [12]. One key aim of the NDP of South Africa is to promote the availability of safe and effective medicines at the lowest possible cost by monitoring and negotiating medicine prices and by rationalising the medicine pricing system in the public and private sectors and by promoting the use of generic medicines [13, 14]. The Department of Health has adopted measures using regulations to address the pricing of medicines and one of them was the introduction of the SEP [11,12,13,14]. The SEP in terms of the regulations means "the price at which the manufacturer or importer of a medicine or scheduled substance can sell to a wholesaler or distributor. It combines the ex-manufacturer price as well as a logistic fee portion. The wholesaler or distributor will then sell the medicine to a pharmacist who adds a dispensing fee before the medicine is sold to a patient." 
This is complemented with a provision for a regulated maximum increase in the single exit price, determined annually by the Minister of Health on the advice of the Pricing Committee [15]; the maximum capped percentage increase varies each year. Manufacturers can take the maximum increase, part of the increase, no increase, or even reduce their prices annually. The introduction of the SEP resulted in an approximately 22% reduction in medicine prices and has saved the scheme about ZAR 319 million per year in medicine expenditure since 2004 [16]. In terms of private sector pricing, the SEP mechanism and the publication of annual adjustments have provided the state with a powerful tool [11, 12, 17]. The impact of the SEP on affordability, though, is unclear. Scarcity of pricing or affordability data is one of the major barriers to the development of effective and transparent pricing policy in LMICs. Thus, the focus of this study was to compare the SEPs for medicines used for three different cancer treatments (breast, prostate and colorectal) and to assess their affordability and consequent impoverishment. The main objectives of the study were to answer the following questions: Is the private sector purchasing medicines efficiently; what is the price of originator brand and generic medicines in the private sector? What is the difference between the highest and lowest unit price of the same cancer medicine? How affordable are medicines for the treatment of cancer for people with low income? The methodology employed in this study was adapted from the WHO/HAI methodology of measuring medicine prices and affordability [18] for ten cancer medicines (both originator brand (OB) and lowest priced generic (LPG) products) [2, 19]. The prices were based on the low, high and median 2020 SEP unit price per vial of the injectable cancer medicine formulation, obtained from the South African Medicine Price Registry as of 11th March 2020 [19]. The South African Medicines Price Registry is managed by the National Department of Health and is a publicly available database that contains the current SEP prices of all registered medicines in South Africa, though previous versions of the database are available at fixed points in time. The database is an implementation of the transparent pricing policies for the private sector that are part of South African legislation. All manufacturers are obliged to submit their SEPs to the National Department of Health, which are then entered into the database and published as an Excel spreadsheet on the website [19], which is continually updated as prices change. All the various prices (including cases where only one price was found) of each of the 10 medicines were extracted as submitted by the manufacturers to the database and included in the analysis [19]. Treatment regimens for advanced stages of colorectal, breast and prostate cancer (being the most common amongst South African men and women [2]) were taken from the Electronic Medicines Compendium (EMC) [20] of the United Kingdom (UK) and the National Comprehensive Cancer Network (NCCN) treatment guidelines [21]. A standardized computerized workbook [18] was used to enter and analyse data from the private sector on the components of medicine prices and the affordability of the medicines. The workbook automatically generates summary tables, which show the median prices of the medicines.
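Because the registry is published as a spreadsheet of SEPs per registered product, the low/high/median unit-price summary described above can be illustrated with a short script. This is only a sketch, not the authors' WHO/HAI workbook; the file name and column names below are hypothetical.

```python
import pandas as pd

# A sketch only: summarise SEP unit prices per medicine from a hypothetical
# registry export. The file name and column names are illustrative assumptions.
df = pd.read_excel("sep_price_registry_2020.xlsx")

# Assume one row per registered product, with a 'medicine' label such as
# "Doxorubicin 50 mg injection" and a 'sep_unit_price' in ZAR per vial.
summary = (
    df.groupby("medicine")["sep_unit_price"]
      .agg(lowest="min", highest="max", median="median", products="count")
      .reset_index()
)
print(summary)
```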
In the current study, the Median Price Ratio (MPR) was not calculated due to the outdated 2015 Management Sciences for Health (MSH) External Reference Price (ERP) [22]. Therefore, a comparison with the International Price Ratio (IPR) was not done. However, the median unit price was presented for individual medicines. The following research findings will be discussed: Procurement efficiency/brand premium, which examines whether procurement prices are comparable amongst other types/brands of the same medicine. Medicine price variations between product types of the same medicine's highest- and lowest-priced product, as well as between the OB and LPG, whereby analysis is limited to those medicines for which both product types were found (matched pair analysis). The difference is expressed in this paper as a ratio and a percentage. Cancer medicine affordability: affordability of those medicines for cancer usage based on treatment affordability in relation to the daily wage of the unskilled LPGW (WHO/HAI) [18] and the Niens et al. method [9, 17, 23]. Procurement efficiency, defined as the difference between the Highest-Priced Medicine (HPM) and the Lowest-Priced Medicine (LPM), and brand premiums between the highest-priced generic or innovator brand products and their lowest-priced generic equivalents were determined [18]. The median SEP unit price was calculated rather than mean values. The percentage cost differential or price variation was calculated as:
$$ \text{Cost differential}\ (\%) = \frac{\text{Price of the originator} - \text{Price of the generic}}{\text{Price of the originator}} \times 100 $$
The price ratio between OB and LPG, or HPM and LPM, was calculated as:
$$ \text{Price ratio} = \frac{\text{Price of the OB}}{\text{Price of the LPG}} \quad \text{or} \quad \frac{\text{Price of the HPM}}{\text{Price of the LPM}} $$
The maximum and the minimum price of each medicine with the same strength, regardless of it being OB or LPG, were used to calculate the cost differential between minimum and maximum SEP (%). For this study, medicine affordability has been investigated in terms of the days' wages that a country's unskilled LPGW needs to spend on a standard course of treatment [18]. This study presents patient prices and product affordability based on the WHO/HAI method [18] by examining the costs of cancer treatments and comparing them with the daily wage of the LPGW [18]. The 2020 salary of the unskilled LPGW in South Africa was 166.08 ZAR per day, based on 20.76 ZAR per hour and an 8-hour working day [24], which is equivalent to 9.9271 USD (1 USD = 16.73 ZAR, 12 September 2020) [25]. A month of oncology treatment was used to demonstrate the economic implication for a patient who would have to pay for it out of pocket, even though a cancer patient is expected to have more than one cycle of treatment with multiple medicine regimens.
$$ \text{Days' wages to afford treatment} = \frac{\text{cost of vial(s) of cancer medicine needed per month}}{\text{daily wage of the lowest-paid government worker}} $$
It is important to bear in mind that these costs refer only to the medicine component of the total treatment costs. Consultation fees and diagnostic tests may mean that the total cost to the patient is considerably higher. A limitation in the methodology of this study is the exclusion of the accompanying cost factors that play a role in the final cost to the cancer patient, such as dispensing fees, facility fees, administration fees, doctors' fees etc.
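The price-comparison and affordability arithmetic defined by the formulas above can be summarised in a few lines of code. This is a minimal sketch; the prices used are hypothetical placeholders, and only the 2020 LPGW daily wage of ZAR 166.08 is taken from the text.

```python
# Minimal sketch of the cost-differential, price-ratio and days'-wages formulas above.
# Prices are illustrative placeholders, not registry values.

DAILY_WAGE_ZAR = 166.08  # 2020 unskilled LPGW wage: 20.76 ZAR/hour x 8 hours

def cost_differential_pct(originator: float, generic: float) -> float:
    """Percentage difference between the originator brand and the generic."""
    return (originator - generic) / originator * 100

def price_ratio(higher: float, lower: float) -> float:
    """OB/LPG (or HPM/LPM) price ratio."""
    return higher / lower

def days_wages(monthly_cost: float, daily_wage: float = DAILY_WAGE_ZAR) -> float:
    """Days of LPGW wages needed to pay for one month of the medicine."""
    return monthly_cost / daily_wage

# Hypothetical example (ZAR):
ob, lpg = 5400.0, 1500.0
print(f"Cost differential: {cost_differential_pct(ob, lpg):.1f}%")   # 72.2%
print(f"Price ratio (OB/LPG): {price_ratio(ob, lpg):.2f}")           # 3.60
print(f"Days' wages for one month of the OB: {days_wages(ob):.1f}")  # 32.5
```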
An additional measure of unaffordability, using the Niens et al. method [9, 17, 23], was included in this study. The unaffordability of a medicine here also refers to the percentage of the population that is already below, or would fall below, the poverty line when having to procure the medicine [7,8,9, 17, 23]. We also used the impoverishment method to compare the proportion of the population below the poverty line (PL) before (Ipre) and after (Ipost) the hypothetical procurement of a medicine [17]. The medicine is deemed unaffordable for the percentage of the population represented by Ipost. Three types of data were required: medicine prices, aggregated income data (Y) [26], and information on the income distribution [17, 26] (Ref: Table 1). We used the PL threshold of 5.50 USD [7, 8] or ZAR 92.02 [25] a day. (A schematic sketch of this Ipre/Ipost comparison is given after the price-variation results below.)
Table 1 Income distribution and average daily Income Per Capita (IPC) [26]
This study therefore focused solely on the SEP of the chosen cancer medicines for the most common cancer conditions in the private sector and how these affected affordability. Through its comparisons of OB to LPG, this study sought to emphasise the cost-saving implications of using the LPG in the treatment of a cancer patient. The variation in the price of 10 cancer medicines of different strengths and dosage forms was assessed (Table 2). The cost/price differential for 90% of all the medicines analysed was above 50%. The maximum variation was found in Doxorubicin 50 mg injection (97.33%), whereas Oxaliplatin 100 mg injection showed the minimum price variation (25.46%). When analysing the cancer medicines individually, the highest-priced Doxorubicin 50 mg injection product was 37.44 times more expensive than its lowest-priced product. Overall, almost all the cancer medicines (90%) in this analysis had significant price differences between their lowest- and highest-priced products, with a cost differential ratio above 2.
Table 2 Comparison of Lowest Price with Highest Price of the same medicine (SEP)
Table 3 shows the median price variability/cost differences between the OB and the LPG. Only those medicines for which both the originator brand and a generically equivalent product were found were included in the analysis, to allow for the comparison of prices between the two product types. Docetaxel 20 mg, Oxaliplatin 50 mg and Oxaliplatin 100 mg were excluded from the results as they had no OB for comparison. Results show that in the private sector, OBs cost more, on average, than their generic equivalents. The OB/LPG price ratio ranged from 0.13 to 3.58, with 86% of the cancer medicines having a ratio above 1. The price variability between the OB and LPG for 66.7% of the medicines analysed was over 50%; this means that when OB medicines are prescribed/dispensed in the private sector, patients pay over 50% more than they would for generics. The highest cost differential was seen in Doxorubicin 100 mg (72.09%), followed by Irinotecan 100 mg (70.06%), Irinotecan 40 mg (64.65%) and Docetaxel 80 mg (62.13%). Thus, patients are paying substantially more to purchase OB medicines when LPGs are available.
Table 3 Price variation among different brands of cancer medicines available in the private pharmacies database of South Africa
For 28.6% of the surveyed OB and LPG cancer medicines, the cost differential was below 50%. The lowest was paclitaxel 300 mg (22.39%), followed by Doxorubicin 50 mg (32.35%). The fluorouracil generic medicine was more expensive than its branded medicine, thus having a negative price variation of − 679.73%.
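Returning to the Ipre/Ipost impoverishment measure introduced in the methods above, a minimal sketch follows. The per-capita daily incomes are hypothetical stand-ins for the income-distribution data of Table 1; only the ZAR 92.02 poverty-line threshold comes from the text, and spreading the monthly medicine cost over 30 days is a simplifying assumption.

```python
# Minimal sketch of the impoverishment comparison (Ipre vs Ipost). The poverty line
# of ZAR 92.02/day is from the text; the income list is hypothetical, and dividing
# the monthly cost over 30 days is a simplifying assumption.

POVERTY_LINE = 92.02  # ZAR per person per day (US$ 5.50)

def impoverishment(daily_incomes, monthly_medicine_cost):
    """Return (Ipre, Ipost): proportion below the poverty line before and after
    hypothetically paying for one month of the medicine out of income."""
    daily_cost = monthly_medicine_cost / 30.0
    n = len(daily_incomes)
    ipre = sum(y < POVERTY_LINE for y in daily_incomes) / n
    ipost = sum(y - daily_cost < POVERTY_LINE for y in daily_incomes) / n
    return ipre, ipost

# Hypothetical per-capita daily incomes (ZAR) standing in for Table 1:
incomes = [40, 60, 80, 95, 120, 200, 400, 800, 1500, 3000]
ipre, ipost = impoverishment(incomes, monthly_medicine_cost=1500.0)
print(f"Ipre = {ipre:.0%}, Ipost = {ipost:.0%}")  # 30% before, 50% after in this toy example
```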
Affordability [18] (Ref: Tables 4 & 5 and Fig. 1) has been assessed only for 17 versions of OB and LPG cancer medicines from the private sector database. All OB treatments cost more than one day's wage, except for paclitaxel 300 mg (0.2 days' wages) and Fluorouracil (Fluroblastin) 500 mg (0.3 days' wages). Among all the surveyed medicines, the OB of a one-month treatment of Irinotecan (Campto) 40 mg required 32.3 days' wages, making it the most unaffordable. The cost of generic versions of Irinotecan 40 mg was 11.5 days' wages. For Docetaxel 80 mg the cost in days' wages for the OB product is 9, while the LPG product costs 3.4 days' wages. For Irinotecan 100 mg the cost in days' wages for the OB product is 17.1, while the LPG product costs 5.1 days' wages. For Doxorubicin 50 mg the cost in days' wages for the OB product is 3.8, while the LPG product costs 2.6 days' wages. The cost of a one-month treatment with Doxorubicin 10 mg required about 3.5 days' wages for the OB and one day's wage for the LPG. For the LPG medicines without a comparator OB, buying Docetaxel 20 mg, Oxaliplatin 100 mg and Oxaliplatin 50 mg cost 13.6, 1.1 and 0.5 days' wages respectively. Moreover, Paclitaxel 300 mg OB, paclitaxel 300 mg LPG, Doxorubicin 10 mg LPG, Fluorouracil (Fluroblastin) 500 mg OB, Oxaliplatin 50 mg LPG and Oxaliplatin 100 mg LPG were found to be the most affordable cancer medicines in the private sector in South Africa. It is important to bear in mind that these costs refer only to the medicine component of the total treatment costs. Consultation fees and diagnostic tests may mean that the total cost to the patient is considerably higher.
Table 4 Treatment regimen for calculating affordability [20]
Table 5 Affordability in terms of the HAI method, using the number of days' wages of a government worker required to pay for treatment with cancer medicine(s) [18], shows the affordability data for the selected cancer medicines.
Using the Niens et al. method [9, 17, 23], the proportion of the population living below the poverty line before (Ipre) the hypothetical procurement of a medicine is 57%. The proportion impoverished after (Ipost) the hypothetical procurement of a medicine ranges up to 26%, making the most expensive medicine, Irinotecan (Campto) 40 mg OB, unaffordable to 82.95%. The proportion impoverished ranges from 0.3 to 17.8% for the rest of the medicines (Ref: Table 6).
Table 6 Medicine prices, cost of treatment per month and proportion impoverishment data [26]
The survival of people living with cancer depends on factors such as the availability, affordability, and accessibility of treatment. Access to high-cost cancer medicines has become a major challenge in many countries, because of the scarcity of pricing or affordability data needed to develop effective and transparent pricing policy, lack of insurance coverage, and the resulting financially unaffordable cost to patients [11]. The health and economic objective of the South African NDP is to ensure the availability and accessibility of essential medicines and to lower the cost of medicines for all citizens in both the private and public sectors [11, 13]. This study sought to analyse price variation among different brands of cancer medicines from the private sector and to explore whether the objectives of the NDP are being met with regard to oncology medicines [13].
The results of this study suggest that oncology medicine prices in South Africa are still high, and there are large price differences in the private sector between the highest-priced products and their lowest-priced equivalents, as well as between OB and LPG. The differences in price between HPM and LPM equivalents were found to be as high as 37.44 times in some instances. In the private sector, OBs cost more than the LPGs, with cost differences of up to about 72.09%. Similar findings were also seen in studies conducted in LMICs (India, Nepal, and the African, Latin American, South East Asian, Western Pacific and East Mediterranean regions) on the pricing of cancer medicines [3, 27,28,29,30,31]. These studies showed wide variations in price across different countries within regions, within the same country across different brands of the same medicine in the same dose and dosage form, between individual medications, and between OBs and LPGs [2, 27,28,29,30,31,32]. High patient prices can be due to lack of generic competition, suppliers of generic medicines pricing popular products only slightly below the originator brand version, high manufacturer profit margins, high government taxes and duties on medicines, and inefficient supply systems. Current pricing policies (or the lack thereof) have led to considerable variability in the prices of cancer medicines within a country [4]. There are various reasons for the observed price variations, such as patent protection, monopolistic markets for new entities, regulatory issues, taxes and tariffs, geographic location, income status and lack of internal price regulation measures. In LMICs, health system strengthening is key and can improve various facets of the medicines chain, including access to and affordability of medicines [33]. Differences in the guidelines of medicine regulatory authorities of various countries and their pricing policies account for the varying prices of medicines among different countries [27]. Assessing the prices of chemotherapy medicines in the private sector showed that the price differences between the OB and the LPG for the medicines used in this study ranged from the OB being 1.29 to 3.58 times more than the price of the LPG. In this study, Fluorouracil 500 mg's LPG was more expensive than its OB, with a negative cost differential of 679.73%, which may be because of generic competition or some other factors. For paclitaxel, ten LPG cancer medicines were available with only one OB. These results indicate that there are savings that can be achieved by using the LPGs. This highlights the need for LPGs to be available in order to improve the affordability of cancer medicines. About 33% of the medicines have a ratio of almost 1 between OBs and LPGs, which suggests that the SEP policy may be hindering competition for some products by setting a price ceiling or capping increases. Alternatively, companies may be using the OB price as a guide to their price setting. The existence of generics on the market does affect originator prices in some countries. In some countries, originator prices might have decreased because of generic competition, whereas in other countries originator prices remained at a high level [31]. South Africa has a large and highly developed private pharmaceutical manufacturing system and market, estimated to account for 25% of the volume but 65% of the market by value [11, 31]. Generic medicines were estimated to account for about 65% of all items dispensed in the private sector and 40% of expenditure [34].
In South Africa, an area of importance has been that of generic penetration, in response to the legal requirement for a mandatory offer of generic substitution by all dispensers [12]. As shown by this study, large differences in the price of the same medicine (by a different manufacturer) might affect patients' expenditure on medicines, especially when the patient does not know about price variation and cheaper alternatives, or if a medical scheme has included a different medicine on their formulary. The patient may still face high co-payments as part of their plan with a medical scheme. In terms of South African private sector pricing, the SEP mechanism and the publication of annual adjustments have provided some transparency in medicines pricing in terms of the full medicine price (without the dispensing fee addition). Medical schemes should follow suit and publish their co-payment schedules on their websites to allow patients to understand the costs they incur, and to determine if alternatives exist where no co-payment is required. The government should continue in its efforts to promote generic prescribing and utilization, generic substitution, transparent pricing and efficient regulation, empower patients to request cheaper alternatives, improve price transparency by medical schemes, introduce internal and external reference pricing benchmarking and health technology assessment processes, and apply pharmacoeconomic analyses to negotiate the SEP prices of cancer medicines [11, 12, 17, 35]. Pricing policies will also have to be reviewed based on fluctuations in the South African currency, which could impact the supply of essential cancer medicines. The regulated maximum SEP mechanism, with annual adjustments, may need reconsideration and refinement, and its increases need to consider exceptional circumstances that may arise as the result of extreme currency fluctuations within a given calendar year [11, 12]. Regular revision of the national medicines policy, which confronts the demands of a purchaser-provider split and a completely reformed health financing system, is needed to guide pharmaceutical practice in the future. Such a policy will need to build on the gains achieved [11]. In treating the commonly occurring cancer conditions in South Africa using standard regimens, affordability of generics seems to be an issue for Irinotecan 40 mg, Irinotecan 100 mg, Doxorubicin 50 mg, Docetaxel 20 mg and Fluorouracil 500 mg. The LPGW would need between 0.2 and 13.6 days' wages to purchase the lowest-priced generic medicines from the private sector. If OBs are prescribed/dispensed, costs escalate to between 0.2 and 32.5 days' wages. The LPG could be up to 67% more affordable than the OB. Some treatments were clearly unaffordable; e.g. the treatment of colorectal cancer with OB or LPG Irinotecan (Campto) 40 mg would cost 32.5 or 11.5 days' wages respectively. Thus, for some cancer medicines, a month's wage would not be enough to afford treatment. Affordability data indicated that 57% of the population would not be able to pay for their cancer medicines, as they live below the poverty line before (Ipre) the hypothetical procurement of the cancer medicines. Irinotecan (Campto) 40 mg OB is expensive and was unaffordable to 82.95% of the population. Our findings were consistent with the Niens et al.
study on the impoverishing effects of purchasing medicines, which used the proportion of a population falling below a relevant poverty line after buying medicines and concluded that the impoverishing effect of medicines varied between OB and LPG products, and that a substantial portion of the population would be pushed into poverty as a result of medicine procurement [9]. Another study showed that the monthly costs of biological cancer medicines in Pakistan were higher than 20% of the monthly household income after spending on food [36]. Only 58.1% of non-biological cancer medicines were affordable [36]. Private sector cancer medicines are funded largely from insurance premiums (paid by individuals and employers) but also from out-of-pocket payments [11]. In South Africa, only 17.1% of people belonged to a medical scheme in 2017 [10]. Thus, if these medicines are not covered by health insurance, the unaffordable prices will prevent them from being used by a substantial portion of cancer patients and will affect their health if they have to pay for their treatment out of pocket or if these medicines are unavailable in the public sector. Our findings are consistent with a study [37] which showed that LPG cancer medicines were more affordable (67.9%) than OBs (53.4%) in the private sector in Pakistan. Studies in India and Bangladesh on the affordability of paediatric cancer medicines revealed that cancer treatments were not affordable for most families, leading to treatment abandonment [31, 38]. A study comparing prices in Australia, China, India, Israel, South Africa, the United Kingdom, and the United States linked the price of cancer medicines to affordability using international markers of wealth and showed that there were major differences in patterns of affordability between countries [39]. Medicines in South Africa were less affordable than in all high-income countries, including the US, where prices were considerably higher. These differences were driven by lower levels of wealth in middle-income countries. In understanding differences in wealth between countries there may be some debate regarding the most appropriate metric to use, as GDP per capita does not incorporate personal income that may be affected by unemployment levels, retirement age, and social patterns of employment. Differential pricing may be an acceptable policy to ensure global affordability of highly active cancer therapies. High inflation, low per capita income and the increasing cost of living are among the several hurdles that hinder people from affording cancer medication. Differential pricing, low-premium insurance schemes, medicine discounts, patient-access schemes, tax benefits, concerted public-private initiatives, patent changes, national health plans, and emulation of salient models in governance and public health administration are required for long-term sustainability [31, 38, 40]. The relationship between price and healthcare outcomes should be enhanced through arrangements that reward innovation, while ensuring the sustainability of an affordable healthcare system [38, 41, 42]. Limitations of the research: The medicines included in this study were from the private sector database, and thus there may be concern that the research is not representative of the situation in South Africa. This study, using basic indicators only, cannot give a complete picture of the pharmaceutical sector in South Africa.
The median price ratio was not calculated, and therefore the data collected were not comparable with international reference prices. Results on affordability may also lead to over-estimation, since the calculation used was based on the lowest-paid government worker's wages; a significant proportion of the population earn less than the LPGW. The calculation of affordability utilizes the standard dose of individual medicines, and affordability may vary if patients are taking more than one medicine. Cancer is expensive to treat. The results of the study show that the affordability and price of medicines in South Africa are of concern. As the country moves towards National Health Insurance, options for patients incurring high-cost treatment need to be considered and formulated. The Government of South Africa has regulated the prices of medicines; however, more needs to be done and further strategies are needed to address the high costs of cancer medicines. This requires multi-faceted interventions, as well as the review and refocusing of policies, regulations and educational interventions. A recommendation for future research would be to investigate the impact of medicine price benchmarking for oncology medicines in the private sector of South Africa. The data and materials are available on request from the authors.
EMC: Electronic Medicines Compendium; ERP: External Reference Price; GDP: Gross Domestic Product; GNI: Gross National Income; HAI: Health Action International; HER2+: Human Epidermal growth factor Receptor 2 positive; HICs: High-Income Countries; HPM: Highest-Priced Medicine; IPC: Income Per Capita; Ipost: Income post-payment; Ipre: Income pre-payment; IPR: International Price Ratio; LMIC: Low- and Middle-Income Country; LPG: Lowest Priced Generic; LPGW: Lowest Paid Government Worker; LPM: Lowest-Priced Medicine; MPR: Median Price Ratio; MSH: Management Sciences for Health; NCCN: National Comprehensive Cancer Network; NDP: National Drug Policy; OB: Originator Brand; OOP: Out-of-pocket; SEP: Single Exit Price; UMIC: Upper Middle-Income Country; WHO: World Health Organization; ZAR: South African Rand.
World Health Organization. Cancer Fact Sheet. 2018. https://www.who.int/news-room/fact-sheets/detail/cancer. Accessed 5 July 2020. International Agency for Research on Cancer. South Africa: Globocan 2020. The Global Cancer Observatory, World Health Organization, December 2020. https://cansa.org.za/files/2021/02/IARC-Globocan-SA-2020-Fact-Sheet.pdf. Staff Writer. Massive rise in cancer related treatment in South Africa – these are the most common types among men and women. Businesstech, 8 February 2020. https://businesstech.co.za/news/lifestyle/371244/massive-rise-in-cancer-related-treatment-in-south-africa-these-are-the-most-common-types-among-men-and-women/. World Health Organization. Technical Report: Pricing of cancer medicines and its impacts. Geneva: World Health Organization; 2018. Worldometer. South African Population. https://www.worldometers.info/world-population/south-africa-population/. Accessed 12 April 2021. Macrotrends. South Africa GNI Per Capita 1962–2021. https://www.macrotrends.net/countries/ZAF/south-africa/gni-per-capita. Accessed 12 April 2021. World Bank. Poverty and shared prosperity 2018: piecing together the poverty puzzle. Washington, DC: World Bank; 2018. Jolliffe D, Prydz EB. Estimating international poverty lines from comparable national thresholds. J Econ Inequal. 2016;14(2):185–98. Niens LM, Cameron A, Van de Poel E, Ewen M, Brouwer WBF, et al.
Quantifying the impoverishing effects of purchasing medicines: a cross-country comparison of the affordability of medicines in the developing world. PLoS Med. 2010;7(8):e1000333. https://doi.org/10.1371/journal.pmed.1000333. World Health Organization. World Health Statistics 2020: Monitoring health for the SDGs. https://www.who.int/gho/publications/world_health_statistics/2020/en/. Suleman F, Gray A. Pharmaceutical policies in South Africa. In: Babar Z, editor. Pharmaceutical Policy in Countries with Developing Healthcare Systems. Springer International Publishing; 2017. Gray A, Suleman F. Pharmaceutical pricing in South Africa. In: Babar Z, editor. Pharmaceutical Prices in the 21st Century. Springer International Publishing; 2015. South African National Department of Health. National Medicine Policy of South Africa. Pretoria: Department of Health; 1996. Gray A, Suleman F, Pharasi B. South Africa's National Medicine Policy: 20 years and still going. South African Health Review. 2017;1:49–58. https://hdl.handle.net/10520/EJC-c80c69129. Republic of South Africa. Medicines and Related Substances Control Amendment Act (90 of 1997). Government Gazette Vol. 390, No. 18505, Cape Town, 12 December 1997. Discovery Health Medical Scheme. Integrated annual report. Johannesburg: Discovery Health; 2012. Niëns LM, Van de Poel E, Cameron A, Ewen M, Laing R, Brouwer WBF. Practical measurement of affordability: an application to medicines. Bull World Health Organ. 2012;90(3):219–27. https://doi.org/10.2471/BLT.10.084087. World Health Organization, Health Action International. Measuring medicine prices, availability, affordability and price components. 2nd edition. Geneva: World Health Organization; 2008. Updated July 2020. https://haiweb.org/what-we-do/price-availability-affordability/collecting-evidence-on-medicine-prices-availability/. South African Department of Health. South African Medicine Price Registry Database of Medicine Prices. Pretoria: National Department of Health; 2019. http://www.mpr.gov.za/PublishedDocuments.aspx#DocCatId=21. Accessed 24 August 2020. Datapharm Ltd. The electronic medicines compendium (emc). https://www.medicines.org.uk/emc. National Comprehensive Cancer Network. Clinical guidelines in practical oncology: NCCN guidelines 2020. https://www.nccn.org/professionals/physician_gls/recently_updated.aspx. Management Sciences for Health. International Medical Products Price Indicator Guide. Cambridge, Massachusetts: Management Sciences for Health; 2016. https://mshpriceguide.org/en/home/. Accessed 19 August 2020. Niëns LM, Brouwer WBF. Measuring the affordability of medicines: importance and challenges. Health Policy. 2013;112(1-2):45–52. https://doi.org/10.1016/j.healthpol.2013.05.018. South African Government. Employment and Labour on new National Minimum Wage rate. Department of Employment and Labour. 24 February 2020. https://www.gov.za/speeches/new-nmw-base-rate-come-effect-march-%E2%80%93-department-employment-and-labour-24-feb-2020-0000. Accessed 12 April 2021. Google. Google exchange rate 12 September 2020. https://www.bing.com/search?q=convert+usd+to+zar&form=EDGTCT&qs=CA&cvid=b9d76661c67540b98c2e68e872cca9d9&cc=US&setlang=en-US. Accessed 12 September 2020. The World Bank Group. World development indicators. http://wdi.worldbank.org/table/4.8. Kolasani BP, Malathi DC, Ponnaluri RR. Variation of cost among anti-cancer drugs available in Indian market.
J Clin Diagn Res. 2016. https://doi.org/10.7860/JCDR/2016/22384.8918. Cuomo RE, Seidman RL, Mackey TK. Country and regional variations in purchase prices for essential cancer medication. BMC Cancer. 2017;17(1):566. https://doi.org/10.1186/s12885-017-3553-5. Vogler S, Vitry A, Babar Z-U-D. Cancer medicines in 16 European countries, Australia, and New Zealand: a cross-country price comparison study. Lancet Oncol. 2016;17(1):39–47. https://doi.org/10.1016/S1470-2045(15)00449-0. Salmasi S, Lee KS, Ming LC, Neoh CF, Elrggal ME, Babar ZU, et al. Pricing appraisal of cancer medicines in the South East Asian, Western Pacific and East Mediterranean Region. BMC Cancer. 2017;17:903. Faruqui N, Martiniuk A, Sharma A, Sharma C, Rathore B, Arora RS, Joshi R. Evaluating access to essential medicines for treating childhood cancers: a medicines availability, price and affordability study in New Delhi, India. BMJ Glob Health. 2019;4:e001379. https://doi.org/10.1136/bmjgh-2018-001379. Gelband H, Sankaranarayanan R, Gauvreau CL, Horton S, Anderson BO, Bray F, et al. Costs, affordability, and feasibility of an essential package of cancer control interventions in low-income and middle-income countries: key messages from Disease Control Priorities, 3rd edition. Lancet. 2016;387(10033):2133–44. https://doi.org/10.1016/S0140-6736(15)00755-2. Babar ZUD. Ten recommendations to improve pharmacy practice in low and middle-income countries (LMICs). J Pharm Policy Pract. 2021;14(6). https://doi.org/10.1186/s40545-020-00288-2. Bateman C. Attacking 'prejudice' against generics could save SA billions. S Afr Med J. 2015;105(12):1004–5. https://doi.org/10.7196/SAMJ.2015.v105i12.10317. Babar ZUD, Ibrahim MIM, Singh H, Bukhari NI, Creese A. Evaluating medicine prices, availability, affordability, and price components: implications for access to medicines in Malaysia. PLoS Med. 2007;4(3):e82. https://doi.org/10.1371/journal.pmed.0040082. Saqib A, Iftikhar S, Sarwar MR. Availability and affordability of biologic versus non-biologic cancer medicines: a cross-sectional study in Punjab, Pakistan. BMJ Open. 2018;8(6). Sarwar MR, Iftikhar S, Saqib A. Availability of cancer medicines in public and private sectors, and their affordability by low, middle and high-income class patients in Pakistan. BMC Cancer. 2018;18(1):14. https://doi.org/10.1186/s12885-017-3980-3. Islam A, Akhter T, Eden T. Cost of treatment for children with acute lymphoblastic leukemia in Bangladesh. J Cancer Policy. 2015;6:37–43. https://doi.org/10.1016/j.jcpo.2015.10.002. Goldstein DA, Clark J, Tu Y, Zhang J, Fang F, Goldstein R, et al. A global comparison of the cost of patented cancer medicines in relation to global differences in wealth. Oncotarget. 2017;8(42):71548–55. https://doi.org/10.18632/oncotarget.17742. Cherny NI, Sullivan R, Torode J, Saar M, Eniu A. The European Society for Medical Oncology (ESMO) International Consortium Study on the availability, out-of-pocket costs and accessibility of antineoplastic medicines in countries outside of Europe. Ann Oncol. 2017;28:2633–47. https://doi.org/10.1093/annonc/mdx521. London School of Economics. Tender loving care? Purchasing medicines for continuing therapeutic improvement and better health outcomes. 2016. http://eprints.lse.ac.uk/67824/. Accessed 10 Nov 2016. Saltz LB. Perspectives on cost and value in cancer care. JAMA Oncol. 2015;2:1–3.
Department of Pharmacy, University of Huddersfield, Queensgate, Huddersfield, HD1 3DH, UK: Phyllis Ocran Mattila & Zaheer-Ud-Din Babar. Discipline of Pharmaceutical Sciences, School of Health Sciences, Westville Campus, University of KwaZulu-Natal, Private Bag X54001, Durban, 4000, South Africa: Fatima Suleman. FS and ZB conceptualised the study, validated the data and participated in the analyses and write-up. POM participated in the data collection, analyses and write-up of the article. The author(s) read and approved the final manuscript. Correspondence to Fatima Suleman. Ethics approval for the study was obtained from the Humanities and Social Science Ethics Committee of the University of KwaZulu-Natal (HSS/0154/013). The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article. Mattila PO, Babar ZUD & Suleman F. Assessing the prices and affordability of oncology medicines for three common cancers within the private sector of South Africa. BMC Health Serv Res 21, 661 (2021). https://doi.org/10.1186/s12913-021-06627-6. Accepted: 08 June 2021.
Attitude of pregnant women towards Normal delivery and factors driving use of caesarian section in Iran (2016)

Soraya Siabani1, Khadijeh Jamshidi2 & Mohammad Mehdi Mohammadi3

Normal delivery is a natural and physiological process with numerous benefits for mother and baby. Giving birth by Caesarean Section (CS) should be limited to cases in which normal delivery is not possible. The purpose of the study was to determine the attitudes of pregnant women towards normal delivery and the factors driving the use of Caesarean Section in Kermanshah, Iran. This analytical-descriptive study was conducted on 410 pregnant women referred to the PHC centers in Kermanshah in western Iran. They had been selected through a multi-stage sampling method, including clustering, randomized, and proportional sampling, from among all eligible women. Data were collected using a questionnaire standardized by previous studies. A level of 0.05 was considered significant, wherever applicable. The mean and standard deviation for participant age was 27.65 ± 5.37 years. The mean score for participant attitude was 60.7 ± 9.5 (range from 22 to 85). Generally, 21.5% had a negative attitude toward normal delivery and preferred CS. Participant attitude was negatively correlated with the pregnant woman's age: the lower the age, the more positive the attitude towards vaginal childbirth. The attitude score of women with a history of normal delivery was 63 ± 9, and for those with a history of CS it was 56.7 ± 9.3, a significant difference. Most women had a positive attitude towards normal delivery, particularly those who had experienced normal delivery in their previous childbirth. Although only about a quarter of the participants had a negative attitude toward normal delivery, this figure is still substantial; therefore educational interventions, specifically encouraging women with a history of normal delivery to consult their peers, are recommended.

The advantages of normal delivery (vaginal delivery) for both mother and baby have been reported in numerous studies [1,2,3,4]. Childbirth through abdominal surgery, called Cesarean Section (CS), has been performed for millions of mothers and babies over the past centuries. However, it should be limited to cases in which vaginal childbirth is not possible or normal delivery is subject to serious risks for the baby or mother [1]. Numerous complications may arise for mothers and babies due to CS, including general surgical complications (e.g. fever, infections, bleeding, scarring, long bed rest, and complications of anesthesia) and many specific complications such as urinary tract involvement, hysterectomy, child-mother relationship issues etc. [3, 4]. Also, while the mortality rate for elective CS has been reported to be about 6 in 100,000 cases, the rate for vaginal childbirth is 2 in 100,000 cases [5]. Cesarean section is used frequently in both developed and developing countries, especially in Asia (more than 50% of childbirths in China) [6]. In Iran, similarly, the CS rate is much higher than the standard rate (5 to 15%) expected by the World Health Organization [7]. The results of a study conducted in Tehran (Iran) showed that 66% of deliveries had been performed through CS [8]. The factors contributing to CS in Iran can be divided into two main categories: those related to the mother and those related to medicine/doctors.
In general, medical issues such as maternal age, being the first childbirth, history of CS, being a candidate for tubal ligation, and fear of painful vaginal childbirth have been listed as contributing factors [9]. Nowadays, socioeconomic and financial issues play a significant role in the choice of CS by physicians, who encourage women to choose CS [10, 11]. Further, negative attitudes might be due to people's incorrect beliefs about health-related problems, concerning either the baby or the mother. Other less common reasons may be following vogue and fashion, or disrespect by hospital medical staff towards the process of natural childbirth. Also, concern about increased risks for the baby during normal delivery, and about trauma to the vaginal area leading to sexual dysfunction, have been reported as the most important incentives for mothers to select CS [12]. However, there is little information about women's attitudes towards CS childbirth in western Iran, especially in the city of Kermanshah. People's beliefs and attitudes are important determinants of their behavioral modification; therefore, before any health-related intervention program, knowing their attitudes towards the issue can play an important role in the adoption of strategies and decision-making by healthcare organizations in order to reduce the rates of cesarean section delivery. Thus, the present study was conducted to determine the attitudes of pregnant women towards normal delivery and the factors driving use of Caesarian Section in Kermanshah, Iran. This descriptive-analytical, cross-sectional study, approved by the ethics committee at Kermanshah University of Medical Sciences (KUMS), was conducted in 2016 in nine primary healthcare (PHC) centers located in Kermanshah in western Iran. The study population was pregnant women who were provided with primary health care by the PHC centers. The necessary sample size was calculated at 375 individuals using the following formula, applying means and standard deviations from similar previous studies (a short computational sketch of this formula appears at the end of this section):
$$ n = \frac{Z^2 p(1-p)}{d^2} $$
Allowing for 10% non-response, 410 eligible pregnant women were selected via a multi-stage sampling method that included random sampling (selection of healthcare centers), quota sampling (selection of the number of participants from each healthcare center), and then simple random sampling to select each quota as the study participants. The quota for PHC centers, in terms of the number of pregnant women covered by each center, was as follows: 25 women (6.1%) selected from Haj Daei Healthcare, 15 women (3.7%) from Samimi Healthcare, 63 women (15.4%) from Keihanshahr Healthcare, 42 women (10.2%) from Moalem Healthcare, 62 women (15.1%) from Farhangian Phase 2 Healthcare, 63 women (15.4%) from Pardis Healthcare, 89 women (21.7%) from Shahid Rajaei Healthcare, 20 women (4.9%) from Sina Healthcare, and 31 women (7.6%) from Kashani Healthcare. The data collection instrument for the measurement of attitudes was a questionnaire borrowed from another study [13], but with some modification. After modification, its face validity was determined through a pilot study and by obtaining the opinions of experts and professors in the School of Midwifery. During this process, minor changes were made in the appearance of some of the items in the given questionnaire. A few new items (items 5, 6, 7, 8, 9 and 15) were added to the original questionnaire.
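As referenced above, the sample-size formula can be evaluated directly. The p and d values below are hypothetical illustrations; the study states that its actual inputs were taken from similar previous studies.

```python
# Minimal sketch of the sample-size formula n = Z^2 p(1-p) / d^2 quoted above.
# The p and d values are hypothetical illustrations only.
from math import ceil
from statistics import NormalDist

def sample_size(p: float, d: float, confidence: float = 0.95) -> int:
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for 95% confidence
    return ceil(z ** 2 * p * (1 - p) / d ** 2)

print(sample_size(p=0.5, d=0.05))  # 385 with these illustrative inputs
```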
The reliability of the questionnaire was measured through Cronbach's alpha coefficient, which was 0.80. The questionnaire consisted of two parts; in its first part, demographic information (age, occupation, level of education, type of health insurance, and place of living) was collected. The second part of the questionnaire comprised items examining the attitudes of pregnant women towards natural childbirth. A 4-point Likert-type scale was used to rank the items: a quite negative attitude to natural childbirth was assigned a score of 1 and a quite positive attitude a score of 4 (items 14 through 20 were reverse-scored). The total score of the questionnaire is the sum of the points for each item, so the score range for attitude was from 22 (the most negative attitude to natural delivery) to 88 (the most positive attitude to natural delivery). In order to ease the interpretation of the results in this study, scores of 22–54 were considered negative attitudes and scores of 55–88 positive (this scoring is illustrated in the short sketch at the end of this section). The data collection was completed by two health care nursing experts who interviewed the participants following a full explanation of the study and after obtaining informed consent forms. The data were analyzed using SPSS version 16. Descriptive statistics (frequency, mean, and standard deviation) were used to describe the findings, and analytical statistics (t-test, Pearson correlation coefficient, Analysis of Variance (ANOVA), and Chi-square test) were employed to test the relationships, at the 0.05 level of significance. The mean and standard deviation for participant age (μ ± σ) was 27.65 ± 5.37 years. The mean age of their husbands was 32.31 ± 5.77 years. As well, 89.8% of these women were housewives and 25.4% of the husbands were self-employed. In terms of level of education, 28.5% of the pregnant women and 34.4% of their husbands had a diploma (Table 1). Moreover, 39.5% of the households were covered by social security insurance, 18.3% benefited from healthcare service insurance, 7.6% had military staff insurance, 1.2% were covered by Rostaei insurance (a type of public health care insurance provided for those living in rural areas of Iran), 0.2% used Imam Khomeini Relief Committee insurance, 1.2% had no insurance, and 32% were covered by other insurances.
Table 1 Frequency distribution of the variables of occupation and level of education among pregnant women participating in the study and those of their husbands
The results revealed that 46.1% of the pregnant women had no history of previous childbirth (the current pregnancy was their first), 36.1% had had one previous delivery, 16.1% had experienced two or three deliveries, and 1% had a history of more than three deliveries. The mean number of children was 1.39; in this respect, 47% of women had no children, 36.3% had just one child, 12.9% had two children, and the rest had more than two children. Furthermore, the delivery mode among women who had a history of previous childbirth was natural in 58.8% and CS in 41.2%. As well, 17.8% of the pregnant women were in the first trimester of pregnancy, 43.2% were in the second trimester, and 39% were in the third trimester. The mean and standard deviation of the attitude scores (μ ± σ) was 60.7 ± 9.5, with a range between 22 and 85.
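The scoring sketch referred to above makes the questionnaire scoring concrete: 22 items on a 1–4 Likert scale, items 14–20 reverse-scored, and totals of 22–54 read as a negative attitude and 55–88 as positive. The example responses are hypothetical.

```python
# Minimal sketch of the attitude scoring described above: 22 items on a 1-4 Likert
# scale, items 14-20 reverse-scored, total range 22-88, with 22-54 read as negative
# and 55-88 as positive. The example responses are hypothetical.

REVERSE_ITEMS = set(range(14, 21))  # items 14 through 20 (1-indexed)

def attitude_score(responses):
    """responses: 22 integers in 1..4, in item order. Returns (total, label)."""
    assert len(responses) == 22 and all(1 <= r <= 4 for r in responses)
    total = sum(5 - r if item in REVERSE_ITEMS else r
                for item, r in enumerate(responses, start=1))
    return total, ("positive" if total >= 55 else "negative")

example = [3] * 22  # a hypothetical respondent answering "3" to every item
print(attitude_score(example))  # (59, 'positive')
```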
Attitudes were inversely correlated with the pregnant woman's age, such that the younger the woman, the more positive the attitude towards natural childbirth (r = −0.1, p = 0.02). Attitudes were also inversely correlated with the husband's age, but this correlation was not statistically significant (r = −0.06, p > 0.05). There was a positive correlation between attitude and gestational age, but this correlation was not significant (r = 0.06, p > 0.05). The mean attitude score of women with no history of childbirth (60.9 ± 9.5) and that of women with a history of previous delivery (60.4 ± 9.6) did not differ significantly (p > 0.05). The mean attitude score of women with a history of natural childbirth was significantly higher than that of women with a history of CS (63 ± 9 vs 56.7 ± 9.3; p < 0.001) (Table 2).
Table 2 Attitude scores of women with a history of previous CS vs normal delivery
It should be noted that a woman's attitude toward natural childbirth was not significantly correlated with her occupation, but it was significantly correlated with her husband's occupation (p < 0.001): the husbands of most women with a more positive attitude to natural delivery were self-employed (20.2%), whereas the husbands of the majority of women with negative attitudes towards natural childbirth were employees (8.8%). Women's attitudes were also significantly correlated with their level of education. Among the women with a positive attitude towards natural delivery, 22% and 21.2% had a diploma or a high school degree, respectively (Table 3).
Table 3 Frequency distribution of women's level of education based on their attitudes towards natural childbirth
A significant relation was observed between a pregnant woman's attitude and her health insurance (p = 0.02): the highest attitude scores were 64.6 ± 4.3 and 64 ± 9.7, for Rostaei insurance and military staff insurance respectively (Table 4).
Table 4 Mean score for pregnant women's attitudes towards natural childbirth based on the type of health insurance
According to the participants, factors supporting a positive attitude toward natural delivery included lower maternal mortality (73%), delight at seeing the baby immediately after delivery (82%), establishment of a better bond between mother and baby (78%), lower incidence of infections after natural childbirth compared with CS delivery (82.4%), faster recovery (89%), faster return to daily life activities (87.6%), and the lower costs of natural childbirth (85.6%). In addition, 68.2% of the women stated that if they had been aware of the complications of CS, they would not have demanded it for a previous childbirth. On the other hand, the factors supporting positive attitudes towards CS were a belief in having a healthier baby via CS delivery (56%), dislike of the position on the labor bed during vaginal delivery (55.6%), and less pain with CS compared with natural childbirth (61.4%) (Table 5).
Table 5 Participants' attitudes toward natural childbirth and CS
The purpose of this study was to evaluate pregnant women's attitudes towards natural and CS delivery, as well as the factors affecting delivery mode selection, among pregnant women referred to healthcare centers in the city of Kermanshah in western Iran in 2016. The results of this study revealed that about 80% of the participants had a positive attitude toward natural childbirth and about 20% had a positive attitude toward CS.
Although the results of the current study showed a highly positive attitude toward normal delivery, the negative attitudes are of concern. In addition, the rate of negative attitudes is relatively high in comparison with studies conducted in other places. In a similar study conducted by Pourheidari in the city of Qom in central Iran, a positive attitude toward normal delivery was reported by 94% [14]. In another investigation, 97% of pregnant women living in the city of Shahrekord (in central Iran) also had a positive attitude toward natural childbirth [15]. However, in another study in the city of Ardebil in northwest Iran, more than 70% of the pregnant women stated that they would have natural childbirth, a lower proportion than in our study. The most important finding of that study was that CS was the most frequent delivery mode (59%) and that, in this respect, medical advice by the physician was the most important factor affecting delivery mode selection [16]. The results of another study in Bushehr City (south of Iran) showed that 45.3% of pregnant women chose normal delivery and 41.1% chose CS. The most frequent reason for choosing CS was fear of labor pain, and the most frequent reason for choosing natural delivery was a lower complication rate [17]. It seems that the differences in attitudes toward natural delivery and CS between Iranian cities reflect cultural differences, as Iran is a multicultural nation and the cultures of its various cities differ. The attitude of women in western Iran toward natural delivery is at a lower level compared to other countries in the world. In a study conducted in Turkey, the majority of pregnant women selected natural delivery and less than 16% of them opted for CS. In regard to the rationale for selecting normal delivery, our results are consistent with the Turkish researchers' findings. The most important reasons for selecting normal delivery were lower maternal mortality, delight at seeing the baby immediately after childbirth, better emotional bonds between mother and baby, faster recovery, faster return to daily life activities, and the lower costs of this method of delivery, similar to our findings. Similarly, the most important factors affecting the selection of CS delivery included the belief in having healthier newborns, dislike of the position of women on the labor bed, less pain experienced during CS, fear of vaginal delivery, and demand for tubal ligation during the CS to prevent later pregnancy [18]. Investigating the opinions of Brazilian pregnant women, Kasai reported that the majority of women (70.8%) had considered faster postpartum recovery as the main reason for selecting natural childbirth. Pregnant women who had chosen CS mentioned no pain during childbirth and the need for tubal ligation as the most important factors affecting the selection of this delivery mode [19]. In Lee's study among South Korean women, most study participants showed more favorable attitudes toward vaginal delivery than CS. Over 95% of women preferred vaginal delivery during pregnancy and were willing to recommend this method to others [20]. Another study of Singaporean women showed that only 3.7% of them would prefer an elective CS. The most common reasons for choosing a CS were avoiding labor pains and lowering the risk of fetal distress [21].
According to our findings, there was an inverse and significant correlation between attitude toward natural delivery and a pregnant woman's age. In this respect, Fisher argued that age can be taken into account as a factor affecting delivery mode selection [22]. In a study by Mohammadbeigi et al. in Shiraz in Iran, a significant relation was observed between maternal age and such attitudes [23]. In a similar study among pregnant women in the city of Rasht in Iran, the findings showed no statistically significant relation between a woman's attitude and her age [24]. Similarly, in another investigation examining pregnant women in Singapore, no significant relation was found between age and attitude [25]. In this study, attitude was inversely correlated with the age of the pregnant woman's husband, but this correlation was not statistically significant. However, Ghadimi et al. reported a significant relation between delivery mode and the husband's age [26]. Thus, it seems that further studies should be conducted in this domain. Likewise, the results of the present study suggested that attitude was not significantly correlated with gestational age, which was consistent with the findings of a study conducted in Turkey [18]. The results of this study also indicated that the difference in mean attitude score between women with no history of delivery and those with previous delivery experience was not statistically significant, but the mean attitude scores of women with a history of natural childbirth and those with a history of CS differed significantly. Thus, women with a history of vaginal delivery had better attitudes towards it. In this respect, the results of a study in the city of Shahrekord in Iran showed no significant relations among the attitude of pregnant women, the number of previous CSs, and delivery mode [15]. Moeini et al. also demonstrated that scores for attitude towards natural childbirth were higher in the natural delivery group than in a group with elective CS and a group with CS for medical reasons; the value for the elective CS group was lower than those of the other two groups [24]. Considering attitude towards delivery pain, Atghaie et al. similarly found a statistically significant difference between groups undergoing natural delivery or CS; the mean score for a negative attitude towards natural childbirth in the group with vaginal delivery was higher than that of the group with CS [27]. A history of natural childbirth could have a positive impact on attitude towards delivery mode and its re-selection for subsequent pregnancies. In this study, there was also a significant, positive relation between a woman's attitude and her level of education. These results are in line with the findings of Lee, Biglari and Ghadimi [20, 26, 28]. In this study, the attitude of women towards natural delivery was not significantly correlated with occupation. These findings were consistent with the results of other studies [28, 29]. However, Mohammadi Tabar considered the source of information acquisition to be the most important factor affecting delivery mode selection and showed a significant relation between a pregnant woman's occupation and the source of information acquisition [30]. Furthermore, a significant relation was found between a woman's attitude towards natural delivery and her husband's occupation, which was not in agreement with the findings by Pourheidari, owing to differences in the study populations [14].
In one study, 7.9% of women who chose CS stated that one of the reasons for that choice was having insurance to pay for it; having insurance to pay for CS expenses can be related to its choice [31]. In our study, a significant relation was observed between a pregnant woman's attitude and her type of health insurance, as the highest attitude scores were for Rostaei and military staff insurance. It seems that people with better health insurance are more concerned about their health. Generally, people who have better insurance are also those who work in government agencies that provide their insurance. Because of their higher level of literacy, these people are likely more aware of the disadvantages of CS and prefer natural delivery. The results of the present study suggest that most of the women studied had a positive attitude toward normal delivery as the better mode of giving birth. However, far fewer felt this way than in many other nations. Given that women play a determining role in the selection of their delivery mode, training can contribute to decision-making in terms of selecting the right one. Therefore, training and monitoring during pregnancy and giving accurate information to pregnant women are indispensable. Moreover, holding educational classes and workshops for pregnant mothers and promoting their level of awareness and their attitudes regarding natural delivery and CS can be useful in this respect. Furthermore, given evidence showing that a woman's attitude is not the only important factor in selecting the normal delivery mode, more investigations seeking clarity in terms of the selected delivery method and comparing women's attitudes with final actions, as well as the reasons behind those actions, are recommended.
Borghei NS, Borghei A, Jafar GP, Kashani E. The factors related of indication and delivery method. Bimonthly Med Res J. 2005;4(1):51–60. Saisto T, Halmesmäki E. Fear of childbirth: a neglected dilemma. Acta Obstet Gynecol Scand. 2003;82(3):201–8. Shrifirad GH, Fathi Z, Tirani M, Mehaki B. Assessing of pregnant women toward vaginal delivery and cesarean section based on behavioral intention model. Ilam University Med Sci J. 2007;15(1):19–23. Mohamadbeigi A, Tabatabai H, Mohammadsalehi N, Yazdani M. Factors affected on cesarean delivery in Shiraz hospitals. Iran J Nurs. 2008;21(56):37–45. Azizi F. Cesarean delivery with increases in shocking. J Res Med Sci. 2007;31(3):191–4. Tang S, Li X, Wu Z. Rising cesarean delivery rate in primiparous women in urban China: evidence from three nationwide household health surveys. Am J Obstet Gynecol. 2006;195(6):1527–32. Moreno JM, Bartual E, Carmona M, Araico F, Miranda YA, HAJ. Changes in the rate of tubal ligation done after cesarean section. Eur J Obstet Gynecol Reprod Biol. 2001;97(2):147–51. Sharyat M, Majlessi F, AS MM. The prevalence of cesarean delivery and related factors. Payesh. 2002;1(3):5–10. Negahban T. Preference delivery method and effective factors on that from viewpoint of referrer women to therapy centers and clinics in Rafsanjan. Rafsanjan University Med Sci J. 2006;5(3):2. Tavassoli A, Kalari F, Zafari Dizji A. Social factors affecting cesarean trend in pregnant women. J Med Ethics. 2014;8(29):145–70. Alehagen S, Wijma K, Wijma B. Fear during labor. Acta Obstet Gynecol Scand. 2001;80(4):315–20. Kasai KE, Nomura RMY, Benute GRG, Lucia MCS, Zugaib M. Women's opinions about mode of birth in Brazil: a qualitative study in a public teaching hospital. Midwifery. 2010;26(3):319–26.
Ghotbi F, Akbari Sene A, Azargashb E, Shiva F, Mohtadi M, Zadehmodares S, et al. Women's knowledge and attitude towards mode of delivery and frequency of cesarean section on mother's request in six public and private hospitals in Tehran, Iran, 2012. J Obstet Gynaecol Res. 2014;40(5):1257–66. Pour Heydari M, Sozany A, Kasaeian A. Study of knowledge and attitudes of pregnant women referred to health centers in Qom to the method of termination of pregnancy. J Knowledge Health. 2007;2(2):28–34. Salehian T, Delaram M, Safdari F, Jazayeri F. Knowledge and attitudes of pregnant women about mode of delivery in health centers of Shahrekord. Tolooe Behdasht. 2007;6(2):1–10. Sharghi A, Kamran A, Gh S. Factors influencing delivery method selection in Primiparous pregnant women referred to health centers in Ardabil, Iran. J Res Health Syst. 2013;7(3):364–72. Najafi- Sharjabad F, Keshavarz P, Moradian Z. Survey on the prevalence and influencing factors for choosing Normal vaginal delivery among pregnant women in Bushehr City, 2015. Community Health J. 2017;11(1):20–9. Buyukbayrak EE, Kaymaz O, Kars B, Karsidag AYK, Bektas E, Unal O, Turan C. Caesarean delivery or vaginal birth: preference of Turkish pregnant women and influencing factors. J Obstet Gynaecol. 2010;30(2):155–8. Kasai Keila E, Nomura Roseli MY, Benute GR, de Lucia Mara CS, Zugaib M. Women's opinions about mode of birth in Brazil: a qualitative study in a public teaching hospital. Midwifery. 2010;26:319–26. Lee SI, Khang YH, Lee MS. Women's attitudes toward mode of delivery in South Korea-a society with high cesarean section rates. Birth J. 2004;31(2):108–16. Chong ES, Mongelli M. Attitudes of Singapore women toward cesarean and vaginal deliveries. Int J Gynaecol Obstet. 2003;80(2):189–94. Fisher J, Smith A, Astbary J. Private health insurance and a healthy personality: new risk factor for obstetric intervention. J Psychosom Obstet Gynaecol. 1995;16:10–3. Mohammadbeigi A, Tabatabayi H, Mohammadsalehi N. Determination of effective factors on cesarean in shiraz. J Faculty Nurs Midwifery Iran University Med Sci. 2008;21(56):37–45. Moeini B, Besharati F, Hazavehei M, Moghimbeigi A. Women's attitudes toward elective delivery mode based on the theory of planned behavior. J Guilan University Med Sci. 2011;20(79):68–76. Chong ES, Mongelli M. Attitudes of Singapore women towardscesarean and vaginal deliveries. Int J Gynecol Obstet. 2003;80:189–94. Ghadimi MR, Rasouli M, Motahar S, Lajevardi Z, Imani A, Chobsaz A, Razeghian S. Affecting factors the choice of delivery and attitude of pregnant women admitted to the civil hospitals, the social security organization in 2013. Quarterly J Sabzevar University Med Sci. 2014;21(2):310–9. Atghaee M, Nouhi E. The effect of imagination of the pain of vaginal delivery and cesarean section on the selection of Normal vaginal delivery in pregnant women attending clinics in Kerman University of Medical Sciences. Iranian J Obstetr Gynecol Infertility. 2013;14(7):44–50. Biglarifar F, Vysany Y, Delpisheh A. Women's knowledge and attitude towards choosing mode of delivery in the first pregnancy. IJOGI. 2015;17(136):19–24. Gh S, Fathiyan Z, Tirani M, Mahaky B. Perspective of pregnant woman than veginal delivery and cesarean section based on behavioral intention model. J Ilam University Med Sci. 2007;15(1):19–23. Mohammadi Tabar SH, Rahnama P, Heidari M, Kiani A, Mohammadi KH. Factors affecting on selection method of delivery in pregnant women referred to Tehran hospitals. J Med Ethics. 2012;6(21):131–44. 
Karami Matin B, Jalilian F, Mirzaei Alavijeh M, Mahboubi M, Abangah R, Zinat Motlagh F, et al. Factors influencing delivery method choice in Kermanshah pregnant women. J Clin Care. 2014;2(3):53–60. This article was derived from a project approved by Kermanshah University of Medical Sciences (No 95494). We hereby express our gratitude to all the staff working in the PHC Departments in the City of Kermanshah for their cooperation and coordination. All the women participating and completing the interviews are also appreciated. We also express our thanks to the respected Office of Deputy Vice-Chancellor for Research in Kermanshah University of Medical Sciences for their financial support of this study. This study was funded by Kermanshah University of Medical Sciences (no: 95494). The data analyzed and materials used in this study are available from the corresponding author on reasonable request. School of Public Health, Kermanshah University of Medical Sciences and Registered External Supervisor at the University of Technology Sydney (UTS), Kermanshah, Iran Soraya Siabani School of Public Health, Kermanshah University of Medical Sciences, Kermanshah, Iran Khadijeh Jamshidi School of Nursing and Midwifery, Kermanshah University of Medical Sciences, Kermanshah, Iran Mohammad Mehdi Mohammadi SS, KJ and MMM carried out the experiments, analyzed and interpreted the data, and drafted the manuscript. MMM and KJ designed the study and participated in analysis and interpretation of data. KJ and SS coordinated the study, revised the manuscript, approved the final version to be submitted for publication and helped in the analysis and interpretation of data. All authors read and approved the final manuscript. Correspondence to Khadijeh Jamshidi. This study was approved by the Ethics Committee of Kermanshah University of Medical Sciences, Kermanshah, Iran. Siabani, S., Jamshidi, K. & Mohammadi, M.M. Attitude of pregnant women towards Normal delivery and factors driving use of caesarian section in Iran (2016). BioPsychoSocial Med 13, 8 (2019) doi:10.1186/s13030-019-0149-0 Accepted: 18 March 2019 Pregnant women attitude Vaginal delivery
An algorithm for judging and generating multivariate quadratic quasigroups over Galois fields Ying Zhang1 & Huisheng Zhang1 SpringerPlus volume 5, Article number: 1845 (2016) As the basic cryptographic structure for the multivariate quadratic quasigroup (MQQ) scheme, MQQs have been one of the latest tools in designing MQ cryptosystems. There have been several construction methods for MQQs in the literature; however, an algorithm for judging whether quasigroups of any order are MQQs over Galois fields is still lacking. To this end, the objective of this paper is to establish a necessary and sufficient condition for a given quasigroup of order \(p^{kd}\) to be an MQQ over \(GF(p^{k})\). Based on this condition, we then propose an algorithm to justify whether or not a given quasigroup in the form of a multiplication table of any order \(p^{kd}\) is an MQQ over \(GF(p^{k})\), and generate the d Boolean functions of the MQQ if the quasigroup is an MQQ. As a result, we can obtain all the MQQs over \(GF(p^{k})\) in theory using the proposed algorithm. Two examples are provided to illustrate the validity of our method. With the development of quantum computers, post-quantum cryptography (PQC) has gained intensive attention in recent years. Multivariate Public Key Cryptography (MPKC) is one among a few serious candidates to have risen to prominence as post-quantum options. In the last two decades, MPKC developed rapidly, with many schemes being proposed, attacked and then amended. Based on multivariate quadratic quasigroups (MQQ), Gligoroski et al. recently proposed a novel type of MPKC-MQQ schemes (including both the signature scheme and the encryption scheme) (Gligoroski et al. 2008, 2011). As these schemes only need the basic operations of XOR and AND between bits during the encryption and decryption processes, they attain a speed of decryption/signature generation comparable to a typical symmetric block cipher (Hadedy et al. 2008). The size of the set of MQQs is rather large, which gives the MQQ scheme larger private and public keys than conventional MPKC schemes (Gligoroski et al. 2008). Moreover, these schemes offer flexibility in their implementation from a parallelization point of view (Hadedy et al. 2008). In a recent work, MQQ schemes have been successfully used in wireless sensor networks (Maia et al. 2010). As the basic step of the MQQ scheme, generating MQQs is an important and challenging task. Gligoroski et al. established a sufficient condition for generating an MQQ from a given quasigroup (Gligoroski et al. 2008). Based upon this condition, a randomized generation algorithm for MQQs was also proposed therein. However, this algorithm is time-consuming and can only generate MQQs of order \(2^d(d\le 5)\). Subsequently, an improved algorithm to generate MQQs was proposed by Ahlawat et al. (2009), and the existence of MQQs from d = 2 to d = 14 was verified. Recently, the sufficient condition in Gligoroski et al. (2008) was simplified by Chen et al. (2010) and an efficient algorithm for generating bilinear MQQs (a subclass of MQQs) of any order \(2^d\) was proposed. In addition, new algorithms and theory for generating MQQs are also reported by Samardjiska et al. (2010) and Christov (2009), respectively. 
Different from the aforementioned work on constructing MQQs, equipped with a new necessary and sufficient condition for bilinear MQQs, Zhang and Zhang (2013) proposed an algorithm for judging and generating bilinear MQQs from the multiplication table of a quasigroup, thus answering the question of how to judge whether or not an arbitrary quasigroup is a bilinear MQQ and providing a feasible way to generate all the bilinear MQQs in theory. Considering that bilinear MQQs are only a subclass of MQQs and that the algebraic operation of Zhang and Zhang (2013) is limited to GF(2), the objective of this paper is to extend the previous work (Zhang and Zhang 2013) by providing a solution for judging and generating MQQs over Galois fields. Specifically, we make the following contributions: We establish a necessary and sufficient condition for a quasigroup of any order \(p^{kd}\) to be an MQQ over \(GF(p^{k})\), which answers a theoretical question: when is a quasigroup an MQQ over \(GF(p^{k})\)? Based on the above condition, we propose an algorithm for justifying whether or not a given quasigroup of order \(p^{kd}\) is an MQQ over \(GF(p^{k})\) and generating all its Boolean functions if the quasigroup is an MQQ. Compared with the previous work (Zhang and Zhang 2013), the strategy proposed in this paper can identify all the MQQs, including both bilinear MQQs and non-bilinear ones. Moreover, the algebraic operation in Galois fields provides more flexibility in choosing p, k and d, which is useful for applying the MQQ-design to various platforms and also helps us to find more MQQs. The remainder of the paper is organized as follows. The second section recalls the original MQQ generation scheme (Gligoroski et al. 2008). The third section proposes a necessary and sufficient condition and an algorithm for justifying and generating MQQs in \(GF(p^{kd})\). Two examples are provided to show the validity of our algorithms in the fourth section. Finally, we conclude the paper in the last section. Original MQQ generation scheme [Definition 1 in Chen et al. (2010)] A quasigroup \((Q,*)\) is a set Q with a binary operation \(*\) such that for any \(a,b\in Q\), there exist unique x, y: $$x*a=b;\quad a*y=b.$$ [Lemma 1 in Gligoroski et al. (2008)] For every quasigroup \((Q,*)\) of order \(2^d\) and for each bijection \(Q\rightarrow \{0,1,\ldots ,2^d-1\}\), there are a uniquely determined vector valued Boolean function \(*vv\) and d uniquely determined 2d-ary Boolean functions \(f_1,f_2,\ldots ,f_d\) such that for each \(a,b,c\in Q\) $$\begin{aligned}a*b&=c \Longleftrightarrow *vv(x_1,\ldots ,x_d,x_{d+1},\ldots ,x_{2d}) \\&=(f_1(x_1,\ldots ,x_d,x_{d+1},\ldots ,x_{2d}),\ldots ,f_d(x_1,\ldots ,x_d,x_{d+1},\ldots ,x_{2d})). \end{aligned}$$ In general, for a randomly generated quasigroup of order \(2^d (d\ge 4)\), the degrees of the Boolean functions are usually higher than 2. Such quasigroups are not suitable for the construction of multivariate quadratic public-key cryptosystems. [Definition 3 in Gligoroski et al. (2008)] A quasigroup \((Q,*)\) of order \(2^d\) is called a multivariate quadratic quasigroup (MQQ) of type \(Quad_{d-k}Lin_k\) if exactly \(d-k\) of the polynomials \(f_s\) are of degree 2 and k of them are of degree 1, where \(0\le k<d\). 
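To make Definition 1 and Lemma 1 concrete, the short sketch below (Python; the sample table, the function names and the natural-binary bijection are our own illustrative assumptions, not taken from the cited papers) checks the Latin-square property of a multiplication table of order \(2^d\) and tabulates the d coordinate Boolean functions \(f_1,\ldots ,f_d\) of \(*vv\) as truth tables over the 2d input bits.

```python
# Minimal sketch: extract the d coordinate Boolean functions of Lemma 1
# from a quasigroup table of order 2^d. Names and the sample table are
# illustrative assumptions, not data from the paper.

def is_quasigroup(table):
    """Definition 1: every row and every column is a permutation of Q."""
    n = len(table)
    full = set(range(n))
    rows_ok = all(set(row) == full for row in table)
    cols_ok = all({table[i][j] for i in range(n)} == full for j in range(n))
    return rows_ok and cols_ok

def to_bits(value, d):
    """Binary representation of `value` on d bits (most significant first)."""
    return [(value >> (d - 1 - i)) & 1 for i in range(d)]

def coordinate_functions(table, d):
    """Truth tables of f_1,...,f_d: one output bit per 2d-bit input (a, b)."""
    truth = [dict() for _ in range(d)]
    for a in range(2 ** d):
        for b in range(2 ** d):
            x = tuple(to_bits(a, d) + to_bits(b, d))   # (x_1,...,x_2d)
            c_bits = to_bits(table[a][b], d)           # bits of c = a * b
            for s in range(d):
                truth[s][x] = c_bits[s]
    return truth

# Toy example of order 2^2: (a, b) -> a XOR b is a quasigroup.
Q_TABLE = [[a ^ b for b in range(4)] for a in range(4)]
assert is_quasigroup(Q_TABLE)
f = coordinate_functions(Q_TABLE, d=2)
print(f[0][(0, 1, 1, 0)])  # first output bit of 01 * 10
```

For a randomly generated table of order \(2^d\) with \(d\ge 4\), the algebraic normal forms recovered from such truth tables will generically have degree larger than 2, which is exactly why dedicated identification procedures for MQQs are needed.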
In this section, we first establish a necessary and sufficient condition for a given quasigroup of order \(p^{kd}\) to be an MQQ over \(GF(p^{k})\), and then use this condition to propose an algorithm for justifying whether or not a quasigroup of order \(p^{kd}\) is an MQQ over \(GF(p^{k})\) and generating d Boolean functions of the MQQ if it is. For convenience, the following notations are adopted: \(I_n\) denotes the identity matrix of order n; \(E_{i,j}\) is the shorthand for the elementary matrix of switching all matrix elements on row i with their counterparts on row j of \(I_n\); \(E_{i,j}(1)\) denotes the elementary matrix of adding all matrix elements on row j (column i) to their counterparts on row i (column j) of \(I_n\). Necessary and sufficient condition for MQQs over \(GF(p^{kd})\) [see Golub and Loan (1996)] Given an \({m\times n}\) matrix \(A=(a_{ij})\), \(\overline{vec}(A)\) is a vector defined as $$\overline{vec}(A)=(a_{11},\ldots ,a_{1n},a_{21},\ldots ,a_{2n},\ldots ,a_{m1},\ldots ,a_{mn})^T.$$ [see Golub and Loan (1996)] Let \(A\in R^{m\times u},B\in R^{v\times n}, X\in R^{u\times v}\) , then $$\overline{vec}(AXB)=(A\otimes B^T)\overline{vec}(X).$$ Let \(A=(a_{ij})_{m\times u},B=(b_{lt})_{v\times n}, X=(x_{jl})_{u\times v}\), where \(a_{ij},b_{lt},x_{jl}\in \{0,1,\ldots ,p^k-1\},\) and p be a prime number, then $$\overline{vec}(AXB\mod p^k)=(A \otimes B^T \mod p^k)~\overline{vec}(X)\mod p^k.$$ [see Golub and Loan (1996)] Let A, B, C, D be suitably sized matrices. Then $$(A+B)\otimes (C+D)=A\otimes C+A\otimes D+B\otimes C+B\otimes D.$$ Let a quasigroup \((Q,*)\) of order \(p^{kd}\) be given by the multiplication scheme in Table 1, where \(q^{(j)}_i\in Q\), \((i,j=0,1,\ldots ,p^{kd}-1)\). For given i and \(\forall j\ne j^\prime\), we have \(q^{(j)}_i\ne q^{(j^\prime )}_{i}\); for given j and \(\forall i\ne i^\prime\), we have \(q^{(j)}_i\ne q^{(j)}_{i^\prime }\). One can choose two bijections \(\kappa : Q\rightarrow \{0 , 1,\ldots ,p-1\}^{dk}\) and \(\iota :\{0,1,\ldots ,p-1\}^k\rightarrow \{0,1,\ldots ,p^k-1\}\). Collect the elements of Table 1 into a vector $$\left( q^{(0)}_0, q^{(0)}_1, \ldots , q^{(0)}_{p^{kd}-1}, q^{(1)}_0, q^{(1)}_1, \ldots , q^{(1)}_{p^{kd}-1}, \ldots , q^{(p^{kd}-1)}_0, q^{(p^{kd}-1)}_1, \ldots , q^{(p^{kd}-1)}_{p^{kd}-1} \right) ^T,$$ and convert every element of the vector into a kd-ary sequence over GF(p) according to the bijection \(\kappa\). Then, divide every kd-ary sequence into d groups from left to right, where every group is a k-ary sequence, and represent every group by a unique element in \(\{0,1,\ldots ,p^k-1\}\) according to the bijection \(\iota\). In this way, we obtain a \(p^{2kd}\times d\) matrix \([b_1,\ldots ,b_d]\), where every \(b_s(s=1,\ldots , d)\) is a \(p^{2kd}\) dimensional column vector over the finite field \(GF(p^k)\). Table 1 A quasigroup \((Q,*)\) of order \(p^{kd}\) According to Lemma 1, whether a given quasigroup is an MQQ over \(GF(p^k)\) mainly lies in whether there is a 2d-ary quadratic Boolean function set \(\{f_1,f_2,\ldots ,f_d\}\) satisfying Table 1. Note that any \(f_s(x_1,\ldots ,x_d,x_{d+1},\ldots ,x_{2d})\) can be written in the form $$f_s=(1,x_1,\ldots ,x_d,x_{d+1},\ldots ,x_{2d}){\mathcal {A}}_s\left( \begin{array}{c}1\\ x_1\\ \vdots \\ x_d\\ x_{d+1}\\ \vdots \\ x_{2d}\end{array}\right) ,\quad (s=1,2,\ldots , d) ,$$ where \({\mathcal {A}}_s\) is a matrix of order \({2d+1}\) over finite field \(GF(p^k)\). 
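The reshaping carried out below, from (6) to (7), relies on the row-wise \(\overline{vec}\) operator and the identity \(\overline{vec}(AXB)=(A\otimes B^T)\overline{vec}(X)\) stated above. A quick numerical check of its modular version (NumPy; the modulus \(p^k=9\), the matrix sizes and the random seed are arbitrary illustrative choices, not data from the paper) could look as follows.

```python
# Sanity check of the vec/Kronecker lemma used to pass from (6) to (7):
# vec(A X B mod p^k) = (A (x) B^T mod p^k) vec(X) mod p^k, with vec taken
# row-wise as in the paper. The modulus and sizes are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
pk = 9                                   # illustrative modulus p^k = 3^2
m, u, v, n = 3, 4, 2, 5
A = rng.integers(0, pk, size=(m, u))
X = rng.integers(0, pk, size=(u, v))
B = rng.integers(0, pk, size=(v, n))

left = (A @ X @ B % pk).reshape(-1)               # row-wise vec of AXB mod p^k
right = (np.kron(A, B.T) % pk) @ X.reshape(-1) % pk
assert np.array_equal(left, right)
print("vec(AXB) identity holds modulo", pk)
```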
By (2) and Table 1, when \((x_1,\ldots ,x_d)\) and \((x_{d+1},\ldots ,x_{2d})\) are respectively assigned d-ary sequences in the order of \(\{0,1,\ldots ,p^{kd}-1\}\) in which every element is written by d-ary sequence over \(GF(p^k)\), namely, \((1,x_1,\ldots ,x_d,x_{d+1},\ldots ,x_{2d})\) in \(f_s\) are assigned all row vectors of the following \({p^{2kd}\times (2d+1)}\) matrix of the form $$\left( \begin{array}{ccccccccc} 1 &{} 0 &{} \cdots &{} 0 &{} 0 &{} 0 &{}\cdots &{}0 &{} 0\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots \\ 1 &{} 0 &{} \cdots &{} 0 &{} 0 &{} 0&{}\cdots &{}0&{} p^k-1\\ \ddots &{}\ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{}\ddots &{}\ddots \\ 1 &{} 0 &{}\cdots &{} 0 &{} 0&{} 0 &{}\cdots &{} p^k-1&{} 0\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots \\ 1 &{} 0 &{} \cdots &{} 0 &{} 0 &{} 0 &{}\cdots &{} p^k-1&{}p^k-1\\ \ddots &{} \ddots &{} \ddots &{} \ddots &{}\ddots &{} \ddots &{} \ddots &{} \ddots &{}\ddots \\ 1 &{} 0 &{} \cdots &{} 0 &{} p^k-1 &{} p^k-1 &{}\cdots &{} p^k-1&{}0\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots \\ 1 &{} 0 &{}\cdots &{} 0 &{} p^k-1&{}p^k-1 &{}\cdots &{} p^k-1&{}p^k-1\\ \ddots &{} \ddots &{} \ddots &{} \ddots &{}\ddots &{} \ddots &{} \ddots &{} \ddots &{}\ddots \\ 1 &{} p^k-1 &{} \cdots &{} p^k-1 &{} 0 &{} 0&{}\cdots &{}0&{} 0\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots \\ 1 &{} p^k-1 &{} \cdots &{} p^k-1 &{} 0 &{} 0&{}\cdots &{}0&{} p^k-1\\ \ddots &{}\ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{}\ddots &{}\ddots \\ 1 &{} p^k-1 &{}\cdots &{} p^k-1 &{} 0&{} 0 &{}\cdots &{} p^k-1&{} 0\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots \\ 1 &{}p^k-1 &{} \cdots &{} p^k-1 &{} 0 &{} 0 &{}\cdots &{} p^k-1&{}p^k-1\\ \ddots &{} \ddots &{} \ddots &{} \ddots &{}\ddots &{} \ddots &{} \ddots &{} \ddots &{}\ddots \\ 1 &{} p^k-1 &{} \cdots &{} p^k-1 &{} p^k-1 &{} p^k-1 &{}\cdots &{} p^k-1&{}0\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots \\ 1 &{} p^k-1 &{}\cdots &{} p^k-1 &{} p^k-1&{}p^k-1 &{}\cdots &{} p^k-1&{}p^k-1\\ \end{array}\right) =\left( \begin{array}{c} {\mathbf q} _0\\ {\mathbf q} _1\\ {\mathbf q} _2\\ \vdots \\ {\mathbf q} _{p^{2kd}-1} \end{array}\right) ,$$ we know that every \({\mathbf q }_k(k=0,1,\ldots ,p^{2kd}-1)\) for any \(b_s(s=1,\ldots ,d)\) needs to satisfy $$\left( \begin{array}{c} {\mathbf q} _0 {\mathcal {A}}_s {\mathbf q}_0^T\\ {\mathbf q}_1 {\mathcal {A}}_s {\mathbf q} _1^T\\ {\mathbf q}_2 {\mathcal {A}}_s {\mathbf q}_2^T\\ \vdots \\ {\mathbf q}_{p^{2kd}-1} {\mathcal {A}}_s {\mathbf q}_{p^{2kd}-1}^T \end{array}\right) \mod p^k=b_s.$$ By Lemma 3, (6) can be reshaped as $$\left( \begin{array}{c} {\mathbf q}_0 \otimes {\mathbf q}_0\\ {\mathbf q}_1 \otimes {\mathbf q} _1\\ {\mathbf q} _2 \otimes {\mathbf q} _2\\ \vdots \\ {\mathbf q} _{p^{2kd}-1} \otimes {\mathbf q} _{p^{2kd}-1} \end{array}\right) \overline{vec}(\mathcal {A}_s)\mod p^k=b_s.$$ Thus, the given quasigroup in Table 1 is an MQQ over \(GF(p^k)\) iff there is a set of matrices \(\{\mathcal {A}_1,\ldots ,{\mathcal {A}}_d\}\) satisfying the following matrix equation $$\left( \begin{array}{c} {\mathbf q} _0 \otimes {\mathbf q} _0\\ {\mathbf q} _1 \otimes {\mathbf q} _1\\ {\mathbf q} _2 \otimes {\mathbf q} _2\\ \vdots \\ {\mathbf q} _{p^{2kd}-1} \otimes 
{\mathbf q} _{p^{2kd}-1} \end{array}\right) [\overline{vec}({\mathcal {A}}_1),\ldots ,\overline{vec}({\mathcal {A}}_d)]\mod p^k=[b_1,\ldots ,b_d],$$ where \([\overline{vec}({\mathcal {A}}_1),\ldots ,\overline{vec}({\mathcal {A}}_d)]\) is regarded as an unknown matrix \([x_1,\ldots ,x_d]\). By now we have proved the following necessary and sufficient condition that a given quasigroup is an MQQ over \(GF(p^k)\). Theorem 1 For a given quasigroup \((Q,*)\) of order \(p^{kd}\) , convert every element of \((Q,*)\) into a kd-ary sequence over GF(p) according to the bijection \(\kappa\) , divide every kd-ary sequence into d groups from left to right, and represent every k-ary sequence by a unique element in \(\{0,1,\ldots ,p^k-1\}\) according to the bijection \(\iota\). Then \((Q,*)\) is an MQQ over \(GF(p^k)\) of type \(Quad_{d-k}Lin_k\) if and only if the matrix equation (8) has a solution. Furthermore, \(f_s\, (s=1,2,\ldots , d)\) obtained by (4) are just d Boolean polynomials of the MQQ, and their degrees are not more than 2. Proposed algorithm Based on Theorem 1, now we begin to develop an algorithm for justifying whether or not a quasigroup of order \(p^{kd}\) is an MQQ over \(GF(p^{k})\) and generating d Boolean functions of the MQQ if it is. Denote $${\mathcal {Q}}_{k,d}=\left( \begin{array}{c} {\mathbf q} _0 \otimes {\mathbf q} _0\\ {\mathbf q} _1 \otimes {\mathbf q} _1\\ {\mathbf q} _2 \otimes {\mathbf q} _2\\ \vdots \\ {\mathbf q} _{p^{2kd}-1} \otimes {\mathbf q} _{p^{2kd}-1} \end{array}\right) \mod p^k,$$ then \([{\mathcal {Q}}_{k,d},b_1,\ldots ,b_d]\) is the augmented matrix associated with matrix equation (8). According to Theorem 1, the existence of the solution to the matrix equation (8) depends on whether the rank of \({\mathcal {Q}}_{k,d}\) is equal to the rank of \([{\mathcal {Q}}_{k,d},b_1,\ldots ,b_d]\). Firstly, we compute the rank of \({\mathcal {Q}}_{k,d}\). Note that the coefficient matrix \({\mathcal {Q}}_{k,d}\) is fixed for all the quasigroups of order \(p^{kd}\). 
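As an illustration of how the coefficient matrix \({\mathcal {Q}}_{k,d}\), the right-hand side \([b_1,\ldots ,b_d]\) and the rank test of Theorem 1 could be assembled in practice, the sketch below restricts itself to the prime-field case k = 1, so that arithmetic is simply modulo p and ordinary Gaussian elimination over GF(p) applies. The generic row reduction stands in for the explicit elementary-matrix products \(P_1, P_2, P_3\) constructed in the paper, and the radix-p encoding of the table indices is our own assumption about the bijections \(\kappa\) and \(\iota\).

```python
# Sketch of the solvability test behind Theorem 1, restricted to k = 1
# (prime field GF(p)); the generic row reduction below stands in for the
# explicit elementary-matrix products P_1, P_2 of the paper.
import itertools
import numpy as np

def build_system(table, p, d):
    """Rows q (x) q of the coefficient matrix and the columns b_1,...,b_d."""
    rows, rhs = [], []
    for x in itertools.product(range(p), repeat=d):       # digits of left operand
        for y in itertools.product(range(p), repeat=d):   # digits of right operand
            q = np.array((1,) + x + y)
            rows.append(np.kron(q, q) % p)
            a = sum(v * p ** (d - 1 - i) for i, v in enumerate(x))
            b = sum(v * p ** (d - 1 - i) for i, v in enumerate(y))
            c = table[a][b]
            rhs.append([(c // p ** (d - 1 - s)) % p for s in range(d)])
    return np.array(rows), np.array(rhs)

def rank_mod_p(M, p):
    """Rank of an integer matrix over GF(p) by Gaussian elimination."""
    M = M.copy() % p
    rank, col = 0, 0
    while rank < M.shape[0] and col < M.shape[1]:
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            col += 1
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        M[rank] = M[rank] * pow(int(M[rank, col]), p - 2, p) % p
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] - M[r, col] * M[rank]) % p
        rank += 1
        col += 1
    return rank

# Toy quasigroup of order p^d = 3 (p = 3, d = 1): a * b = a + b mod 3.
p, d = 3, 1
table = [[(a + b) % p for b in range(p)] for a in range(p)]
Q, B = build_system(table, p, d)
solvable = rank_mod_p(Q, p) == rank_mod_p(np.hstack([Q, B]), p)
print("MQQ system solvable:", solvable)
```

For k > 1 the same structure applies, with the plain mod-p operations replaced by arithmetic in \(GF(p^k)\) (or the modular formulation used in the paper).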
Write $$\left( \begin{array}{c} {\mathbf q} _0\\ {\mathbf q} _1\\ {\mathbf q} _2\\ {\mathbf q} _3\\ \vdots \\ {\mathbf q} _{p^{2kd}-1} \end{array}\right) =\left( \begin{array}{c} {\mathbf q} _0\\ {\mathbf q} _0+{\mathbf p} _1\\ \vdots \\ {\mathbf q} _0+(p^k-1){\mathbf p} _1\\ {\mathbf q} _0+{\mathbf p} _2\\ {\mathbf q} _0+{\mathbf p} _2+{\mathbf p} _1\\ \vdots \\ {\mathbf q} _0+{\mathbf p} _2+(p^k-1){\mathbf p} _1\\ \vdots \\ {\mathbf q} _0+(p^k-1){\mathbf p} _2\\ {\mathbf q} _0+(p^k-1){\mathbf p} _2+{\mathbf p} _1\\ \vdots \\ {\mathbf q} _0+(p^k-1){\mathbf p} _2+(p^k-1){\mathbf p} _1\\ \vdots \\ {\mathbf q} _0+(p^k-1){\mathbf p} _{2d}+(p^k-1){\mathbf p} _{2d-1}+\cdots +(p^k-1){\mathbf p} _{2}+(p^k-1){\mathbf p} _{1} \end{array}\right) ,$$ then \({\mathcal {Q}}_{k,d}\) takes the form $$\left( \begin{array}{c} {\mathbf q} _0 \otimes {\mathbf q} _0\\ ({\mathbf q} _0+{\mathbf p} _1) \otimes ({\mathbf q} _0+{\mathbf p} _1)\\ \vdots \\ \left( {\mathbf q} _0+(p^k-1){\mathbf p} _1\right) \otimes \left( {\mathbf q} _0+(p^k-1){\mathbf p} _1\right) \\ ({\mathbf q} _0+{\mathbf p} _2)\otimes ({\mathbf q} _0+{\mathbf p} _2)\\ ({\mathbf q} _0+{\mathbf p} _2+{\mathbf p} _1)\otimes ({\mathbf q} _0+{\mathbf p} _2+{\mathbf p} _1)\\ \vdots \\ \left( {\mathbf q} _0+{\mathbf p} _2+(p^k-1){\mathbf p} _1\right) \otimes \left( {\mathbf q} _0+{\mathbf p} _2+(p^k-1){\mathbf p} _1\right) \\ \vdots \\ \left( {\mathbf q} _0+(p^k-1){\mathbf p} _2\right) \otimes \left( {\mathbf q} _0+(p^k-1){\mathbf p} _2\right) \\ \left( {\mathbf q} _0+(p^k-1){\mathbf p} _2+{\mathbf p} _1\right) \otimes \left( {\mathbf q} _0+(p^k-1){\mathbf p} _2+{\mathbf p} _1\right) \\ \vdots \\ \left( {\mathbf q} _0+(p^k-1){\mathbf p} _2+(p^k-1){\mathbf p} _1\right) \otimes \left( {\mathbf q} _0+(p^k-1){\mathbf p} _2+(p^k-1){\mathbf p} _1\right) \\ \vdots \\ \left( {\mathbf q} _0+(p^k-1){\mathbf p} _{2d}+\cdots +(p^k-1){\mathbf p} _1\right) \otimes \left( {\mathbf q} _0+(p^k-1){\mathbf p} _{2d}+\cdots +(p^k-1){\mathbf p} _1\right) \end{array}\right) \mod p^k.$$ After a succession of elementary row operations, namely left multiplication by the matrix below $$\begin{aligned} P_1&= \left( \prod \limits _{u=2d-2}^1\prod \limits _{l=2d-1}^{u+1}\prod \limits _{i=2}^{p^{(2d-l)k}}E_{i+p^{(2d-u)k}+p^{(2d-l)k},p^{(2d-u)k}+p^{(2d-l)k}+1}(-1)\right) \\&\quad \times \left( \prod \limits _{u=2d-1}^1\left[ \prod \limits _{l=2d}^{u+1}\left( \prod \limits _{j=2}^{p^k-1}\prod \limits _{i=1}^{p^{(2d-l)k}}E_{p^{(2d-u)k}+jp^{(2d-l)k}+i,i+p^{(2d-u)k}+p^{(2d-l)k}}(-j)\right) \right. \right. \\ &\quad \times\;\;\left. \left. \left( \prod \limits _{j=1}^{p^k-1}\prod \limits _{i=1}^{p^{(2d-l)k}} E_{p^{(2d-u)k}+jp^{(2d-l)k}+i,i+p^{(2d-u)k}}(-1)\right) \right] \right) \\ &\quad \times \left( \prod \limits _{u=2d-1}^1\prod \limits _{j=2}^{p^k-1} \prod \limits _{i=2}^{p^{(2d-u)k}}E_{i+jp^{(2d-u)k},1+jp^{(2d-u)k}} (-1)\right) \\ &\quad \times \left( \prod \limits _{u=2d}^1 \left( \prod \limits _{j=2}^{p^k-1}\prod \limits _{i=1}^{p^{(2d-u)k}} E_{i+jp^{(2d-u)k},i+p^{(2d-u)k}}(-j)\right) \right. \\&\quad \times\;\;\left. 
\left( \prod \limits _{j=1}^{p^k-1}\prod \limits _{i=1}^{p^{(2d-u)k}} E_{i+jp^{(2d-u)k},i}(-1)\right) \right) , \end{aligned}$$ (10) can be reduced to the form \(P_1\cdot {\mathcal {Q}}_{k,d}\), which only has the following nonzero rows $$\begin{aligned} &{\mathbf q} _0 \otimes {\mathbf q} _0;\\& {\mathbf q} _0\otimes {\mathbf p} _i+{\mathbf p} _i\otimes {\mathbf q} _0+{\mathbf p} _i\otimes {\mathbf p} _i,\quad i=1,\ldots ,2d;\\ &{\mathbf p} _i\otimes {\mathbf p} _j+{\mathbf p} _j\otimes {\mathbf p} _i,\quad 2d\ge i>j\ge 1;\\ & (j^2-j){\mathbf p} _i\otimes {\mathbf p} _i,\quad i=1,\ldots ,2d,j=2,\ldots ,p^k-1.\\ \end{aligned}$$ From now we begin to investigate the solutions of the matrix equation (8) by distinguishing two cases: \(p\ne 2\) and p = 2. We first consider Case 1: \(p\ne 2\). By multiplying \(P_1\cdot {\mathcal {Q}}_{k,d}\) on the left with the following matrix: $$\begin{aligned} P_2&= \left( \prod \limits _{v=1}^{2d-2}\prod \limits _{u=0}^{v}\prod \limits _{i=p^{vk}+p^{uk}-(v+3)}^0E_{1+p^{vk}+p^{uk}+(v-u)-i+\sum \limits _{j=0}^{2d-2-v}(2d+1-j),p^{vk}+p^{uk}+(v-u)-i+\sum \limits _{j=0}^{2d-2-v}(2d+1-j)}\right) \\&\quad \times \left( \prod \limits _{u=0}^{2d-1}\prod \limits _{i=p^{(2d-1)k}+p^{uk}-(2d+2)}^0E_{1+p^{(2d-1)k}+p^{uk}+(2d-1-u)-i,p^{(2d-1)k}+p^{uk}+(2d-1-u)-i})\right) \\&\quad \times \left( \prod \limits _{u=1}^{2d-1}\prod \limits _{i=p^{uk}-2}^{0}E_{p^{uk}+2d-u-i,p^{uk}+2d-u-1-i}\right) \\&\quad \times \left( \prod \limits _{u=2d-1}^{0}\left[ (E_{1+p^{uk},1+2p^{uk}}(-1))\times \left( \prod \limits _{i=3}^{p^k-1}E_{1+ip^{uk},1+2p^{uk}}(i^2-i)\right) \right. \right. \\&\quad \left. \left. \times \left( E_{1+2p^{uk}}\left( \frac{p^k+1}{2}\right) \right) \right] \mod p^k\right) . \end{aligned}$$ \(P_1\cdot {\mathcal {Q}}_{k,d}\) can be changed into the matrix \(\left( \begin{array}{c}\bar{\mathcal {Q}}_{k,d,p\ne 2}\\ {\mathbf{0}}_{(p^{2kd}-2d^2-3d-1)\times (4d^2+4d+1)}\end{array}\right)\), where \(\bar{\mathcal {Q}}_{k,d,p\ne 2}\) is of full row rank. $$P_2\cdot P_1\cdot [{\mathcal {Q}}_{k,d},b_1,\ldots ,b_d]=\left( \begin{array}{cccc}\bar{\mathcal {Q}}_{k,d,p\ne 2} &{} \bar{b}_1 &{} \cdots &{} \bar{b}_d\\ {\mathbf{0}} &{} \tilde{b}_1 &{}\cdots &{} \tilde{b}_d \end{array}\right) ,$$ then (8) has solution if and only if \([\tilde{b}_1, \ldots , \tilde{b}_d]={\mathbf{0}}_{(p^{2kd}-2d^2-3d-1)\times d}\). Next, suppose (8) has solution, then the solution matrix can be obtained. Note that $${\mathcal {Q}}_{k,d} [x_1,\ldots ,x_d]=[b_1,\ldots ,b_d]$$ is equivalent to the matrix equation $$\bar{\mathcal {Q}}_{k,d,p\ne 2}[x_1,\ldots ,x_d]=[\bar{b}_1,\ldots ,\bar{b}_d].$$ Since the rank of \(\bar{\mathcal {Q}}_{k,d,p\ne 2}\) is \(2d^2+3d+1\), there exists an invertible matrix \(Q_1\) of order \((2d+1)^2\), such that $$\bar{\mathcal {Q}}_{k,d,p\ne 2}Q_1=\left[ I_{2d^2+3d+1},{\mathbf{0}}_{(2d^2+3d+1)\times (2d^2+d)}\right] ,$$ $$\begin{aligned} Q_1&= \left( \prod \limits _{j=0}^{2d-1}\prod \limits _{i=j+2}^{2d+1}E_{j(2d+1)+i,(i-1)(2d+1)+j+1}(-1)\right) \\&\quad \times \left( \prod \limits _{u=1}^{2d}\prod \limits _{j=u+1}^{2d+1}\prod \limits _{i=0}^{\left( \sum \nolimits _{l=1}^ul\right) - 1}E_{u(2d+1)+j-i,u(2d+1)+j-i-1}\right) . 
\end{aligned}$$ Obviously, (14) is equivalent to the matrix equation $$\bar{\mathcal {Q}}_{k,d,p\ne 2}Q_1Q_1^{-1}[x_1,\ldots ,x_d]=[\bar{b}_1,\ldots ,\bar{b}_d].$$ Let \(Q_1^{-1}[x_1,\ldots ,x_d]=[y_1,\ldots ,y_d]\), then (17) takes the form $$\left[ I_{2d^2+3d+1},{\mathbf{0}}_{(2d^2+3d+1)\times (2d^2+d)}\right] [y_1,\ldots ,y_d]=[\bar{b}_1,\ldots ,\bar{b}_d].$$ According to the theory of linear system, the solution matrices of (18) can be represented by $$[y_1,\ldots ,y_d]=\left( \begin{array}{cccc} \bar{b}_1&{} \bar{b}_2&{} \cdots &{} \bar{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+d,1}&{}k_{2d^2+d,2}&{}\cdots &{} k_{2d^2+d,d} \end{array}\right) ,$$ where \(k_{uv}\) are randomly selected from \(GF(p^k)\), \((u=1,\ldots , 2d^2+d;v=1,\ldots ,d)\). Furthermore, (14) has the following solution matrices $$[x_1,\ldots ,x_d]=Q_1\cdot \left( \begin{array}{cccc} \bar{b}_1&{} \bar{b}_2&{} \cdots &{} \bar{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+d,1}&{}k_{2d^2+d,2}&{}\cdots &{} k_{2d^2+d,d} \end{array}\right) ,$$ namely, $$[\overline{vec}({\mathcal {A}}_1),\ldots ,\overline{vec}({\mathcal {A}}_d)]=Q_1\cdot \left( \begin{array}{cccc} \bar{b}_1&{} \bar{b}_2&{} \cdots &{} \bar{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+d,1}&{}k_{2d^2+d,2}&{}\cdots &{} k_{2d^2+d,d} \end{array}\right) .$$ Since \(k_{uv}\) is sampled from \(GF(p^k)\), \((u=1,\ldots , 2d^2+d;v=1,\ldots ,d)\), it is obvious that the number of such solution matrices is \(p^{kd\cdot (2d^2+d)}\). For an arbitrary solution matrix, \(\{{\mathcal {A}}_1,\ldots ,{\mathcal {A}}_d\}\) can be obtained immediately. Furthermore, by (4) we can obtain d quadratic functions of MQQ. We summarize the above deduction for Case 1 as the following theorem. Suppose \(p\ne 2\) and $$P_2\cdot P_1\cdot [b_1,\ldots ,b_d]=\left( \begin{array}{ccc} \bar{b}_1 &{} \cdots &{} \bar{b}_d\\ \tilde{b}_1 &{}\cdots &{} \tilde{b}_d \end{array}\right) ,$$ then (8) has solution if and only if \([\tilde{b}_1 ,\ldots , \tilde{b}_d]={\mathbf{0}}_{(p^{2kd}-2d^2-3d-1)\times d}\) . Furthermore, its solution are the matrices of the form $$[\overline{vec}({\mathcal {A}}_1),\ldots ,\overline{vec}({\mathcal {A}}_d)]=Q_1\cdot \left( \begin{array}{cccc} \bar{b}_1&{} \bar{b}_2&{} \cdots &{} \bar{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+d,1}&{}k_{2d^2+d,2}&{}\cdots &{} k_{2d^2+d,d} \end{array}\right) ,$$ where \(k_{uv}\in GF(p^k),(u=1,\ldots , 2d^2+d;v=1,\ldots ,d)\) , and \(P_1,P_2,Q_1\) are defined as (11),(13) and (16). Now we begin to consider Case 2: \(p=2\). 
By multiplying \(P_1\cdot {\mathcal {Q}}_{k,d}\) on the left by the following matrix: $$\begin{aligned} P_3&= \left( \prod \limits _{v=1}^{2d-2}\prod \limits _{u=0}^{v-1}\prod \limits _{i=p^{vk}+p^{uk}-(v+3)}^0E_{p^{vk}+p^{uk}+(v-u)-i+\sum \limits _{j=0}^{2d-2-v}(2d-j),p^{vk}+p^{uk}+(v-u)-i-1+\sum \limits _{j=0}^{2d-2-v}(2d-j)}\right) \\&\quad \times \left( \prod \limits _{u=0}^{2d-2}\prod \limits _{i=p^{(2d-1)k}+p^{uk}-(2d+2)}^0E_{p^{(2d-1)k}+p^{uk}+(2d-1-u)-i,p^{(2d-1)k}+p^{uk}+(2d-1-u)-i-1}\right) \\&\quad \times \left( \prod \limits _{u=1}^{2d-1}\prod \limits _{i=p^{uk}-2}^{0}E_{p^{uk}+2d-u-i,p^{uk}+2d-u-1-i}\right) \\&\quad \times \left( \prod \limits _{u=2d-1}^{0}\prod \limits _{i=2}^{p^k-1}E_{1+ip^{uk}}(p^{k-1})\mod p^k\right) , \end{aligned}$$ \(P_1\cdot {\mathcal {Q}}_{k,d}\) can be changed into the matrix \(\left( \begin{array}{c}\bar{\mathcal {Q}}_{k,d,p=2}\\ {\mathbf{0}}_{(2^{2kd}-2d^2-d-1)\times (4d^2+4d+1)}\end{array}\right)\), where \(\bar{\mathcal {Q}}_{k,d,p=2}\) is of full row rank. $$P_3\cdot P_1\cdot [{\mathcal {Q}}_{k,d},b_1,\ldots ,b_d]=\left( \begin{array}{cccc}\bar{\mathcal {Q}}_{k,d,p=2} &{} \hat{b}_1 &{} \cdots &{} \hat{b}_d\\ {\mathbf{0}} &{} \check{b}_1 &{}\cdots &{} \check{b}_d \end{array}\right) ,$$ then (8) has solution if and only if \([\check{b}_1, \ldots , \check{b}_d]={\mathbf{0}}_{(2^{2kd}-2d^2-d-1)\times d}\). Suppose (8) has solution, then we show how the solution matrix can be obtained. Since \({\mathcal {Q}}_{k,d} [x_1,\ldots ,x_d]=[b_1,\ldots ,b_d]\) is equivalent to the matrix equation $$\bar{\mathcal {Q}}_{k,d,p=2}[x_1,\ldots ,x_d]=[\hat{b}_1,\ldots ,\hat{b}_d]$$ and the rank of \(\bar{\mathcal {Q}}_{k,d,p=2}\) is \(2d^2+d+1\), then there exists an invertible matrix \(Q_2\) of order \((2d+1)^2\) such that $$\bar{\mathcal {Q}}_{k,d,p=2}Q_2=\left[ I_{2d^2+d+1},{\mathbf{0}}_{(2d^2+d+1)\times (2d^2+3d)}\right] ,$$ $$\begin{aligned} Q_2&= \left( \prod \limits _{i=2}^{2d+1}E_{i,(i-1)(2d+1)+1}(-1)E_{i,(i-1)(2d+1)+i}(-1)\right) \\&\quad \times \left( \prod \limits _{j=1}^{2d-1}\prod \limits _{i=j+2}^{2d+1}E_{j(2d+1)+i,(i-1)(2d+1)+j+1}(-1)\right) \\&\quad \times \left( \prod \limits _{u=1}^{2d-1}\prod \limits _{j=u+2}^{2d+1}\prod \limits _{i=0}^{\left( \sum \limits _{l=2}^{u+1}l\right) -1}E_{u(2d+1)+j-i,u(2d+1)+j-i-1}\right) . \end{aligned}$$ $$\bar{\mathcal {Q}}_{k,d,p=2}Q_2Q_2^{-1}[x_1,\ldots ,x_d]=[\hat{b}_1,\ldots ,\hat{b}_d].$$ Let \(Q_2^{-1}[x_1,\ldots ,x_d]=[z_1,\ldots ,z_d]\), then (27) takes the form $$\left[ I_{2d^2+d+1},{\mathbf{0}}_{(2d^2+d+1)\times (2d^2+3d)}\right] [z_1,\ldots ,z_d]=[\hat{b}_1,\ldots ,\hat{b}_d].$$ $$[z_1,\ldots ,z_d]=\left( \begin{array}{cccc} \hat{b}_1&{} \hat{b}_2&{} \cdots &{} \hat{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+3d,1}&{}k_{2d^2+3d,2}&{}\cdots &{} k_{2d^2+3d,d} \end{array}\right) ,$$ where \(k_{uv}\) is sampled from \(GF(2^k)\) \((u=1,\ldots , 2d^2+3d;v=1,\ldots ,d)\). 
Furthermore, (24) has the following solution matrices $$[x_1,\ldots ,x_d]=Q_2\cdot \left( \begin{array}{cccc} \hat{b}_1&{} \hat{b}_2&{} \cdots &{} \hat{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+3d,1}&{}k_{2d^2+3d,2}&{}\cdots &{} k_{2d^2+3d,d} \end{array}\right) ,$$ $$[\overline{vec}({\mathcal {A}}_1),\ldots ,\overline{vec}({\mathcal {A}}_d)]=Q_2\cdot \left( \begin{array}{cccc} \hat{b}_1&{} \hat{b}_2&{} \cdots &{} \hat{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+3d,1}&{}k_{2d^2+3d,2}&{}\cdots &{} k_{2d^2+3d,d} \end{array}\right) .$$ Since \(k_{uv}\) is sampled from \(GF(2^k)\), \((u=1,\ldots , 2d^2+3d;v=1,\ldots ,d)\), it is obvious that the number of such solution matrices is \(2^{kd\cdot (2d^2+3d)}\). For an arbitrary solution matrix, \(\{{\mathcal {A}}_1,\ldots ,{\mathcal {A}}_d\}\) can be got immediately. Furthermore, according to (4) we can obtain d quadratic functions of MQQ. Suppose \(p=2\) and $$P_3\cdot P_1\cdot [b_1,\ldots ,b_d]=\left( \begin{array}{ccc} \hat{b}_1 &{} \cdots &{} \hat{b}_d\\ \check{b}_1 &{}\cdots &{} \check{b}_d \end{array}\right) ,$$ then (8) has solution if and only if \([\check{b}_1 ,\ldots , \check{b}_d]={\mathbf{0}}_{(2^{2kd}-2d^2-d-1)\times d}\) . Furthermore, its solution are the matrices of the form $$[\overline{vec}({\mathcal {A}}_1),\ldots ,\overline{vec}({\mathcal {A}}_d)]=Q_2\cdot \left( \begin{array}{cccc} \hat{b}_1&{} \hat{b}_2&{} \cdots &{} \hat{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+3d,1}&{}k_{2d^2+3d,2}&{}\cdots &{} k_{2d^2+3d,d} \end{array}\right) ,$$ where \(k_{uv}\in GF(2^k),(u=1,\ldots , 2d^2+3d;v=1,\ldots ,d)\) , and \(P_1,P_3,Q_2\) are defined as (11),(23) and (26). To end this section, we summarize our proposed algorithm as follows: Algorithm for checking whether a given quasigroup of order \(GF(p^{kd})\) is an MQQ over \(GF(p^{k})\) Write the given quasigroup of order p kd in a vector with the form of (3). Convert every element of the vector into a d-ary sequence over \(GF(p^{k})\), then a \(p^{2kd}\times d\) Boolean matrix \([b_1,\ldots ,b_d]\) is obtained, where every \(b_s(s=1,\ldots , d)\) is \(p^{2kd}\) dimensional column vector. If \(p\ne 2\), for given k and d, compute the corresponding \(P_1,P_2,Q_1\) according to (11),(13) and (16). Compute \(P_2\cdot P_1\cdot [b_1,\ldots ,b_d]=\left( \begin{array}{ccc} \bar{b}_1 &{} \cdots &{} \bar{b}_d\\ \tilde{b}_1 &{}\cdots &{} \tilde{b}_d \end{array}\right)\). If \([\tilde{b}_1 ,\ldots , \tilde{b}_d]\ne {\mathbf{0}}_{(p^{2kd}-2d^2-3d-1)\times d}\), then output "no MQQ". If \([\tilde{b}_1 ,\ldots , \tilde{b}_d]={\mathbf{0}}_{(p^{2kd}-2d^2-3d-1)\times d}\), choose randomly \(k_{uv}\in GF(p^k),(u=1,\ldots , 2d^2+d;v=1,\ldots ,d)\), and compute \(Q_1\cdot \left( \begin{array}{cccc} \bar{b}_1&{} \bar{b}_2&{} \cdots &{} \bar{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+d,1}&{}k_{2d^2+d,2}&{}\cdots &{} k_{2d^2+d,d} \end{array}\right) =[\overline{vec}({\mathcal {A}}_1),\ldots ,\overline{vec}({\mathcal {A}}_d)]\). Write out \(\{{\mathcal {A}}_1,\ldots ,{\mathcal {A}}_d\}\) according to \([\overline{vec}({\mathcal {A}}_1),\ldots ,\overline{vec}({\mathcal {A}}_d)]\). Compute \(\{f_1,\ldots ,f_d\}\) by (4) and output "\(f_1,\ldots ,f_d\) of MQQ". 
If \(p=2\), compute \(P_1,P_3,Q_2\) according to (11), (23) and (26). Compute \(P_3\cdot P_1\cdot [b_1,\ldots ,b_d]=\left( \begin{array}{ccc} \hat{b}_1 &{} \cdots &{} \hat{b}_d\\ \check{b}_1 &{}\cdots &{} \check{b}_d \end{array}\right)\). If \([\check{b}_1 ,\ldots , \check{b}_d]\ne {\mathbf{0}}_{(2^{2kd}-2d^2-d-1)\times d}\), then output "no MQQ" . If \([\check{b}_1 ,\ldots , \check{b}_d]={\mathbf{0}}_{(2^{2kd}-2d^2-d-1)\times d}\), choose randomly \(k_{uv}\in GF(2^k),(u=1,\ldots , 2d^2+3d;v=1,\ldots ,d)\), and compute \(Q_2\cdot \left( \begin{array}{cccc} \hat{b}_1&{} \hat{b}_2&{} \cdots &{} \hat{b}_d\\ k_{11}&{} k_{12}&{}\cdots &{}k_{1d}\\ k_{21} &{} k_{22} &{}\cdots &{}k_{2d}\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ k_{2d^2+3d,1}&{}k_{2d^2+3d,2}&{}\cdots &{} k_{2d^2+3d,d} \end{array}\right) =[\overline{vec}({\mathcal {A}}_1),\ldots ,\overline{vec}({\mathcal {A}}_d)]\). Two examples In this section, we use two examples which are dealing with quasigroups of order \(2^4\) and \(3^2\) respectively, to illustrate the validity of the theorems and the effectiveness of the proposed algorithm. A quasigroup \((Q,*)\) of order \(2^4\) and its corresponding representations based on \(GF(2^2)\) are given in Table 2. Table 2 A quasigroup \((Q,*)\) of order 24 and its representations based on \(GF(2^2)\) Suppose \(P_3\cdot P_1\cdot [b_1,b_2]=\left( \begin{array}{cc} \hat{b}_1 &{} \hat{b}_2\\ \check{b}_1 &{} \check{b}_2 \end{array}\right) .\) Since \((\check{b}_1 , \check{b}_2)={\mathbf{0}}_{245,2}\), according to Theorem 3, the quasigroup is a MQQ. For a random matrix $$(k_{uv})_{14\times 2}=\left( \begin{array}{cccccccccccccc} 3&{}1&{}2&{}1&{}3&{}2&{}1&{}0&{}2&{}1&{}2&{}2&{}3&{}2\\ 1&{}1&{}3&{}3&{}1&{}3&{}0&{}1&{}2&{}0&{}0&{}1&{}1&{}2\\ \end{array}\right) ^T,$$ the corresponding functions are achieved as follows: $$\begin{aligned} f_1&= (1,x_1,x_2,x_3,x_4){\mathcal {A}}_1\left( \begin{array}{c}1\\ x_1\\ x_2\\ x_3\\ x_4\end{array}\right) =(1,x_1,x_2,x_3,x_4)\left( \begin{array}{ccccc} 0 &{} 1 &{} 3 &{}1 &{}1\\ 3&{} 1 &{} 3&{}1&{}2\\ 2 &{}1 &{} 3&{}0&{}2\\ 2&{}1&{}0&{}2&{}1\\ 1&{}2&{}2&{}3&{}2 \end{array}\right) \left( \begin{array}{c}1\\ x_1\\ x_2\\ x_3\\ x_4\end{array}\right) \\&= x_2+3x_3+2x_4+x_1^2+3x_2^2+2x_1x_3+2x_3^2+2x_4^2,\\ f_2&= (1,x_1,x_2,x_3x_4){\mathcal {A}}_1\left( \begin{array}{c}1\\ x_1\\ x_2\\ x_3\\ x_4\end{array}\right) =(1,x_1,x_2,x_3x_4)\left( \begin{array}{ccccc} 0 &{} 2 &{} 1 &{}3 &{}3\\ 1&{} 1 &{} 1 &{}0 &{}0\\ 3 &{}3 &{} 1 &{}3 &{}1\\ 3&{}0&{}1&{}2&{}3\\ 0&{}0&{}1&{}1&{}2 \end{array}\right) \left( \begin{array}{c}1\\ x_1\\ x_2\\ x_3\\ x_4\end{array}\right) \\&= 3x_1+2x_3+3x_4+x_1^2+x_2^2+2x_2x_4+2x_3^2+2x_4^2. \end{aligned}$$ A quasigroup \((Q,*)\) of order \(3^2\) and its corresponding representations based on GF(3) are given in Table 3. Table 3 A quasigroup \((Q,*)\) of order \(3^2\) and its representations based on GF(3) Suppose \(P_2\cdot P_1\cdot [b_1,b_2]=\left( \begin{array}{cc} \hat{b}_1 &{} \hat{b}_2\\ \check{b}_1 &{} \check{b}_2 \end{array}\right) .\) Since \((\check{b}_1 , \check{b}_2)={\mathbf{0}}_{66,2}\), according to Theorem 2, the quasigroup is an MQQ. 
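Independently of the identification step, one can always verify a posteriori that candidate functions \(f_1,\ldots ,f_d\) really define a quasigroup, i.e. that the operation table they induce is a Latin square. The sketch below does this for a linear pair of the same shape as in Example 2 below, using plain arithmetic modulo p (sufficient for the prime-field case); for functions over \(GF(2^k)\) with \(k>1\), as in Example 1, the modular operations would have to be replaced by genuine field arithmetic. All names are illustrative.

```python
# A posteriori check that candidate functions f_1,...,f_d define a
# quasigroup: the induced operation table must be a Latin square.
# Plain arithmetic mod p is used here (sufficient for GF(p), p prime).
import itertools

def induced_table(fs, p, d):
    """Operation table c = a * b whose digits are given by f_1,...,f_d mod p."""
    size = p ** d
    table = [[None] * size for _ in range(size)]
    for a_digits in itertools.product(range(p), repeat=d):
        for b_digits in itertools.product(range(p), repeat=d):
            x = a_digits + b_digits                    # (x_1,...,x_2d)
            c_digits = [f(*x) % p for f in fs]
            a = sum(v * p ** (d - 1 - i) for i, v in enumerate(a_digits))
            b = sum(v * p ** (d - 1 - i) for i, v in enumerate(b_digits))
            table[a][b] = sum(v * p ** (d - 1 - i) for i, v in enumerate(c_digits))
    return table

def is_latin_square(table):
    n = len(table)
    full = set(range(n))
    return (all(set(row) == full for row in table) and
            all({table[i][j] for i in range(n)} == full for j in range(n)))

# Linear pair of the same shape as in Example 2: f1 = x1 + x3, f2 = x2 + x4.
p, d = 3, 2
fs = [lambda x1, x2, x3, x4: x1 + x3,
      lambda x1, x2, x3, x4: x2 + x4]
print("defines a quasigroup:", is_latin_square(induced_table(fs, p, d)))
```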
For a random matrix \((k_{uv})_{10\times 2}\in GF(3)\) $$(k_{uv})_{10\times 2}=\left( \begin{array}{cccccccccc} 2&{}0&{}1&{}1&{}2&{}2&{}1&{}0&{}2&{}1\\ 1&{}2&{}2&{}1&{}0&{}1&{}2&{}2&{}1&{}2\\ \end{array}\right) ^T,$$ $$\begin{aligned} f_1&= (1,x_1,x_2,x_3,x_4){\mathcal {A}}_1\left( \begin{array}{c}1\\ x_1\\ x_2\\ x_3\\ x_4\end{array}\right) =(1,x_1,x_2,x_3,x_4)\left( \begin{array}{ccccc} 0 &{} 2 &{} 0 &{}0 &{}2\\ 2&{} 0 &{} 2&{}1 &{}0\\ 0 &{}1 &{}0 &{}1 &{}1\\ 1&{}2&{}2&{}0&{}2\\ 1&{}0&{}2&{}1&{}0 \end{array}\right) \left( \begin{array}{c}1\\ x_1\\ x_2\\ x_3\\ x_4\end{array}\right) \\&=x_1+x_3,\\ f_2&= (1,x_1,x_2,x_3,x_4){\mathcal {A}}_1\left( \begin{array}{c}1\\ x_1\\ x_2\\ x_3\\ x_4\end{array}\right) =(1,x_1,x_2,x_3,x_4)\left( \begin{array}{ccccc} 0 &{} 2 &{} 2 &{}2 &{}2\\ 1&{} 0 &{} 1 &{}0 &{}1\\ 2 &{}2 &{} 0 &{}2 &{}2\\ 1&{}0&{}1&{}0&{}1\\ 2&{}2&{}1&{}2&{}0 \end{array}\right) \left( \begin{array}{c}1\\ x_1\\ x_2\\ x_3\\ x_4\end{array}\right) \\&=x_2+x_4. \end{aligned}$$ In this paper, a necessary and sufficient condition, which reveals that a given quasigroup \((Q,*)\) of order p kd is an MQQ over \(GF(p^k)\) of type \(Quad_{d-k}Lin_k\) if and only if the matrix equation (8) has solution, has been established. This condition provides a deep insight into the relationship between MQQ and the corresponding multiplication table from the point of view of quasigroup theory. Based on this condition, an algorithm has been developed to justify whether the given quasigroup is an MQQ over \(GF(p^{k})\) and generate the polynomials if it is. Compared with the previous work (Zhang and Zhang 2013), this algorithm can identify both bilinear MQQs and non-bilinear ones, and the algebraic operation in Galois fields provides more flexibility in choosing p, k and d, which is beneficial for applying MQQ-design to various platforms. The validity of the theorems and the effectiveness of the proposed algorithm have been verified by two examples. Ahlawat R, Gupta K, Pal SK (2009) Fast generation of multivariate quadratic quasigroups for cryptographic applications. In: IMA conference on mathematics in defence, Farnborough, UK Chen Yl, Knapskog SJ, Gligoroski D (2010) Multivariate quadratic quasigroups (MQQs): construction, bounds and complexity. In: 6th international conference on information security and cryptology. Science Press of China, Beijing Christov A (2009) Quasigroup based cryptography. Ph.D Thesis, Charles University, Prague Faugère JC, Ødegård RS, Perret L, Gligoroski D (2010) Analysis of the MQQ public key cryptosystem. Lect Notes Comput Sci 6467:169–183 Garey MR, Johnson DS (1979) Computers and intractability—a guide to the theory of NP-completeness. W.H. Freeman and Company, New York Gligoroski D, Markovski S, Knapskog SJ (2008) A public key block cipher based on multivariate quadratic quasigroups. Cryptology ePrint Archive, Report 320 Gligoroski D, Ødegård RS, Jensen RE, Perret L, Faugère J-C, Knapskog SJ, Markovski S (2012) MQQ-SIG, an ultra-fast and provably CMA resistant digital signature scheme. Lect Notes Comput Sci 7222:184–203 Golub GH, Loan CFV (1996) Matrix computations, Johns Hopkins studies in the mathematical sciences, 3rd edn. Johns Hopkins University Press, Baltimore Hadedy ME, Gligoroski D, Knapskog SJ (2008) High performance implementation of a public key block cipher-MQQ, for FPGA platforms. In: International conference on reconfigurable computing and FPGAs, pp 427–432 Koblitz N (1987) Elliptic curve cryptosystems. 
Math Comput 48:203–209 Maia RJM, Barreto PSLM, de Oliveira BT (2010) Implementation of multivariate quadratic quasigroup for wireless sensor network. Lect Notes Comput Sci 6480:64–78 Mohamed MS, Ding JT, Buchmann J, Werner F (2009) Algebraic attack on the MQQ public key cryptosystem. In: Cryptology and network security, LNCS, vol 5888. pp 392–401 Rivest R, Shamir A, Adleman L (1978) A method for obtaining digital signatures and public-key cryptosystems. Commun ACM 21(2):120–126 Samardjiska S, Markovski S, Gligoroski D (2010) Multivariate quasigroups defined by t-functions. Symb Comput Cryptogr 2010:117–127 Samardjiska S, Chen Y, Gligoroski D (2011) Construction of multivariate quadratic quasigroups (MQQs) in arbitrary Galois fields. In: 7th international conference on information assurance and security, pp 314–319 Shor PW (1994) Algorithms for quantum computation: discrete logarithms and factoring. In: 35th annual symposium on foundations of computer science Zhang Y, Zhang H (2013) An algorithm for judging and generating bilinear multivariate quadratic quasigroups. Appl Math Inf Sci 7(5):2071–2076 This work was carried out in collaboration between the authors. YZ conceived and designed the study. YZ and HZ performed the proof of theorems. The manuscript was drafted by YZ and edited by HZ. Both authors read and approved the final manuscript. This work was supported by the National Natural Science Foundation of China (Nos. 61402071, 61671099), Liaoning Province Natural Science Foundation (Nos. 2015020006, 2015020011), and the Fundamental Research Funds for the Central Universities (Nos. 3132015230, 3132016111). Department of Mathematics, Dalian Maritime University, Dalian, 116024, China Ying Zhang & Huisheng Zhang Correspondence to Ying Zhang. Zhang, Y., Zhang, H. An algorithm for judging and generating multivariate quadratic quasigroups over Galois fields. SpringerPlus 5, 1845 (2016) doi:10.1186/s40064-016-3525-2 Received: 29 June 2016 Quasigroup Multivariate quadratic quasigroup Vector-valued Boolean functions Judging method Generating algorithm
Infrared Divergences in Gauge Theories There are some confusions regarding infrared (IR) divergences in gauge theory: What is the primary reason for the appearance of IR divergences in gauge theory? Is it anything other than the existence of massless particles in the spectrum of the theory? Why is there no IR divergence for massive theories? The propagator is given by: $$K(p)\propto\frac{1}{p^2-m^2}$$ Therefore, it is expected that the internal momentum integration crosses the pole given by the mass. If the $i\epsilon$ prescription is used to avoid this, why can it not be used in the case of massless theories with a propagator proportional to $\frac{1}{p^2}$? Do IR divergences appear only at the loop level? The S-matrix of a QFT is defined by assuming that the interactions die off at spatial infinity. But in a theory with massless particles, this cannot be assumed. Then how should we interpret the results of scattering amplitude computations in gauge theory? gauge-theory yang-mills asked Sep 3, 2017 in Theoretical Physics by SI1989 (85 points) In QM we calculate the probabilities of events. Now imagine a compound target consisting of many interacting "particles" and thus having many internal degrees of freedom. When you hit one of these particles with a projectile, the target starts moving as a whole and generally gets "excited" too. The latter means a change of the initial state of internal motion of its constituents. If there is a non-zero threshold for exciting the target, then there is a probability of not exciting the internal motion while hitting one of its constituents. In other words, there is a non-zero probability to "push" the system "elastically". In particular, for the transferred energy $\Delta E\le E_{Threshold}$ there are no excitations in the final state, i.e., there are no inelastic channels open - they are inaccessible due to lack of transferred energy. The system gets the transferred energy as a whole. So the probability of elastic scattering to any angle is unity. But if there is no threshold for exciting the target internal motion, then the probability of elastic scattering is always zero - you cannot push a system acting on one of its parts and not change the target internal "order". The energy of excitation may be very small, but we calculate probability, not energy. Now let us imagine that in our calculations of transition probabilities we treat the interaction of the projectile with one of the target constituents perturbatively. For example, we use the first Born approximation. If the state of the target is known exactly, then the probability of elastic scattering will be immediately obtained as zero. But, but! 
If the interaction of target particles is neglected in the first Born approximation (it is postponed to the next perturbation orders), then you can get a unity probability for elastic scattering. Thus, in perturbative calculations of probabilities we may occasionally start from unity, which is very far from the exact result - zero. No wonder the perturbation series may diverge in higher orders, since the initial approximation for the elastic amplitude was so badly chosen. This is the true reason for IR divergences: you cannot push a charge without automatically exciting electromagnetic (or other massless gauge) degrees of freedom coupled to the charge in the exact solution, but you unfortunately start from a too inexact initial solution where the charge and the soft modes are decoupled. What is done in QFT in the end is taking into account the coupling to soft target modes perturbatively in all orders (i.e., exactly) and obtaining non-zero results for inclusive transitions. Inclusive cross sections are different from zero and slightly depend on how much energy is allocated to soft modes. This is the only correct result while dealing with "soft" targets. commented Sep 3, 2017 by Vladimir Kalitvianski (132 points), edited Sep 29, 2017 @VladimirKalitvianski Thank you for the nice comment. commented Sep 9, 2017 by SI1989 (85 points) There are a bunch of questions here, so I will just do my best to answer them as efficiently as possible. First of all, one can integrate a propagator to a force law. A massive propagator $1/(p^2+m^2)$ induces a force law $e^{-rm}$, in natural units, while a propagator $1/p^2$ induces a force law of Coulomb type, e.g. $1/r^2$ in 3+1D. Notice that the first has no small r divergence while the second does. This is the basic statement, though it is modified in many interesting ways beyond tree level. In formulating scattering problems in gauge theory, one must be quite careful about how to specify the boundary conditions of the gauge field at infinity. Peskin and Schroeder have a great description of this, but there are also some new perspectives coming from Andy Strominger and his collaborators. See, for example, these lecture notes (there are also videos on youtube). answered Sep 9, 2017 by Ryan Thorngren (1,895 points) @RyanThorngren Thank you for the answer. By your last point about boundary conditions, do you mean that the S-matrix can be defined but we need to specify a specific boundary condition for the gauge field at infinity? Also, can you please specify the relevant sections of Peskin & Schroeder? Photon plane waves are themselves a boundary condition for the gauge field at infinity, but we will need to account for radiative corrections to define the S-matrix. I don't have my PS on hand, but it's in there, just read the whole thing! :P commented Sep 11, 2017 by Ryan Thorngren (1,895 points) Massive theories give rise to a partially discrete spectrum in the Kallen-Lehmann formula, corresponding in Haag-Ruelle scattering theory to ordinary particles. Massless theories produce only a continuous spectrum, corresponding in a generalized Haag-Ruelle scattering theory to infraparticles. This means that the infraparticles belong to end points of branch cuts (in the complex scaled spectrum) rather than to poles. This changes the whole physics and produces the IR problem (divergences due to treating the branch cuts as if they were poles). 
In particular, in a theory with massless states the asymptotic states are no longer plane wave states but coherent states of some sort with many more degrees of freedom. All this is not very well understood. answered Sep 16, 2017 by Arnold Neumaier (14,437 points) Thank you for the very nice answer. I am very interested to know about the issue on a more fundamental level. Is there any reference on these topics? Also, you can see that in the comments on the above answer, Ryan mentions that photon plane waves (together with radiative corrections) provide a boundary condition for the gauge field. However, you are mentioning that the boundary conditions are given by coherent states. These two seem to be in contradiction. Am I correct? commented Sep 29, 2017 by SI1989 (85 points) @SI1989: I think there is no contradiction here: coherent states contain an infinite number of (soft) photons or plane waves, that's it. They are all included (at least) in the final state after scattering. commented Sep 29, 2017 by Vladimir Kalitvianski (132 points) @SI1989: One must take linear combinations of plane waves (giving coherent states) in order to have in- and out-states with a finite (i.e., nonzero) S-matrix contribution. The standard reference on coherent states and the IR problem is P. Kulish and L. Faddeev, Asymptotic conditions and infrared divergences in quantum electrodynamics, Theor. Math. Phys., 4 (1970), p. 745. For infraparticles, see, e.g., https://arxiv.org/abs/0709.2493. commented Oct 21, 2017 by Arnold Neumaier (14,437 points)
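For reference, the propagator-to-force-law statement in the second answer above corresponds to the standard three-dimensional Fourier transform (natural units and the usual normalisation conventions assumed): $$V(r)=\int \frac{d^{3}p}{(2\pi )^{3}}\,\frac{e^{i\vec{p}\cdot \vec{r}}}{\vec{p}^{\,2}+m^{2}}=\frac{e^{-mr}}{4\pi r},$$ so the interaction is short-ranged for $m>0$, while the massless limit $m\rightarrow 0$ gives the long-ranged Coulomb potential $1/(4\pi r)$, whose slow fall-off at large distances is what makes the assumption that interactions die off at spatial infinity problematic in the question above.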
Enhanced features of a constitutive equation gap identification method for heterogeneous elastoplastic behaviours T. Madani ORCID: orcid.org/0000-0001-9225-59131,2,3, Y. Monerie2,3, S. Pagano2,3, C. Pelissou1,3 & B. Wattrisse2,3 To identify mechanical properties in heterogeneous materials, the local stress fields have to be estimated. The recent developments in imaging techniques allow reaching precise and spatially dense kinematic fields (e.g. displacement, strain ...). In this paper, an iterative procedure is used to identify the distribution of elastoplastic material parameters and the local stress fields. The formulation and the principle of the method are briefly presented while attention is paid to check its reliability and efficiency on finite element simulation data as reference full-field measurements. The method is also applied to noisy measured displacement fields to assess its robustness. Various identification techniques have been developed to identify mechanical behaviours and stress fields using kinematic field variables such as displacements or strains obtained by full-field measurement techniques (e.g. digital image correlation, interferometric techniques, grid methods, etc.): the finite element model updating method (FEMU) [1,2,3,4], the reciprocity gap method (RGM) [5, 6], the constitutive equation gap method (CEGM) [7,8,9], the virtual field method (VFM) [10,11,12,13,14,15,16,17,18] and the equilibrium gap method (EGM) [19, 20]. An overview of these identification procedures and their applications on experimental data can be found in [21]. More recently, some authors proposed to further integrate the displacement measurements and the identification procedures leading to so-called integrated-DIC (or I-DIC) formalism [22, 23]. In this work, we extend and adapt the approach developed in [24] to identify the constitutive laws and their mechanical parameters for heterogeneous materials. Since the method proposed in [24] is based on Airy functions, the approach is limited to simple geometries and regular meshes. This limitation is here removed and any geometry can be addressed. Moreover, the initial work [24] was limited to elastoplastic behaviours with a linear hardening. For sake of simplicity, the present paper also focuses on linear hardening but it is now straightforward to deal with any kinematic hardening law. The last improvement presented here concerns the identification of the yield stress and of the hardening modulus. The formulation was modified in order to allow the simultaneous identification of the two plastic parameters: the initial requirement of a plastic zone whose size remains constant on two successive load steps is no more needed. The new formulation thus allows to deal with multilinear hardening behaviours on complex geometries and several load steps. The simultaneous use of several load steps for the identification significantly decreases the sensitivity to measurement noise. The class of models that we have in mind belongs to J2 elastoplasticity with hardening. The Constitutive Equation Gap Method (CEGM) originally used as an error estimator for finite element simulations is here adopted in order to identify the stress fields and the constitutive parameters of heterogeneous materials. The change from an Airy's type approach to a Finite-Element approach to the CEGM allows investigating enhanced boundary conditions and more complex geometries. In this application, we introduce the elastoplastic secant stiffness tensor \(\underline{\underline{B^{s}}}\). 
For a Prager's linear kinematic hardening model, the tensor \(\underline{\underline{B^{s}}}\) is directly expressed as a function of the material properties (Young's modulus E, Poisson's ratio v for isotropic elasticity and shear modulus G for cubic elasticity, yield stress \(\sigma _{0}\) and hardening coefficient h) and of the loading history. In the following, we briefly present the identification procedure, we illustrate its performance on various simulated data obtained under small perturbations and plane stress assumptions. Finally, we check its robustness with respect to measurement noise. Inverse method Identification procedure Since we have interest in the sequel for thin and flat samples observed via in-plane DIC techniques, we focus on the identification of elastoplastic constitutive laws in a 2D framework (plane stress). In this context, a maximum of three parameters can be locally identified because we have only access to three local in-plane strain measurements related to three in-plane stress components. The CEGM is based on the minimization of an energetic functional depending on two sets of parameters: the stress field and the mechanical material properties. This procedure can be applied to any identification problem and can be used both with data extracted from numerical simulations and with experimental measurements. For a sequence of successive load steps (subscript \(1\le n\le N\) for each step), we denote by \(\overrightarrow{u_{n}^{m}}\) the measured displacement field on a given region of interest \(\Omega \) and we consider an elastoplastic body governed by the set of Eqs. (1–4): $$\begin{aligned}&div\, \underline{\sigma _{n}^{c}}=0\,\quad in\, \Omega \end{aligned}$$ $$\begin{aligned}&\underline{\sigma _{n}^{c}}=\underline{\underline{B_{n}^{s}}}:\underline{\varepsilon }\left( \overrightarrow{u_{n}^{c}} \right) \, \quad in\, \Omega \end{aligned}$$ $$\begin{aligned}&\left\{ \begin{array}{ll} \vec {R}_{n}^{j}=\int _{\partial {\Omega _{j}}} \underline{\sigma _{n}^{c}}\cdot \vec {n}\, ds\, \quad on\, \partial \Omega _{j}\, \\ \underline{\sigma _{n}^{c}}\cdot \vec {n}=0\,\quad on\, {\partial \Omega }_{i}\\ \end{array} \right. \end{aligned}$$ $$\begin{aligned}&\overrightarrow{u_{n}^{c}}=\overrightarrow{u_{n}^{m}}\, \quad on\, {\partial \Omega }_{u} \end{aligned}$$ where \(\underline{\sigma _{n}^{c}}\) represents a statically admissible stress field associated with the displacement \(\overrightarrow{u_{n}^{c}}\) via the fourth order secant elastoplastic tensor \(\underline{\underline{B_{n}^{s}}}\) (corresponding to the Hooke tensor \(\underline{\underline{B^{e}}}\) for an elastic step) for each load step n. It is worth noticing that for a heterogeneous material, all these quantities depend on the position. The overall forces \(\vec {R}_{n}^{j}\) are known for each time step n on the boundary \({\partial \Omega }_{j}\) of \(\Omega \). The free boundaries \({\partial \Omega }_{i}\) satisfy the relations: \({\partial \Omega }_{j}\cup {\partial \Omega }_{i}\cup {\partial \Omega }_{u}=\partial \Omega \) and \({\partial \Omega }_{j}\cap {\partial \Omega }_{i}=\emptyset \), \({\partial \Omega }_{j}\cap {\partial \Omega }_{u}=\emptyset \) and \({\partial \Omega }_{i}\cap {\partial \Omega }_{u}=\emptyset \). 
On some boundaries \({\partial \Omega }_{u}\), we also impose that the average displacement \(\overrightarrow{u_{n}^{c}}\) is equal to the average displacement \(\overrightarrow{u_{n}^{m}}\) which not only allows to eliminate the rigid body motion but also to further constrain the identification problem and to reduce the influence of measurement noise. The energetic functional can be expressed in its simplest form (small strain hypothesis, equilibrium): $$\begin{aligned}&E_{rc}\left( \left( \overrightarrow{u_{n}^{c}} \right) _{n\in \left[ 1,N \right] },\left( \underline{\underline{B_{n}^{s}}} \right) _{n\in \left[ 1,N \right] } \right) \nonumber \\&\quad =\sum \limits _{n=1}^N {\int _\Omega {\underline{\underline{B_{n}^{s}}}:\left[ \underline{\varepsilon }\left( \overrightarrow{u_{n}^{c}} \right) -\underline{\varepsilon }\left( \overrightarrow{u_{n}^{m}} \right) \right] :} \left[ \underline{\varepsilon }\left( \overrightarrow{u_{n}^{c}} \right) -\underline{\varepsilon }\left( \overrightarrow{u_{n}^{m}} \right) \right] d\Omega } \end{aligned}$$ Here N is the overall number of time steps used for the identification and \(\overrightarrow{u_{n}^{c}}\) a displacement field compatible with the equilibrium of the studied domain \(\Omega \) at load step n. The identification procedure consists in minimizing \(E_{{\textit{rc}}}\) with respect to its two arguments \(\overrightarrow{u_{n}^{c}}\) and \(\underline{\underline{B_{n}^{s}}}\). The method can be applied to any behaviour for which an expression of the secant tensor \(\underline{\underline{B_{n}^{s}}}\) is available. Consequently, it can be used on reversible behaviours (linear and non-linear elasticity) or irreversible behaviours (viscoelasticity, elastoplasticity ...). When dealing with irreversible behaviours, it is necessary to take the loading history into account. In the case of elastoplasticity, this amounts to separate elastic loading steps from plastic ones. The method was numerically implemented to deal with elastoplastic behaviours and monotonic loadings. The consistency between the numerical implementation and the main hypotheses underlying the description of plastic flow (e.g. existence of a yield stress, isochoric plastic strain, normality rule) is ensured from the formulation, thus minimizing the set of parameters to identify. In the case of cubic material, the Hooke tensor \(\underline{\underline{B^{e}}}\) depends only on three elastic constants: E, v and G. 
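For readers who wish to prototype the approach, the discrete counterpart of this functional is simply a weighted sum of quadratic strain gaps over the integration points. The following Python/NumPy sketch is a minimal illustration under the plane-stress Voigt convention (3-component strain vectors, 3 × 3 secant matrices); the array names and layout are ours and do not refer to the authors' implementation.

```python
import numpy as np

def cegm_energy(B_s, eps_c, eps_m, weights):
    """Discrete constitutive equation gap functional for one load step.

    B_s     : (npts, 3, 3) secant stiffness matrices (Voigt, plane stress)
    eps_c   : (npts, 3) strains of the statically admissible field u^c
    eps_m   : (npts, 3) strains computed from the measured field u^m
    weights : (npts,) integration weights (element areas / Gauss weights)
    """
    gap = eps_c - eps_m                               # strain gap at each point
    quad = np.einsum('pi,pij,pj->p', gap, B_s, gap)   # gap : B_s : gap
    return np.sum(weights * quad)

def cegm_energy_all_steps(B_s_list, eps_c_list, eps_m_list, weights):
    """Sum of the per-step contributions over the N load steps (Eq. 5)."""
    return sum(cegm_energy(B, ec, em, weights)
               for B, ec, em in zip(B_s_list, eps_c_list, eps_m_list))
```

In this sketch, the strains of \(\overrightarrow{u_{n}^{c}}\) would come from a finite element solution of Eqs. (1–4) with the current secant stiffness, and the strains of \(\overrightarrow{u_{n}^{m}}\) from the measured displacements transferred onto the same integration points.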
According to [25], the fourth order secant elastoplastic tensor can be written at load step n as: $$\begin{aligned} \underline{\underline{B_{n}^{s}}}=\left[ \underline{\underline{B^{e}}}^{-1}+\frac{\Delta \gamma _{n}}{1+\frac{2}{3}k\Delta \gamma _{n}}\underline{\underline{P}} \right] ^{-1} \end{aligned}$$ where \(\underline{\underline{P}}\) is the constant mapping matrix: $$\begin{aligned} \underline{\underline{P}}=\frac{1}{3}\left[ {\begin{array}{l@{\quad }l@{\quad }l} 2 &{} -1 &{} 0\\ -1 &{} 2 &{} 0\\ 0 &{} 0 &{} 6\\ \end{array} } \right] \end{aligned}$$ and \(\Delta \gamma _{n}\) is the plastic multiplier that depends, for linear kinematic hardening, on the material parameters: $$\begin{aligned} \Delta \gamma _{n}\left( \sigma _{0},k \right) =\frac{3}{2k}\left\langle \sqrt{\frac{3}{2}}\frac{\alpha _{n}}{\sigma _{0}} -1\right\rangle ^{+} \end{aligned}$$ with \(\langle \alpha \rangle ^{+}\) the positive part of a, \(\alpha _{n}\) the second invariant of the effective stress, \(\underline{X_{n}}\) the backstress tensor reached at the current load step: $$\begin{aligned} \alpha _{n}^{2}=\left( \underline{\sigma _{n}^{c}}-\underline{X_{n}} \right) ^{T}.\underline{\underline{P}}.\left( \underline{\sigma _{n}^{c}}-\underline{X_{n}} \right) \end{aligned}$$ The expression of the secant modulus \(\underline{\underline{B_{n}^{s}}}\) plays a central role in the proposed method since it governs the definition of the plastic parameter \(K_{n}\) which involves the plastic parameters to be identified. The secant tensor at time step n can also be expressed with respect to material parameters: $$\begin{aligned} \underline{\underline{B_{n}^{s}}}=\left[ {\begin{array}{*{20}c} \frac{E(1+2K_{n}E)}{3K_{n}^{2}E^{2}-2K_{n}E\left( \nu -2 \right) +1-\nu ^{2}} &{} \frac{E(\nu +K_{n}E)}{3K_{n}^{2}E^{2}-2K_{n}E\left( \nu -2 \right) +1-\nu ^{2}} &{} 0\\ \frac{E(\nu +K_{n}E)}{3K_{n}^{2}E^{2}-2K_{n}E\left( \nu -2 \right) +1-\nu ^{2}} &{} \frac{E(1+2K_{n}E)}{3K_{n}^{2}E^{2}-2K_{n}E\left( \nu -2 \right) +1-\nu ^{2}} &{} 0\\ 0 &{} 0 &{} \frac{2G}{1+12K_{n}G}\\ \end{array} } \right] \end{aligned}$$ Finally, since the plastic deformation at load step n is equal to \(\underline{\varepsilon _{n}^{p}}=\underline{\varepsilon }-\underline{\underline{B^{e}}}^{-1}:\underline{\sigma _{n}^{c}}\), the plastic parameter \(K_{n}\), can be expressed as a function of two material parameters a and b: $$\begin{aligned} K_{n}=a\frac{\left\| \varepsilon _{n}^{p} \right\| }{b+\left\| \varepsilon _{n}^{p} \right\| } \end{aligned}$$ with \(a=\frac{1}{2h}\) and \(b=\frac{\sigma _{0}}{h}\) in the case of a linear kinematic hardening. Note that when the load step n is purely elastic, the plastic strain is vanishing and the plastic parameter \(K_{n}\) is equal to zero: the plastic secant tensor \(\underline{\underline{B_{n}^{s}}}\) is thus equal to the elastic tensor \(\underline{\underline{B^{e}}}\). Furthermore, the Eqs. (10) and (11) show that the plastic secant tensor depends on set of the elastoplastic parameters \(p=\left[ E,v,G,\sigma _{0},h \right] \) and also on the norm of the plastic deformation \(\left\| \varepsilon _{n}^{p} \right\| \) so: \(\underline{\underline{B_{n}^{s}}}=\underline{\underline{B_{n}^{s}}}\left( q^{p},\left\| \varepsilon _{n}^{p} \right\| \right) \) where q are the phases (material domains) of the specimen and \(q^{p}\) the material parameters of phase q. 
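For illustration, Eqs. (10) and (11) can be transcribed in a few lines of code. The following Python/NumPy sketch assumes the plane-stress Voigt convention used above and a linear kinematic hardening with \(a=1/(2h)\) and \(b=\sigma_{0}/h\); it is only a minimal transcription of these formulas, not the code used by the authors, and the function names are ours.

```python
import numpy as np

def plastic_parameter(eps_p_norm, sigma_0, h):
    """K_n from Eq. (11), with a = 1/(2h) and b = sigma_0/h (linear kinematic hardening)."""
    a, b = 1.0 / (2.0 * h), sigma_0 / h
    return a * eps_p_norm / (b + eps_p_norm)

def secant_stiffness(E, nu, G, K):
    """Plane-stress secant stiffness matrix of Eq. (10); K = 0 gives the elastic tensor."""
    D = 3.0 * K**2 * E**2 - 2.0 * K * E * (nu - 2.0) + 1.0 - nu**2
    c_diag = E * (1.0 + 2.0 * K * E) / D
    c_off = E * (nu + K * E) / D
    c_shear = 2.0 * G / (1.0 + 12.0 * K * G)   # shear term written as in Eq. (10)
    return np.array([[c_diag, c_off, 0.0],
                     [c_off, c_diag, 0.0],
                     [0.0, 0.0, c_shear]])
```

Setting K = 0 recovers the elastic tensor, consistent with the remark above that the secant tensor reduces to \(\underline{\underline{B^{e}}}\) for a purely elastic load step.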
To compute the ERC functional, several fields are to be defined: (1) the phase distribution (related to the material heterogeneity), (2) the stress fields (related to the development of structural effects in the specimen), and finally (3) the experimental displacement fields obtained by DIC. These three fields are discretized using different meshes with adapted mesh size and shape functions. The meshes are "nested", the coarser being the material mesh, the finer being the DIC mesh, and the intermediate being the stress mesh. These three meshes are described with different shape functions. The stresses are determined through a FE computation using bilinear displacement elements and bilinear shape functions are used to perform the local DIC computation. The continuity of the measured displacement field is enforced by averaging the displacement vector on the mesh vertices and the mechanical properties are constant on each material domain. Moreover, it is possible to use different stress meshes in both elastic and plastic identification. The 'plastic' mesh was a subdivision of the 'elastic' mesh in order to reduce the influence of the noise on the identification while maintaining a convenient description of the stress gradients. As conform meshes, these nested meshes simplify the transfers of fields from one mesh to another. Finally, the direct simulations are performed by imposing constant vertical displacement on the upper boundary and blocking vertical displacement on the lower boundary. The left and right boundaries are stress free. Moreover we set the displacement of one point of the lower boundary to zero in order to suppress the possible horizontal rigid body motion. Numerical method Due to the convexity of the \(E_{rc}\) functional, the minimization is performed in two steps, leading to the relaxation algorithm presented in Fig. 1a: the function is minimized with respect to the displacement field \(\overrightarrow{u_{n}^{c}}\) associated with a statically admissible stress field, and then with respect to the secant tensor \(\underline{\underline{B_{n}^{s}}}\). a Example of \({E}_{{\textit{rc}}}\) minimization algorithm used for the step (i) and step (iii), b global identification algorithm As already mentioned, we focus on a J2 elastoplastic model with kinematic hardening and no more than three material constants can be locally identified at each load step. The identification algorithm involves three steps (see Fig. 1b): (i) an elastic identification, (ii) a plasticity detection and (iii) a plastic identification. The elastic and plastic parameters are thus identified separately and are based on the minimization of the \(E_{rc}\) cost function. Nevertheless, in either situation, the first minimization is identical: it consists in a classical direct finite element computation for a known heterogeneous material under given boundary conditions. This minimization will thus not be discussed in the following. The elastic constants (i) are identified by minimizing the functional \(E_{rc}\) with respect to the elastic tensor \(\underline{\underline{B^{e}}}\). The iterative minimization process starts with an initial set of parameters \(\left( \underline{\underline{B_{0}}} \right) \) chosen by the user at \(n=0\) and taken equal to the elastic parameters identified on the previous load step when \(n>1\). The procedure is stopped using a convergence criterion defined on the norm of the correction in secant tensor. 
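The relaxation algorithm of Fig. 1a can be summarised by the schematic loop below. Here `solve_equilibrium` (the direct finite element computation performed with the current secant stiffness) and `update_secant` (the minimisation of E_rc with respect to the material description) are placeholders standing in for the corresponding solver calls, not functions of an existing library; the stopping test is the relative-change criterion on the secant tensor mentioned above, with the typical tolerance quoted in the next section.

```python
import numpy as np

def relaxation_loop(B_init, eps_m, solve_equilibrium, update_secant,
                    eps_a=1e-3, max_iter=50):
    """Alternate minimization of E_rc for one load step (Fig. 1a).

    B_init            : initial secant stiffness field, shape (npts, 3, 3)
    eps_m             : measured strains, shape (npts, 3)
    solve_equilibrium : callable, B -> eps_c (strains of the admissible field)
    update_secant     : callable, (eps_c, eps_m, B) -> updated stiffness field
    """
    B = B_init
    eps_c = None
    for _ in range(max_iter):
        eps_c = solve_equilibrium(B)               # step 1: direct FE computation
        B_new = update_secant(eps_c, eps_m, B)     # step 2: material update
        # stop when the relative correction of the secant tensor is small enough
        if np.linalg.norm(B_new - B) < eps_a * np.linalg.norm(B_new):
            return B_new, eps_c
        B = B_new
    return B, eps_c
```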
The plastic identification problem consists in determining the elastoplastic parameters involved in the secant stiffness tensor \(\underline{\underline{B_{n}^{s}}}\). The plastic identification (step (iii)) is less direct since the secant tensor expression requires knowledge of the stress state. For all the plastic load steps, the secant tensor is initialized using the results of an elastic estimation \(\underline{\underline{B_{0,n}^{s}}}\), and gives a statically admissible stress state. The plastic identification consists in minimizing \(E_{rc}\) with respect to the plastic parameters (a and b for a linear kinematic hardening). Once the procedure has converged, we get the stress field \(\underline{\sigma _{n}^{c}}\, \)and the optimal values \((a_{opt},b_{opt})\) that are directly related to the material parameters \(\sigma _{0}\) and h. In this case, the minimization is performed numerically, due to the lack of analytical solution. For a given behaviour (elasticity or plasticity), and a given load step n, the identification algorithm presented in Fig. 1a is performed until convergence. In both cases, the convergence criterion is computed on the secant tensor \(\left( \underline{\underline{B_{n}^{s}}} \right) _{i}\, \)identified at iteration i and on the one identified at the previous \(\left( i-1 \right) \) iteration: \(\left( \underline{\underline{B_{n}^{s}}} \right) _{i-1}\), with typical values of the convergence criterion \(\epsilon _{a}\) of 0.001: $$\begin{aligned} \left\| \left( \underline{\underline{B_{n}^{s}}} \right) _{i}-\left( \underline{\underline{B_{n}^{s}}} \right) _{i-1}\, \right\| _{2}<\epsilon _{a}\, \left\| \left( \underline{\underline{B_{n}^{s}}} \right) _{i} \right\| _{2} \end{aligned}$$ The plasticity detection (step (ii)) consists in comparing the secant tensor \(\underline{\underline{B_{N}^{s}}}\, \)identified at the current load step N and the one identified at the previous (\(N-1\)) load step \(\underline{\underline{B_{N-1}^{s}}}\): $$\begin{aligned} \left\| \underline{\underline{B_{N}^{s}}}-\underline{\underline{B_{N-1}^{s}}} \right\| _{2}>\epsilon _{b}\left\| \underline{\underline{B_{N}^{s}}} \right\| _{2} \end{aligned}$$ with \(\epsilon _{b}\) about 0.05. If this criterion is validated, we assume that the plasticity has started developing during the current load step N. The last elastic load step is denoted by \(N_{e}\). The plastic identification (step (iii)) is performed only for the load steps greater than \(N_{e}\) (the overall loading is supposed to be monotonous). Once the procedure has been completed, we get the statically admissible stress field for all the load steps and the set of identified material properties. In this section, the efficiency of the proposed procedure is examined using reference simulated measurements obtained with the finite element code Comsol Multiphysics. Only simulated measurements are considered in this paper in order to focus only on the identification procedure performance and on the enhanced features of the formulation. The in-plane components of the displacement field are extracted at the nodes of the finite element mesh and the global load levels are extracted on the outer boundaries. We have used different meshes for the direct computation and for the definition of the "measurement grid". The errors on the identified parameters can have different origins. They can be related to errors on the measured displacement (i.e. 
DIC errors), on the geometry description (domain and boundaries), on the behavior law (description of the secant tensor \(\underline{\underline{B_{n}^{s}}})\), on the phase description (material mesh), or on the stress description (stress mesh and boundary conditions used for the stress computation). In this paper we address the influence of DIC errors, of phase description errors and, in a lesser extent of stress description errors. Since conform meshes are used, phase description errors are related both to stress description errors and to measured displacement errors. The influence of the measured displacement field characteristics (mesh size and noise level) on the identification results is illustrated on numerical examples corresponding to homogeneous and heterogeneous materials subjected to a tensile test. It is well known that several error regimes can be encountered using DIC, noticeably the shape function mismatch regime and the ultimate error regime [26]. The shape function error is preponderant when the shape function used in the DIC formulation does not match the actual displacement field. It is governed by the first neglected term in the shape function and it tends to increase when the subset size is increased. The ultimate error is encountered when the shape function is rich enough to describe the real displacement field. It is governed by the image noise \(\sigma _{\mathrm{n}}\), the average image gradient and the subset size d and it tends to decrease when the subset size increases [27]. These two aspects were separately investigated. The influence of the shape function mismatch error is examined using different displacement meshes leading to different mismatch error levels. The influence of the ultimate error is investigated solely by randomly perturbing the measured displacement field. Elastic identification The first test (specimen 1) is performed on an elastic bi-material sample: a soft circular isotropic inclusion (Young's modulus 100 GPa, Poisson ratio 0.15) is embedded in a stiff isotropic matrix (Young's modulus 210 GPa, Poisson ratio 0.3). Two types of identification mesh are used (Fig. 2): An identification mesh perfectly consistent with the material domains (two identification domains \(\hbox {D}_{1}\) and \(\hbox {D}_{2}\) corresponding respectively to the inclusion and the matrix); An identification mesh that does not match the material heterogeneity by splitting it into 400 domains \(D_{j}\) with \(j=1\)–400. Geometry for the identification: a two identification domains respecting the material heterogeneity and b 400 identification domains that do not respect the material heterogeneity Identified Young modulus distributions: a 2 "consistent" identification domains and b 400 non-consistent identification domains Distribution of transversal stress fields: a "measured" stress field \(\sigma _{yy}^{m}\) (from FE simulation), b identified stress field \(\sigma _{yy}^{c}\) using 2 "consistent" identification domains and c identified stress field \(\sigma _{yy}^{c}\) using 400 non-consistent identification domains Only one load level is used in the identification procedure. The typical computing time on a Z820 workstation (Intel Xeon 2.40 GHz) computer is about 5 mins for a measurement mesh involving 1448 linear elements. For the first identification, Fig. 3a shows a good prediction of the parameter sets with a relative error of about 1% on the Young's modulus. 
In this case, the shape function mismatch error is small since the meshes used for the direct computation (considered as the "ground truth") and for the identification are identical. Figure 4b shows that the identified stress fields are very close to the reference values. The maximum error is about 4% of the maximum stress (295 MPa) and is located in the zones of maximum stress gradient. For the second identification (involving 400 non-consistent material domains), the position of the inclusion is very well identified (see Fig. 3b). This result shows the ability of the method to identify heterogeneous elastic properties without any a priori knowledge of the phase distribution. The error increases with the stress and strain gradients where the shape function mismatch error is the most significant. Furthermore, it can be observed that the error on the identified Young's modulus is concentrated above and under the inclusion where the deformation energy is minimal. This error is also important on the material domains intersecting the actual boundary between the inclusion and the matrix where the method tends to average the elastic constants of the two phases. The computational time depends on the number of unknowns which increases here from 4 in the previous case up to 800 in the case of 400 non-consistent material domains. It goes up to 8 mins on the same computer for the elastic identification. Elastoplastic identification The second test (specimen 2) concerns a standard tensile test performed at constant velocity on an isotropic elastoplastic material (Young's modulus 210 GPa, Poisson ratio 0.3, yield stress 300 MPa and hardening modulus 1 GPa) (Fig. 5a). The material parameters are identified using data associated with 5 load steps (2 steps in the elastic domain, and 3 in the plastic domain) (Fig. 5b). Although the material is homogeneous, the identification is made on 4 material domains (Fig. 5a) in order to demonstrate the ability of the procedure to identify the elastoplastic properties in several domains. a Geometry and identification domains and b the 5 load steps Distribution of transversal stress fields: a \(\sigma _{yy}^{m}\) from a FE simulation, b \(\sigma _{yy}^{c}\) the identified and c von Mises norm of the absolute error on stress Identified parameters obtained from each zone are collected in Table 1. As can be noticed, the identified parameter values are very close to the reference values and are very similar from one identification domain to another. The reference ("measured") stress fields presented in Fig. 6a are obtained by solving the direct problem whereas the stress fields presented in Fig. 6b are obtained by the inverse method. We notice a close similarity between the distributions and the orders of magnitude for this stress component. The procedure converges in few iterations. The identification of the parameters and of the given stress fields is very accurate. In this case, different meshes were used for the direct computation and the identification: the former is triangular and the latter is quadrangular (see Fig. 6a, b). The mesh used to describe the "measured" displacement field (not represented here) is quadrangular but similar in size to the one used for the direct simulation thus limiting the bias introduced in the description of the "measured" displacement. The mesh used to compute the stress field is coarser than the one used for the direct simulation thus introducing a shape function mismatch. 
Here, the fact that the identification results are very close to the reference values shows that the shape function mismatch is too small to generate significant discrepancies in the identification. The typical computing time on the same computer is about 6 min for a measurement mesh involving 1448 linear elements. This computing time depends on the number of load steps used for the elastoplastic identification.

Table 1 Identified parameters: specimen 2

Sensitivity to the initial set of parameters
To assess the sensitivity of the identification results to initial values, different starting values of the parameters are selected for the procedure. The identification is performed on specimen 2 and we check the number of iterations required for the procedure to converge. Table 2 shows that the identified parameters are very stable with respect to the chosen initial values. As mentioned earlier, the initial set of parameters is only used for the elastic identification of the first load step. No ad hoc initialization is required for the identification of the plastic parameters.

Table 2 Sensitivity to initial set of parameters: specimen 2

Sensitivity to experimental noise
The robustness of the CEGM approach with respect to noise is evaluated using a set of simulated displacement fields disturbed by white noise at different levels. For this purpose, we perform an identification on a homogeneous isotropic material subjected to a tensile test. The reference FE displacement fields are corrupted by a white Gaussian noise of amplitude \(\gamma \). The noise level is set to \(\gamma =0.01\) pixel while the maximum displacement is 1.5 pixels. This value was chosen to be consistent with our classical test configurations: image noise \(\sigma _{\mathrm{n}}\approx 0.54\) grey levels (obtained using a HR16070MFLGEC camera and a 16-bit acquisition mode), coarse speckle (speckle dots with a 3-pixel radius, corresponding to an average image gradient \(\overline{\nabla I}^{2}\approx 90\)), and a 20-pixel subset size d. This is consistent with the value \(\sigma _{\mathrm{n}}/\left( d\sqrt{\overline{\nabla I}^{2}} \right) \approx 0.54/(20\sqrt{90})\approx 0.003\) given in [27]. It can be seen that the identification of all parameters is stable in the presence of noise. As expected, the elastic constants are more corrupted by the noise since, for a fixed noise level, the signal-to-noise ratio is smaller for the elastic identification than for the plastic one. Furthermore, the Poisson ratio is more sensitive to measurement noise. The results presented in Table 3 are obtained using a stress identification mesh equal to the mesh used for the direct problem and without filtering the noise. However, in our minimization algorithm several types of meshes can be used, with equal or different sizes. In order to reduce the influence of measurement noise on the identification while maintaining a good description of the stress gradients, these meshes are not identical but nested, i.e. the mesh of the plastic identification is a subdivision of the elastic mesh.

Table 3 Sensitivity to noise: specimen 2

In the present work, we use full-field measurements and the constitutive equation gap method to identify the spatial distribution of a set of material parameters associated with a J2 elastoplastic behaviour.
The identification approach is based on the minimization of the CEG energy norm and allows the identification of a set of unknown parameters in any chosen zone without any prior knowledge of the distribution of the spatial heterogeneities. We validate this approach on different materials in different situations (heterogeneous elastic materials, homogeneous plastic material involving heterogeneous stress fields). These different numerical tests give results which are in good agreement with the imposed value obtained using the direct problem. The identification accuracy strongly depends on the accuracy of the input data (measured load and displacement) and of the representativeness of the model (geometry, boundary conditions, material heterogeneity, behaviour law, stress distribution ...). Direct numerical simulations are performed in order to get the data required for the identification in a "perfectly controlled" situation. The results of these simulations are considered as a "ground truth" (thus neglecting the simulation error). The sensitivity of the identification results with respect to different parameters are investigated by perturbing the numerical solution. To restrict the scope of the study, we have focused here on three contributions: the measured displacement error, the shape function mismatch in the stress distribution and the description of the material heterogeneity. We have verified that typical error levels associated with the ultimate error regime of the displacement measurement led to relative errors smaller than 5% on the identification results (the elastic parameters being more sensitive to noise due to smaller signal to noise ratio). As expected, the shape function mismatch error is larger in stress concentration areas, and the identification error is more important in stress concentration zones. Special attention should be taken to choose a material mesh consistent with the phase distribution and to define a stress mesh fine enough to catch the stress concentrations. Finally, we have verified that the identification results are not affected by the choice of the initial guess on the elastic parameters required to start the procedure. Future works will focus on the use of experimental data and the improvement of the method to deal with respect to multi-linear problems. Kavanagh KT, Clough RW. Finite element applications in the characterization of elastic solids. Int J Solids Struct. 1971;7:11–23. Meuwissen MHH, Oomens CWJ, Baaijens FPT, Petterson R, Janssen JD. Determination of the elasto-plastic properties of aluminum using a mixed numerical-experimental method. J Mater Process Technol. 1998;75(1–3):204211. Cooreman S, Lecompte D, Sol H, Vantomme J, Debruyne D. Identification of mechanical material behavior through inverse modeling and DIC. Exp Mech. 2008;48(4):421–33. Guery A, Hild F, Latourte F, Roux S. Identification of crystal plasticity parameters using DIC measurements and weighted FEMU. Mech Mater. 2016;100:55–71. Bui HD, Constantinescu A, Maigre H. Numerical identification of linear cracks in 2D elastodynamics using the instantaneous reciprocity gap. Inverse Probl. 2004;20:993–1001. Sun Y, Guo Y, Ma F. The reciprocity gap functional method for the inverse scattering problem for cavities. Appl Anal. 2016;95(6):1327–46. Feissel P, Allix O. Modified constitutive relation error identification strategy for transient dynamics with corrupted data: the elastic case. Comput Eng Appl Mech Eng. 2007;196:1968–83. Florentin E, Lubineau G. 
Using constitutive equation gap method for identification of elastic material parameters: technical insights and illustrations. Int J Interact Des Manuf. 2011;5(4):227–34. Merzouki T, Nouri H, Roger F. Direct identification of nonlinear damage behavior of composite materials using the constitutive equation gap method. Int J Mech Sci. 2014;89:487–99. Grédiac M, Toussaint E, Pierron F. Special virtual fields for the direct determination of material parameters with the virtual fields method.1—principle and definition. Int J Solids Struct. 2002;39:2691–705. Grédiac M, Pierron F. Applying the virtual field method to the identification of elasto-plastic constitutive parameters. Int J Plast. 2006;22:602–27. Kim J-H, Pierron F, Wisnom MR, Syed-Muhamad K. Identification of the local stiffness reduction of a damaged composite plate using the virtual fields method. Compos Part A Appl Sci Manuf. 2007;38(9):2065–75. Avril S, Huntley JM, Pierron F, Steele DD. 3D heterogeneous stiffness reconstruction using MRI and the virtual fields method. Exp Mech. 2008;48(4):479–94. Pierron F, Avril S, The Tran V. Extension of the virtual fields method to elastoplastic material identification with cyclic loads and kinematic hardening. Int J Solids Struct. 2010;47(22–23):2993–3010. Grama SN, Subramanian SJ, Pierron F. On the identifiability of Anand visco-plastic model parameters using the virtual fields method. Acta Mater. 2015;86:118–36. Rossi M, Pierron F, Štamborská M. Application of the virtual fields method to large strain anisotropic plasticity. Int J Solids Struct. 2016;97–98:322–35. Wang P, Pierron F, Rossi M, Lava P, Thomsen OT. Optimised experimental characterisation of polymeric foam material using DIC and the virtual fields method. Strain. 2016;52(1):59–79. Marek A, Davis FM, Pierron F. Sensitivity-based virtual fields for computational mechanics. 2017. https://doi.org/10.1007/s00466-017-1411-6. Roux S, Hild F, Pagano S. A stress scale in full-field identification procedures: a diffuse stress gauge. Eur J Mech A Solids. 2005;24:442451. Ben Azzouna M, Périé J-N, Guimard J-M, Hild F, Roux S. On the identification and validation of an anisotropic damage model using full-field measurements. Int J Damage Mech. 2011;20(8):1130–50. Avril S, Bonnet M, Bretelle AS, Grediac M, Hild F, Ienny P, Latourte F, Lemosse D, Pagano S, Pagnacco E. Overview of identification methods of mechanical parameters based on full-field measurements. Exp Mech. 2008;48:381–402. Mathieu F, Leclerc H, Hild F, Roux S. Estimation of elastoplastic parameters via weighted FEMU and integrated DIC. Exp Mech. 2015;55(1):105–19. Bertin M, Hild F, Roux S, Mathieu F, Leclerc H, Aimedieu P. Integrated digital image correlation applied to elasto-plastic identification in a biaxial experiment. J Strain Anal. 2016;51(2):118–31. Latourte F, Chrysochoos A, Pagano S, Wattrisse B. Elastoplastic behavior identification for heterogeneous loadings and materials. Exp Mech. 2008;48:435–49. Simo JC, Hughes TJR. Computational inelasticity. 1998: Springer; 1998. p. 126–30. Bornert M, Brémand F, Doumalin P, Dupré M, Fazzini J-C, Grédiac M, Hild F, Mistou S, Molimard J, Orteu J-J, Robert L, Surrel Y, Vacher P, Wattrisse B. Assessment of digital image correlation measurement errors: methodology and results. Exp Mech. 2008;49:353–70. Roux S, Hild F. Stress intensity factor measurements from digital image correlation: post-processing and integrated approaches. Int J Fract. 2006;140(1–4):141–57. Author's contributions TM performed the research and wrote the paper. 
YM supervised the research and the paper. CP supervised the research and the paper. SP supervised the research and the paper. BW supervised the research and the paper. All authors read and approved the final manuscript.

Author affiliations: IRSN/PSN-RES/SEMIA/LPTM, Institut de Radioprotection et de Sûreté Nucléaire, BP3-13115, Saint-Paul-lez-Durance Cedex, France (T. Madani, C. Pelissou); LMGC, Université de Montpellier, CNRS, Montpellier, France (Y. Monerie, S. Pagano, B. Wattrisse); Laboratoire de micromécanique et intégrité des structures (MIST), IRSN-CNRS-Université de Montpellier, Montpellier, France (C. Pelissou).

Correspondence to T. Madani.

Madani, T., Monerie, Y., Pagano, S. et al. Enhanced features of a constitutive equation gap identification method for heterogeneous elastoplastic behaviours. Adv. Model. Simul. Eng. Sci. 4, 5 (2017). doi:10.1186/s40323-017-0092-1

Keywords: Elastoplasticity, Digital image correlation
Efficient expression of sortase A from Staphylococcus aureus in Escherichia coli and its enzymatic characterizations

Zhimeng Wu, Haofei Hong, Xinrui Zhao & Xun Wang

Sortase A (SrtA) is a transpeptidase found in Staphylococcus aureus that is widely used in site-specific protein modification. However, SrtA has so far been expressed in Escherichia coli (E. coli) at rather low levels (ranging from a few milligrams per litre to at most 76.9 mg/L). The present study aims to optimize fermentation conditions to improve SrtA expression in E. coli. Under the optimized medium (0.48 g/L glycerol, 1.37 g/L tryptone, 0.51 g/L yeast extract, MOPS 0.5 g/L, PBS buffer 180 mL/L) and conditions (30 °C for 8 h) in a 7-L fermentor, the enzyme activity and the yield of SrtA reached 2458.4 ± 115.9 U/mg DCW and 232.4 ± 21.1 mg/L, respectively, 5.8- and 4.5-fold higher than under the initial conditions. The yield of SrtA also represented a threefold increase over the previously reported maximal level. In addition, the enzymatic characterizations of SrtA (optimal temperature, optimal pH, the influence of metal ions, and tolerance to water-soluble organic solvents) were determined. Enhanced expression of SrtA was achieved by optimization of the medium and conditions. This result has potential application for the production of SrtA on an industrial scale. Moreover, the detailed enzymatic characterizations of SrtA were examined, which will provide a useful guide for its future application.

Sortase A (SrtA, EC number: 3.4.22.70) is a membrane-anchored transpeptidase first found in Staphylococcus aureus, which anchors surface proteins to the cell wall by a cell-wall sorting reaction (Mazmanian et al. 1999). It recognizes a specific sorting signal peptide (Leu-Pro-X-Thr-Gly, with X standing for any amino acid except cysteine) at the C-terminus of target proteins and cleaves the amide bond between threonine and glycine to generate a thioester intermediate; the intermediate then reacts with the amino group of pentaglycine cross-bridges, resulting in the attachment of proteins to the cell surface (Perry et al. 2002a, b). SrtA is a promiscuous enzyme that can accept a variety of oligoglycine-modified nucleophiles as its substrate (Bentley et al. 2008). In addition, the truncated SrtA (Δ59 SrtA), obtained by removing the original N-terminal transmembrane domain, shows good water solubility and retains the same transpeptidation activity as full-length SrtA (Ilangovan et al. 2001). Therefore, in recent years, SrtA-mediated ligation (SML) has been widely used in protein-to-protein fusions (Witte et al. 2012), peptide and protein cyclizations (Wu et al. 2011), immobilization of biocatalysts onto solid surfaces (Chan et al. 2007), preparation of complex glycoconjugates (Guo et al. 2009), antibody–drug conjugation (Beerli et al. 2015; Voloshchuk et al. 2015), and in vivo protein modification (Glasgow et al. 2016). With the development of SML, the demand for high-level expression of SrtA has grown. E. coli is the most studied expression system for SrtA production because of its straightforward genetic manipulation and low culturing cost (Vincentelli and Romier 2013). Since the recombinant Δ59 SrtA was first obtained in 1999 using pQE30 (Qiagen) as the vector in E. coli (Ton-That et al. 1999), several groups have cloned and expressed SrtA using various commercial plasmids, including pET23b (Novagen) (Guo et al. 2009), pTWIN1 (New England Biolabs) (Bentley et al. 2008) and pBAD (Invitrogen) (Kim et al.
2002), etc., with slightly modified protocols. However, the reported yield of SrtA varied differently and remained at low level ranging from several milligrams to the maximum of 76.9 mg/L (Kruger et al. 2004). In our ongoing studies on SML for the synthesis of complex glycoconjugates (Wu et al. 2010, 2013), we realized that the low-level production of SrtA is difficult to meet the requirement of future industrial applications. Therefore, more endeavors should be made to improve the expression level of this important enzyme. Heterologous expression of recombinant protein in E. coli is influenced by many factors, including medium composition, induction temperature, initial pH, and so on (Lee et al. 1997). In this study, Δ59 SrtA was cloned and successfully expressed in E. coli at first. Then, the optimization of fermentation conditions for SrtA production was manipulated by the combination of traditional one-factor-at-a-time approach and response surface methodology (RSM) at shake-flask stage, which is an effective and simplified method that has gained great success in the production of recombinant proteins (Papaneophytou and Kontopidis 2014). Finally, high-level expression of Δ59 SrtA was achieved in a 7-L fermentor, and the enzymatic characterizations of SrtA were examined. Strain, plasmid, and media The host strain E. coli BL21 (DE3) and plasmid pET28a were purchased from Novagen (Madison, WI). Luria broth (LB) medium (Tryptone 10 g/L, Yeast extract 5 g/L, NaCl 10 g/L), Terrific broth (TB) medium (Tryptone 12 g/L, Yeast extract 24 g/L, Glycerol 4 g/L) with 100 mL/L PBS buffer (K2HPO4·3H20 164.3 g/L, KH2PO4 23.1 g/L), Super broth (SB) medium (Tryptone 30 g/L, Yeast extract 20 g/L, 3-(N-Morpholino) propanesulfonic acid (MOPS) 10 g/L), Soybean–peptone–yeast extract broth (SOB) medium (Tryptone 20 g/L, Yeast extract 5 g/L, NaCl 0.5 g/L, KCl 0.2 g/L) and 2× Yeast extract/tryptone (2× YT) medium (Tryptone 16 g/L, Yeast extract 10 g/L, NaCl 5 g/L), were used to perform the expression of SrtA, respectively. Expression of SrtA in E. coli BL21 (DE3) The genome of S. aureus was extracted by Genomic Extraction Kit (Qiagen, Valencia, CA, USA) and applied as the template for amplifying Δ59-srtA with the primer pairs: CAT GCC ATG GAA GCT AAA CCT CAA ATT CCG; and CGC GGA TCC TTA GTG GTG GTG ATG ATG ATG TTT GAC TTC TGT AGC TAC AAA GAT. The gel-purified PCR-amplified Δ59-srtA fragments were digested and inserted into the NcoI/BamHI site of pET28a. The plasmid (pET28a-Δ59-srtA) confirmed by DNA sequencing was transformed into E. coli BL21 (DE3) strain. The positive clones were propagated in LB medium at 37 °C overnight with constant shaking at 200 rpm. The seed culture (2%) was inoculated into 25 mL fermentation medium at 37 °C until the OD600 reached 0.6 and then incubated with 1 mM IPTG (isopropyl β-d-thiogalactoside) at 25 °C for 8 h. When the fermentation was completed, cells were pelletized and ultrasonicated by a probe VCX800 system (Sonics, Newtown, USA). The lysate supernatant were collected for the activity assay and purification. SrtA activity assay The specific substrate of SrtA (Dabcyl-QALPETGEE-Edans) was obtained from GL biochem Ltd. (Shanghai, China). The SrtA activity arrays were performed in 200-μL volume of 50 mM Tris–HCl buffer (including 150 mM NaCl, 30 mM CaCl2, 0.5 mg Dabcyl-QALPETGEE-Edans, pH 7.8), and 10 μL SrtA lysate supernatant. 
The reactions were carried out at 37 °C for 1 h by means of a Synergy H4 hybrid microplate reader (BioTek, Vermont, America), and the fluorescence intensity (FI) was detected with 350 nm for excitation and 495 nm for recordings. One unit of SrtA activity was defined as the amount of enzyme (mg) that was able to increase at the rate of one FI per minute in the 200-μL reaction mixture. Purification of SrtA The lysate supernatant and Ni–NTA agarose (Qiagen, Hilden, Germany) were loaded onto a gravity-flow column and incubated for 4 h at 4 °C. Then, the agarose was washed with a stepwise gradient of imidazole (10–40 mM) to eliminate intracellular contaminating proteins. The C-terminal His-tagged SrtA was eluted from the column using 500 mM imidazole and desalted using an Amicon Ultra 3 K device (Millipore, Billerica, USA). The concentration of purified SrtA was determined by the Bradford method and was used to calculate the yield of SrtA for each sample. Single-factor optimization The optimal medium and several important fermentation conditions (induction time, induction temperature, and initial pH) for SrtA production were obtained by single factor optimization in 250-mL shaking flasks containing 25 mL of sterilized medium. The concentrations of alternative carbohydrates and nitrogenous compounds were equivalent to the concentrations of carbon and nitrogen resources in the initial medium (12 g/L tryptone; 4 g/L glycerol). In each experiment, one factor was changed, while the other factors were held constant. After fermentation, cell growth was monitored by measuring OD600 and correlated it with dry cell weight (DCW); the intracellular SrtA activity was measured by the method described in SrtA activity assay. Each experiment was performed in triplicate for the biological replicates, and the average values of enzymatic activity and biomass were used to select the optimal medium and conditions. In order to enhance SrtA production, it is necessary to select those variables with major effects at first. Applying Plackett–Burman Design, fractional two-level factorial designs can be used for the efficient and economical screening. Experimental design was formulated according to the Plackett–Burman Design tool of Design Expert 7.0 software (Statease, USA) for the selection of significant factors. Nine factors were selected (the concentration of glycerol, tryptone, yeast extract, PBS buffer, and MOPS in the medium, induction occasion, the concentration of IPTG, inoculation, loading volume), each of which was coded with two levels (Table 1). The twelve experiments were performed in shaking-flask fermentation under conditions fixed by single factor optimization (all cultivations were carried out in 250 mL shake flasks at 200 rpm). All experiments were performed in triplicates, and the average values of enzymatic activity and biomass were used to select the optimal medium and conditions. The P values were calculated by Duncan's multiple range tests. The optimal values of non-significant factors were determined by the prediction tool of Design Expert 7.0. Table 1 The Plackett–Burman design and effects of nine factors for SrtA production Box–Behnken Design Box and Behnken Design can propose three level designs for fitting response surfaces to get the best values for different variables by second-order polynomial model. Therefore, the concentration of three significant factors (glycerol, tryptone, and yeast extract) in the medium was optimized by the Box–Behnken Design tool of Design Expert 7.0 software. 
Each factor was coded with three levels (Additional file 1: Table S1) and seventeen experiments were performed in shaking-flask fermentation under previous fixed conditions (all cultivations were carried out in 250 mL shake flasks at 200 rpm). All experiments were performed in triplicates, and the average values of enzymatic activity and biomass were used to select the optimal medium and conditions. The P values were calculated by Duncan's multiple range tests. The optimal concentrations of glycerol, tryptone, and yeast extract were determined by the prediction tool of software. The enzymatic characterizations of SrtA The effect of pH on SrtA activity was measured in the 200 μL reaction system at 37 °C (150 mM NaCl, 10 mM CaCl2, 0.5 mg Dabcyl-QALPETGEE-Edans) in a pH range of 3.0–11.0, using the appropriate buffers at concentration of 50 mM (3.0–5.0, sodium citrate; 6.0–8.0, sodium phosphate; 8.0–9.0, Tris–HCl; 10.0-11.0, and NaHCO3–NaOH). The optimal temperature for SrtA activity was determined at various temperatures ranging from 20 to 80 °C. To determine the effect of metal ions on SrtA activity, the enzyme assays were measured without additional metal ion (control) or with 5 mM different metal ions (Ca2+, Mn2+, Mg2+, Co2+, Cu2+, Fe2+, Ni2+ and Zn2+). As for the optimal concentration of Ca2+, the SrtA activities were detected at a concentration ranging from 1 mM to 100 mM. At last, the effect of soluble organic solvents (acetone, methanol, ethanol, acetonitrile, and dimethyl sulfoxide) on SrtA activity were determined, the enzyme assays were measured at a content of ranging from 10 to 50%. All experiments were performed in triplicate, and the mean values were used for calculations. Selection of the components of fermentation medium by single factor optimization Fermentation medium has a profound influence on the expression of recombinant proteins in E. coli among many factors [Tseng and Leng 2012, Li et al (2014)]. Thus, starting from the E. coli BL21 (DE3) strain encoding pET28a-Δ59-srtA plasmid, five media (LB, TB, SOB, SB and 2×YT) were screened to express SrtA at shaking-flask level at 37 °C for 4 h in an initial experiment. As shown in Fig. 1a, when expressed in TB and SB media, the SrtA activity values were 1245.2 and 1210.7 U/mg DCW, respectively, which were higher than those in LB, SOB, and 2×YT media (855.2, 783.2 and 1032.9 U/mg DCW, respectively). This result indicated that using the rich media improved the expression level of SrtA. By comparing the medium compositions of TB and SB, defined components may affect the SrtA expression level. Therefore, glycerol, PBS buffer, and MOPs were added into the control media (12 g/L tryptone and 24 g/L yeast extract), which were used as fermentation medium in SrtA expression, respectively. It was interesting to observe that the addition of glycerol and PBS enhanced the enzymatic activities by 18.3 and 17.3%, respectively. Addition of MOPS increased the SrtA activity by 23.5% (Fig. 1b). Thus, we choose TB medium (12 g/L tryptone, 24 g/L yeast extract, 4 g/L glycerol, and 100 mL/L PBS buffer) with the addition of 10 g/L MOPS as the initial SrtA fermentation medium. a Effects of media on the cell growth and SrtA expression. Light gray and dark gray bars represent the SrtA activity and biomass in mediums of LB, TB, SOB, SB and 2×YT. b Effects of addition of defined compositions in media on the expression of SrtA. 
Light gray and dark gray bars represent the SrtA activity and biomass in the TB medium with addition of glycerol, MOPS, and PBS buffer. The TB medium without glycerol and PBS buffer was used as the control. c Effects of various carbon sources on the expression of SrtA. Light gray and dark gray bars represent the SrtA activity and biomass in the medium with various carbon sources of glucose, fructose, galactose, maltose, lactose, sucrose, glycerol, dextrin, and starch. d Effects of nitrogen sources on the expression of SrtA. Light gray and dark gray bars represent the SrtA activity and biomass in the medium with various nitrogen sources of tryptone, beef extract, soya peptone, fishmeal, casein, urea, ammonium chloride, ammonium sulfate, and ammonium citrate Because both carbon and nitrogen sources are important nutrients in the media for protein expression (Scott et al. 2002), these two nutrients were screened to find the optimal alternatives for SrtA expression. Five different types of carbohydrates (glucose, fructose, galactose, maltose, lactose, sucrose, dextrin, and starch) and nine nitrogenous compounds (beef extract, soya peptone, fishmeal, casein, urea, ammonium chloride, ammonium sulfate, and ammonium citrate) were used to replace the primary carbon and nitrogen sources in TB medium. As shown in Fig. 1c, glycerol was the suitable carbon source for SrtA expression which showed the best strain growth and enzymatic activity (1446.7 U/mg DCW). As for the nitrogen source, organic nitrogen source was superior to inorganic nitrogen source. As Fig. 1d shows, tryptone was the most suitable nitrogen source for SrtA expression, which gave the highest values of cell biomass and enzymatic activity (1407.1 U/mg DCW). Hence, glycerol and tryptone were selected in the subsequent experiments. Influences of induction time, induction temperature, and initial pH on SrtA production After the fermentation medium was optimized, the induction time, induction temperature, and initial pH were revisited as these parameters varied greatly in previous reports (Ton-That et al. 1999; Hirakawa et al. 2012; Lee et al. 2002). In previous study on SrtA expression, higher induction temperature (37 °C) (Ilangovan et al. 2001) and shorter induction time (within 6 h) (Kim et al. 2002) were mostly applied. However, it was found that the lower induction temperature (below 30 °C) (Tanaka et al. 2008) and longer induction time (more than 8 h) (Matsushita et al. 2009) is helpful for SrtA production. In this study, induction time (4, 8, 12, 16, 20 and 24 h) and induction temperature (16, 20, 26, 30 and 37 °C) were combined to perform shaking-flask fermentation (Fig. 2). The highest SrtA activity (1431.3 U/mg DCW) was obtained at 16 °C for 4 h (Fig. 2a), but the strain growth was extremely poor under this condition (Fig. 2b). Therefore, to maintain sufficient cell density for SrtA expression, fermentation at 30 °C for 8 h (the second highest SrtA activity) was chosen as the optimal condition in the following experiments. As for initial pH, another important factor for the heterogeneous expression in E. coli (Buchanan and Klawitter 1992), strain growth was inhibited when pH was below 6.0 or above 9.0; while SrtA activity reached the highest enzymatic activity (1411.4 U/mg DCW) when pH was 7.0 (Fig. 3). Therefore, the optimal initial pH was chosen at pH 7.0. Effects of induction time and induction temperature on the expression of SrtA. 
a The SrtA activities under various induction times and induction temperatures; b the biomasses under various induction times and induction temperatures

Effect of initial pH on the expression of SrtA. Light gray and dark gray bars represent the SrtA activities and biomasses in the medium with different initial pH values

Determination of significant factors for SrtA production by Plackett–Burman design
Besides the optimal parameters mentioned above, there are still many factors that influence SrtA production. Based on the SrtA activity of each experiment designed according to the Plackett–Burman principle, the effects of these factors were evaluated (Additional file 1: Table S2). The fitted equation (in coded values) was obtained from the twelve tests based on the first-order model:
$$Y\,({\text{sortase A activity}}) = 645.67 - 162.08 \times {\text{A}} - 243.91 \times {\text{B}} - 201.77 \times {\text{C}} - 17.02 \times {\text{D}} - 85.82 \times {\text{E}} + 113.59 \times {\text{F}} + 33.43 \times {\text{G}} - 4.80 \times {\text{H}} - 109.76 \times {\text{I}}$$
The coefficient of determination (\(R^{2}=0.9884\)) indicates that the model accounts for 98.84% of the variability in the response, so the regression equation fits the data well. The results revealed that the concentrations of glycerol, tryptone, and yeast extract are the significant factors (P value <0.05), while the other six factors have weaker effects (P value >0.05). Based on the predictions of the Design-Expert software, the values of these six non-significant factors were set to IPTG 1.5 mM, MOPS 0.5 g/L, PBS buffer 180 mL/L, induction occasion (OD600) 1.0, inoculum 2%, and loading volume 10%. Furthermore, the negative coefficients of glycerol (−162.08), tryptone (−243.91), and yeast extract (−201.77) indicated that low concentrations of glycerol, tryptone, and yeast extract were beneficial for SrtA production.

Box–Behnken design results for the significant factors
In the following, the Box–Behnken design was applied to refine the optimal levels of the three selected significant factors (glycerol, tryptone, and yeast extract) for SrtA production. The high levels of glycerol, tryptone, and yeast extract were maintained at 4.0 g/L, while their low levels were reduced to 0.5 g/L. Seventeen experiments were performed in shaking-flask fermentation under the previously fixed conditions (Table 2). The experimental data were fitted to obtain the second-order polynomial model (in coded values):

Table 2 The Box–Behnken design matrix and experimental results

$$Y\,({\text{sortase A activity}}) = 1479.80 + 18.11 \times {\text{A}} - 265.51 \times {\text{B}} - 67.58 \times {\text{C}} - 1.57 \times {\text{AB}} + 22.40 \times {\text{AC}} + 33.35 \times {\text{BC}} - 12.06 \times {\text{A}}^{2} - 269.56 \times {\text{B}}^{2} + 45.81 \times {\text{C}}^{2}$$
The results of ANOVA (analysis of variance) are summarized in Additional file 1: Table S3. The value of \(R^{2}\) is 0.9833, indicating that the model accounts for 98.33% of the variability in the response and that the equation is suitable for representing the experimental data. Compared with the Plackett–Burman design results, the Box–Behnken design results further confirmed the significant negative effects of tryptone and yeast extract on SrtA production (both P values <0.01), while the effect of glycerol was not apparent (P value >0.05). The interactions of the three factors were further analyzed, and the 3D surface plots are shown in Fig. 4.
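As an illustration (not part of the original study), the fitted second-order model above can be re-examined numerically with NumPy/SciPy. The coded-to-actual conversion below assumes the usual Box–Behnken coding with the low and high levels of 0.5 and 4.0 g/L given above (centre point 2.25 g/L); because the published coefficients are rounded and Design Expert may work with actual rather than coded units, the numerical optimum need not coincide exactly with the prediction quoted in the next paragraph.

```python
import numpy as np
from scipy.optimize import minimize

def activity(x):
    """Fitted second-order model in coded variables A, B, C (see equation above)."""
    A, B, C = x
    return (1479.80 + 18.11*A - 265.51*B - 67.58*C
            - 1.57*A*B + 22.40*A*C + 33.35*B*C
            - 12.06*A**2 - 269.56*B**2 + 45.81*C**2)

# Assumed coded-to-actual mapping: 0.5 g/L <-> -1 and 4.0 g/L <-> +1
centre, half_range = 2.25, 1.75
def to_coded(conc):   return (np.asarray(conc) - centre) / half_range
def to_actual(coded): return centre + half_range * np.asarray(coded)

# 1) Evaluate the model at the optimum quoted below (0.48, 1.37, 0.51 g/L)
quoted = to_coded([0.48, 1.37, 0.51])
print("model value at quoted optimum (U/mg DCW):", round(activity(quoted), 1))

# 2) Maximize the model over the coded cube [-1, 1]^3
res = minimize(lambda x: -activity(x), x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("numerical optimum (g/L):", np.round(to_actual(res.x), 2),
      "predicted activity (U/mg DCW):", round(activity(res.x), 1))
```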
Based on this model equation, the maximum SrtA production (1674 U/mg DCW predicted) could be obtained at 0.48 g/L glycerol, 1.37 g/L tryptone, and 0.51 g/L yeast extract.

The response 3D surface plots of the three significant factors: a glycerol with tryptone; b glycerol with yeast extract; c tryptone with yeast extract. The response 3D surface plots were obtained directly using the Design Expert 7.0 software.

To verify this model, SrtA expression was performed in a 7-L fermentor under the predicted optimal medium and conditions. The enzyme activity and yield of SrtA were 2458.4 ± 115.9 U/mg DCW and 232.4 ± 21.1 mg/L, respectively. Compared with the control (LB medium) and initial conditions (37 °C for 4 h), the enzyme activity and yield of SrtA were increased by 5.8- and 4.5-fold, respectively (Fig. 5a, b; Table 3). To the best of our knowledge, this is the highest yield of SrtA expression reported so far, a threefold increase over the highest production of SrtA in the literature (76.9 mg/L) (Naik et al. 2006).

Validation of the optimal conditions for SrtA expression. The LB medium was used as the control at 37 °C for a 4-h fermentation, while the optimized medium (0.48 g/L glycerol, 1.37 g/L tryptone, 0.51 g/L yeast extract, MOPS 0.5 g/L, PBS buffer 180 mL/L) was run at 30 °C for 8 h. a The SrtA activity in the control and optimized media at the 7-L fermentor level; b the concentration of purified SrtA in the control and optimized media at the 7-L fermentor level

Table 3 The enzyme activity and yield of SrtA obtained under the optimized conditions (optimized medium, 30 °C for 8 h) and initial conditions (LB medium, 37 °C for 4 h) in shaking-flask fermentation and at the 7-L fermentor level

The enzymatic characterizations of SrtA
SrtA-mediated ligation has been widely used in site-specific protein modification in recent years. However, to our knowledge, the enzymatic characterizations of SrtA have been explored only incompletely. These parameters are particularly important for the application of SrtA in ligation reactions. The optimal pH of SrtA activity was measured over a pH range of 3–11 using the FRET substrate Dabcyl-QALPETGEE-Edans. SrtA retained good activity over a pH range of 7.0–9.0 and displayed the optimal activity at pH 8.0 (Fig. 6a). For thermostability, SrtA retained 50% of its activity over a range of 20–60 °C and the optimal temperature was 35 °C (Fig. 6b). SrtA is a Ca2+-dependent enzyme: although Ca2+ has been proven to be necessary for SrtA activity and 5–10 mM Ca2+ was added in previous SrtA-catalyzed reactions (Wu et al. 2013), the effects of other metal ions (Co2+, Cu2+, Fe2+, Mg2+, Mn2+, Ni2+, and Zn2+) on the catalytic efficiency of SrtA and the most suitable concentration of Ca2+ for SrtA activity were unknown. As shown in Fig. 7, Ca2+ could enhance the activity of SrtA dramatically, which is consistent with the literature data. The inhibitory effect of the other metal ions on SrtA, in decreasing order, is Cu2+>Zn2+>Ni2+>Co2+>Fe2+>Mg2+>Mn2+. As for the optimal concentration of Ca2+, SrtA activity was positively correlated with the calcium concentration up to a point, and the best concentration of Ca2+ for SrtA activity is 30 mM (Fig. 8).
Finally, organic solvents are frequently used as co-solvents to help dissolve substrates in SrtA-mediated reactions (Madej et al. 2012). The tolerance of SrtA to water-soluble organic solvents was therefore examined. As shown in Fig. 9, SrtA is tolerant to 10% methanol, ethanol, acetonitrile, and acetone, with enzyme activities remaining at 72, 59, 70 and 73%, respectively; SrtA lost most of its activity when the concentrations of these solvents were increased to 30%. However, SrtA tolerates dimethyl sulfoxide (DMSO) well, up to concentrations as high as 30%, without any obvious activity loss; 40% DMSO decreased the enzyme activity to 70%, and 50% DMSO was detrimental to its activity. Fig. 6 (caption) The optimal pH and temperature for SrtA. a Effect of pH on SrtA activity over a pH range of 3–11; b effect of temperature on SrtA activity at various temperatures ranging from 20 to 80 °C. Fig. 7 (caption) Effects of metal ions (5 mM) on SrtA activity. The enzyme assays were performed in the absence (control) and presence of 5 mM of the various metal ions Ca2+, Co2+, Cu2+, Fe2+, Mg2+, Mn2+, Ni2+, and Zn2+. Fig. 8 (caption) Effect of Ca2+ concentration on SrtA activity in the range of 1–100 mM Ca2+. Fig. 9 (caption) Effects of organic solvents on SrtA activity in the presence of 10–50% of methanol, ethanol, acetonitrile, acetone, and DMSO.
In summary, high-level expression of the enzyme SrtA was achieved in this study by a combination of single-factor optimization and response surface methodology. Applying the optimized medium (0.48 g/L glycerol, 1.37 g/L tryptone, 0.51 g/L yeast extract, MOPS 0.5 g/L, and PBS buffer 180 mL/L) and conditions (30 °C for 8 h), the enzyme activity and yield of SrtA reached up to 2458.4 ± 115.9 U/mg DCW and 232.4 ± 21.1 mg/L, respectively. This formulation has potential for the production of SrtA on an industrial scale. In addition, the enzymatic characteristics of SrtA were examined in detail, which will provide a useful guide for its future application.
Beerli RR, Hell T, Merkel AS, Grawunder U (2015) Sortase enzyme-mediated generation of site-specifically conjugated antibody drug conjugates with high in vitro and in vivo potency. PLoS ONE 10:131177 Bentley ML, Lamb EC, McCafferty DG (2008) Mutagenesis studies of substrate recognition and catalysis in the sortase A transpeptidase from Staphylococcus aureus. J Biol Chem 283:14762–14771 Buchanan RL, Klawitter LA (1992) The effect of incubation temperature, initial pH, and sodium chloride on the growth kinetics of Escherichia coli O157:H7. Food Microbiol 9:185–196 Chan L, Cross HF, She JK, Cavalli G, Martins HF, Neylon C (2007) Covalent attachment of proteins to solid supports and surfaces via sortase-mediated ligation. PLoS ONE 2:e1164 Glasgow JE, Salit ML, Cochran JR (2016) In vivo site-specific protein tagging with diverse amines using an engineered sortase variant. J Am Chem Soc 138:7496–7499 Guo X, Wang Q, Swarts BM, Guo Z (2009) Sortase-catalyzed peptide-glycosylphosphatidylinositol analogue ligation. J Am Chem Soc 131:9878–9879 Hirakawa H, Ishikawa S, Nagamune T (2012) Design of Ca2+-independent Staphylococcus aureus sortase A mutants. Biotechnol Bioeng 109:2955–2961 Ilangovan U, Ton-That H, Iwahara J, Schneewind O, Clubb RT (2001) Structure of sortase, the transpeptidase that anchors proteins to the cell wall of Staphylococcus aureus.
Proc Natl Acad Sci USA 98:6056–6061 Kim SW, Chang IM, Oh KB (2002) Inhibition of the bacterial surface protein anchoring transpeptidase sortase by medicinal plants. Biosci Biotechnol Biochem 66:2751–2754 Kruger RG, Dostal P, McCafferty DG (2004) Development of a high-performance liquid chromatography assay and revision of kinetic parameters for the Staphylococcus aureus sortase transpeptidase SrtA. Anal Biochem 326:42–48 Lee C, Sun WJ, Burgess BW, Junker BH, Reddy J, Buckland BC et al (1997) Process optimization for large-scale production of TGF-alpha-PE40 in recombinant Escherichia coli: effect of medium composition and induction timing on protein expression. J Ind Microbiol Biotechnol 18:260–266 Lee KY, Shin DS, Yoon JM, Heonjoong K, Oh KB (2002) Expression of sortase, a transpeptidase for cell wall sorting reaction, from Staphylococcus aureus ATCC 6538p in Escherichia coli. J Microbiol Biotechnol 12:530–533 Li Z, Nimtz M, Rinas U (2014) The metabolic potential of Escherichia coli BL21 in defined and rich medium. Microb Cell Fact 13:45 Madej MP, Coia G, Williams CC, Caine JM, Pearce LA, Attwood R et al (2012) Engineering of an anti-epidermal growth factor receptor antibody to single chain format and labeling by sortase A-mediated protein ligation. Biotechnol Bioeng 109:1461–1470 Matsushita T, Sadamoto R, Ohyabu N, Nakata H, Fumoto M, Fujitani N et al (2009) Functional neoglycopeptides: synthesis and characterization of a new class of MUC1 glycoprotein models having core 2-based O-glycan and complex-type N-glycan chains. Biochemistry 48:11117–11133 Mazmanian SK, Liu G, Ton-That H, Schneewind O (1999) Staphylococcus aureus sortase, an enzyme that anchors surface proteins to the cell wall. Science 285:760–763 Naik MT, Suree N, Ilangovan U, Liew CK, Thieu W, Campbell DO et al (2006) Staphylococcus aureus Sortase A transpeptidase. Calcium promotes sorting signal binding by altering the mobility and structure of an active site loop. J Biol Chem 281:1817–1826 Papaneophytou CP, Kontopidis G (2014) Statistical approaches to maximize recombinant protein expression in Escherichia coli: a general review. Protein Expr Purif 94:22–32 Perry AM, Ton-That H, Mazmanian SK, Schneewind O (2002a) Anchoring of surface proteins to the cell wall of Staphylococcus aureus. III. Lipid II is an in vivo peptidoglycan substrate for sortase-catalyzed surface protein anchoring. J Biol Chem 277:16241–16248 Perry AM, Ton-That H, Mazmanian SK, Schneewind O (2002b) Anchoring of surface proteins to the cell wall of Staphylococcus aureus. III. Lipid II is an in vivo peptidoglycan substrate for sortase-catalyzed surface protein anchoring. J Biol Chem 277:16241–16248 Scott CJ, McDowell A, Martin SL, Lynas JF, Vandenbroeck K, Walker B (2002) Irreversible inhibition of the bacterial cysteine protease-transpeptidase sortase (SrtA) by substrate-derived affinity labels. Biochem J 366:953–958 Tanaka T, Yamamoto T, Tsukiji S, Nagamune T (2008) Site-specific protein modification on living cells catalyzed by Sortase. ChemBioChem 9:802–807 Ton-That H, Liu G, Mazmanian SK, Faull KF, Schneewind O (1999) Purification and characterization of sortase, the transpeptidase that cleaves surface proteins of Staphylococcus aureus at the LPXTG motif. Proc Natl Acad Sci USA 96:12424–12429 Tseng CL, Leng CH (2012) Influence of medium components on the expression of recombinant lipoproteins in Escherichia coli. Appl Microbiol Biotechnol 93:1539–1552 Vincentelli R, Romier C (2013) Expression in Escherichia coli: becoming faster and more complex. 
Curr Opin Struct Biol 23:326–334 Voloshchuk N, Liang D, Liang JF (2015) Sortase A mediated protein modifications and peptide conjugations. Curr Drug Disc Technol 12:205–213 Witte MD, Cragnolini JJ, Dougan SK, Yoder NC, Popp MW, Ploegh HL (2012) Preparation of unnatural N-to-N and C-to-C protein fusions. Proc Natl Acad Sci USA 109:11993–11998 Wu Z, Guo X, Wang Q, Swarts BM, Guo Z (2010) Sortase A-catalyzed transpeptidation of glycosylphosphatidylinositol derivatives for chemoenzymatic synthesis of GPI-anchored proteins. J Am Chem Soc 132:1567–1571 Wu Z, Guo X, Guo Z (2011) Sortase A-catalyzed peptide cyclization for the synthesis of macrocyclic peptides and glycopeptides. Chem Commun 47:9218–9220 Wu Z, Guo X, Gao J, Guo Z (2013) Sortase A-mediated chemoenzymatic synthesis of complex glycosylphosphatidylinositol-anchored protein. Chem Commun 49:11689–11691
ZW and XZ designed the experiment and analyzed the data; HH and XW performed the experiments. All the authors read and approved the final manuscript. All authors have read and approved its submission to Bioresources and Bioprocessing. There is no conflict of interest of any author in relation to the submission. This work was supported by the National Natural Science Foundation of China (21472070), the Project for Jiangsu Scientific and Technological Innovation Team, the Fund for Jiangsu Distinguished Professorship Program, and the State Key Laboratory of Natural and Biomimetic Drugs (K20140216). The project was also funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions, the 111 Project (No. 111-2-06), and the Jiangsu province "Collaborative Innovation Center for Advanced Industrial Fermentation" industry development program. The Key Laboratory of Carbohydrate Chemistry and Biotechnology, Ministry of Education, School of Biotechnology, Jiangnan University, 1800 Lihu Road, Wuxi, China: Zhimeng Wu, Haofei Hong, Xinrui Zhao & Xun Wang. State Key Laboratory of Natural and Biomimetic Drugs, Peking University, Beijing, 100191, China: Zhimeng Wu. Correspondence to Zhimeng Wu. Additional tables. Wu, Z., Hong, H., Zhao, X. et al. Efficient expression of sortase A from Staphylococcus aureus in Escherichia coli and its enzymatic characterizations. Bioresour. Bioprocess. 4, 13 (2017). https://doi.org/10.1186/s40643-017-0143-y Keywords: Sortase A; 7-L fermentor; Enzymatic characterizations
Autoregressive transitional ordinal model to test for treatment effect in neurological trials with complex endpoints
Lorenzo G. Tanadini1, John D. Steeves2, Armin Curt3 & Torsten Hothorn1
A number of potential therapeutic approaches for neurological disorders have failed to provide convincing evidence of efficacy, prompting pharmaceutical and health companies to discontinue their involvement in drug development. Limitations in the statistical analysis of complex endpoints have very likely had a negative impact on the translational process. We propose a transitional ordinal model with an autoregressive component to overcome previous limitations in the analysis of Upper Extremity Motor Scores, a relevant endpoint in the field of Spinal Cord Injury. Statistical power and clinical interpretation of estimated treatment effects of the proposed model were compared to routinely employed approaches in a large simulation study of two-arm randomized clinical trials. A revisitation of a key historical trial provides a further comparison between the different analysis approaches. The proposed model outperformed all other approaches in virtually all simulation settings, achieving on average 14 % higher statistical power than the respective second-best performing approach (range: -1 %, +34 %). Only the transitional model allows treatment effect estimates to be interpreted as conditional odds ratios, providing clear interpretation and visualization. The proposed model takes into account the complex ordinal nature of the endpoint under investigation and explicitly accounts for relevant prognostic factors such as lesion level and baseline information. Superior statistical power, combined with clear clinical interpretation of estimated treatment effects and widespread availability in commercial software, are strong arguments for clinicians and trial scientists to adopt, and further extend, the proposed approach.
Neurological research is responsible for the investigation of many devastating disorders such as stroke, Alzheimer's and Parkinson's diseases. In terms of health costs, brain-related disorders are a greater socio-economic burden than cancer, cardiovascular diseases and diabetes combined [1], with yearly costs for European society estimated at almost 400 billion € [2]. Despite several therapeutic approaches [3–6] based on recent discoveries about cellular and molecular processes of degeneration, as well as spontaneous regeneration following injury, pharmaceutical and health companies have been withdrawing from neuroscience, as a number of trials intended to show efficacy of treatments for neurological disorders have failed [7]. In the field of Spinal Cord Injury (SCI), four decades after the first pharmacological treatment of acute injuries [8], the promises of preclinical discoveries have yet to be translated into a standard treatment [9]. To streamline the translational process, the International Campaign for Cures of Spinal Cord Injury Paralysis (ICCP) appointed an international panel in 2007 with the task of reviewing the strengths and weaknesses of clinical trials in spinal cord injury. Their recommendations for the planning and conduct of future trials were condensed into a series of publications [10–13], which strongly influenced the conception of clinical trials thereafter [14].
Nonetheless, the ICCP reviews [10–13] did not solicit the application of the most appropriate and recent statistical techniques available for the analysis of complex SCI trial endpoints, and many clinical trials likewise failed to apply them [15–19]. In fact, virtually all routinely performed clinical assessments in spinal cord injury are measured on ordinal scales, which are characterized by an arbitrary numerical score establishing a ranking of observations. The difference between two consecutive ranks is by no means bound to be equivalent across the range of the scale, preventing standard operations such as addition, and making the use of statistical methods developed for continuous endpoints inappropriate. Despite this, clinical trials designed and powered for a primary ordinal endpoint often resorted to adding several ordinal endpoints to form a single overall summed score, which was in some cases subsequently collapsed to a binary outcome [15–19]. These approaches have been shown to be inappropriate in a number of respects [20]; biased parameter estimates, misleading associations and loss of power are some of the known practical consequences of assuming metric properties for ordinal endpoints [21–23]. In this study, we propose for the first time in SCI a transitional ordinal model with an autoregressive component for testing for a treatment effect on a multivariate ordinal endpoint such as the Upper Extremity Motor Scores (UEMS), while comparing it to current analysis approaches in terms of statistical power and clinical interpretation of treatment effect estimates. The objective was to propose a new approach to the analysis of complex ordinal endpoints in neurological clinical trials, and to provide statistical power comparisons of procedures for treatment effect testing. Two-armed Randomized Clinical Trials (RCT) with specific levels of experimental conditions were generated and analysed. Current approaches to the analysis of multivariate ordinal endpoints such as the Upper Extremity Motor Scores (UEMS) were compared to the proposed autoregressive transitional ordinal model. The proposed approach models the transition, i.e. the change in UEMS distribution, from trial baseline to trial end. The autoregressive term of the model describes the anatomical structure of the spinal cord by postulating a direct dependency between contiguous segments. Data source and trial endpoint: The data utilized in this study were extracted from the European Multicenter Study about Spinal Cord Injury (EMSCI, ClinicalTrials.gov Identifier: NCT01571531, www.emsci.org). EMSCI tracks the functional and neurological recovery of patients during the first year after spinal cord injury in a highly standardized manner. All patients gave written informed consent. The ethical committee of the Canton of Zurich, Switzerland, has previously approved the EMSCI project, upon which this project is based, and the approval is also valid for any statistical analysis/re-analysis. To reflect the time frame of a possible future clinical trial, we considered the baseline (within 2 weeks after injury, t=1) and one follow-up (6 months after injury, t=2) examination. For this simulation study, we extracted and utilized records of N=405 patients with a Motor Level (ML) defined between spinal segments C5-T1 (see Additional file 1 for details) and with available baseline information. The trial endpoint considered is the Upper Extremity Motor Scores.
UEMS represents a subset of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) [24] and describes the muscle contraction force for 10 key muscles on the arms and hands (5 on each body side), each one being rated on a 6-point ordinal scale (0: total paralysis, through 5: active movement against full resistance; see Additional file 1 for details). Accordingly, Y i,m,t is the muscle contraction score for patient i (i=1,…,n) and key muscle m (m=1,…,10) measured at time point t (t=1,2). Each key muscle Y i,m,t is therefore an ordinal variable with k=6 levels 0<1<…<5, and UEMS is a multivariate ordinal endpoint. The chosen endpoint is particularly relevant in SCI. A change in total UEMS over the trial period has been employed repeatedly in clinical trials [15, 19] and has been suggested to correlate with changes in activities of daily living that rely on recovery of upper extremity function [25]. RCT simulation: An autoregressive transitional ordinal model of the form: $$ {}\begin{aligned} logit \left[ P(y_{i,m,2} \leq k)\right] &= \alpha_{j} + \beta_{\text{lev}}\hspace{1mm}x_{\text{lev},i,m,1} + \beta_{\text{base}} \hspace{1mm}y_{\text{base},i,m,1}\\ & \quad + \beta_{\text{auto}} \hspace{1mm}y_{\text{auto},i,m-1,2} \end{aligned} $$ was fitted to the EMSCI data. α j are the k−1=5 intercept parameters, xlev is a 10-level nominal factor denoting the combination of Motor Level and the distance from the Motor Level to the key muscle m being analysed, expressed as the number of key muscles along the spine (reference: Motor Level: cervical C5, distance: -1, i.e. the first muscle below the level), ybase,i,m,1 is the ordered factor for the baseline motor score of key muscle m, and yauto,i,m−1,2 is the ordered factor for the motor score of the key muscle just above the one being analysed at t=2. The autoregressive term of the model describes the anatomical structure of the spinal cord, and postulates that the motor score of a given key muscle depends on the motor score of the key muscle just rostral to it. As a consequence, the observed pattern of lower motor scores with increasing distance from the ML is reproduced. In accordance with the above description, only key muscle scores below the Motor Level were simulated and analysed according to Eq. 1. Motor scores yi,m,2 for key muscles at the ML were multinomially sampled from the corresponding observed EMSCI frequencies at the Motor Level, while motor scores yi,m,2 for key muscles above the ML were given the maximal score. The parameter estimates recovered from the model specified in Eq. 1 describe the spontaneous neurological recovery of patients under standard of care and were subsequently used to simulate participants in the control arm of the trial. From the EMSCI data we also computed the observed frequencies of Motor Level combinations for the left and right body side at baseline. Given that patients having both the left and right ML at the lowest UEMS key muscle T1 are very rare (3 % in our EMSCI sample) and do not contribute to the analysis (no key muscles in the UEMS below the ML), they were not included in the simulation. Equation 1 thus models the spontaneous neurological recovery of patients under standard of care.
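A model of this form can be fitted with standard proportional odds software. The following is a minimal R sketch, assuming a hypothetical long-format data frame emsci with one row per patient and key muscle below the Motor Level; the column names (y2, lev, y1, y_up2) are placeholders, and this is not the published simulation code.

library(MASS)  # provides polr() for proportional odds models

# Hypothetical long-format data: one row per patient and key muscle below the
# Motor Level, with
#   y2    ordered factor, motor score at 6 months (0 < 1 < ... < 5)
#   lev   factor, Motor Level combined with distance to the key muscle
#   y1    ordered factor, baseline motor score of the same key muscle
#   y_up2 ordered factor, 6-month score of the key muscle just rostral
fit0 <- polr(y2 ~ lev + y1 + y_up2, data = emsci, Hess = TRUE)
summary(fit0)
# Note: polr() parameterises logit P(Y <= k) = zeta_k - x'beta, so the signs of
# the reported coefficients may be reversed relative to Eq. 1 as written.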
We introduced an additional parameter βtrt representing a postulated treatment effect, leading to an autoregressive transitional ordinal model of the form: $$ {}\begin{aligned} logit \left[ P(y_{i,m,2} \leq k)\right] &= \alpha_{j} + \beta_{\text{lev}}\hspace{1mm}x_{\text{lev},i,m,1} + \beta_{\text{base}} \, y_{\text{base},i,m,1}\\ & \quad + \beta_{\text{auto}} \hspace{1mm}y_{\text{auto},i,m-1,2} + \beta_{\text{trt}}\hspace{1mm}x_{\text{trt},i,1} \end{aligned} $$ As previously defined, α j are the k−1=5 intercept parameters, x lev is a 10-level nominal factor denoting the combination of Motor Level and the distance from the Motor Level to the key muscle m being analysed, expressed as the number of key muscles along the spine (reference: Motor Level: C5, distance: -1), ybase,i,m,1 is the ordered factor for the baseline motor score of key muscle m, yauto,i,m−1,2 is the ordered factor for the motor score of the key muscle just above the one being analysed at t=2, and xtrt is an indicator for the treatment arm with placebo as reference. The autoregressive term of the model describes the anatomical structure of the spinal cord, and postulates that the motor score of a given key muscle depends on the motor score of the key muscle just rostral to it. As a consequence, the observed pattern of lower motor scores with increasing distance from the ML is reproduced. Besides the postulated treatment effect βtrt, which was set to different values depending on the simulation setting, all other parameters in Eq. 2 were kept equal to the estimates recovered by fitting Eq. 1 to the EMSCI data. We thus simulated randomized clinical trials with two treatment arms and specific levels of experimental conditions. To cover possible SCI early-phase as well as phase III settings, we generated total trial sample sizes of 50, 75, 100, 125, 150, 175, and 200 participants. To our knowledge, there is to date no publication on the magnitude of possible treatment effects for UEMS which could have guided us in defining more tailored scenarios. We therefore postulated a rather wide range of six possible treatment effects, from no treatment effect (βtrt=0.0= log(1)) to a strong treatment effect (βtrt=0.4055= log(1.5)), in steps of 0.1 on the odds ratio scale. A total of 42 scenarios resulted from simulating all possible combinations of the 7 trial sample sizes and the 6 possible treatment effects considered. Being a proportional odds model, the exponentiated βtrt can be interpreted as the conditional Odds Ratio (OR) between trial arms, meaning that, conditional on all other prognostic factors being equal, it specifies the ratio of the odds for a key muscle to achieve a motor score of less than or equal to k in the treatment arm divided by the same odds in the control arm. The OR is a statistically sensible and clinically widely accepted way of quantifying effects of categorical variables. The 42 trial scenarios resulting from all combinations of 7 trial sample sizes and 6 possible treatment effects were simulated in the following way: (1) right and left Motor Levels for the hypothesized number of trial participants were drawn from a multinomial distribution with category probabilities set to the corresponding observed EMSCI frequencies; (2) baseline UEMS for each trial participant were sampled with replacement from all EMSCI patients having the same left-right ML constellation; (3) each simulated participant was randomly allocated to either the control or the treatment arm with a 1:1 allocation scheme;
(4) UEMS at six months for the key muscle at the ML were drawn from a multinomial distribution with category probabilities set to the corresponding observed EMSCI frequencies; (5) UEMS at six months below the ML were simulated using the previously fitted model for spontaneous recovery (Eq. 1) for participants in the control arm, and the same model with the addition of a postulated treatment effect (Eq. 2) for participants in the treatment arm of the trial. Each one of the 42 trial scenarios was replicated 1000 times. A battery of 6 different tests for treatment effect (see the "Endpoint analysis approaches" section below) was applied to each simulated trial. The statistical power, P(reject H0 | H1 is true), was estimated as the fraction of significant tests for treatment effect at the nominal level 0.05 among the 1000 replications. Endpoint analysis approaches: In neurology in general, and SCI in particular, very common approaches are to analyse UEMS or similar endpoints as the total sum of all motor scores \(Y^{*}_{i,2} = \sum _{m=1}^{10} Y_{i,m,2}\) or as the difference between two time points \(Y^{**}_{i} = \sum _{m=1}^{10} Y_{i,m,2} - Y_{i,m,1}\). Accordingly, the treatment effect for UEMS was tested with: t-test: t-test for \( Y^{*}_{i,2}\), comparing mean total UEMS in the two treatment groups. t-test delta: t-test for \( Y^{**}_{i}\), comparing the mean difference in total UEMS from baseline to the end of the trial between the two treatment groups. ANCOVA: analysis of covariance for \(Y^{*}_{i,2}\), comparing mean total UEMS in the two treatment groups with baseline total UEMS \( Y^{*}_{i,1}\) as a continuous controlling variable. Even though this is not commonly done in SCI, we considered it necessary that the Motor Level be incorporated into the analysis of motor function. In fact, its importance has been reported before [26, 27]. We therefore applied a conditional test of independence between outcome and treatment arm which was stratified according to the Motor Level of each trial participant. We predicted that this approach would perform better than the previous, non-stratified ones, and explored the possibility of utilising it as an "ad hoc" approach for the analysis of UEMS. Accordingly, the treatment effect for UEMS was tested with: i-test: stratified independence test for \(Y^{*}_{i,2}\), comparing total UEMS in the two treatment groups. i-test delta: stratified independence test for \(Y^{**}_{i}\), comparing the difference in total UEMS from baseline to the end of the trial between the two treatment groups. Both tests are implemented in the R add-on package coin [28, 29]. The last approach for the analysis of UEMS in an RCT is a model that takes into account the ordinal nature of each key muscle and explicitly incorporates baseline UEMS as well as the ML into the analysis: transitional: transitional ordinal model for Yi,m,2 of the form specified in Eq. 2, comparing the shift in motor score probabilities associated with treatment. The proposed model is a proportional odds model with an autoregressive component. The latter takes into account the spatial orientation of the key muscles along the spinal cord by postulating a direct dependency of adjacent spinal segments. As a consequence, the observed pattern of lower motor scores with increasing distance from the ML is reproduced. This model was fitted using the function polr from the R add-on package MASS [30, 31]. The parameter βtrt, which quantifies the treatment effect on the link scale, is the focus of the proposed model.
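As a concrete illustration, the transitional approach can be implemented along the following lines in R; the data frame trial and its columns (y2, lev, y1, y_up2, arm) are hypothetical placeholders in the spirit of the earlier sketch, not the authors' published code.

library(MASS)

# Transitional ordinal model with treatment arm (Eq. 2), fitted to a
# hypothetical long-format trial data set; 'arm' has reference level "placebo".
fit_trt <- polr(y2 ~ lev + y1 + y_up2 + arm, data = trial, Hess = TRUE)

beta_trt <- coef(fit_trt)["armtreatment"]  # assumes arm levels placebo/treatment
exp(beta_trt)  # conditional odds ratio between trial arms
# (again, mind polr's sign convention when comparing with Eq. 2 as written)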
The significance of βtrt was tested by a permutation test [32, 33], in which the distribution of the test statistic under H0 (no treatment effect) was obtained by refitting the same model 1000 times after randomly rearranging the labels for arm allocation. This type of statistical significance test does not rely on any distributional assumption. In addition, by permuting the trial arm allocation at the participant level, we accounted for the hierarchical structure of the data analysed, where multiple key muscles are measured on the same participant. All computations were performed in the R system for statistical computing [34], version 3.1.3. The R code implementing the simulation study is available online (doi: http://dx.doi.org/10.5281/zenodo.47600). Revisiting a key SCI trial: As a practical application, we analysed a subset of the data collected during a past clinical trial. The Sygen® trial recruited N =760 SCI participants in 28 centres in North America over a 5-year period between 1992 and 1997 [17, 35, 36]. Sygen® is a naturally occurring compound in cell membranes which has been associated with neuroprotective and regenerative effects in a number of experimental models and early-phase human trials. The trial is an example where a promising therapeutic approach was finally abandoned, as no significant treatment effect could be assessed on the primary endpoint despite a considerable final sample size (N =760). The primary endpoint assessed the overall neurological status of a patient and was defined as a dichotomization derived from an ordinal scale (see [36] for the exact definition). The primary endpoint was analysed by means of logistic regression. Several ancillary analyses were performed and mostly favoured the treatment arm, even though the differences were not always statistically significant. To our knowledge, no analysis at the level of the motor scores of the individual upper extremity key muscles (UEMS), as reported here, has been published. We revisited the trial by testing for a treatment effect on the UEMS with all six approaches outlined before (see the "Endpoint analysis approaches" section). The proposed autoregressive transitional ordinal model (Eq. 2) can easily be fitted as a proportional odds model to the segment-wise UEMS data in long format. The autoregressive component yi,m−1,2 can be incorporated by shifting the six-month, muscle-wise UEMS entries so as to be aligned with the key muscle yi,m,2 just caudal to them. To reflect our simulation study, we selected participants with a ML between C5 and C8 (T1 was discarded, because there is no key muscle caudal to the ML on the UEMS), and considered only patients treated with the low dosage (the original trial had two treatment doses, the higher of which was abandoned during the study). After patient selection, we analysed a final sample of N =284 participants, of which 127 (45 %) were in the control arm. This analysis is intended to give an example of the application of the proposed transitional ordinal model and should not be taken as a definitive conclusion about the value or outcome of the trial. Given the strongly selected patient sample utilised, the different endpoint analysed and the different scope of our analysis, generalizations of this type cannot be drawn. For the purpose of this study, we simulated 1000 times each one of the 42 different combinations of trial size and postulated treatment effect.
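A participant-level permutation test of the kind described above, and the subsequent power estimation for one simulated scenario, could be sketched in R as follows (hypothetical objects trial, fit_trt and column id continuing the earlier sketches; the published Zenodo code should be consulted for the exact implementation):

set.seed(1)
beta_obs <- coef(fit_trt)["armtreatment"]

# Permute arm labels between participants (all key muscles of a patient keep
# the same label), refit, and collect the permuted treatment coefficients.
ids       <- unique(trial$id)
arm_by_id <- trial$arm[match(ids, trial$id)]
perm <- replicate(1000, {
  d <- trial
  d$arm <- setNames(sample(arm_by_id), ids)[as.character(d$id)]
  coef(polr(y2 ~ lev + y1 + y_up2 + arm, data = d))["armtreatment"]
})
p_value <- mean(abs(perm) >= abs(beta_obs))

# Power for one scenario: fraction of significant replications, here from a
# hypothetical vector 'pvals' of 1000 per-replication p-values, together with
# a Wilson confidence interval for that proportion (cf. Table 1).
power <- mean(pvals < 0.05)
n <- length(pvals); z <- qnorm(0.975)
wilson_ci <- (power + z^2 / (2 * n) +
              c(-1, 1) * z * sqrt(power * (1 - power) / n + z^2 / (4 * n^2))) /
             (1 + z^2 / n)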
Statistical power, which is defined as the probability of rejecting the H0 of no treatment effect when there is in fact a treatment effect, was estimated as the fraction of these 1000 iterations in which the test for treatment effect was significant at the 0.05 level. Table 1 reports the statistical power of all treatment testing approaches for all simulated settings. Figure 1 shows the statistical power of all six approaches for the intermediate simulated treatment effect. Figure 2 displays graphically the statistical power of all treatment testing approaches for all simulated settings. The nominal level 0.05 was maintained by all approaches when no treatment effect was introduced in the simulation, making further comparisons between the different approaches straightforward. Fig. 1 (caption) Comparison of statistical power for the median treatment effect. The statistical power of all six approaches for treatment effect testing is plotted against total trial size (1:1 randomization) for the median simulated treatment effect βtrt=0.2624= log(1.3). Fig. 2 (caption) Contour plots of statistical power for all simulation settings. The statistical power of all testing approaches is represented using a loess smooth approximation. Contour curves visualize combinations of trial size and treatment effect with equivalent statistical power, which is reported as a numerical value. The colour key differentiates regions of low statistical power (violet) from regions of high statistical power (blue). Table 1 (caption) Statistical power for all simulation settings. Point estimates, as well as Wilson confidence intervals, are reported for all analysis approaches. For the smallest treatment effect βtrt=0.0953= log(1.1), all six tests for treatment effect showed low power, never exceeding P(reject H0|H1 is true) ≤0.135. The transitional ordinal model was nonetheless superior to all other approaches in virtually every trial size setting, its power point estimates averaging 2.3 % higher than the respective second best-performing approach. Already at the next higher simulated treatment effect, βtrt=0.1823= log(1.2), the transitional ordinal model showed roughly twice as much power as the second-best performing approach, though it did not exceed P(reject H0|H1 is true) ≤0.36. This held for all simulation settings except the smallest sample size. Statistical power point estimates for the transitional ordinal model were on average 10.3 % higher than those of the respective second best-performing approach. In the settings with the median simulated treatment effect, βtrt=0.2624= log(1.3), shown in Fig. 1, the transitional ordinal model was superior for all trial sizes. Power point estimates for the proposed model were on average 19.4 % higher than those of the respective second best-performing approach, with this difference in performance increasing with increasing trial size. With the simulated treatment effect of βtrt=0.3365= log(1.4), the transitional ordinal model had, on average, 26.3 % higher statistical power than the respective second best-performing approach, with this difference again increasing with increasing trial size. For the largest simulated treatment effect of βtrt=0.4055= log(1.5), the transitional ordinal model had, on average, 27.9 % higher statistical power than the respective second best-performing approach. The difference in performance increased strongly up to a trial size of N =100, but then declined with larger sizes.
Overall, despite a comparably poor performance of all approaches for small simulated treatment effects, a stable pattern in the ranking of performance emerged: the proposed transitional ordinal approach provided the best power results in virtually all settings. ANCOVA was usually the second-best approach, closely followed by the independence test on the difference of UEMS from baseline \( Y^{**}_{i}\), the similarly performing t-test on the difference of UEMS from baseline \( Y^{**}_{i}\) and the independence test on the UEMS after six months \( Y^{*}_{i,2}\). The t-test on the UEMS after six months \( Y^{*}_{i,2}\) performed worst in almost all settings. We analysed a subset of the data collected during the Sygen® trial [17, 35, 36]. To our knowledge, no analysis of these data at the level of the motor scores of the upper extremity key muscles (UEMS), as reported here, has been performed. The results of the six analysis approaches (see the "Endpoint analysis approaches" section) are reported here: t-test: no significant difference in the estimated means \(\widehat {\mu _{\text {ctrl}}}=30.370\) and \(\widehat {\mu _{\text {trt}}}=30.170\) of UEMS at 6 months between trial arms: t(275)=0.130, p-value=0.896. t-test delta: no significant difference in the estimated mean changes \(\widehat {\mu _{\text {ctrl}}}=11.978\) and \(\widehat {\mu _{\text {trt}}}=10.540\) in UEMS between trial arms: t(259)=1.239, p-value=0.216. ANCOVA: no significant difference in the estimated means of UEMS at 6 months between trial arms, controlling for baseline UEMS: \(\widehat {\beta _{\text {trt}}}=-1.165\), p-value=0.307. i-test: no significant dependency between UEMS at 6 months and treatment arm: Z=0.553, p-value=0.58. i-test delta: no significant dependency between change in UEMS and treatment arm: Z=1.525, p-value=0.127. transitional: no significant shift in motor score probabilities associated with treatment arm: \(\widehat {\beta _{\text {trt}}}= -0.197\), p-value=0.207. Summarizing, none of the six approaches showed significant results at the nominal level 0.05, but all showed a tendency towards less positive outcomes for patients in the treatment arm. This analysis is intended to give an example of the application of the proposed transitional ordinal model and should not be taken as a definitive conclusion about the value or outcome of the trial.
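The exponentiated estimate is directly interpretable: exp(−0.197) ≈ 0.82, the conditional odds ratio quoted in the Discussion below. From a fitted model of this kind, per-muscle score probabilities for any constellation of prognostic factors can also be predicted and plotted (the basis of Fig. 3); a minimal sketch, reusing the hypothetical objects of the earlier sketches:

exp(coef(fit_trt)["armtreatment"])   # conditional odds ratio, e.g. exp(-0.197) ~ 0.82

# Predicted motor score probabilities (scores 0-5) for one prognostic
# constellation, control versus treatment arm (cf. Fig. 3):
nd <- trial[rep(1, 2), ]             # duplicate one observed covariate constellation
nd$arm <- factor(c("placebo", "treatment"), levels = levels(trial$arm))
probs <- predict(fit_trt, newdata = nd, type = "probs")

barplot(probs, beside = TRUE, names.arg = 0:5,
        legend.text = c("control", "treatment"),
        xlab = "motor score", ylab = "predicted probability")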
It should nonetheless be a requirement, as there are at least two strong assumptions related to the analysis of summed motor scores as a metric endpoint: unidimensionality and equal differences. Unidimensionality refers to the property of several scores to measure a single, common patient's characteristic. While there is some preliminary evidence that unidimensionality holds for UEMS [38], the opposite was reported for both the Functional Independence Measure FIM [39], the Spinal Cord Independence Measure SCIM [40], a situation which is very likely to be found in functional endpoints and Patients Reported Outcomes PRO. Equal differences imply that a unit change in motor scores represent exactly the same clinical change, independently of where the change took place on the scale (e.g. a change from 0 to 1 is assumed to be of the same magnitude as a change from 3 to 4 in motor scores), or of which key muscle are considered (the previous example is assumed to hold even when the changes took place on different key muscles, say e.g. one proximal and one distal from the lesion level). The widely used method of adding up several ordinal endpoints to form a single overall score is therefore generally not valid with regard to the two assumptions exemplified above, and has been repeatedly reported in neurological and related physical functioning settings [39–44]. From a practical point of view, biased parameter estimates, as well as misleading associations and loss of power are some of the known consequences of assuming metric property for ordinal endpoints [21–23]. There is therefore a compelling need to embrace statistical models specifically designed for the analysis of complex ordinal endpoints. The proposed autoregressive transitional ordinal model is the first attempt in SCI to model and analyse a complex endpoint with a regression model which reflects its ordinal nature and takes into account important prognostic factors. The proposed model for the analysis of UEMS in cervical SCI patients outperformed all other approaches in virtually all settings. The sensibly lower statistical power achieved by commonly used approaches, in addition to their implicit assumptions, indicate that their use as default analysis methods in not justified. Contrary to our expectations, a stratification of the t-test based on the Motor Level did not provide a discernible improvement in statistical power (Table 1). In fact, even though blocked independence tests showed a slightly higher power than their corresponding t-tests (Fig. 2), the gain in power was not such that their application as "ad hoc" solution resulted substantiated. In terms of clinical interpretation of treatment effect estimates, we note that by applying the proposed model, the exponentiated treatment effect estimate \(\widehat {\beta _{\text {trt}}}\) can be interpreted as the conditional odds ratio between the treatment and control trial arms, which is a common and accepted way of quantifying treatment effect in the clinical setting. Even when the proportional odds assumption is not fully met, it still provides an interpretable parameter that summarizes the treatment effect over all levels of the outcome [23]. In addition, the transitional model provides motor score probabilities for each combination of prognostic variables, making the direct comparison and visual representation of treated and untreated participants straightforward (see Fig. 3). Visualization of median treatment effect βtrt=0.2624=log(1.3). 
In contrast to all other analysis approaches, the transitional ordinal model makes it possible to graphically represent shifts in motor score distributions for any constellation of relevant prognostic factors, permitting a much more detailed investigation of the treatment effect. As an illustrative example, the distribution of motor score probabilities is shown for participants in the control (left panel) and treatment arm (right panel). Lower scores became less probable, while higher scores became more probable, in the treatment arm. The treatment effect βtrt=0.2624= log(1.3) corresponds to an Odds Ratio of OR=1.3. The specific constellation of prognostic factors represented refers to a C8 key muscle, with a Motor Level C5 (x lev =C5.-3), a baseline motor score of ybase,i,m,1=1, and an autoregressive component yauto,i,m−1,2=3 for the motor score of the key muscle just above the one being reported. In contrast, clear interpretation of the results produced by common approaches is precluded by summed scores of supposedly metric endpoints, providing little insight for trial scientists and clinicians. Importantly, small and possibly localized treatment effects, which are a hallmark of many neurological disorders, can be disentangled using ordinal approaches for motor scores, but become lost in the analysis of summed total scores. Finally, our simulation showed (Table 1) that a statistical power of 80 %, which is a common goal for clinical trial planners, is reached by the ordinal model only for large trial sizes and large postulated treatment effects. As a total trial size of N =200 currently seems to represent the practical upper limit for conducting SCI trials, the statistical detection of an existing treatment effect seems to rely on a rather strong effect. Further improvements of the ordinal model will likely result in lowered requirements for treatment detection. To provide a concrete application of our approach, we analysed a subset of participants of the Sygen® trial [17, 35, 36]. Many ancillary analyses in the original publication were based on t-test and ANCOVA approaches and favoured the treatment group over placebo [17]. In particular, treated participants showed a faster initial recovery than control subjects, who nonetheless caught up at slightly later time points. In the subsample of patients we considered, none of the six approaches was significant at the conventional nominal level 0.05. Nonetheless, all approaches showed a tendency towards a negative effect of treatment on the UEMS, meaning that treated patients showed on average a slightly worse recovery than patients in the control arm. For the ordinal approach in particular, the results imply that the odds for participants in the treatment group of achieving up to a given motor score were only \(\mathrm {e}^{\widehat {\beta _{\text {trt}}}}= 0.82\) times the odds of a participant with similar characteristics in the control arm, indicating a worse recovery for treated patients. The negative estimate of the treatment effect in cervical participants is rather unexpected. The observed imbalance toward more severe lesions in the treatment arm may explain these results at least in part; they might nonetheless be examined more closely to rule out potentially unintended detrimental effects. Nevertheless, we maintain that generalizations of our results to the overall validity of the trial and its compound cannot be drawn. Are summed overall scores not "good enough"?
In our application, all six approaches presented delivered comparable results, namely statistically non-significant negative trends for participants in the treatment arm. One may therefore wonder what the added value of an ordinal approach like the proposed transitional ordinal model is. Briefly, routinely employed approaches based on summed overall scores imply: Unmet assumptions: adding ordinal endpoints to form a single overall score requires equal differences across all ordinal scales as well as unidimensionality. Both assumptions are usually not further investigated [37], but the first can be rejected on medical grounds alone, while the latter does not hold for several SCI endpoints (e.g. FIM [39], SCIM [40]). Flawed inference and estimation: known practical consequences of assuming metric properties for ordinal endpoints are biased parameter estimates and misleading associations [21–23]. Reduced statistical power: small and possibly localised effects are expected to be the hallmark of spinal cord injury rehabilitation strategies. The simulations reported here provide evidence for a much lower capacity of approaches based on summed scores to detect existing treatment effects. Lower power also translates into a higher requirement for trial participants. Unclear interpretation of treatment effect: a clear interpretation of treatment effect estimates as conditional ORs, which can be visualised for each key muscle separately (see Fig. 3), is not possible for summed scores. Limited future extensions: future refinements of routinely employed approaches are strongly limited by the underlying, inappropriate analysis approach. In contrast, ordinal approaches, which are based on a regression framework, easily accommodate extensions (e.g. further prognostic factors, interactions, localised effects). Concluding, from a theoretical point of view, routinely employed approaches have little scientific validity and have been replaced by more rigorous approaches. Even more importantly, they are also potentially misleading in practical terms. Our flexible model therefore represents an improved and pragmatic solution to the analysis of this type of complex ordinal endpoint. Brain Injury: similar issues, similar solutions: We observe that most of the discussion points we raised link to the report by the International Mission on Prognosis and Clinical Trial Design in Traumatic Brain Injury (TBI) [45]. TBI is a related clinical field which faced very similar challenges, mainly related to the heterogeneity of the patient population, and has had a history of clinical testing similar to that of SCI. In fact, TBI also experienced a disappointing progression of clinical testing of treatment interventions in spite of extremely promising pre-clinical data and early-phase trials. Maas et al. [45] reported that a key difficulty has been the inherent heterogeneity of TBI subjects, and that the observed development was due, at least to some extent, to limitations in the trial designs and analyses. Both aspects have also been reported as hallmarks of SCI research.
Summarizing, the TBI Mission solicited the TBI community to [45]: (1) provide details of the major baseline prognostic characteristics; (2) broaden inclusion criteria as much as is compatible with the current understanding of the mechanisms of action of the intervention; (3) incorporate pre-specified covariate adjustment into the statistical analysis; and (4) use an ordinal approach for the statistical analysis. Apart from the first recommendation, which is mainly concerned with the way clinical studies are reported, the remaining three points concern the planning and especially the analysis of clinical trials in TBI, and are implemented in this publication. Selection of patients is based only on the initial Motor Level, which relates to the understanding of motor function. The proposed model (see Eq. 2) both includes the most relevant covariate adjustments, namely baseline motor scores as well as the motor lesion level, and uses an ordinal approach for ordinal data based on the proportional odds model. Latent variable models: an improved, readily available framework: More generally speaking, the statistical foundations of regression models for ordinal endpoints were developed more than 4 decades ago [46–48] and have undergone steady development ever since. There is a huge body of literature pertaining to the analysis of ordinal variables, including Item Response Theory (IRT) and mixed-effects models for ordinal variables [49]. Despite this development, most clinical trials in neurology still rely on outdated approaches [44], corroborating the negative trend of methodological errors related to the analysis of ordinal scales in medical research [50]. The proposed transitional ordinal model (Eq. 2) is an extension of the well-known proportional odds model (e.g. [51]). The latter can be seen as an important special case within the IRT framework, and is closely related to the Rasch model [46]. All these statistical models are generally referred to as latent variable models, because they find application in situations where a set of ordinal variables is seen as a set of indicators of a latent variable. This latent variable is the main interest of the analysis and, although it cannot be measured directly, it can be inferred from the available ordinal variables. The latent variable approach seems both appropriate and appealing for applications in the clinical setting, and the transitional ordinal model proposed here draws a concrete link from SCI to latent variable models. Further extensions of our approach can be tailored to the analysis of other endpoints such as functional assessments and PROs. In fact, the analysis of PROs, and the related trial powering based on Rasch models, have recently received much attention [52, 53]. We believe that the transition from currently employed analysis approaches to more sophisticated models within the readily available framework of latent variable models would represent a great scientific progression for the planning and analysis of complex neurological endpoints. We propose an autoregressive transitional ordinal model for the analysis of a specific SCI endpoint which takes into account the complex ordinal nature of the endpoint under investigation and explicitly accounts for relevant prognostic factors. Superior statistical power in virtually all settings, combined with a clear clinical interpretation of the treatment effect and widespread availability in commercial software, are strong arguments for clinicians and trial scientists to adopt, and further refine, the proposed approach.
EMSCI: European multicenter study about spinal cord injury Functional independence measure ICCP: International campaign for cures of spinal cord injury paralysis Item response theory ISNCSCI: Int. standards for neurological classification of spinal cord injury ML: Motor level Randomized clinical trial SCI: SCIM: Spinal cord independece measure TBI: UEMS: Upper Extremity Motor Scores Gustavsson A, Svensson M, Jacobi F, Allgulander C, Alonso J, Beghi E, Dodel R, Ekman M, Faravelli C, Fratiglioni L, Gannon B, Jones DH, Jennum P, Jordanova A, Jönsson L, Karampampa K, Knapp M, Kobelt G, Kurth T, Lieb R, Linde M, Ljungcrantz C, Maercker A, Melin B, Moscarelli M, Musayev A, Norwood F, Preisig M, Pugliatti M, Rehm J, Salvador-Carulla L, Schlehofer B, Simon R, Steinhausen HC, Stovner LJ, Vallat JM, den Bergh PV, van Os J, Vos P, Xu W, Wittchen HU, Jönsson B, Olesen J. Cost of disorders of the brain in Europe 2010. Eur Neuropsychopharmacol. 2011; 21(10):718–79. Andlin-Sobocki P, Jönsson B, Wittchen HU, Olesen J. Cost of Disorders of the Brain in Europe. Eur J Neurol. 2005; 12:1–27. Thuret S, Moon LDF, Gage FH. Therapeutic interventions after spinal cord injury. Nat Rev Neurosci. 2006; 7(8):628–43. Tator CH. Review of treatment trials in human spinal cord injury: issues, difficulties, and recommendations. Neurosurgery. 2006:957–987. Hawryluk GW, Rowland J, Kwon BK, Fehlings MG. Protection and repair of the injured spinal cord: a review of completed, ongoing, and planned clinical trials for acute spinal cord injury: A review. Neurosurgical focus. 2008; 25(5):14. Liu K, Tedeschi A, Park KK, He Z. Neuronal Intrinsic Mechanisms of Axon Regeneration. Ann Rev Neurosci. 2011; 34(1):131–52. Schwab ME, Buchli AD. Drug research: plug the real brain drain. Nature. 2012; 483(7389):267–8. Ducker TB, Hamit HF. Experimental treatments of acute spinal cord injury. J Neurosurg. 1969; 30(6):693–7. Lammertse DP. Clinical trials in spinal cord injury: lessons learned on the path to translation. The 2011 International Spinal Cord Society Sir Ludwig Guttmann Lecture. Spinal Cord. 2012; 51(1):2–9. Fawcett JW, Curt A, Steeves JD, Coleman WP, Tuszynski MH, Lammertse D, Bartlett PF, Blight AR, Dietz V, Ditunno J, et al.Guidelines for the conduct of clinical trials for spinal cord injury as developed by the ICCP panel: spontaneous recovery after spinal cord injury and statistical power needed for therapeutic clinical trials. Spinal Cord. 2006; 45(3):190–205. Steeves JD, Lammertse D, Curt A, Fawcett JW, Tuszynski MH, Ditunno JF, Ellaway PH, Fehlings MG, Guest JD, Kleitman N, et al.Guidelines for the conduct of clinical trials for spinal cord injury (SCI) as developed by the ICCP panel: clinical trial outcome measures. Spinal Cord. 2006; 45(3):206–21. Tuszynski MH, Steeves JD, Fawcett JW, Lammertse D, Kalichman M, Rask C, Curt A, Ditunno JF, Fehlings MG, Guest JD, et al.Guidelines for the conduct of clinical trials for spinal cord injury as developed by the ICCP Panel: clinical trial inclusion/exclusion criteria and ethics. Spinal Cord. 2006; 45(3):222–31. Lammertse D, Tuszynski MH, Steeves JD, Curt A, Fawcett JW, Rask C, Ditunno JF, Fehlings MG, Guest JD, Ellaway PH, et al.Guidelines for the conduct of clinical trials for spinal cord injury as developed by the ICCP panel: clinical trial design. Spinal Cord. 2006; 45(3):232–42. Sorani MD, Beattie MS, Bresnahan JC. A Quantitative Analysis of Clinical Trial Designs in Spinal Cord Injury Based on ICCP Guidelines. J Neurotrauma. 2012; 29(9):1736–46. 
Acknowledgements: We appreciate the continuous assistance of René Koller with the EMSCI database. LGT was partially financially supported by the International Foundation for Research in Paraplegia. The Foundation had no influence on any aspect of this publication.
Availability of data and materials: The datasets supporting the conclusions of this article are not publicly available. Interested researchers may apply for data access to the responsible organization, which is usually granted for research-only purposes. The R code implementing the simulation study is freely available (doi:http://dx.doi.org/10.5281/zenodo.47600).
Authors' contributions: LGT conceived the study, implemented the simulation, performed the analysis, and drafted the manuscript. JDS participated in the interpretation of the analyses and revision of the manuscript. AC participated in the interpretation of the analyses and revision of the manuscript. TH conceived the study and participated in the simulation, interpretation, and drafting of the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent: The data utilized in this study were extracted from the European Multicenter Study about Spinal Cord Injury (EMSCI, ClinicalTrials.gov Identifier: NCT01571531, www.emsci.org). All patients gave written informed consent. The ethical committee of the Canton of Zurich, Switzerland, has previously approved the EMSCI project, upon which this project is based, and the approval is also valid for any statistical analysis or re-analysis.
Author information: Lorenzo G. Tanadini and Torsten Hothorn, Department of Biostatistics, Epidemiology, Biostatistics and Prevention Institute, University of Zurich, Hirschengraben 84, 8001 Zurich, Switzerland. John D. Steeves, ICORD, University of British Columbia and Vancouver Coastal Health, Vancouver, Canada. Armin Curt, Spinal Cord Injury Center, Balgrist University Hospital, Zurich, Switzerland. Correspondence to Lorenzo G. Tanadini.
Citation: Tanadini, L.G., Steeves, J.D., Curt, A. et al. Autoregressive transitional ordinal model to test for treatment effect in neurological trials with complex endpoints. BMC Med Res Methodol 16, 149 (2016). https://doi.org/10.1186/s12874-016-0251-y
Keywords: Summed overall score; Multivariate ordinal endpoints; Proportional odds model; Sygen® trial; Rasch models; Latent variable models.
Nonlinear dynamics in tumor-immune system interaction models with delays Topological phase transition III: Solar surface eruptions and sunspots January 2021, 26(1): 515-539. doi: 10.3934/dcdsb.2020261 Rich dynamics of a simple delay host-pathogen model of cell-to-cell infection for plant virus Tin Phan 1, , Bruce Pell 2, , Amy E. Kendig 3, , Elizabeth T. Borer 4, and Yang Kuang 1,, School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287, USA Department of Mathematics and Computer Science, Lawrence Technological University, Southfield, MI 48075, USA Agronomy Department, University of Florida, Gainesville, FL 32611, USA Department of Ecology, Evolution, and Behavior, University of Minnesota, St. Paul, MN 55108, USA * Corresponding author: Yang Kuang Received March 2020 Revised July 2020 Published August 2020 Fund Project: TP and YK are supported by NSF grants DMS-1615879 and DEB-1930728. YK is also partially supported by the NIGMS of the National Institutes of Health (NIH) under award number R01GM131405 Figure(5) / Table(4) Viral dynamics within plant hosts can be important for understanding plant disease prevalence and impacts. However, few mathematical modeling efforts aim to characterize within-plant viral dynamics. In this paper, we derive a simple system of delay differential equations that describes the spread of infection throughout the plant by barley and cereal yellow dwarf viruses via the cell-to-cell mechanism. By incorporating ratio-dependent incidence function and logistic growth of the healthy cells, the model can capture a wide range of biologically relevant phenomena via the disease-free, endemic, mutual extinction steady states, and a stable periodic orbit. We show that when the basic reproduction number is less than $ 1 $ ($ R_0 < 1 $), the disease-free steady state is asymptotically stable. When $ R_0>1 $, the dynamics either converge to the endemic equilibrium or enter a periodic orbit. Using a ratio-dependent transformation, we show that if the infection rate is very high relative to the growth rate of healthy cells, then the system collapses to the mutual extinction steady state. Numerical and bifurcation simulations are provided to demonstrate our theoretical results. Finally, we carry out parameter estimation using experimental data to characterize the effects of varying nutrients on the dynamics of the system. Our parameter estimates suggest that varying the nutrient supply of nitrogen and phosphorous can alter the dynamics of the infection in plants, specifically reducing the rate of viral production and the rate of infection in certain cases. Keywords: plant virus, Barley and cereal yellow dwarf viruses, logistic growth, ratio-dependent, cell-to-cell transmission, resource modeling, delay differential equation, stability analysis, Lotka-Volterra. Mathematics Subject Classification: Primary: 34K20, 92C80; Secondary: 92D25, 92D40. Citation: Tin Phan, Bruce Pell, Amy E. Kendig, Elizabeth T. Borer, Yang Kuang. Rich dynamics of a simple delay host-pathogen model of cell-to-cell infection for plant virus. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 515-539. doi: 10.3934/dcdsb.2020261 M. Ali, S. Hameed and M. Tahir, Luteovirus: Insights into pathogenicity, Archives of Virology, 159 (2014), 2853-2860. doi: 10.1007/s00705-014-2172-6. Google Scholar R. Antia, B. R. Levin and R. M. May, Within-host population dynamics and the evolution and maintenance of microparasite virulence, The American Naturalist, 144 (1994), 457-472. 
doi: 10.1086/285686. Google Scholar F. Atkinson and J. Haddock, Criteria for asymptotic constancy of solutions of functional differential equations, Journal of Mathematical Analysis and Applications, 91 (1983), 410-423. doi: 10.1016/0022-247X(83)90161-0. Google Scholar J. Bak, D. J. Newman and D. J. Newman, Complex Analysis, Springer, 2010. doi: 10.1007/978-1-4419-7288-0. Google Scholar Y. M. Bar-On, R. Phillips and R. Milo, The biomass distribution on earth, Proceedings of the National Academy of Sciences, 115 (2018), 6506-6511. doi: 10.1073/pnas.1711842115. Google Scholar M. Begon, M. Bennett, R. G. Bowers, N. P. French, S. Hazel and J. Turner, A clarification of transmission terms in host-microparasite models: numbers, densities and areas, Epidemiology & Infection, 129 (2002), 147-153. doi: 10.1017/S0950268802007148. Google Scholar C. Bendix and J. D. Lewis, The enemy within: Phloem-limited pathogens, Molecular Plant Pathology, 19 (2018), 238-254. doi: 10.1111/mpp.12526. Google Scholar E. Beretta and Y. Kuang, Modeling and analysis of a marine bacteriophage infection, Mathematical Biosciences, 149 (1998), 57-76. doi: 10.1016/S0025-5564(97)10015-3. Google Scholar E. Beretta and Y. Kuang, Modeling and analysis of a marine bacteriophage infection with latency period, Nonlinear Analysis. Real World Applications, 2 (2001), 35-74. doi: 10.1016/S0362-546X(99)00285-0. Google Scholar E. Beretta and Y. Kuang, Geometric stability switch criteria in delay differential systems with delay dependent parameters, SIAM Journal on Mathematical Analysis, 33 (2002), 1144-1165. doi: 10.1137/S0036141000376086. Google Scholar P. Bernardo, T. Charles-Dominique, M. Barakat, P. Ortet, E. Fernandez, D. Filloux, P. Hartnady, T. A. Rebelo, S. R. Cousins, F. Mesleard et al., Geometagenomics illuminates the impact of agriculture on the distribution and prevalence of plant viruses at the ecosystem scale, The ISME Journal, 12 (2018), 173-184. doi: 10.1038/ismej.2017.155. Google Scholar E. T. Borer, A.-L. Laine and E. W. Seabloom, A multiscale approach to plant disease using the metacommunity concept, Annual Review of Phytopathology, 54 (2016), 397-418. doi: 10.1146/annurev-phyto-080615-095959. Google Scholar J. C. Carrington, K. D. Kasschau, S. K. Mahajan and M. C. Schaad, Cell-to-cell and long-distance transport of viruses in plants., The Plant Cell, 8 (1996), 1669. Google Scholar R. V. Culshaw, S. Ruan and G. Webb, A mathematical model of cell-to-cell spread of hiv-1 that includes a time delay, Journal of Mathematical Biology, 46 (2003), 425-444. doi: 10.1007/s00285-002-0191-5. Google Scholar C. J. D'Arcy and P. A. Burnett, Barley Yellow Dwarf: 40 Years of Progress, 1995. Google Scholar V. Eastop, Worldwide importance of aphids as virus vectors, in Aphids as Virus Vectors, Elsevier, 1977, 3–62. doi: 10.1016/B978-0-12-327550-9.50006-9. Google Scholar S. Eikenberry, S. Hews, J. D. Nagy and Y. Kuang, The dynamics of a delay model of hbv infection with logistic hepatocyte growth, Math. Biosc. Eng, 6 (2009), 283-299. doi: 10.3934/mbe.2009.6.283. Google Scholar G. F. Gause, The Struggle for Existence: A Classic of Mathematical Biology and Ecology, Courier Dover Publications, 2019. Google Scholar M. A. Gilchrist, D. Coombs and A. S. Perelson, Optimizing within-host viral fitness: Infected cell lifespan and virion production rate, Journal of theoretical biology, 229 (2004), 281-288. doi: 10.1016/j.jtbi.2004.04.015. Google Scholar C. Gill and J. 
Chong, Cytopathological evidence for the division of barley yellow dwarf virus isolates into two subgroups, Virology, 95 (1979), 59-69. doi: 10.1016/0042-6822(79)90401-X. Google Scholar S. A. Gourley, Y. Kuang and J. D. Nagy, Dynamics of a delay differential equation model of hepatitis b virus infection, Journal of Biological Dynamics, 2 (2008), 140-153. doi: 10.1080/17513750701769873. Google Scholar Z. Grossman, M. B. Feinberg and W. E. Paul, Multiple modes of cellular activation and virus transmission in hiv infection: a role for chronically and latently infected cells in sustaining viral replication, Proceedings of the National Academy of Sciences, 95 (1998), 6314-6319. doi: 10.1073/pnas.95.11.6314. Google Scholar S. Hews, S. Eikenberry, J. D. Nagy and Y. Kuang, Rich dynamics of a hepatitis b viral infection model with logistic hepatocyte growth, Journal of Mathematical Biology, 60 (2010), 573-590. doi: 10.1007/s00285-009-0278-3. Google Scholar S.-B. Hsu, T.-W. Hwang and Y. Kuang, Global analysis of the michaelis–menten-type ratio-dependent predator-prey system, Journal of Mathematical Biology, 42 (2001), 489-506. doi: 10.1007/s002850100079. Google Scholar M. Jackson and B. M. Chen-Charpentier, Modeling plant virus propagation with delays, Journal of Computational and Applied Mathematics, 309 (2017), 611-621. doi: 10.1016/j.cam.2016.04.024. Google Scholar A. E. Kendig, E. T. Borer, E. N. Boak, T. C. Picard and E. W. Seabloom, Soil nitrogen and phosphorus effects on plant virus density, transmission, and species interactions, URLhttps://doi.org/10.6073/pasta/01e7bf593676a942f262623710acba13. Google Scholar D. A. Kennedy, V. Dukic and G. Dwyer, Pathogen growth in insect hosts: Inferring the importance of different mechanisms using stochastic models and response-time data, The American Naturalist, 184 (2014), 407-423. doi: 10.1086/677308. Google Scholar Y. Kuang and E. Beretta, Global qualitative analysis of a ratio-dependent predator–prey system, Journal of Mathematical Biology, 36 (1998), 389-406. doi: 10.1007/s002850050105. Google Scholar P. Kumberger, K. Durso-Cain, S. Uprichard, H. Dahari and F. Graw, Accounting for space–quantification of cell-to-cell transmission kinetics using virus dynamics models, Viruses, 10 (2018), 200. doi: 10.3390/v10040200. Google Scholar C. Lacroix, E. W. Seabloom and E. T. Borer, Environmental nutrient supply alters prevalence and weakens competitive interactions among coinfecting viruses, New Phytologist, 204 (2014), 424-433. doi: 10.1111/nph.12909. Google Scholar C. Lacroix, E. W. Seabloom and E. T. Borer, Environmental nutrient supply directly alters plant traits but indirectly determines virus growth rate, Frontiers in Microbiology, 8 (2017), 2116. doi: 10.3389/fmicb.2017.02116. Google Scholar P. Lefeuvre, D. P. Martin, S. F. Elena, D. N. Shepherd, P. Roumagnac and A. Varsani, Evolution and ecology of plant viruses, Nature Reviews Microbiology, 17 (2019), 632-644. doi: 10.1038/s41579-019-0232-3. Google Scholar R. F. Luck, Evaluation of natural enemies for biological control: A behavioral approach, Trends in Ecology & Evolution, 5 (1990), 196-199. doi: 10.1016/0169-5347(90)90210-5. Google Scholar G. Neofytou, Y. Kyrychko and K. Blyuss, Mathematical model of plant-virus interactions mediated by rna interference, Journal of Theoretical Biology, 403 (2016), 129-142. doi: 10.1016/j.jtbi.2016.05.018. Google Scholar J. C. Ng and K. L. Perry, Transmission of plant viruses by aphid vectors, Molecular Plant Pathology, 5 (2004), 505-511. 
doi: 10.1111/j.1364-3703.2004.00240.x. Google Scholar M. A. Nowak, S. Bonhoeffer, A. M. Hill, R. Boehme, H. C. Thomas and H. McDade, Viral dynamics in hepatitis b virus infection, Proceedings of the National Academy of Sciences, 93 (1996), 4398-4402. doi: 10.1073/pnas.93.9.4398. Google Scholar B. Pell, A. E. Kendig, E. T. Borer and Y. Kuang, Modeling nutrient and disease dynamics in a plant-pathogen system 2, Mathematical Biosciences and Engineering, 16 (2019), 234-264. Google Scholar M. J. Roossinck and E. R. Bazán, Symbiosis: Viruses as intimate partners, Annual Review of Virology, 4 (2017), 123-139. doi: 10.1146/annurev-virology-110615-042323. Google Scholar M. J. Roossinck, P. Saha, G. B. Wiley, J. Quan, J. D. White, H. Lai, F. Chavarria, G. Shen and B. A. Roe, Ecogenomics: Using massively parallel pyrosequencing to understand virus ecology, Molecular Ecology, 19 (2010), 81-88. doi: 10.1111/j.1365-294X.2009.04470.x. Google Scholar M. L. Rosenzweig, Paradox of enrichment: Destabilization of exploitation ecosystems in ecological time, Science, 171 (1971), 385-387. doi: 10.1126/science.171.3969.385. Google Scholar A. Sigal, J. T. Kim, A. B. Balazs, E. Dekel, A. Mayo, R. Milo and D. Baltimore, Cell-to-cell spread of hiv permits ongoing replication despite antiretroviral therapy, Nature, 477 (2011), 95-98. doi: 10.1038/nature10347. Google Scholar A. L. Vuorinen, J. Kelloniemi and J. P. Valkonen, Why do viruses need phloem for systemic invasion of plants?, Plant Science, 181 (2011), 355-363. doi: 10.1016/j.plantsci.2011.06.008. Google Scholar X. Wang, S. Tang, X. Song and L. Rong, Mathematical analysis of an hiv latent infection model including both virus-to-cell infection and cell-to-cell transmission, Journal of Biological Dynamics, 11 (2017), 455-483. doi: 10.1080/17513758.2016.1242784. Google Scholar Z. Wu, T. Phan, J. Baez, Y. Kuang and E. J. Kostelich, Predictability and identifiability assessment of models for prostate cancer under androgen suppression therapy, Mathematical Biosciences and Engineering, 16 (2019), 3512-3536. Google Scholar Y. Yang, L. Zou and S. Ruan, Global dynamics of a delayed within-host viral infection model with both virus-to-cell and cell-to-cell transmissions, Mathematical Biosciences, 270 (2015), 183-191. doi: 10.1016/j.mbs.2015.05.001. Google Scholar P. Zhong, L. M. Agosto, J. B. Munro and W. Mothes, Cell-to-cell transmission of viruses, Current Opinion in Virology, 3 (2013), 44-50. doi: 10.1016/j.coviro.2012.11.004. Google Scholar Figure 1. Increasing $ \tau $ changes the stability of the positive equilibrium, which gives rise to a stable orbit. For this simulation, we use $ r = 0.3,K = 10^3,\beta = 0.1,\delta = 0.0001 $ and $ \tau $ varies from $ 1 $ to $ 80 $. We plot $ \tau $ over a viable region. For smaller value of $ \tau $, either the condition for the theorem is not satisfied or the positive steady does not exist. The switching between a stable positive steady state and a stable orbit takes place around $ \tau \approx 50.5 $ Figure 2. Corresponding examples for Figure 1. (a) $ \tau = 50 $, the oscillation is damping toward the positive steady state. (b) $ \tau = 51 $, the oscillation is stable Figure 3. For this simulation, we start with the following values $ r = 0.3,K = 10^3,\beta = 0.1,\delta = 0.0001 $ and $ \tau = 51 $. (a) increasing the infection rate $ \beta $ can have a destabilizing effect on the endemic equilibrium; however, as $ \beta $ increases, $ S $ and $ I $ approach closely to 0. 
(b) Decreasing the death rate $ \delta $ can be destabilizing as well. (c) The growth rate $ r $ can be both stabilizing and destabilizing as it varies. As $ r $ decreases, it can result in mutual extinction. (d) shows additional details of Figures 1 and 2. Note that varying the carrying capacity $ K $ only changes the size but not the stability.
Figure 4. Parameter fitting result for the cell-to-cell transmission model. The description of each experiment is given in subsection 1.2 and additional details can be found in Kendig et al. [26]
Figure 5. Comparison of data fitting between the mass action model and the ratio-dependent model. The description of each experiment is given in subsection 1.2 and additional details can be found in Kendig et al. [26]
Table 1. Estimated parameters for the cell-to-cell model. Note that $ \delta $ is fixed to be $ 1/13 $ day$ ^{-1} $. The value of $ R_0 $ is calculated based on the estimated parameters. The description of each experiment is given in subsection 1.2 and additional details can be found in Kendig et al. [26]
| Parameter | Fitted (CTRL) | Fitted (+N) | Fitted (+P) | Fitted (+NP) | Units |
| $ r $ | 0.9000 | 0.9000 | 0.9000 | 0.8860 | day$ ^{-1} $ |
| $ K $ | 515024 | 719563 | 400294 | 400000 | cells |
| $ \beta $ | 0.5387 | 0.4355 | 0.8925 | 0.6710 | cells virion$ ^{-1} $ day$ ^{-1} $ |
| $ b $ | 65 | 94 | 62 | 80 | virions cell$ ^{-1} $ day$ ^{-1} $ |
| $ \tau $ | 8.27 | 12.00 | 12.00 | 12.00 | days |
| $ R_0 $ | 1.62 | 2.07 | 2.30 | 2.23 | unitless |
Table 2. Fitting errors for the cell-to-cell transmission model. The description of each experiment is given in subsection 1.2 and additional details can be found in Kendig et al. [26]
| Experiment | control | +N | +P | +NP |
| RMSE | 4.27e+6 | 5.97e+6 | 3.66e+6 | 8.96e+6 |
| MAPE | 8.03e-1 | 6.35e-1 | 4.10e-1 | 7.14e-1 |
Table 3. Stability results and open questions in terms of $ \beta, \delta, r $ and $ \tau $ $ \left(\text{note: } R_0 = \frac{\beta-\delta}{\beta e^{-\delta\tau}}\right) $
| Conditions | Results or question |
| 1. $ \beta<\delta $ | $ (K,0) $ is globally asymptotically stable |
| 2. $ \beta>\delta $ and $ \frac{\beta-\delta}{\beta e^{-\delta\tau}}<1 $ | $ (K,0) $ is locally asymptotically stable |
| 3. $ \beta>\delta $ and $ \frac{\beta-\delta}{\beta-\delta-r}>\frac{\beta-\delta}{\beta e^{-\delta\tau}}>1 $ | Open question 1: is $ E^* $ stable? When does a periodic orbit occur? |
| 4. $ \beta>\delta+r $ and $ \frac{\beta-\delta}{\beta-\delta-r}<\frac{\beta-\delta}{\beta e^{-\delta\tau}} $ | $ (0, 0) $ is globally asymptotically stable |
Table 4. Estimated parameters for the mass action model. The description of each experiment is given in subsection 1.2 and additional details can be found in Kendig et al. [26]
| Parameter | Fitted (CTRL) | Fitted (+N) | Fitted (+P) | Fitted (+NP) | Units |
| $ K $ | 4.0000e+5 | 6.0164e+5 | 4.0038e+5 | 1.0987e+6 | cells |
| $ \beta $ | 2.0273e-6 | 2.8651e-7 | 1.9817e-6 | 1.9188e-6 | cells virion$ ^{-1} $ day$ ^{-1} $ |
| $ d $ | 0.7129 | 0.1001 | 0.1001 | 0.1001 | day$ ^{-1} $ |
| $ b $ | 118.2189 | 199.9803 | 60.4637 | 56.2613 | virions cell$ ^{-1} $ day$ ^{-1} $ |
| $ \tau $ | 9.6880 | 21.0000 | 4.9741 | 7.4480 | days |
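To get an intuitive feel for the kind of delayed cell-to-cell dynamics summarized above, the sketch below integrates a generic delayed susceptible-infected model with logistic growth and a ratio-dependent incidence term by simple Euler stepping with a history buffer. This is an illustration only: the incidence form, the survival factor, and the initial history are assumptions and are not taken from the paper, although the parameter values are borrowed from the CTRL column of Table 1.

```python
import numpy as np

# Generic sketch of a delayed S-I model with logistic growth of healthy cells
# and ratio-dependent incidence. This is NOT the paper's exact system: the
# incidence form, the exp(-delta*tau) survival factor and the initial history
# are assumptions made only to illustrate Euler stepping of a delay equation.

r, K = 0.9, 515024.0                      # growth rate (1/day), carrying capacity (cells); Table 1, CTRL
beta, delta, tau = 0.5387, 1 / 13, 8.27   # infection rate, removal rate (1/day), delay (days)

dt = 0.01
steps = int(300 / dt)                     # simulate 300 days
lag = int(round(tau / dt))                # delay expressed in steps

S = np.empty(steps); I = np.empty(steps)
S[:lag + 1] = 0.9 * K                     # assumed constant pre-history
I[:lag + 1] = 10.0

for n in range(lag, steps - 1):
    Sd, Id = S[n - lag], I[n - lag]                    # delayed state
    inc_now = beta * S[n] * I[n] / (S[n] + I[n])       # ratio-dependent incidence (assumed form)
    inc_del = beta * Sd * Id / (Sd + Id)
    dS = r * S[n] * (1 - (S[n] + I[n]) / K) - inc_now
    dI = np.exp(-delta * tau) * inc_del - delta * I[n]
    S[n + 1] = max(S[n] + dt * dS, 0.0)
    I[n + 1] = max(I[n] + dt * dI, 0.0)

print(f"day 300: healthy ~ {S[-1]:.3g} cells, infected ~ {I[-1]:.3g} cells")
```

Varying tau, beta and r in such a sketch is a quick way to reproduce the qualitative behaviors the figure captions describe (damped oscillations, sustained orbits, collapse), even though the quantitative results belong to the authors' model, not to this toy version.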
Theorems for nothing (and the proofs for free) [closed] Some theorems give far more than you feel they ought to: a weak hypothesis is enough to prove a strong result. Of course, there's almost always a lot of machinery hidden below the waterline. Such theorems can be excellent starting-points for someone to get to grips with a new(ish) subject: when the surprising result is no longer surprising then you can feel that you've gotten it. Let's have some examples. soft-question big-list 2 revisions, 2 users Andrew Stacey 100% closed as no longer relevant by Loop Space, Harry Gindi, S. Carnahan♦ Jun 23 '10 at 18:32 This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question. $\begingroup$ I like the Dire straits reference ;) $\endgroup$ – GMRA Nov 13 '09 at 14:58 $\begingroup$ Time to put this one to bed. (ie, time to close it, I deem.) $\endgroup$ – Loop Space Jun 23 '10 at 18:06 $\begingroup$ What happened between November and June that rendered this question no longer relevant? $\endgroup$ – I. J. Kennedy Oct 9 '10 at 23:52 $\begingroup$ Wow Andrew. I am not normally surprised when I see you on the list of closers of a question (and most times it is actually there for a good reason), but I did not expect you to go that far, particularly leaving us the mystery of how your question all of a sudden became no longer relevant. Did you intend to use the theorems-for-nothing list, e. g. a popular talk? If yes, what did you use and how well was it absorbed? But even then, I am not sure whether the question is no longer relevant just because it is not relevant to you... $\endgroup$ – darij grinberg May 4 '11 at 7:06 $\begingroup$ Darij: Vaguely relevant meta threads: tea.mathoverflow.net/discussion/210 and tea.mathoverflow.net/discussion/459 (If you really want to discuss this, start a thread on meta about it) $\endgroup$ – Loop Space May 4 '11 at 9:01 Every compact metric space is (unless it's empty) a topological quotient of the Cantor set. What, every compact metric space? Yes, every compact metric space. Tom Leinster $\begingroup$ That's quite surprising! What are some good references for this? $\endgroup$ – Justin DeVries Nov 13 '09 at 16:59 $\begingroup$ Surprising, yes, but once you know about it, it seems easy enough to cook up a proof. Just write the set as a union of two closed subsets, decide to map the left half of the Cantor set onto one and the right half to the other, then do the same to each of these two sets, and so on. In the limit you have the map you want, provided you have arranged for the diameters of the parts to go to zero. $\endgroup$ – Harald Hanche-Olsen Nov 13 '09 at 19:04 $\begingroup$ Right! Once you know it, it's fine. But I think it's capable of changing one's intuition on what spaces and maps are. After all, the Cantor set is just a sprinkling of dust; how could it be capable of covering a big fat space like the 3-ball? $\endgroup$ – Tom Leinster Nov 14 '09 at 0:25 $\begingroup$ After this theorem has done its job changing your intuition, though, it's pretty easy to believe. A surjective continuous map glues stuff together. 
And the Cantor set is not "just" a sprinkling of dust; it's a sprinkling of a whole lot of rather clumpy dust. So it shouldn't be surprising that you can glue all this clumpy dust into many different forms. $\endgroup$ – Mark Meckes Dec 2 '09 at 14:36 $\begingroup$ I also like the fact that every countable compact Hausdorff space is homeomorphic to a countable successor ordinal equipped with its order topology. This is the Mazurkiewicz-Sierpinski theorem, published originally in French (I think) but also available in English in Z. Semadeni's book 'Banach spaces of continuous functions' in section 8 (the chapter on compact 0-dimensional spaces). A proof of the Alexandroff-Hausdorff theorem (i.e., every compact metric space is a continuous image of the Cantor set) is also there, as well as a bunch of other tasty topology. $\endgroup$ – Philip Brooker Feb 21 '10 at 12:04 For me, the theorem that every subgroup of a free group is free is a good example of this: it seems to come for free from covering spaces and the fundamental group, but really all the heavy machinery is just moved underground. Pierre Weil Wedderburn's theorem: "Every finite division ring is a field." This is really astonishing if you think of quaternions: nothing analogous in the finite case. Then of course the classification of finite fields is also very beautiful: exactly one with p^n elements (p a prime and n an integer) and no others. And as a bonus, Wedderburn's theorem is one of the crispest in all of mathematics: seven words ( or six and a half if you replace division ring by skew-field). Georges Elencwajg $\begingroup$ You can save one word by replacing "a field" by "commutative" (but maybe we should count syllables rather than words). $\endgroup$ – Andreas Blass Jun 20 '11 at 19:44 $\begingroup$ I just noticed that Bourbaki does it with five words (in several languages) with nine syllables (at least in french)... $\endgroup$ – Fred Rohrer May 15 '14 at 8:52 $\begingroup$ Finite division rings commute. $\endgroup$ – Allen Knutson Jul 24 '14 at 17:24 $\begingroup$ "Wedderburn's theorem" $\endgroup$ – Pablo Zadunaisky Dec 12 '18 at 8:47 Isn't almost every theorem in mathematics an example of a theorem "for free"? One defines natural numbers, and then it follows each of them is a sum of four squares; one defines a notion of a continuous function and of Euclidean space, and Brouwer's fixed point theorem follows. Surely, that is amazing! With that said here are a handful of the example that lie closer to the surface: Complex-differentiable functions are infinitely-differentiable, and in fact analytic. A function of several complex variables that is holomorphic in each variable is holomorphic in all of them (if it reminds you of 'theorem' that a function that is continuous in each variable separately is continuous ... well, then, it should). That is Hartogs' theorem. Any bound on the error term in primes number theorem of the form $\psi(x)=x+O_{\varepsilon}(x^{a+\varepsilon})$ implies the bound $\psi(x)=x+O(x^a \log x)$. Morally related to (3) is the tensor power trick, of which the earliest widely-known example is perhaps the proof of Cotlar-Stein lemma. One of my favorite examples is lemma 2.1 from a paper of Katz and Tao on Kakeya's conjecture. Boris Bukh $\begingroup$ Four squares is nothing, every natural number is also the sum of three triangular numbers. 
$\endgroup$ – Zsbán Ambrus Apr 16 '10 at 19:32 I had that feeling of getting more than you ought to a couple of weeks ago when reading the first chapter of Rota and Klain's Introduction to Geometric Probability. In particular, I was familiar with the usual derivation of the probability of Buffon's needle crossing a line. So it was amazing to read the solution to a harder problem, Buffon's noodle, which is solved by appealing to a much simpler seeming general symmetry argument. And like you describe, it forms a kind of teaser trailer to draw you you into the rest of the subject. Dan Piponi $\begingroup$ I agree, this is a completely wonderful argument. It's also a spectacular example of a more general theorem that's easier to prove. $\endgroup$ – Tom Leinster Nov 13 '09 at 18:00 $\begingroup$ Here is a related discussion gilkalai.wordpress.com/2009/08/03/… $\endgroup$ – Gil Kalai Nov 21 '09 at 11:15 My "canonical example" is Banach-Steinhaus in functional analysis: that, in nice locally convex topological spaces (Banach will do), weakly bounded (or pointwise bounded) implies bounded. The machinery is quite technical, usually involving the Baire category theorem, but the result is very simple and very surprising. One especial point I like about this is that when you compare normed vector spaces with Banach spaces, then the process of adding more stuff (i.e. completion) actually limits the things that can go wrong. My intuition is that if you want to limit the bad behaviour then you need to work in smaller spaces rather than larger. $\begingroup$ My intuition (for this kind of issue, anyway) is actually the opposite. If you work with a larger space, then there's more "stuff" that nice things (functions, sequences, whatever) have to play nicely with. So the bigger the space, the nicer they must be. $\endgroup$ – Mark Meckes Nov 13 '09 at 15:12 $\begingroup$ I agree with Mark: adding stuff tends to rigidify things, think for example of localization. $\endgroup$ – Alex Collins Nov 13 '09 at 15:18 $\begingroup$ I agree. It always does seem like you get something for nothing. $\endgroup$ – Dinakar Muthiah Nov 13 '09 at 15:59 $\begingroup$ There is a Zabreiko's theorem which extracts the juice of Baire's Category and by invoking it, the Banach-Steinhaus, Open Mapping and Closed Graph theorems come just easily. It says: Every countably subadditive seminorm on a Banach space is continuous. Unfortunately I don't know a good reference. $\endgroup$ – Abhishek Parab Feb 21 '10 at 2:47 $\begingroup$ @Abhishek Parab: Zabreiko's theorem is proved in Megginson's book 'An Introduction to Banach Space Theory'. It is near the beginning of Section 1.6, which is entitled 'Three Fundamental Theorems'. $\endgroup$ – Philip Brooker Feb 21 '10 at 11:45 Faithfully-flat descent: It tells you that you can construct quasicoherent sheaves locally on a faithfully-flat cover. This is pretty amazing, because quasicoherent sheaves are, a priori, only Zariski local. So to specify a sheaf it requires a lot less data than it initially appears. Dinakar Muthiah Kuratowski's theorem is a great example of a theorem of the form "the only obstructions are the obvious ones," which are always fun to learn about. Qiaochu Yuan I can´t resist to mention the Cayley-Hamilton theorem. Something intuitively correct turns out to be mathematically correct too, but for non-intuitive reasons! 
I still remember, its proof (I´m here referring to the one using the correspondence between operation and representation) worked from my perspective like a magic, clear, simple, non-trivial and beautiful, and it also made me interested in algebra, beyond the lecture in linear algebra for first-year students. It was nice time... M.G. $\begingroup$ Indeed this is a wonderful theorem. Why is it intuitive correct? From all the first year algebra theorems it was the one where I had no intuition whatsoever. $\endgroup$ – Gil Kalai Nov 22 '09 at 6:48 $\begingroup$ The cheezy-easy proof that works over the real or complex numbers is to observe diagonalizable matrices are dense in the space of matrices, and the theorem is true for diagonalizable matrices (by computation) then notice the set of matrices that satisfy the theorem are closed. If you want to avoid this kind of argument you can enhance your intuition with the Jordan Canonical Form. :) $\endgroup$ – Ryan Budney Nov 22 '09 at 10:19 $\begingroup$ Then you just realize that det(tI-A) evaluated at A is some matrix whose entries are monstrously complicated polynomials in the n^2 entries of the matrix A, and since they're identically 0 on C^{n^2} each of those entries must be the zero polynomial; thus the theorem holds over any commutative ring as well. $\endgroup$ – Steven Sivek Nov 22 '09 at 13:46 $\begingroup$ Gil: maybe what was meant was the following: consider det(tI-A), and plug in A for t. Personally I wouldn't say this makes C-H "intuitively correct"; instead C-H is suggested by this simple heuristic. $\endgroup$ – Mark Meckes Nov 23 '09 at 14:03 Tychonoff's theorem — product of any collection of compact spaces is still compact — is amazing and incredibly useful. zvasilyev $\begingroup$ it is not surprising thinking of net convergence and that the product topology is not the box topology (which is not compact). $\endgroup$ – Martin Brandenburg Dec 30 '09 at 2:35 The Kline sphere characterization, proven by Bing: A compact connected metric space (with at least two points) is the 2-sphere if and only if every circle separates and no pair of points does. $\begingroup$ Nitpick: A singleton set seems to be an exception. The wikipedia article seems to have missed that. I'll edit it later if nobody beats me to it. (I have a bus to catch.) $\endgroup$ – Harald Hanche-Olsen Nov 13 '09 at 19:11 Once the machinery of (co)homology is developed, Brouwer's Fixed Point seems to come for free, it's extremely straightforward to prove and has quite a lot of important consequences. Sam Derbyshire $\begingroup$ I'm not sure I really understand the question though; do you just mean surprisingly easy to prove results (that have many substantial consequences)? $\endgroup$ – Sam Derbyshire Nov 13 '09 at 20:50 $\begingroup$ For the two-dimensional case, there's a simple (and perhaps surprising) proof from the fact that the game of Hex never ends in a tie. $\endgroup$ – Akiva Weinberger May 28 '17 at 16:42 The only group with order $p$ a prime is $\mathbb{Z}/p\mathbb{Z}$ Although not exactly what you're after, the question reminds me of Reynolds' parametricity theorem, or as Philip Wadler puts it: Theorems for Free! The basic idea is that a polymorphic construction (in a polymorphic lambda calculus) must behave uniformly, and so must preserve relations. For example, any term of type $\Pi X. X\to X$ must be the identity function, and every term of type $\Pi X Y. X\times Y\to X$ must be the first projection. 
Ulrik Buchholtz The Gauss-Bonnet theorem is a deep result relating the geometry of a surface to its topology, and its proof is very simple (the local version comes almost from nothing, and the main difficulties for the global one are topological results about triangulations). Also, it has some amazing corollaries: the integral of the gaussian curvature over a compact orientable surface is a topological invariant (${\int\int}_{S}{K}d\sigma = 2\pi\chi(S)$, where $\chi(S)$ is the Euler-Poincaré characteristic of $S$); every compact regular surface with positive gaussian curvature is homeomorphic to the sphere $S^2$; and so on. Rodrigo Barbosa To me, the canonical example is the Poincare Conjecture. Why SHOULD a three dimensional manifold with trivial fundamental group actually be the sphere? In higher dimensions, there are LOTS of simply connected things, but in two and three, simply connected and compact manifold determines the manifold uniquely. Charles Siegel $\begingroup$ The proof in this case seems rather pricey. $\endgroup$ – Ryan Budney Nov 13 '09 at 18:59 $\begingroup$ Well, there's a lot of machinery hidden underneath it, yeah. But the statement looks like you're getting a huge amount of specificity from just a small hypothesis. $\endgroup$ – Charles Siegel Nov 13 '09 at 19:26 I am not sure I fully understand the question. Is it the case that the theorem itself gives you a huge mileage while its proof is extremely difficult, (Characterization of finite simple group is an ultimate example; the Atiyah-Singer index theorem and the BBD(G)-decomposition theorem are other examples; or is it a case that understanding the proof (which is feasible) gives you a lot of mileage and a feeling that you got grip with the subject. Anyway, a theorem which, to some extent, has both these features is Adams's theorem asserting that d-dimensional vectors form an algebra (even non-associative) in which division (except by 0) is always possible only for , 2, 4, and 8. (In these cases there are examles: the Complex, Quaternions and Cayley algebras.) Gil Kalai Artin-Schreier Theorem: If k is a field of characteristic p and strictly contained in its algebraic closure K and such that [K:k] is finite THEN (was surprising for me..) p is actually 0 and K = k(sqrt(-1)) and k is a real closed field! A not so well known but deserving result from the "failed" thesis of Abhyankar: If K and L are algebraically closed fields contained in another algebraically closed field, then the compositum KL is not necessarily algebraically closed. Jose Capco $\begingroup$ Abhyankar's result is probably not that surprising to many of us. But I was simply amazed since we take algebra in undergrad and know algebraically closed fields and compositums and we hardly ask that question.. I needed to answer that question later while writing my PhD and to my surprise Abhyankar was doing the same in his thesis. $\endgroup$ – Jose Capco Nov 21 '09 at 22:01 Unfortunately, a lot of these kinds of statements in combinatorics are only conjectural. One example (again, only conjectural) that came up in conversation the other day doesn't give a particularly natural result, but it's hugely surprising: the Erdos-Gyarfas conjecture in graph theory, which has pretty much the weakest possible condition for any statement of its form. Now that I think about it, though, Ramsey theory is all about "theorems for nothing." 
I'm a big fan of the sunflower lemma when it comes to Ramsey-theoretical statements that deserve to be better known -- the only condition there is that your sets have to be relatively small, and there have to be a lot of them. (And that second part is conjecturally not even necessary...) Harrison Brown That there are infinitely many primes has some simple proofs, but I remember being shown that the sum of the reciprocals of the primes diverges which had some more machinery in it that was kind of neat to my mind. I'd say the Tutte-Berge formula, which is a wonderful result that tells you (almost) everything you want to know about matchings in graphs. Although there are many proofs of this theorem, there is a beautiful proof for free using matroids. Strictly speaking, there is a proof for free of Gallai's Lemma (from which Tutte-Berge follows easily). Gallai's Lemma. Let $G$ be a connected graph such that $\nu(G-x)=\nu(G)$, for all $x \in V(G)$. Then $|V(G)|$ is odd and def$(G)=1.$ Remark: $\nu(G)$ is the size of a maximum matching of $G$, and def$(G)$ denotes the number of vertices of $G$ not covered by a maximum matching. Proof for free. In any matroid $M$ define the relation $x \sim y$ to mean $r(x)=r(y)=1$ and $r(\{x,y\})=1$ or if $x=y$. (Here, $r$ is the rank function of $M$). We say that $x \sim^* y$ if and only if $x \sim y$ in the dual of $M$. It is trivial to check that $\sim$ (and hence also $\sim^*$) defines an equivalence relation on the ground set of $M$. Now let $G$ satisfy the hypothesis of Gallai's Lemma and let $M(G)$ be the matching matroid of $G$. By hypothesis, $M(G)$ does not contain any co-loops. Therefore, if $x$ and $y$ are adjacent vertices we clearly have $x \sim^* y$. But since $G$ is connected, this implies that $V(G)$ consists of a single $\sim^*$ equivalence class. In particular, $V(G)$ has co-rank 1, and so def$(G)$=1, as required. Edit. For completeness, I decided to include the derivation of Tutte-Berge from Gallai's lemma. Choose $X \subset V(G)$ maximal such that def$(G-X) -|X|=$ def$(G)$. By maximality, every component of $G-X$ satisfies the hypothesis in Gallai's lemma. Applying Gallai's lemma to each component, we see that $X$ gives us equality in the Tutte-Berge formula. 12 revisions, 2 users Tony Huynh 96% The Riesz-Thorin interpolation theorem; the complex analysis behind it never fails to surprise me. Akhil Mathew Oh! From uniqueness of the countable dense linear order without endpoints: take (for instance) a countable ordinal $\lambda$, and consider the anti-lex order on $\mathbb{Q}\times\lambda$. This is a countable dense linear without endpoints, so it's order-isomorphic to $\mathbb{Q}$; in particular, $\mathbb{Q}$ contains a subset with order-type $\lambda$ — e.g. the isomorphs of anything $(\frac{5}{8},j)$. The same result for subsets of $\mathbb{R}$ is a more usual application of transfinite induction/AC/Zorn's lemma; here it's all hidden in the $\aleph_0$-categoricity result about dlow/oep. some guy on the street I like the theorem, I think it's Gallagher's, that says: Most polynomials with integer coefficients are irreducible and have the full symmetric group as Galois group (over the rational numbers). The precise formulation asserts that the number of bad polynomials, i.e., the number of polynomials $X^r + a_1 X^{r-1} + \cdots + a_r$ with $|a_i|\leq N$ that DO NOT have the full symmetric group as Galois group is $$O(r^3(2N+1)^{r-\frac{1}{2}}\log N)$$ (out of $(2N+1)^r$ polynomials). 
Lior Bary-Soroker
Another good example is the Johnson-Lindenstrauss lemma, which says that any $n$ points in a Hilbert space can be embedded in an $O(\log n)$-dimensional Euclidean space with all pairwise distances preserved up to a factor of $1\pm\varepsilon$, for any desired $\varepsilon>0$. It turns out that JL-style results crop up in many different versions, the main result itself has proofs ranging from 1 page to 10 pages, and it just keeps on giving :) Suresh Venkat
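The Johnson-Lindenstrauss statement above is easy to see in action numerically: multiplying by a random Gaussian matrix with roughly $\varepsilon^{-2}\log n$ columns already keeps pairwise distances tight. A minimal sketch (the constant 8 in the target dimension and the specific sizes are arbitrary choices, not the sharp bound):

```python
import numpy as np

# Project n random points from dimension d down to k = O(log(n)/eps^2)
# with a Gaussian random matrix and report the worst distortion of the
# squared pairwise distances. The factor 8 is a rule of thumb, not sharp.

rng = np.random.default_rng(0)
n, d, eps = 200, 10_000, 0.25
k = int(np.ceil(8 * np.log(n) / eps**2))

X = rng.standard_normal((n, d))                 # n points in R^d
P = rng.standard_normal((d, k)) / np.sqrt(k)    # random projection, scaled to preserve norms on average
Y = X @ P

def sq_dists(A):
    # matrix of squared pairwise Euclidean distances
    G = A @ A.T
    sq = np.diag(G)
    return sq[:, None] + sq[None, :] - 2.0 * G

mask = ~np.eye(n, dtype=bool)
ratio = sq_dists(Y)[mask] / sq_dists(X)[mask]
print(f"target dimension k = {k}")
print(f"squared-distance ratios lie in [{ratio.min():.3f}, {ratio.max():.3f}]")
```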
What type is a polynomial function? Thread starter: iteratee
Is there a universal definition and purpose for considering these a distinct category? I often encounter functions called "polynomial" in numerous fields. I don't see an obvious common trait other than that they're usually describing a real-valued continuous function. What aspects are typical or universal or distinct? What structures can be polynomial? Some sources say that polynomials may be defined as conforming to a grammar of sorts, as basically a sum of products (assuming numeric algebras), but in some contexts they're expressed in an implicit equation with no distinct features other than having an = sign buried within. I can't judge how such scrambled equations were derived, whether they're a special subset of a larger class of function, or whether/when they can be uniquely mapped back to a common normalized form. This looks like either a frequently misused term or one overloaded with meanings, which makes it oddly hard to research.
PeroK https://www.mathsisfun.com/algebra/polynomials.html
Polynomial functions can be defined by a polynomial. It's that easy. A polynomial function doesn't have to be real-valued. Every polynomial function is continuous, but not every continuous function is a polynomial function. There are many interesting theorems that only apply to polynomial functions. Wikipedia has examples.
mfb said: Polynomial functions can be defined by a polynomial.
Indeed, I'm aware of the ubiquitously self-referential jargon surrounding this subject. Maybe there's a reason... It's largely why I've resorted to asking such a general question.
Drakkith
iteratee said:
It's not self-referential. It's a definition. A polynomial function is a type of function that is defined as being composed of a polynomial, which is a mathematical expression that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponentiation of variables. Other types of functions aren't polynomials, such as the function ##f(x) = e^x##, which is an exponential function; both types are more generally elementary functions. See here for a list of mathematical functions.
If we start with a single variable, then a polynomial (of degree ##n##) is a function of the form: $$p(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n$$ And that's all there is to it.
jbriggs444
PeroK said:
One can dig a bit deeper and distinguish between a polynomial function and a formal polynomial. With a polynomial function, one has a function (with a domain and a range and a mapping of elements in the domain to elements in the range) where the mapping matches a polynomial expression. One can add, subtract or multiply polynomial functions to get new polynomial functions. With a formal polynomial, one has the algebraic field from which the coefficients are drawn and a finite array of coefficients ##a_0## through ##a_n##. One can add, subtract or multiply these formal polynomials to get new formal polynomials. It is tempting to think that the distinction is merely one of naming. However, the truth is otherwise. Consider polynomials over the finite field with two elements (one and zero).
There are only 4 distinct functions over this domain: f(x) = 0, f(x) = 1, f(x) = x and f(x) = x - 1. All four are polynomial functions. However, there are infinitely many distinct formal polynomials.
Stephen Tashi
You have a legitimate concern, but I wouldn't call the usual way of defining polynomials "self-referential".
iteratee said: Some sources say that polynomials may be defined as conforming to a grammar of sorts, as basically a sum of products (assuming numeric algebras)
Yes, that's the standard definition.
iteratee said: but in some contexts they're expressed in an implicit equation with no distinct features other than having an = sign buried within.
I'm not sure what you mean. Of course, a text can say something like "Let ##f(x)## be a polynomial function of degree 3" without writing out ##f(x) = Ax^3 + Bx^2 + Cx + D##. Likewise a text may say "Let ##M## be a 3x3 matrix" without writing out the 9 entries of ##M##.
The legitimate concern about defining a polynomial the standard way (as a symbolic expression that obeys certain syntax) is that this type of definition can fail. For example, many USA secondary school textbooks define raising a number to a rational power by saying "##x^\frac{n}{m}## is defined to be the ##n##th power of the ##m##th root of ##x## when that ##m##th root exists in the real number system." By this definition ##(-1)^\frac{1}{3} = -1##, but ##(-1)^\frac{2}{6}## does not exist. So the definition does not define how to raise a number to a rational power; it only defines how to raise a number to a rational power when that power is denoted in a certain way. Using that definition we cannot conclude that if ##a## and ##b## are equal rational numbers then ##x^a = x^b##.
So, technically, the definition of a polynomial as a function that can be written in a certain symbolic way should be accompanied by proofs that the notation actually defines a unique function and that polynomials denoted by inequivalent symbolic expressions define different families of functions. Also, if we wish to assert that the function ##\sin(x)## is not a polynomial, we should prove this statement, not assume it on the basis that the function is denoted in a particular way.
Polynomials are usually introduced to students at an early stage of their education in algebra. At that stage they tend to assume that notation is unambiguous. Introductory texts don't deal with the technical questions involved in showing that definitions based on notation are actually proper mathematical definitions. In the case of defining polynomials in terms of notation, the definition does succeed, but only an advanced text would offer proofs of this.
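The function/formal-polynomial distinction jbriggs444 draws above is easy to check computationally over GF(2): there are only four functions from the field to itself, while the coefficient tuples keep multiplying. A small sketch (the tuple encoding of formal polynomials is just an illustrative choice):

```python
from itertools import product

F = (0, 1)  # the two elements of GF(2)

def as_function(coeffs):
    """Evaluate the formal polynomial (a0, a1, ...) at every point of GF(2)."""
    return tuple(sum(a * x**k for k, a in enumerate(coeffs)) % 2 for x in F)

# All formal polynomials of degree < 5 over GF(2): 2^5 = 32 coefficient tuples...
formal = list(product(F, repeat=5))
# ...but they induce only a handful of distinct functions on GF(2).
functions = {as_function(c) for c in formal}

print(len(formal), "formal polynomials of degree < 5")
print(len(functions), "distinct functions GF(2) -> GF(2):", sorted(functions))
```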
Different definitions of unit step signal While I was learning signal theory, I have come across different definitions for the unit step signal. For example. $u(t)=\begin{cases} 1 & t\geq0\\ 0 & t<0\\ \end{cases}$ $u(t)=\begin{cases} 1 & t>0\\ 0 & t<0\\ \frac{1}{2} & t=0\\ \end{cases}$ $u(t)=\begin{cases} 1 & t>0\\ 0 & t\leq0\\ \end{cases}$ However, in the context of LTI systems the outputs are not different. i.e LTI systems with these inputs produce same outputs. How can one justify this? continuous-signals JaalaPJaalaP $\begingroup$ Compute $\int_{-1}^1 u(t) \mathrm dt$ for the three definitions of $u(t)$ and report back on the difference. The integral is the output (at $t=0$) of a (non causal) short-term integrator, which is an example of an LTI system. $\endgroup$ – Dilip Sarwate $\begingroup$ @Dilip-I agree what u say but that does not justify for all LTI systems. $\endgroup$ – JaalaP $\begingroup$ And that is why I did not write an answer to your question but merely posted a comment. Note that what you wish to prove is false for all LTI systems that are pure delays. $\endgroup$ $\begingroup$ @Dilip- the integral result is 1 by the way, I forgot to add. Please don't mind. $\endgroup$ $\begingroup$ @Dilip-Thanks. So, we can say for all (LCCDE) continuous LTI systems this is true. I mean systems whose i/p-o/p relation can be represented as Linear Constant-Coefficient Differential Equation. $\endgroup$ $$\begin{align} u(t) &= \begin{cases} 1 & t\geq0\\ 0 & t<0\\ \end{cases} \\ \\ u(t) &= \begin{cases} 1 & t>0\\ 0 & t<0\\ \tfrac{1}{2} & t=0\\ \end{cases} \\ \\ u(t) &= \begin{cases} 1 & t>0\\ 0 & t\leq0\\ \end{cases} \end{align}$$ the reason these all produce the same output when $u(t)$ is used as an input or when $u(t)$ multiplies something else that doesn't have a dirac impulse $\delta(t)$ at $t=0$, is because this expression will find itself in an integral, specifically the convolution integral. there is zero difference between those three expressions in the integral. there is zero area underneath the point $u(t)\Big|_{t=0}$ no matter what the value is of $u(0)$. robert bristow-johnsonrobert bristow-johnson $\begingroup$ (+1) recently in a question I had to sample the unit step function $u(t)$ and I did it as $u[n]$ which didn't of course work. Then I redefined $u[0]=0.5$ then it worked ok. Of course $u(t)$ is not a bandlimited signal. But in some problems where one has to discretize an analog equation you find yourself in need of replacing $u(t)$ with something discrete? $\endgroup$ – Fat32 $\begingroup$ usually the discrete-time unit step is $$ u[n] = \begin{cases} 1 & n \ge 0\\ 0 & n < 0\\ \end{cases} \qquad n \in \mathbb{Z} \\ $$ which works with sampling the top definition of $u(t)$ if you wanted a bandlimited continuous-time unit step, it would have to be the integral of a sinc function and would be equal to $\tfrac12$ at $t=0$. or, i s'pose, you could reconstruct a $u(t)$ directly with the sum of sinc functions and the definition of the samples $u[n]$ above. $\endgroup$ – robert bristow-johnson The Wikipedia article on Heaviside has a section that discusses the 3 different conventions on $H(0)$: https://en.wikipedia.org/wiki/Heaviside_step_function Essentially, the step function is used in probability theory as well but is called something different. I believe that the "symmetric" $H(0)=\frac{1}{2}$ version is probably most suitable in Linear Systems Theory. 
In discussions of the Gibbs phenomenon, the truncated Fourier series interpolates discontinuities at the mid-value, which in this case is $\frac{1}{2}$. Also, if you look at the article, there are a number of limits of continuous functions, such as Erf, that approach the step as some parameter approaches some value. If you were to set up a physical experiment, you would never be able to realize an ideal step function; you would need infinite bandwidth. So in reality, the best you could achieve would be something like one of those continuous functions. $H(0)=1/2$ is a vestige of physical realizability.
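As a purely numerical illustration of the zero-area argument above (my own sketch, not part of the original answers), one can discretize the three step conventions on a fine grid and pass each through the same LTI system; the first-order low-pass impulse response below is an arbitrary illustrative choice. The outputs differ at most by an amount on the order of the grid spacing, which vanishes as the grid is refined.

```python
import numpy as np

dt = 1e-3
t = np.arange(-1.0, 3.0, dt)

# The three conventions for u(0): 1, 1/2, and 0.
u1 = np.where(t >= 0, 1.0, 0.0)
u2 = np.where(t > 0, 1.0, np.where(t < 0, 0.0, 0.5))
u3 = np.where(t > 0, 1.0, 0.0)

# Impulse response of an illustrative first-order low-pass LTI system, tau = 0.2 s.
tau = 0.2
h = np.where(t >= 0, np.exp(-t / tau) / tau, 0.0)

y1, y2, y3 = (dt * np.convolve(u, h)[:t.size] for u in (u1, u2, u3))

# The step definitions differ only at the single point t = 0, which carries no area,
# so the responses agree up to an O(dt) discretization effect.
print(np.max(np.abs(y1 - y2)), np.max(np.abs(y1 - y3)))
```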
Proving $\lfloor f(\lfloor x\rfloor)\rfloor=\lfloor f(x)\rfloor$

Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous increasing function such that $$\forall x\in\mathbb{R},\; f(x)\in\mathbb{Z}\implies x\in\mathbb{Z}.\quad (1)$$ I would like to prove that $\lfloor f(\lfloor x\rfloor)\rfloor=\lfloor f(x)\rfloor.$ Denote $m=\lfloor f(\lfloor x\rfloor)\rfloor$; if I am not mistaken, I just need to prove that $m\le f(x)<m+1.$ I get that $m\le f(x)$: we have $\lfloor x\rfloor\le x<\lfloor x\rfloor+1$, so that $f(\lfloor x\rfloor)\le f(x)\le f(\lfloor x\rfloor+1).$ By the definition of the floor function we also have $m\le f(\lfloor x\rfloor)<m+1$ and therefore $$m\le f(x).$$ I need to prove that $f(x)<m+1.$ I am not sure how to do that; I have not (yet) used the fact that $f$ is continuous or property $(1).$

I think you're on the right track. I would prove $f(x)<m+1$ by contradiction. Assume that $f(x)\geq m+1$. We also know that $f(\lfloor x \rfloor) < m+1$. Putting these two facts together, we know (by continuity of $f$ and the intermediate value theorem) that there must exist some $x_0\in[\lfloor x \rfloor, x]$ such that $f(x_0)=m+1$. We also know that $x_0\neq \lfloor x \rfloor$, since $f(x_0)\neq f(\lfloor x \rfloor)$, and we know from the property of $f$ that $x_0$ is an integer. We now separate two options:
$x_0 = x$, in which case $x$ is an integer and $\lfloor x \rfloor = x$, a contradiction since we know $x_0\neq \lfloor x \rfloor$;
$x_0 \neq x$, which means $x_0$ is an integer strictly between $\lfloor x\rfloor$ and $x$, a contradiction.
In other words, when we increase the value of $x$ in the expression $f(x)$ (starting from $\lfloor x\rfloor$), we cannot hit the value $m+1$ before the input increases to the next integer. – 5xum

"I tried to use the IVT as well, but not with $m+1$; thanks!" – user575807 Sep 25 '18 at 12:09
"+1. Is there a complete reference covering almost everything about the floor function? Regards." – mrs Sep 25 '18 at 12:12

Let us prove that $f(x) < m+1$. If $f(y) < m+1$ for all $y \ge x$, then we are done. Note that we can suppose that $x \notin \mathbb{Z}$, since the statement is true if $x$ is an integer. In the other case, there exists a first $y \ge x$ with $f(y) = m+1$. The condition implies that $y = n$ for some $n \in \mathbb{Z}$. Because we have assumed that $x$ is not an integer, we must have $x < n$. On the other hand, $y$ was chosen minimal with $f(y) = m+1$. Thus $f(x) < f(y) = m+1$. – p4sch

Another way to see the proof: let $x \in \mathbb{R}$. The function $f$ is increasing and continuous, so $f(]\lfloor x \rfloor, x ]) = ]f(\lfloor x \rfloor), f(x) ]$. If this interval contained an integer, that would mean there exists $y \in ]\lfloor x \rfloor, x ]$ such that $f(y) \in \mathbb{Z}$, so by the property of the function, $y \in \mathbb{Z}$. That is impossible because $]\lfloor x \rfloor, x ]$ contains no integer. So $]f(\lfloor x \rfloor), f(x) ]$ contains no integer, and therefore $\lfloor f(\lfloor x \rfloor) \rfloor = \lfloor f(x) \rfloor$. – TheSilverDoe

"Nicely done, thank you." – user575807 Sep 25 '18 at 12:10
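As a side note (not part of the original exchange), the identity is easy to spot-check numerically for a concrete function satisfying hypothesis (1). The choice f(x) = √x on x > 0 is mine: it is continuous and increasing there, and √x ∈ ℤ forces x to be a perfect square, hence an integer.

```python
import math
import random

# f(x) = sqrt(x) is continuous and increasing on (0, inf), and f(x) being an
# integer forces x to be a perfect square, so hypothesis (1) holds there.
f = math.sqrt

random.seed(0)
for _ in range(10_000):
    x = random.uniform(1.0, 1e6)
    assert math.floor(f(math.floor(x))) == math.floor(f(x))

print("floor(f(floor(x))) == floor(f(x)) held on all sampled points")
```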
Reanalyzing Head et al. (2015): No widespread p-hacking after all?
C.H.J. Hartgerink
Statistical significance seeking (i.e., p-hacking) is a serious problem for the validity of research, especially if it occurs frequently. Head et al. provided evidence for widespread p-hacking throughout the sciences, which would indicate that the validity of science is in doubt. Previous substantive concerns about their selection of p-values indicated that they were too liberal in selecting all reported p-values, which would result in including results that would not have been interesting targets for p-hacking. Despite this liberal selection of p-values, Head et al. found evidence for p-hacking, which raises the question of why p-hacking was detected despite being unlikely a priori. In this paper I reanalyze the original data and show that Head et al.'s results are an artefact of rounding in the reporting of p-values.

The accretion histories of brightest cluster galaxies from their stellar population g...
Paola Oliva-Altamirano _Sarah Brough, Jimmy, Kim-Vy Tran, Warrick J. Couch, Richard M. McDermid, Chris Lidman, Anja von der Linden, Rob Sharp_

Ontology-based Learning Content Management System in Programming Languages Domain
Anton Anikin
INTRODUCTION A learning content management system (LCMS) is a computer application that allows creating, editing and modifying learning content, as well as organizing, deleting and maintaining it from a central interface. The LCMS provides a complex platform for developing the learning content used in e-learning educational systems. Many LCMS packages available on the market also contain tools that resemble those used in learning management systems (LMS), and most assume that an LMS is already in place. The emphasis in an LCMS is on the ability for developers to create new learning content in accordance with learning objectives as well as the cognitive peculiarities and experience of the learner. Most content-management systems have several aspects in common: a focus on creating, developing, and managing content for on-line courses, with far less emphasis placed on managing the experience of learners; a multi-user environment that allows several developers to interact and exchange tools; and a learning object repository containing learning materials, which are commonly used components that are archived so as to be searchable and adaptable to any on-line course. A new trend in LCMS development is the use of the Smart Learning Content (SLC) approach. Apart from adaptive personalization and sophisticated forms of feedback, smart learning content often also authenticates the user, models the learner, aggregates data, and supports learning analytics. That is especially important in computer science education because of the usefulness of program and algorithm visualization tools, automatic assessment, coding tools, algorithm and program simulation tools, problem-solving tools and other learning resources that process input data provided by the learner and generate customized output. The same approach can be used to generate adaptive learning content based on some content elements. The creation of SLC therefore implies personalized search of learning resources and adaptive visualization of information retrieval. In this paper we describe an ontology-based learning content management system which allows creating new smart learning content in the programming languages domain in the form of a personal learning collection.

Use of the Temperament and Character Inventory to predict response to repetitive tran...
Shan H. Siddiqi ABSTRACT OBJECTIVE: We investigated the utility of the Temperament and Character Inventory (TCI) in predicting antidepressant response to rTMS. BACKGROUND: Although rTMS of the dorsolateral prefrontal cortex (DLPFC) is an established antidepressant treatment, little is known about predictors of response. The TCI measures multiple personality dimensions (harm avoidance, novelty seeking, reward dependence, persistence, self-directedness, self-transcendence, and cooperativeness), some of which have predicted response to antidepressants and cognitive-behavioral therapy. A previous study suggested a possible association between higher self-directedness and rTMS response specifically in melancholic depression, although this was limited by the fact that melancholic depression is associated with a limited range of TCI profiles. METHODS: Sixteen patients in a major depressive episode completed a TCI prior to a clinical course of rTMS over the DLPFC. Treatment response was defined as ≥50% decrease in Hamilton Depression Rating Scale (HDRS). Baseline scores on each TCI dimension were compared between responders and non-responders via paired t-test with Bonferroni correction. Temperament/character scores were also subjected to regression analysis against percentage improvement in HDRS. RESULTS: Ten of the sixteen patients responded to rTMS. T-scores for Persistence were significantly higher in responders (48.3, 95% CI 40.9-55.7) than in non-responders (35.3, 95% CI 29.2-39.9) (p=0.006). Linear regression revealed a correlation between persistence score and percentage improvement in HRDS (R=0.65±0.29). CONCLUSIONS: Higher persistence predicted antidepressant response to rTMS. This may be explained by rTMS-induced enhancement of cortical excitability, which has been found to be decreased in patients with high persistence. Personality assessment that includes measurement of TCI persistence may be a useful component of precision medicine initiatives in rTMS for depression. The human experience with intravenous levodopa ABSTRACT OBJECTIVE: To compile a comprehensive summary of published human experience with levodopa given intravenously, with a focus on information required by regulatory agencies. BACKGROUND: While safe intravenous use of levodopa has been documented for over 50 years, regulatory supervision for pharmaceuticals given by a route other than that approved by the U.S. Food and Drug Administration (FDA) has become increasingly cautious. If delivering a drug by an alternate route raises the risk of adverse events, an investigational new drug (IND) application is required, including a comprehensive review of toxicity data. METHODS: Over 200 articles referring to intravenous levodopa (IVLD) were examined for details of administration, pharmacokinetics, benefit and side effects. RESULTS: We identified 144 original reports describing IVLD use in humans, beginning with psychiatric research in 1959-1960 before the development of peripheral decarboxylase inhibitors. At least 2781 subjects have received IVLD, and reported outcomes include parkinsonian signs, sleep variables, hormones, hemodynamics, CSF amino acid composition, regional cerebral blood flow, cognition, perception and complex behavior. Mean pharmacokinetic variables were summarized for 49 healthy subjects and 190 with Parkinson disease. Side effects were those expected from clinical experience with oral levodopa and dopamine agonists. No articles reported deaths or induction of psychosis. 
CONCLUSION: At least 2781 patients have received i.v. levodopa with a safety profile comparable to that seen with oral administration.

Orthostatic stability with intravenous levodopa
Intravenous levodopa has been used in a multitude of research studies due to its more predictable pharmacokinetics compared to the oral form, which is used frequently as a treatment for Parkinson's disease (PD). Levodopa is the precursor for dopamine, and intravenous dopamine would strongly affect vascular tone, but peripheral decarboxylase inhibitors are intended to block such effects. Pulse and blood pressure, with orthostatic changes, were recorded before and after intravenous levodopa or placebo (after oral carbidopa) in 13 adults with a chronic tic disorder and 16 tic-free adult control subjects. Levodopa caused no statistically or clinically significant changes in blood pressure or pulse. These data add to previous data that support the safety of i.v. levodopa when given with adequate peripheral inhibition of DOPA decarboxylase.

An Atlas of Human Kinase Regulation
David Ochoa
The coordinated regulation of protein kinases is a rapid mechanism that integrates diverse cues and swiftly determines appropriate cellular responses. However, our understanding of cellular decision-making has been limited by the small number of simultaneously monitored phospho-regulatory events. Here, we have estimated changes in activity in 215 human kinases in 399 conditions derived from a large compilation of phosphopeptide quantifications. This atlas identifies commonly regulated kinases as those that are central in the signaling network and defines the logic relationships between kinase pairs. Co-regulation along the conditions predicts kinase-complex and kinase-substrate associations. Additionally, the kinase regulation profile acts as a molecular fingerprint to identify related and opposing signaling states. Using this atlas, we identified essential mediators of stem cell differentiation, modulators of Salmonella infection and new targets of AKT1. This provides a global view of human phosphorylation-based signaling and the necessary context to better understand kinase-driven decision-making.

Stochastic inversion workflow using the gradual deformation in order to predict and m...
Lorenzo Perozzi
ABSTRACT Due to budget constraints, CCS in deep saline aquifers is often carried out using only one injector well and one control well, which seriously limits inferring the dynamics of the CO_2 plume. In such cases, monitoring of the CO_2 plume relies only on geological assumptions or indirect data. In this paper, we present a new two-step stochastic P- and S-wave, density and porosity inversion approach that allows reliable monitoring of the CO_2 plume using time-lapse VSP. In the first step, we compute several sets of stochastic models of the elastic properties using conventional sequential Gaussian cosimulations. The realizations within a set of static models are then iteratively combined using a modified gradual deformation optimization technique, with the difference between computed and observed raw traces as the objective function. In the second step, these static models serve as input for CO_2 injection history matching using the same modified gradual deformation scheme. At each gradual deformation step, the CO_2 injection is simulated and the corresponding full-wave traces are computed and compared to the observed data.
The method has been tested on a synthetic heterogeneous saline aquifer model mimicking the environment of the CO_2 CCS pilot in Becancour area, Quebec. The results show that the set of optimized models of P- and S-wave, density and porosity showed an improved structural similarity with the reference models compared to conventional simulations. The Resource Identification Initiative: A cultural shift in publishing Anita Bandrowski ABSTRACT A central tenet in support of research reproducibility is the ability to uniquely identify research resources, i.e., reagents, tools, and materials that are used to perform experiments. However, current reporting practices for research resources are insufficient to identify the exact resources that are reported or answer basic questions such as "How did other studies use resource X?". To address this issue, the Resource Identification Initiative was launched as a pilot project to improve the reporting standards for research resources in the methods sections of papers and thereby improve identifiability and reproducibility. The pilot engaged over 25 biomedical journal editors from most major publishers, as well as scientists and funding officials. Authors were asked to include Research Resource Identifiers (RRIDs) in their manuscripts prior to publication for three resource types: antibodies, model organisms, and tools (i.e. software and databases). RRIDs are assigned by an authoritative database, for example a model organism database, for each type of resource. To make it easier for authors to obtain RRIDs, resources were aggregated from the appropriate databases and their RRIDs made available in a central web portal (scicrunch.org/resources). RRIDs meet three key criteria: they are machine readable, free to generate and access, and are consistent across publishers and journals. The pilot was launched in February of 2014 and over 300 papers have appeared that report RRIDs. The number of journals participating has expanded from the original 25 to more than 40. Here, we present an overview of the pilot project and its outcomes to date. We show that authors are able to identify resources and are supportive of the goals of the project. Identifiability of the resources post-pilot showed a dramatic improvement for all three resource types, suggesting that the project has had a significant impact on reproducibility relating to research resources. Rapid Environmental Quenching of Satellite Dwarf Galaxies in the Local Group Andrew Wetzel In the Local Group, nearly all of the dwarf galaxies ($\mstar\lesssim10^9\msun$) that are satellites within $300\kpc$ (the virial radius) of the Milky Way (MW) and Andromeda (M31) have quiescent star formation and little-to-no cold gas. This contrasts strongly with comparatively isolated dwarf galaxies, which are almost all actively star-forming and gas-rich. This near dichotomy implies a _rapid_ transformation after falling into the halos of the MW or M31. We combine the observed quiescent fractions for satellites of the MW and M31 with the infall times of satellites from the ELVIS suite of cosmological simulations to determine the typical timescales over which environmental processes within the MW/M31 halos remove gas and quench star formation in low-mass satellite galaxies. The quenching timescales for satellites with $\mstar<10^8\msun$ are short, $\lesssim2\gyr$, and decrease at lower $\mstar$. 
These quenching timescales can be $1-2\gyr$ longer if environmental preprocessing in lower-mass groups prior to MW/M31 infall is important. We compare with timescales for more massive satellites from previous works, exploring satellite quenching across the observable range of $\mstar=10^{3-11}\msun$. The environmental quenching timescale increases rapidly with satellite $\mstar$, peaking at $\approx9.5\gyr$ for $\mstar\sim10^9\msun$, and rapidly decreases at higher $\mstar$ to less than $5\gyr$ at $\mstar>5\times10^9\msun$. Thus, satellites with $\mstar\sim10^9\msun$, similar to the Magellanic Clouds, exhibit the longest environmental quenching timescales.

Ebola virus epidemiology, transmission, and evolution during seven months in Sierra L...
SUMMARY The 2013-2015 Ebola virus disease (EVD) epidemic is caused by the Makona variant of Ebola virus (EBOV). Early in the epidemic, genome sequencing provided insights into virus evolution and transmission, and offered important information for outbreak response. Here we analyze sequences from 232 patients sampled over 7 months in Sierra Leone, along with 86 previously released genomes from earlier in the epidemic. We confirm sustained human-to-human transmission within Sierra Leone and find no evidence for import or export of EBOV across national borders after its initial introduction. Using high-depth replicate sequencing, we observe both host-to-host transmission and recurrent emergence of intrahost genetic variants. We trace the increasing impact of purifying selection in suppressing the accumulation of nonsynonymous mutations over time. Finally, we note changes in the mucin-like domain of EBOV glycoprotein that merit further investigation. These findings clarify the movement of EBOV within the region and describe viral evolution during prolonged human-to-human transmission.

Top-quark electroweak couplings at the FCC-ee
Patrick Janot
INTRODUCTION The design study of the Future Circular Colliders (FCC) in a 100-km ring in the Geneva area started at CERN at the beginning of 2014, as an option for post-LHC particle accelerators. The study has an emphasis on proton-proton and electron-positron high-energy frontier machines. In the current plans, the first step of the FCC physics programme would exploit a high-luminosity ${\rm e^+e^-}$ collider called FCC-ee, with centre-of-mass energies ranging from below the Z pole to the ${\rm t\bar t}$ threshold and beyond. A first look at the physics case of the FCC-ee can be found in Ref. In this first look, the focus regarding top-quark physics was on precision measurements of the top-quark mass, width, and Yukawa coupling through a scan of the ${\rm t\bar t}$ production threshold, with $\sqrt{s}$ between 340 and 350 GeV. The expected precision on the top-quark mass was in turn used, together with the outstanding precisions on the Z peak observables and on the W mass, in a global electroweak fit to set constraints on weakly-coupled new physics up to a scale of 100 TeV. Although not studied in the first look, measurements of the top-quark electroweak couplings are of interest, as new physics might also show up via significant deviations of these couplings with respect to their standard-model predictions. Theories in which the top quark and the Higgs boson are composite lead to such deviations. The inclusion of a direct measurement of the ttZ coupling in the global electroweak fit is therefore likely to further constrain these theories.
It has been claimed that both a centre-of-mass energy well beyond the top-quark pair production threshold and a large longitudinal polarization of the incoming electron and positron beams are crucially needed to independently access the ttγ and the ttZ couplings for both chirality states of the top quark. In Ref., it is shown that the measurements of the total event rate and the forward-backward asymmetry of the top quark, with 500 ${\rm fb}^{-1}$ at $\sqrt{s}=500$ GeV and with beam polarizations of ${\cal P} = \pm 0.8$, ${\cal P}^\prime = \mp 0.3$, allow for this distinction. The aforementioned claim is revisited in the present study. The sensitivity to the top-quark electroweak couplings is estimated here with an optimal-observable analysis of the lepton angular and energy distributions of over a million events from ${\rm t\bar t}$ production at the FCC-ee, in the $\ell \nu {\rm q \bar q b \bar b}$ final states (with $\ell = {\rm e}$ or μ), without incoming beam polarization and with a centre-of-mass energy not significantly above the ${\rm t\bar t}$ production threshold. Such a sensitivity can be understood from the fact that the top-quark polarization arising from its coupling to the Z is maximally transferred to the final-state particles via the weak top-quark decay ${\rm t \to W b}$, which has a 100% branching fraction: the lack of initial polarization is compensated by the presence of substantial final-state polarization, and by a larger integrated luminosity. A similar situation was encountered at LEP, where the measurement of the total rate of ${\rm Z} \to \tau^+\tau^-$ events and of the tau polarization was sufficient to determine the tau couplings to the Z, regardless of initial-state polarization. This letter is organized as follows. First, the reader is briefly reminded of the theoretical framework. Next, the statistical analysis of the optimal observables is described, and realistic estimates for the top-quark electroweak coupling sensitivities are obtained as a function of the centre-of-mass energy at the FCC-ee. Finally, the results are discussed, and prospects for further improvements are given.

A new method for identifying the Pacific-South American pattern and its influence on...
Damien Irving
The Pacific-South American (PSA) pattern is an important mode of climate variability in the mid-to-high southern latitudes. It is widely recognized as the primary mechanism by which the El Niño-Southern Oscillation (ENSO) influences the south-east Pacific and south-west Atlantic, and in recent years has also been suggested as a mechanism by which longer-term tropical sea surface temperature trends can influence the Antarctic climate. This study presents a novel methodology for objectively identifying the PSA pattern. By rotating the global coordinate system such that the equator (a great circle) traces the approximate path of the pattern, the identification algorithm utilizes Fourier analysis as opposed to a traditional Empirical Orthogonal Function approach. The climatology arising from the application of this method to ERA-Interim reanalysis data reveals that the PSA pattern has a strong influence on temperature and precipitation variability over West Antarctica and the Antarctic Peninsula, and on sea ice variability in the adjacent Amundsen, Bellingshausen and Weddell Seas.
Identified seasonal trends towards the negative phase of the PSA pattern are consistent with warming observed over the Antarctic Peninsula during autumn, but are inconsistent with observed winter warming over West Antarctica. Only a weak relationship is identified between the PSA pattern and ENSO, which suggests that the pattern might be better conceptualized as preferred regional atmospheric response to various external (and internal) forcings. The spin rate of pre-collapse stellar cores: wave driven angular momentum transport i... Jim Fuller The core rotation rates of massive stars have a substantial impact on the nature of core collapse supernovae and their compact remnants. We demonstrate that internal gravity waves (IGW), excited via envelope convection during a red supergiant phase or during vigorous late time burning phases, can have a significant impact on the rotation rate of the pre-SN core. In typical (10 M⊙ ≲ M ≲ 20 M⊙) supernova progenitors, IGW may substantially spin down the core, leading to iron core rotation periods $P_{\rm min,Fe} \gtrsim 50 \, {\rm s}$. Angular momentum (AM) conservation during the supernova would entail minimum NS rotation periods of $P_{\rm min,NS} \gtrsim 3 \, {\rm ms}$. In most cases, the combined effects of magnetic torques and IGW AM transport likely lead to substantially longer rotation periods. However, the stochastic influx of AM delivered by IGW during shell burning phases inevitably spin up a slowly rotating stellar core, leading to a maximum possible core rotation period. We estimate maximum iron core rotation periods of $P_{\rm max,Fe} \lesssim 10^4 \, {\rm s}$ in typical core collapse supernova progenitors, and a corresponding spin period of $P_{\rm max, NS} \lesssim 400 \, {\rm ms}$ for newborn neutron stars. This is comparable to the typical birth spin periods of most radio pulsars. Stochastic spin-up via IGW during shell O/Si burning may thus determine the initial rotation rate of most neutron stars. For a given progenitor, this theory predicts a Maxwellian distribution in pre-collapse core rotation frequency that is uncorrelated with the spin of the overlying envelope. Software Use in Astronomy: An Informal Survey Ivelina Momcheva INTRODUCTION Much of modern Astronomy research depends on software. Digital images and numerical simulations are central to the work of most astronomers today, and anyone who is actively involved in astronomy research has a variety of software techniques in their toolbox. Furthermore, the sheer volume of data has increased dramatically in recent years. The efficient and effective use of large data sets increasingly requires more than rudimentary software skills. Finally, as astronomy moves towards the open code model, propelled by pressure from funding agencies and journals as well as the community itself, readability and reusability of code will become increasingly important (Figure [fig:xkcd]). Yet we know few details about the software practices of astronomers. In this work we aim to gain a greater understanding of the prevalence of software tools, the demographics of their users, and the level of software training in astronomy. The astronomical community has, in the past, provided funding and support for software tools intended for the wider community. Examples of this include the Goddard IDL library (funded by the NASA ADP), IRAF (supported and developed by AURA at NOAO), STSDAS (supported and developed by STScI), and the Starlink suite (funded by PPARC). 
As the field develops, new tools are required and we need to focus our efforts on ones that will have the widest user base and the lowest barrier to utilization. For example, as our work here shows, the much larger astronomy user base of Python relative to the language R suggests that tools in the former language are likely to get many more users and contributors than the latter. More recently, there has been a growing discussion of the importance of data analysis and software development training in astronomy (e.g., the special sessions at the 225th AAS, "Astroinformatics and Astrostatistics in Astronomical Research Steps Towards Better Curricula" and "Licensing Astrophysics Codes", which were standing room only). Although astronomy and astrophysics went digital long ago, the formal training of astronomy and physics students rarely involves software development or data-intensive analysis techniques. Such skills are increasingly critical in the era of ubiquitous "Big Data" (e.g., the 2015 NOAO Big Data conference). Better information on the needs of researchers as well as the current availability of training opportunities (or lack thereof) can be used to inform, motivate and focus future efforts towards improving this aspect of the astronomy curriculum. In 2014 the Software Sustainability Institute carried out an inquiry into the software use of researchers in the UK (see also the associated presentation). This survey provides useful context for software usage by researchers, as well as a useful definition of "research software": Software that is used to generate, process or analyze results that you intend to appear in a publication (either in a journal, conference paper, monograph, book or thesis). Research software can be anything from a few lines of code written by yourself, to a professionally developed software package. Software that does not generate, process or analyze results - such as word processing software, or the use of a web search - does not count as 'research software' for the purposes of this survey. However, this survey was limited to researchers at UK institutions. More importantly, it was not focused on astronomers, who may have quite different software practices from scientists in other fields. Motivated by these issues and related discussions during the .Astronomy 6 conference, we created a survey to explore software use in astronomy. In this paper, we discuss the methodology of the survey in §[sec:datamethods], the results from the multiple-choice sections in §[sec:res] and the free-form comments in §[sec:comments]. In §[sec:ssicompare] we compare our results to the aforementioned SSI survey and in §[sec:conc] we conclude. We have made the anonymized results of the survey and the code to generate the summary figures available at https://github.com/eteq/software_survey_analysis. This repository may be updated in the future if a significant number of new respondents fill out the survey[1]. [1] http://tinyurl.com/pvyqw59

A minimum standard for publishing computational results in the weather and climate sc...
Weather and climate science has undergone a computational revolution in recent decades, to the point where all modern research relies heavily on software and code. Despite this profound change in the research methods employed by weather and climate scientists, the reporting of computational results has changed very little in relevant academic journals.
This lag has led to something of a reproducibility crisis, whereby it is impossible to replicate and verify most of today's published computational results. While it is tempting to simply decry the slow response of journals and funding agencies in the face of this crisis, there are very few examples of reproducible weather and climate research upon which to base new communication standards. In an attempt to address this deficiency, this essay describes a procedure for reporting computational results that was employed in a recent _Journal of Climate_ paper. The procedure was developed to be consistent with recommended computational best practices and seeks to minimize the time burden on authors, which has been identified as the most important barrier to publishing code. It should provide a starting point for weather and climate scientists looking to publish reproducible research, and it is proposed that journals could adopt the procedure as a minimum standard.

IEDA EarthChem: Supporting the sample-based geochemistry community with data resource...
Leslie Hsu
ABSTRACT Integrated sample-based geochemical measurements enable new scientific discoveries in the Earth sciences. However, integration of geochemical data is difficult because of the variety of sample types and measured properties, idiosyncratic analytical procedures, and the time commitment required for adequate documentation. To support geochemists in integrating and reusing geochemical data, EarthChem, part of IEDA (Integrated Earth Data Applications), develops and maintains a suite of data systems to serve the scientific community. The EarthChem Library focuses on dataset publication, accessibility, and linking with other sources. Topical synthesis databases (e.g., PetDB, SedDB, Geochron) integrate data from several sources and preserve metadata associated with analyzed samples. The EarthChem Portal optimizes data discovery and provides analysis tools. Contributing authors obtain citable DOI identifiers, usage reports of their data, and increased discoverability. The community benefits from open access to data, leading to accelerated scientific discoveries. Growing citations of EarthChem systems demonstrate its success.

Parameter estimation on gravitational waves from neutron-star binaries with spinning...
Ben Farr
INTRODUCTION As we enter the advanced-detector era of ground-based gravitational-wave (GW) astronomy, it is critical that we understand the abilities and limitations of the analyses we are prepared to conduct. Of the many predicted sources of GWs, binary neutron-star (BNS) coalescences are paramount; their progenitors have been directly observed, and the advanced detectors will be sensitive to their GW emission up to ∼400 Mpc away. When analyzing a GW signal from a circularized compact binary merger, strong degeneracies exist between parameters describing the binary (e.g., distance and inclination). To properly estimate any particular parameter(s) of interest, the marginal distribution is estimated by integrating the joint posterior probability density function (PDF) over all other parameters. In this work, we sample the posterior PDF using software implemented in the LALINFERENCE library. Specifically, we use results from LALINFERENCE_NEST, a nested sampling algorithm, and LALINFERENCE_MCMC, a Markov-chain Monte Carlo algorithm \citep[chapter 12]{Gregory2005}. Previous studies of BNS signals have largely assessed parameter constraints assuming negligible neutron-star (NS) spin, restricting models to nine parameters.
This simplification has largely been due to computational constraints, but the slow spin of NSs in short-period BNS systems observed to date \citep[e.g.,][]{Mandel_2010} has also been used as justification. However, proper characterization of compact binary sources _must_ account for the possibility of non-negligible spin; otherwise parameter estimates will be biased . This bias can potentially lead to incorrect conclusions about source properties and even misidentification of source classes. Numerous studies have looked at the BNS parameter estimation abilities of ground-based GW detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory \citep[aLIGO;][]{Aasi_2015} and Advanced Virgo \citep[AdV;][]{Acernese_2014} detectors. assessed localization abilities on a simulated non-spinning BNS population. looked at several potential advanced-detector networks and quantified the parameter-estimation abilities of each network for a signal from a fiducial BNS with non-spinning NSs. demonstrated the ability to characterize signals from non-spinning BNS sources with waveform models for spinning sources using Bayesian stochastic samplers in the LALINFERENCE library . used approximate methods to quantify the degeneracy between spin and mass estimates, assuming the compact objects' spins are aligned with the orbital angular momentum of the binary \citep[but see][]{Haster_2015}. simulated a collection of loud signals from non-spinning BNS sources in several mass bins and quantified parameter estimation capabilities in the advanced-detector era using non-spinning models. introduced precession from spin–orbit coupling and found that the additional richness encoded in the waveform could reduce the mass–spin degeneracy, helping BNSs to be distinguished from NS–black hole (BH) binaries. conducted a similar analysis of a large catalog of sources and found that it is difficult to infer the presence of a mass gap between NSs and BHs , although, this may still be possible using a population of a few tens of detections . Finally, and the follow-on represent an (almost) complete end-to-end simulation of BNS detection and characterization during the first 1–2 years of the advanced-detector era. These studies simulated GWs from an astrophysically motivated BNS population, then detected and characterized sources using the search and follow-up tools that are used for LIGO–Virgo data analysis . The final stage of the analysis missing from these studies is the computationally expensive characterization of sources while accounting for the compact objects' spins and their degeneracies with other parameters. The present work is the final step of BNS characterization for the simulations using waveforms that account for the effects of NS spin. We begin with a brief introduction to the source catalog used for this study and in section [sec:sources]. Then, in section [sec:spin] we describe the results of parameter estimation from a full analysis that includes spin. In section [sec:mass] we look at mass estimates in more detail and spin-magnitude estimates in section [sec:spin-magnitudes]. In section [sec:extrinsic] we consider the estimation of extrinsic parameters: sky position (section [sec:sky]) and distance (section [sec:distance]), which we do not expect to be significantly affected by the inclusion of spin in the analysis templates. We summarize our findings in section [sec:conclusions]. 
A comparison of computational costs for spinning and non-spinning parameter estimation is given in appendix [ap:CPU]. A novel approach to diagnosing Southern Hemisphere planetary wave activity and its in... Southern Hemisphere mid-to-upper tropospheric planetary wave activity is characterized by the superposition of two zonally-oriented, quasi-stationary waveforms: zonal wavenumber one (ZW1) and zonal wavenumber three (ZW3). Previous studies have tended to consider these waveforms in isolation and with the exception of those studies relating to sea ice, little is known about their impact on regional climate variability. We take a novel approach to quantifying the combined influence of ZW1 and ZW3, using the strength of the hemispheric meridional flow as a proxy for zonal wave activity. Our methodology adapts the wave envelope construct routinely used in the identification of synoptic-scale Rossby wave packets and improves on existing approaches by allowing for variations in both wave phase and amplitude. While ZW1 and ZW3 are both prominent features of the climatological circulation, the defining feature of highly meridional hemispheric states is an enhancement of the ZW3 component. Composites of the mean surface conditions during these highly meridional, ZW3-like anomalous states (i.e. months of strong planetary wave activity) reveal large sea ice anomalies over the Amundsen and Bellingshausen Seas during autumn and along much of the East Antarctic coastline throughout the year. Large precipitation anomalies in regions of significant topography (e.g. New Zealand, Patagonia, coastal Antarctica) and anomalously warm temperatures over much of the Antarctic continent were also associated with strong planetary wave activity. The latter has potentially important implications for the interpretation of recent warming over West Antarctica and the Antarctic Peninsula. Satellite Dwarf Galaxies in a Hierarchical Universe: Infall Histories, Group Preproce... In the Local Group, almost all satellite dwarf galaxies that are within the virial radius of the Milky Way (MW) and M31 exhibit strong environmental influence. The orbital histories of these satellites provide the key to understanding the role of the MW/M31 halo, lower-mass groups, and cosmic reionization on the evolution of dwarf galaxies. We examine the virial-infall histories of satellites with $\mstar=10^{3-9} \msun$ using the ELVIS suite of cosmological zoom-in dissipationless simulations of 48 MW/M31-like halos. Satellites at z = 0 fell into the MW/M31 halos typically $5-8 \gyr$ ago at z = 0.5 − 1. However, they first fell into any host halo typically $7-10 \gyr$ ago at z = 0.7 − 1.5. This difference arises because many satellites experienced "group preprocessing" in another host halo, typically of $\mvir \sim 10^{10-12} \msun$, before falling into the MW/M31 halos. Satellites with lower-mass and/or those closer to the MW/M31 fell in earlier and are more likely to have experienced group preprocessing; half of all satellites with $\mstar < 10^6 \msun$ were preprocessed in a group. Infalling groups also drive most satellite-satellite mergers within the MW/M31 halos. Finally, _none_ of the surviving satellites at z = 0 were within the virial radius of their MW/M31 halo during reionization (z > 6), and only <4% were satellites of any other host halo during reionization. 
Thus, effects of cosmic reionization versus host-halo environment on the formation histories of surviving dwarf galaxies in the Local Group occurred at distinct epochs and are separable in time. Distinguishing disorder from order in irreversible decay processes Jonathan Nichols Fluctuating rate coefficients are necessary when modeling disordered kinetic processes with mass-action rate equations. However, measuring the fluctuations of rate coefficients is a challenge, particularly for nonlinear rate equations. Here we present a measure of the total disorder in irreversible decay i A → products, i = 1, 2, 3, …n governed by (non)linear rate equations – the inequality between the time-integrated square of the rate coefficient (multiplied by the time interval of interest) and the square of the time-integrated rate coefficient. We apply the inequality to empirical models for statically and dynamically disordered kinetics with i ≥ 2. These models serve to demonstrate that the inequality quantifies the cumulative variations in a rate coefficient, and the equality is a bound only satisfied when the rate coefficients are constant in time. Real-space grids and the Octopus code as tools for the development of new simulation... Xavier Andrade Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage at the time of implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems. The "Paper" of the Future Alyssa Goodman _A 5-minute video demonstration of this paper is available at this YouTube link._ PREAMBLE A variety of research on human cognition demonstrates that humans learn and communicate best when more than one processing system (e.g. visual, auditory, touch) is used. And, related research also shows that, no matter how technical the material, most humans also retain and process information best when they can put a narrative "story" to it. So, when considering the future of scholarly communication, we should be careful not to do blithely away with the linear narrative format that articles and books have followed for centuries: instead, we should enrich it. Much more than text is used to communicate in Science. Figures, which include images, diagrams, graphs, charts, and more, have enriched scholarly articles since the time of Galileo, and ever-growing volumes of data underpin most scientific papers. When scientists communicate face-to-face, as in talks or small discussions, these figures are often the focus of the conversation. In the best discussions, scientists have the ability to manipulate the figures, and to access underlying data, in real-time, so as to test out various what-if scenarios, and to explain findings more clearly. 
THIS SHORT ARTICLE EXPLAINS—AND SHOWS WITH DEMONSTRATIONS—HOW SCHOLARLY "PAPERS" CAN MORPH INTO LONG-LASTING RICH RECORDS OF SCIENTIFIC DISCOURSE, enriched with deep data and code linkages, interactive figures, audio, video, and commenting. Compressed Sensing for the Fast Computation of Matrices: Application to Molecular Vib... Jacob Sanders This article presents a new method to compute matrices from numerical simulations based on the ideas of sparse sampling and compressed sensing. The method is useful for problems where the determination of the entries of a matrix constitutes the computational bottleneck. We apply this new method to an important problem in computational chemistry: the determination of molecular vibrations from electronic structure calculations, where our results show that the overall scaling of the procedure can be improved in some cases. Moreover, our method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations, resulting in a significant 3\(\times\) speed-up in actual calculations.
Graph Theory Tutorial

In the domain of mathematics and computer science, graph theory is the study of graphs, which concerns the relationships among edges and vertices. It is a popular subject with applications in computer science, information technology, biosciences, mathematics, and linguistics, to name a few. Without further ado, let us start with defining a graph.

What is a Graph?

A graph is a pictorial representation of a set of objects where some pairs of objects are connected by links. The interconnected objects are represented by points termed vertices, and the links that connect the vertices are called edges. Formally, a graph is a pair of sets (V, E), where V is the set of vertices and E is the set of edges connecting the pairs of vertices. Take a look at the following graph −

In the above graph, V = {a, b, c, d, e} and E = {ab, ac, bd, cd, de}.

Applications of Graph Theory

Graph theory has its applications in diverse fields of engineering −

Electrical Engineering − The concepts of graph theory are used extensively in designing circuit connections. The different ways of organizing connections are called topologies. Some examples of topologies are star, bridge, series, and parallel topologies.

Computer Science − Graph theory is used for the study of algorithms, for example Kruskal's Algorithm, Prim's Algorithm, and Dijkstra's Algorithm.

Computer Network − The relationships among interconnected computers in a network follow the principles of graph theory.

Science − The molecular structure and chemical structure of a substance, the DNA structure of an organism, etc., are represented by graphs.

Linguistics − The parse tree of a language and the grammar of a language use graphs.

General − Routes between cities can be represented using graphs. Hierarchically ordered information, such as a family tree, can be depicted using a special type of graph called a tree.

A graph is a diagram of points and lines connected to the points. It has at least one line joining a set of two vertices, with no vertex connecting itself. The concept of graphs in graph theory is built on some basic terms such as point, line, vertex, edge, degree of vertices, and properties of graphs. Here, in this chapter, we will cover these fundamentals of graph theory.

A point is a particular position in a one-dimensional, two-dimensional, or three-dimensional space. For better understanding, a point can be denoted by a letter. It can be represented with a dot. Here, the dot is a point named 'a'.

A line is a connection between two points. It can be represented with a solid line. Here, 'a' and 'b' are the points. The link between these two points is called a line.

A vertex is a point where multiple lines meet. It is also called a node.
Similar to points, a vertex is also denoted by a letter. Here, the vertex is named with the letter 'a'.

An edge is the mathematical term for a line that connects two vertices. Many edges can be formed from a single vertex, but an edge cannot be formed without a vertex: there must be a starting vertex and an ending vertex for an edge. Here, 'a' and 'b' are the two vertices, and the link between them is called an edge.

A graph 'G' is defined as G = (V, E), where V is the set of all vertices and E is the set of all edges in the graph. In the above example, ab, ac, cd, and bd are the edges of the graph, and a, b, c, and d are the vertices of the graph. In this graph, there are four vertices a, b, c, and d, and four edges ab, ac, ad, and cd.

In a graph, if an edge is drawn from a vertex to itself, it is called a loop. In the above graph, V is a vertex that has an edge (V, V) forming a loop. In this graph, there are two loops, which are formed at vertex a and vertex b.

Degree of Vertex

The degree of a vertex V is the number of vertices adjacent to V. Notation − deg(V). In a simple graph with n vertices, the degree of any vertex satisfies deg(v) ≤ n – 1 ∀ v ∈ G. A vertex can form an edge with all other vertices except itself, so the degree of a vertex can be at most the number of vertices in the graph minus 1. This 1 accounts for the vertex itself, as it cannot form a loop by itself. If there is a loop at any of the vertices, then the graph is not a simple graph. The degree of a vertex can be considered for two cases of graphs − undirected graphs and directed graphs.

Degree of Vertex in an Undirected Graph

An undirected graph has no directed edges. Consider the following examples. Take a look at the following graph − In the above undirected graph, deg(a) = 2, as there are 2 edges meeting at vertex 'a'; deg(b) = 3, as there are 3 edges meeting at vertex 'b'; deg(c) = 1, as there is 1 edge formed at vertex 'c', so 'c' is a pendent vertex; deg(d) = 2, as there are 2 edges meeting at vertex 'd'; and deg(e) = 0, as there are 0 edges formed at vertex 'e', so 'e' is an isolated vertex. In the second graph, deg(a) = 2, deg(b) = 2, deg(c) = 2, deg(d) = 2, and deg(e) = 0. The vertex 'e' is an isolated vertex, and the graph does not have any pendent vertex.

Degree of Vertex in a Directed Graph

In a directed graph, each vertex has an indegree and an outdegree.

Indegree of a Graph

The indegree of a vertex V is the number of edges coming into the vertex V. Notation − deg−(V).

Outdegree of a Graph

The outdegree of a vertex V is the number of edges going out from the vertex V. Notation − deg+(V).

Consider the following examples. Take a look at the following directed graph. Vertex 'a' has two edges, 'ad' and 'ab', which are going outwards; hence its outdegree is 2. Similarly, there is an edge 'ga' coming towards vertex 'a'; hence the indegree of 'a' is 1. The indegree and outdegree of the other vertices are shown in the following table −

Vertex   Indegree   Outdegree
b        2          0
c        2          1
d        1          1
e        1          1
f        1          1
g        0          2

Take a look at the following directed graph. Vertex 'a' has an edge 'ae' going outwards from vertex 'a'; hence its outdegree is 1. Similarly, the graph has an edge 'ba' coming towards vertex 'a'; hence the indegree of 'a' is 1.
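The indegree and outdegree bookkeeping above is easy to reproduce in code. The sketch below is my own illustration; only the edges 'ab', 'ad' and 'ga' are named explicitly in the text, so the remaining directed edges are inferred from the table (they also match the directed-graph example given later in the types-of-graphs section).

```python
from collections import Counter

# Directed edges consistent with the indegree/outdegree table above
# ('ab', 'ad', 'ga' are stated in the text; the rest are inferred).
edges = [('a', 'b'), ('a', 'd'), ('c', 'b'), ('d', 'c'),
         ('e', 'c'), ('f', 'e'), ('g', 'f'), ('g', 'a')]

outdegree = Counter(u for u, _ in edges)
indegree = Counter(v for _, v in edges)

for vertex in sorted({v for edge in edges for v in edge}):
    print(vertex, 'indegree =', indegree[vertex], 'outdegree =', outdegree[vertex])
```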
Pendent Vertex

By using the degree of a vertex, we have two special types of vertices. A vertex with degree one is called a pendent vertex. Here, in this example, vertex 'a' and vertex 'b' are joined by the edge 'ab'. So, with respect to vertex 'a', there is only one edge, towards vertex 'b', and similarly, with respect to vertex 'b', there is only one edge, towards vertex 'a'. Finally, vertex 'a' and vertex 'b' both have degree one, so they are pendent vertices.

Isolated Vertex

A vertex with degree zero is called an isolated vertex. Here, vertex 'a' and vertex 'b' have no connectivity to each other or to any other vertex, so the degree of both vertices 'a' and 'b' is zero. These are also called isolated vertices.

Here are the norms of adjacency −
In a graph, two vertices are said to be adjacent if there is an edge between the two vertices. Here, the adjacency of the vertices is maintained by the single edge connecting those two vertices.
In a graph, two edges are said to be adjacent if there is a common vertex between the two edges. Here, the adjacency of the edges is maintained by the single vertex connecting the two edges.

In the above graph −
'a' and 'b' are adjacent vertices, as there is a common edge 'ab' between them.
'a' and 'd' are adjacent vertices, as there is a common edge 'ad' between them.
'ab' and 'be' are adjacent edges, as there is a common vertex 'b' between them.
'be' and 'de' are adjacent edges, as there is a common vertex 'e' between them.
'c' and 'b' are adjacent vertices, as there is a common edge 'cb' between them.
'ad' and 'cd' are adjacent edges, as there is a common vertex 'd' between them.
'ac' and 'cd' are adjacent edges, as there is a common vertex 'c' between them.

Parallel Edges

In a graph, if a pair of vertices is connected by more than one edge, then those edges are called parallel edges. In the above graph, 'a' and 'b' are two vertices connected by two edges, 'ab' and 'ab', between them; these are called parallel edges.

Multi Graph

A graph having parallel edges is known as a Multigraph. In the above graph, there are five edges 'ab', 'ac', 'cd', 'cd', and 'bd'. Since 'c' and 'd' have two parallel edges between them, it is a Multigraph. In the second graph, the vertices 'b' and 'c' have two edges between them, and the vertices 'e' and 'd' also have two edges between them. Hence it is a Multigraph.

Degree Sequence of a Graph

If the degrees of all the vertices in a graph are arranged in descending or ascending order, then the sequence obtained is known as the degree sequence of the graph.

Vertex          a     b     c     d       e
Connecting to   b,c   a,d   a,d   c,b,e   d

In the above graph, for the vertices {d, a, b, c, e}, the degree sequence is {3, 2, 2, 2, 1}.

Vertex          a     b     c     d     e     f
Connecting to   b,e   a,c   b,d   c,e   a,d   -

In the above graph, for the vertices {a, b, c, d, e, f}, the degree sequence is {2, 2, 2, 2, 2, 0}.
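A short sketch (my own illustration) shows how a degree sequence is computed in practice; the edge list below is read off the first connection table above, and the printed result matches the sequence {3, 2, 2, 2, 1}.

```python
from collections import Counter

# Undirected edges taken from the first connection table above.
edges = [('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd'), ('d', 'e')]

degree = Counter()
for u, v in edges:       # each undirected edge contributes 1 to both endpoints
    degree[u] += 1
    degree[v] += 1

print(sorted(degree.values(), reverse=True))  # [3, 2, 2, 2, 1]
```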
There are many paths from vertex 'd' to vertex 'e' − da, ab, be df, fg, ge de (It is considered for distance between the vertices) df, fc, ca, ab, be da, ac, cf, fg, ge Eccentricity of a Vertex The maximum distance between a vertex to all other vertices is considered as the eccentricity of vertex. Notation − e(V) The distance from a particular vertex to all other vertices in the graph is taken and among those distances, the eccentricity is the highest of distances. In the above graph, the eccentricity of 'a' is 3. The distance from 'a' to 'b' is 1 ('ab'), from 'a' to 'c' is 1 ('ac'), from 'a' to 'd' is 1 ('ad'), from 'a' to 'e' is 2 ('ab'-'be') or ('ad'-'de'), from 'a' to 'f' is 2 ('ac'-'cf') or ('ad'-'df'), from 'a' to 'g' is 3 ('ac'-'cf'-'fg') or ('ad'-'df'-'fg'). So the eccentricity is 3, which is a maximum from vertex 'a' from the distance between 'ag' which is maximum. In other words, e(b) = 3 e(c) = 3 e(d) = 2 e(e) = 3 e(f) = 3 e(g) = 3 Radius of a Connected Graph The minimum eccentricity from all the vertices is considered as the radius of the Graph G. The minimum among all the maximum distances between a vertex to all other vertices is considered as the radius of the Graph G. Notation − r(G) From all the eccentricities of the vertices in a graph, the radius of the connected graph is the minimum of all those eccentricities. In the above graph r(G) = 2, which is the minimum eccentricity for 'd'. Diameter of a Graph The maximum eccentricity from all the vertices is considered as the diameter of the Graph G. The maximum among all the distances between a vertex to all other vertices is considered as the diameter of the Graph G. Notation − d(G) − From all the eccentricities of the vertices in a graph, the diameter of the connected graph is the maximum of all those eccentricities. In the above graph, d(G) = 3; which is the maximum eccentricity. If the eccentricity of a graph is equal to its radius, then it is known as the central point of the graph. If e(V) = r(V), then 'V' is the central point of the Graph 'G'. In the example graph, 'd' is the central point of the graph. e(d) = r(d) = 2 The set of all central points of 'G' is called the centre of the Graph. In the example graph, {'d'} is the centre of the Graph. The number of edges in the longest cycle of 'G' is called as the circumference of 'G'. In the example graph, the circumference is 6, which we derived from the longest cycle a-c-f-g-e-b-a or a-c-f-d-e-b-a. The number of edges in the shortest cycle of 'G' is called its Girth. Notation: g(G). Example − In the example graph, the Girth of the graph is 4, which we derived from the shortest cycle a-c-f-d-a or d-f-g-e-d or a-b-e-d-a. Sum of Degrees of Vertices Theorem If G = (V, E) be a non-directed graph with vertices V = {V1, V2,…Vn} then n Σ i=1 deg(Vi) = 2|E| Corollary 1 If G = (V, E) be a directed graph with vertices V = {V1, V2,…Vn}, then n Σ i=1 deg+(Vi) = |E| = n Σ i=1 deg−(Vi) In any non-directed graph, the number of vertices with Odd degree is Even. In a non-directed graph, if the degree of each vertex is k, then k|V| = 2|E| In a non-directed graph, if the degree of each vertex is at least k, then k|V| ≤ 2|E| | Corollary 5 In a non-directed graph, if the degree of each vertex is at most k, then k|V| ≥ 2|E| There are various types of graphs depending upon the number of vertices, number of edges, interconnectivity, and their overall structure. We will discuss only a certain few important types of graphs in this chapter. Null Graph A graph having no edges is called a Null Graph. 
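Before moving on to the catalogue of graph types that begins with the Null Graph below, the distance-based properties described above (eccentricity, radius, diameter and the centre) can be sketched with the same breadth-first search. The adjacency list is again the one reconstructed from the quoted distances, so it is an assumption of mine rather than the figure itself.

```python
from collections import deque

def bfs_distances(graph, source):
    """BFS distances (in edges) from source to every reachable vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def eccentricity(graph, v):
    return max(bfs_distances(graph, v).values())   # e(V): distance to the farthest vertex

# Graph reconstructed from the distances quoted above (connected graph assumed)
graph = {
    'a': ['b', 'c', 'd'], 'b': ['a', 'e'], 'c': ['a', 'f'], 'd': ['a', 'e', 'f'],
    'e': ['b', 'd', 'g'], 'f': ['c', 'd', 'g'], 'g': ['e', 'f'],
}
ecc = {v: eccentricity(graph, v) for v in graph}
radius = min(ecc.values())       # r(G): minimum eccentricity
diameter = max(ecc.values())     # d(G): maximum eccentricity
centre = [v for v, e in ecc.items() if e == radius]
print(ecc, radius, diameter, centre)   # eccentricities 3 except e(d) = 2, r = 2, d = 3, centre ['d']
```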
In the above graph, there are three vertices named 'a', 'b', and 'c', but there are no edges among them. Hence it is a Null Graph. Trivial Graph A graph with only one vertex is called a Trivial Graph. In the above shown graph, there is only one vertex 'a' with no other edges. Hence it is a Trivial graph. Non-Directed Graph A non-directed graph contains edges but the edges are not directed ones. In this graph, 'a', 'b', 'c', 'd', 'e', 'f', 'g' are the vertices, and 'ab', 'bc', 'cd', 'da', 'ag', 'gf', 'ef' are the edges of the graph. Since it is a non-directed graph, the edges 'ab' and 'ba' are same. Similarly other edges also considered in the same way. In a directed graph, each edge has a direction. In the above graph, we have seven vertices 'a', 'b', 'c', 'd', 'e', 'f', and 'g', and eight edges 'ab', 'cb', 'dc', 'ad', 'ec', 'fe', 'gf', and 'ga'. As it is a directed graph, each edge bears an arrow mark that shows its direction. Note that in a directed graph, 'ab' is different from 'ba'. Simple Graph A graph with no loops and no parallel edges is called a simple graph. The maximum number of edges possible in a single graph with 'n' vertices is nC2 where nC2 = n(n – 1)/2. The number of simple graphs possible with 'n' vertices = 2nc2 = 2n(n-1)/2. In the following graph, there are 3 vertices with 3 edges which is maximum excluding the parallel edges and loops. This can be proved by using the above formulae. The maximum number of edges with n=3 vertices − nC2 = n(n–1)/2 = 3(3–1)/2 = 3 edges The maximum number of simple graphs with n=3 vertices − 2nC2 = 2n(n-1)/2 = 23(3-1)/2 These 8 graphs are as shown below − Connected Graph A graph G is said to be connected if there exists a path between every pair of vertices. There should be at least one edge for every vertex in the graph. So that we can say that it is connected to some other vertex at the other side of the edge. In the following graph, each vertex has its own edge connected to other edge. Hence it is a connected graph. Disconnected Graph A graph G is disconnected, if it does not contain at least two connected vertices. The following graph is an example of a Disconnected Graph, where there are two components, one with 'a', 'b', 'c', 'd' vertices and another with 'e', 'f', 'g', 'h' vertices. The two components are independent and not connected to each other. Hence it is called disconnected graph. In this example, there are two independent components, a-b-f-e and c-d, which are not connected to each other. Hence this is a disconnected graph. Regular Graph A graph G is said to be regular, if all its vertices have the same degree. In a graph, if the degree of each vertex is 'k', then the graph is called a 'k-regular graph'. In the following graphs, all the vertices have the same degree. So these graphs are called regular graphs. In both the graphs, all the vertices have degree 2. They are called 2-Regular Graphs. Complete Graph A simple graph with 'n' mutual vertices is called a complete graph and it is denoted by 'Kn'. In the graph, a vertex should have edges with all other vertices, then it called a complete graph. In other words, if a vertex is connected to all other vertices in a graph, then it is called a complete graph. In the following graphs, each vertex in the graph is connected with all the remaining vertices in the graph except by itself. 
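A small sketch of the counting formulas and the completeness check discussed above; the adjacency listing for the two example complete graphs continues below. The vertex labels in the K4 example are my own.

```python
from itertools import combinations
from math import comb

def max_edges(n):
    # nC2 = n(n-1)/2 possible edges in a simple graph on n vertices
    return comb(n, 2)

def num_simple_graphs(n):
    # each of the nC2 possible edges is either present or absent
    return 2 ** comb(n, 2)

def is_complete(graph):
    # every pair of distinct vertices must be adjacent
    return all(v in graph[u] for u, v in combinations(graph, 2))

print(max_edges(3), num_simple_graphs(3))   # 3 and 8, as stated for n = 3
k4 = {v: [u for u in 'abcd' if u != v] for v in 'abcd'}
print(is_complete(k4))                       # True: K4
```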
In graph I, Not Connected Connected Connected Connected Not Connected Connected Connected Connected Not Connected In graph II, Not Connected Connected Connected Connected Connected Not Connected Connected Connected Connected Connected Not Connected Connected Connected Connected Connected Not Connected Cycle Graph A simple graph with 'n' vertices (n >= 3) and 'n' edges is called a cycle graph if all its edges form a cycle of length 'n'. If the degree of each vertex in the graph is two, then it is called a Cycle Graph. Notation − Cn Take a look at the following graphs − Graph I has 3 vertices with 3 edges which is forming a cycle 'ab-bc-ca'. Graph II has 4 vertices with 4 edges which is forming a cycle 'pq-qs-sr-rp'. Graph III has 5 vertices with 5 edges which is forming a cycle 'ik-km-ml-lj-ji'. Hence all the given graphs are cycle graphs. Wheel Graph A wheel graph is obtained from a cycle graph Cn-1 by adding a new vertex. That new vertex is called a Hub which is connected to all the vertices of Cn. Notation − Wn No. of edges in Wn = No. of edges from hub to all other vertices + No. of edges from all other nodes in cycle graph without a hub. = (n–1) + (n–1) = 2(n–1) Take a look at the following graphs. They are all wheel graphs. In graph I, it is obtained from C3 by adding an vertex at the middle named as 'd'. It is denoted as W4. Number of edges in W4 = 2(n-1) = 2(3) = 6 In graph II, it is obtained from C4 by adding a vertex at the middle named as 't'. It is denoted as W5. In graph III, it is obtained from C6 by adding a vertex at the middle named as 'o'. It is denoted as W7. Number of edges in W4 = 2(n-1) = 2(6) = 12 Cyclic Graph A graph with at least one cycle is called a cyclic graph. In the above example graph, we have two cycles a-b-c-d-a and c-f-g-e-c. Hence it is called a cyclic graph. Acyclic Graph A graph with no cycles is called an acyclic graph. In the above example graph, we do not have any cycles. Hence it is a non-cyclic graph. Bipartite Graph A simple graph G = (V, E) with vertex partition V = {V1, V2} is called a bipartite graph if every edge of E joins a vertex in V1 to a vertex in V2. In general, a Bipertite graph has two sets of vertices, let us say, V1 and V2, and if an edge is drawn, it should connect any vertex in set V1 to any vertex in set V2. In this graph, you can observe two sets of vertices − V1 and V2. Here, two edges named 'ae' and 'bd' are connecting the vertices of two sets V1 and V2. Complete Bipartite Graph A bipartite graph 'G', G = (V, E) with partition V = {V1, V2} is said to be a complete bipartite graph if every vertex in V1 is connected to every vertex of V2. In general, a complete bipartite graph connects each vertex from set V1 to each vertex from set V2. The following graph is a complete bipartite graph because it has edges connecting each vertex from set V1 to each vertex from set V2. If |V1| = m and |V2| = n, then the complete bipartite graph is denoted by Km, n. Km,n has (m+n) vertices and (mn) edges. Km,n is a regular graph if m=n. In general, a complete bipartite graph is not a complete graph. Km,n is a complete graph if m=n=1. The maximum number of edges in a bipartite graph with n vertices is − [n2/4] If n=10, k5, 5= [n2/4] = [102/4] = 25. Similarly, K6, 4=24 K7, 3=21 K9, 1=9 If n=9, k5, 4 = [n2/4] = [92/4] = 20 'G' is a bipartite graph if 'G' has no cycles of odd length. A special case of bipartite graph is a star graph. Star Graph A complete bipartite graph of the form K1, n-1 is a star graph with n-vertices. 
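Since the star graph K1, n-1 introduced here is a special case of a complete bipartite graph (its discussion continues below), a hedged sketch of two related checks may help: a breadth-first 2-colouring that detects odd cycles, which is exactly the "no cycles of odd length" criterion stated above, and the m × n edge count of Km,n. The vertex labels are hypothetical.

```python
from collections import deque

def is_bipartite(graph):
    """2-colour the graph with BFS; this succeeds exactly when there is no odd-length cycle."""
    colour = {}
    for start in graph:                       # handles disconnected graphs too
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False              # odd cycle found
    return True

def complete_bipartite(m, n):
    """Build K_{m,n}; the star graph of the text is the special case K_{1, n-1}."""
    left = [f'u{i}' for i in range(m)]
    right = [f'v{j}' for j in range(n)]
    graph = {u: list(right) for u in left}
    graph.update({v: list(left) for v in right})
    return graph

k23 = complete_bipartite(2, 3)
edges = sum(len(nbrs) for nbrs in k23.values()) // 2
print(is_bipartite(k23), edges)   # True, 6  (m*n edges, as stated above)
```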
A star graph is a complete bipartite graph if a single vertex belongs to one set and all the remaining vertices belong to the other set. In the above graphs, out of 'n' vertices, all the 'n–1' vertices are connected to a single vertex. Hence it is in the form of K1, n-1 which are star graphs. Complement of a Graph Let 'G−' be a simple graph with some vertices as that of 'G' and an edge {U, V} is present in 'G−', if the edge is not present in G. It means, two vertices are adjacent in 'G−' if the two vertices are not adjacent in G. If the edges that exist in graph I are absent in another graph II, and if both graph I and graph II are combined together to form a complete graph, then graph I and graph II are called complements of each other. In the following example, graph-I has two edges 'cd' and 'bd'. Its complement graph-II has four edges. Note that the edges in graph-I are not present in graph-II and vice versa. Hence, the combination of both the graphs gives a complete graph of 'n' vertices. Note − A combination of two complementary graphs gives a complete graph. If 'G' is any simple graph, then |E(G)| + |E('G-')| = |E(Kn)|, where n = number of vertices in the graph. Let 'G' be a simple graph with nine vertices and twelve edges, find the number of edges in 'G-'. You have, |E(G)| + |E('G-')| = |E(Kn)| 12 + |E('G-')| = 9(9-1) / 2 = 9C2 12 + |E('G-')| = 36 |E('G-')| = 24 'G' is a simple graph with 40 edges and its complement 'G−' has 38 edges. Find the number of vertices in the graph G or 'G−'. Let the number of vertices in the graph be 'n'. We have, |E(G)| + |E('G-')| = |E(Kn)| 40 + 38 = n(n-1)/2 156 = n(n-1) 13(12) = n(n-1) Trees are graphs that do not contain even a single cycle. They represent hierarchical structure in a graphical form. Trees belong to the simplest class of graphs. Despite their simplicity, they have a rich structure. Trees provide a range of useful applications as simple as a family tree to as complex as trees in data structures of computer science. A connected acyclic graph is called a tree. In other words, a connected graph with no cycles is called a tree. The edges of a tree are known as branches. Elements of trees are called their nodes. The nodes without child nodes are called leaf nodes. A tree with 'n' vertices has 'n-1' edges. If it has one more edge extra than 'n-1', then the extra edge should obviously has to pair up with two vertices which leads to form a cycle. Then, it becomes a cyclic graph which is a violation for the tree graph. The graph shown here is a tree because it has no cycles and it is connected. It has four vertices and three edges, i.e., for 'n' vertices 'n-1' edges as mentioned in the definition. Note − Every tree has at least two vertices of degree one. In the above example, the vertices 'a' and 'd' has degree one. And the other two vertices 'b' and 'c' has degree two. This is possible because for not forming a cycle, there should be at least two single edges anywhere in the graph. It is nothing but two edges with a degree of one. A disconnected acyclic graph is called a forest. In other words, a disjoint collection of trees is called a forest. The following graph looks like two sub-graphs; but it is a single disconnected graph. There are no cycles in this graph. Hence, clearly it is a forest. Spanning Trees Let G be a connected graph, then the sub-graph H of G is called a spanning tree of G if − H is a tree H contains all vertices of G. A spanning tree T of an undirected graph G is a subgraph that includes all of the vertices of G. 
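Before the spanning-tree example below, here is a brief sketch of two bookkeeping identities from this section: the complement-graph edge count |E(G)| + |E(G')| = n(n-1)/2, and the n-1 edge property of trees. The helper names are mine, not the tutorial's.

```python
from collections import deque
from math import comb, isqrt

def complement_edges(n, m):
    """Edges of the complement graph: |E(G)| + |E(G')| = |E(K_n)| = n(n-1)/2."""
    return comb(n, 2) - m

def vertices_from_edge_totals(total_edges):
    """Solve n(n-1)/2 = total_edges for n; returns None if there is no integer solution."""
    n = (1 + isqrt(1 + 8 * total_edges)) // 2
    return n if n * (n - 1) // 2 == total_edges else None

def is_tree(graph):
    """A tree is connected and has exactly n-1 edges (equivalently, connected and acyclic)."""
    n = len(graph)
    m = sum(len(nbrs) for nbrs in graph.values()) // 2
    start = next(iter(graph))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n and m == n - 1

print(complement_edges(9, 12))             # 24 edges in G', as in the first worked example
print(vertices_from_edge_totals(40 + 38))  # 13 vertices, as in the second worked example
print(is_tree({'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}))  # True: a path is a tree
```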
In the above example, G is a connected graph and H is a sub-graph of G. Clearly, the graph H has no cycles, it is a tree with six edges which is one less than the total number of vertices. Hence H is the Spanning tree of G. Circuit Rank Let 'G' be a connected graph with 'n' vertices and 'm' edges. A spanning tree 'T' of G contains (n-1) edges. Therefore, the number of edges you need to delete from 'G' in order to get a spanning tree = m-(n-1), which is called the circuit rank of G. This formula is true, because in a spanning tree you need to have 'n-1' edges. Out of 'm' edges, you need to keep 'n–1' edges in the graph. Hence, deleting 'n–1' edges from 'm' gives the edges to be removed from the graph in order to get a spanning tree, which should not form a cycle. For the graph given in the above example, you have m=7 edges and n=5 vertices. Then the circuit rank is − G = m – (n – 1) = 7 – (5 – 1) Let 'G' be a connected graph with six vertices and the degree of each vertex is three. Find the circuit rank of 'G'. By the sum of degree of vertices theorem, 6 × 3 = 2|E| |E| = 9 Circuit rank = |E| – (|V| – 1) = 9 – (6 – 1) = 4 Kirchoff's Theorem Kirchoff's theorem is useful in finding the number of spanning trees that can be formed from a connected graph. The matrix 'A' be filled as, if there is an edge between two vertices, then it should be given as '1', else '0'. $$A=\begin{vmatrix}0 & a & b & c & d\\a & 0 & 1 & 1 & 1 \\b & 1 & 0 & 0 & 1\\c & 1 & 0 & 0 & 1\\d & 1 & 1 & 1 & 0 \end{vmatrix} = \begin{vmatrix} 0 & 1 & 1 & 1\\1 & 0 & 0 & 1\\1 & 0 & 0 & 1\\1 & 1 & 1 & 0\end{vmatrix}$$ By using kirchoff's theorem, it should be changed as replacing the principle diagonal values with the degree of vertices and all other elements with -1.A $$=\begin{vmatrix} 3 & -1 & -1 & -1\\-1 & 2 & 0 & -1\\-1 & 0 & 2 & -1\\-1 & -1 & -1 & 3 \end{vmatrix}=M$$ $$M=\begin{vmatrix}3 & -1 & -1 & -1\\-1 & 2 & 0 & -1\\-1 & 0 & 2 & -1\\-1 & -1 & -1 & 3 \end{vmatrix} =8$$ $$Co\:\:factor\:\:of\:\:m1\:\:= \begin{vmatrix} 2 & 0 & -1\\0 & 2 & -1\\-1 & -1 & 3\end{vmatrix}$$ Thus, the number of spanning trees = 8. Whether it is possible to traverse a graph from one vertex to another is determined by how a graph is connected. Connectivity is a basic concept in Graph Theory. Connectivity defines whether a graph is connected or disconnected. It has subtopics based on edge and vertex, known as edge connectivity and vertex connectivity. Let us discuss them in detail. A graph is said to be connected if there is a path between every pair of vertex. From every vertex to any other vertex, there should be some path to traverse. That is called the connectivity of a graph. A graph with multiple disconnected vertices and edges is said to be disconnected. In the following graph, it is possible to travel from one vertex to any other vertex. For example, one can traverse from vertex 'a' to vertex 'e' using the path 'a-b-e'. In the following example, traversing from vertex 'a' to vertex 'f' is not possible because there is no path between them directly or indirectly. Hence it is a disconnected graph. Cut Vertex Let 'G' be a connected graph. A vertex V ∈ G is called a cut vertex of 'G', if 'G-V' (Delete 'V' from 'G') results in a disconnected graph. Removing a cut vertex from a graph breaks it in to two or more graphs. Note − Removing a cut vertex may render a graph disconnected. A connected graph 'G' may have at most (n–2) cut vertices. In the following graph, vertices 'e' and 'c' are the cut vertices. 
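Returning briefly to Kirchhoff's theorem above (the cut-vertex example continues below), the cofactor computation can be sketched as follows. The edge list is the one implied by the matrices shown, so the count of 8 spanning trees reported in the text should be reproduced; the determinant routine is a generic helper of mine.

```python
from fractions import Fraction

def determinant(matrix):
    """Exact determinant via Gaussian elimination with fractions."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n, det = len(a), Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if a[r][i] != 0), None)
        if pivot is None:
            return 0
        if pivot != i:
            a[i], a[pivot] = a[pivot], a[i]
            det = -det
        det *= a[i][i]
        for r in range(i + 1, n):
            factor = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= factor * a[i][c]
    return int(det)

def spanning_tree_count(vertices, edges):
    """Matrix-tree theorem: any cofactor of the Laplacian counts the spanning trees."""
    index = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    lap = [[0] * n for _ in range(n)]
    for u, v in edges:
        lap[index[u]][index[u]] += 1
        lap[index[v]][index[v]] += 1
        lap[index[u]][index[v]] -= 1
        lap[index[v]][index[u]] -= 1
    minor = [row[1:] for row in lap[1:]]   # delete the first row and column
    return determinant(minor)

# The graph implied by the matrices above: edges a-b, a-c, a-d, b-d, c-d
print(spanning_tree_count('abcd', [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'd'), ('c', 'd')]))  # 8
```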
By removing 'e' or 'c', the graph will become a disconnected graph. Without 'g', there is no path between vertex 'c' and vertex 'h' and many other. Hence it is a disconnected graph with cut vertex as 'e'. Similarly, 'c' is also a cut vertex for the above graph. Cut Edge (Bridge) Let 'G' be a connected graph. An edge 'e' ∈ G is called a cut edge if 'G-e' results in a disconnected graph. If removing an edge in a graph results in to two or more graphs, then that edge is called a Cut Edge. In the following graph, the cut edge is [(c, e)]. By removing the edge (c, e) from the graph, it becomes a disconnected graph. In the above graph, removing the edge (c, e) breaks the graph into two which is nothing but a disconnected graph. Hence, the edge (c, e) is a cut edge of the graph. Note − Let 'G' be a connected graph with 'n' vertices, then a cut edge e ∈ G if and only if the edge 'e' is not a part of any cycle in G. the maximum number of cut edges possible is 'n-1'. whenever cut edges exist, cut vertices also exist because at least one vertex of a cut edge is a cut vertex. if a cut vertex exists, then a cut edge may or may not exist. Cut Set of a Graph Let 'G'= (V, E) be a connected graph. A subset E' of E is called a cut set of G if deletion of all the edges of E' from G makes G disconnect. If deleting a certain number of edges from a graph makes it disconnected, then those deleted edges are called the cut set of the graph. Take a look at the following graph. Its cut set is E1 = {e1, e3, e5, e8}. After removing the cut set E1 from the graph, it would appear as follows − Similarly, there are other cut sets that can disconnect the graph − E3 = {e9} – Smallest cut set of the graph. E4 = {e3, e4, e5} Edge Connectivity Let 'G' be a connected graph. The minimum number of edges whose removal makes 'G' disconnected is called edge connectivity of G. Notation − λ(G) In other words, the number of edges in a smallest cut set of G is called the edge connectivity of G. If 'G' has a cut edge, then λ(G) is 1. (edge connectivity of G.) Take a look at the following graph. By removing two minimum edges, the connected graph becomes disconnected. Hence, its edge connectivity (λ(G)) is 2. Here are the four ways to disconnect the graph by removing two edges − Vertex Connectivity Let 'G' be a connected graph. The minimum number of vertices whose removal makes 'G' either disconnected or reduces 'G' in to a trivial graph is called its vertex connectivity. Notation − K(G) In the above graph, removing the vertices 'e' and 'i' makes the graph disconnected. If G has a cut vertex, then K(G) = 1. Notation − For any connected graph G, K(G) ≤ λ(G) ≤ δ(G) Vertex connectivity (K(G)), edge connectivity (λ(G)), minimum number of degrees of G(δ(G)). Calculate λ(G) and K(G) for the following graph − From the graph, δ(G) = 3 K(G) ≤ λ(G) ≤ δ(G) = 3 (1) K(G) ≥ 2 (2) Deleting the edges {d, e} and {b, h}, we can disconnect G. λ(G) = 2 2 ≤ λ(G) ≤ δ(G) = 2 (3) From (2) and (3), vertex connectivity K(G) = 2 A covering graph is a subgraph which contains either all the vertices or all the edges corresponding to some other graph. A subgraph which contains all the vertices is called a line/edge covering. A subgraph which contains all the edges is called a vertex covering. Line Covering Let G = (V, E) be a graph. A subset C(E) is called a line covering of G if every vertex of G is incident with at least one edge in C, i.e., deg(V) ≥ 1 ∀ V ∈ G because each vertex is connected with another vertex by an edge. Hence it has a minimum degree of 1. 
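The line-covering condition just stated, every vertex incident with at least one chosen edge, is easy to check mechanically; the coverings C1 to C4 listed next can be tested the same way. The small four-vertex graph below is an assumed stand-in for the figure.

```python
import math

def is_line_covering(vertices, edge_subset):
    """Every vertex of G must be incident with at least one edge of the subset."""
    covered = {v for edge in edge_subset for v in edge}
    return set(vertices) == covered

vertices = ['a', 'b', 'c', 'd']            # hypothetical 4-vertex graph
c1 = [('a', 'b'), ('c', 'd')]
c_bad = [('a', 'b'), ('b', 'c')]           # leaves 'd' uncovered
print(is_line_covering(vertices, c1))      # True
print(is_line_covering(vertices, c_bad))   # False
print(math.ceil(len(vertices) / 2))        # lower bound on the line covering number
```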
Its subgraphs having line covering are as follows − C1 = {{a, b}, {c, d}} C2 = {{a, d}, {b, c}} C3 = {{a, b}, {b, c}, {b, d}} C4 = {{a, b}, {b, c}, {c, d}} Line covering of 'G' does not exist if and only if 'G' has an isolated vertex. Line covering of a graph with 'n' vertices has at least [n/2] edges. Minimal Line Covering A line covering C of a graph G is said to be minimal if no edge can be deleted from C. In the above graph, the subgraphs having line covering are as follows − Here, C1, C2, C3 are minimal line coverings, while C4 is not because we can delete {b, c}. Minimum Line Covering It is also known as Smallest Minimal Line Covering. A minimal line covering with minimum number of edges is called a minimum line covering of 'G'. The number of edges in a minimum line covering in 'G' is called the line covering number of 'G' (α1). In the above example, C1 and C2 are the minimum line covering of G and α1 = 2. Every line covering contains a minimal line covering. Every line covering does not contain a minimum line covering (C3 does not contain any minimum line covering. No minimal line covering contains a cycle. If a line covering 'C' contains no paths of length 3 or more, then 'C' is a minimal line covering because all the components of 'C' are star graph and from a star graph, no edge can be deleted. Vertex Covering Let 'G' = (V, E) be a graph. A subset K of V is called a vertex covering of 'G', if every edge of 'G' is incident with or covered by a vertex in 'K'. The subgraphs that can be derived from the above graph are as follows − K1 = {b, c} K2 = {a, b, c} K3 = {b, c, d} K4 = {a, d} Here, K1, K2, and K3 have vertex covering, whereas K4 does not have any vertex covering as it does not cover the edge {bc}. Minimal Vertex Covering A vertex 'K' of graph 'G' is said to be minimal vertex covering if no vertex can be deleted from 'K'. In the above graph, the subgraphs having vertex covering are as follows − Here, K1 and K2 are minimal vertex coverings, whereas in K3, vertex 'd' can be deleted. Minimum Vertex Covering It is also known as the smallest minimal vertex covering. A minimal vertex covering of graph 'G' with minimum number of vertices is called the minimum vertex covering. The number of vertices in a minimum vertex covering of 'G' is called the vertex covering number of G (α2). In the following graph, the subgraphs having vertex covering are as follows − Here, K1 is a minimum vertex cover of G, as it has only two vertices. α2 = 2. A matching graph is a subgraph of a graph where there are no edges adjacent to each other. Simply, there should not be any common vertex between any two edges. Let 'G' = (V, E) be a graph. A subgraph is called a matching M(G), if each vertex of G is incident with at most one edge in M, i.e., deg(V) ≤ 1 ∀ V ∈ G which means in the matching graph M(G), the vertices should have a degree of 1 or 0, where the edges should be incident from the graph G. Notation − M(G) In a matching, if deg(V) = 1, then (V) is said to be matched if deg(V) = 0, then (V) is not matched. In a matching, no two edges are adjacent. It is because if any two edges are adjacent, then the degree of the vertex which is joining those two edges will have a degree of 2 which violates the matching rule. Maximal Matching A matching M of graph 'G' is said to maximal if no other edges of 'G' can be added to M. M1, M2, M3 from the above graph are the maximal matching of G. Maximum Matching It is also known as largest maximal matching. 
Maximum matching is defined as the maximal matching with maximum number of edges. The number of edges in the maximum matching of 'G' is called its matching number. For a graph given in the above example, M1 and M2 are the maximum matching of 'G' and its matching number is 2. Hence by using the graph G, we can form only the subgraphs with only 2 edges maximum. Hence we have the matching number as two. Perfect Matching A matching (M) of graph (G) is said to be a perfect match, if every vertex of graph g (G) is incident to exactly one edge of the matching (M), i.e., deg(V) = 1 ∀ V The degree of each and every vertex in the subgraph should have a degree of 1. In the following graphs, M1 and M2 are examples of perfect matching of G. Note − Every perfect matching of graph is also a maximum matching of graph, because there is no chance of adding one more edge in a perfect matching graph. A maximum matching of graph need not be perfect. If a graph 'G' has a perfect match, then the number of vertices |V(G)| is even. If it is odd, then the last vertex pairs with the other vertex, and finally there remains a single vertex which cannot be paired with any other vertex for which the degree is zero. It clearly violates the perfect matching principle. Note − The converse of the above statement need not be true. If G has even number of vertices, then M1 need not be perfect. It is matching, but it is not a perfect match, even though it has even number of vertices. Independent sets are represented in sets, in which there should not be any edges adjacent to each other. There should not be any common vertex between any two edges. there should not be any vertices adjacent to each other. There should not be any common edge between any two vertices. Independent Line Set Let 'G' = (V, E) be a graph. A subset L of E is called an independent line set of 'G' if no two edges in L are adjacent. Such a set is called an independent line set. Let us consider the following subsets − L1 = {a,b} L2 = {a,b} {c,e} L3 = {a,d} {b,c} In this example, the subsets L2 and L3 are clearly not the adjacent edges in the given graph. They are independent line sets. However L1 is not an independent line set, as for making an independent line set, there should be at least two edges. Maximal Independent Line Set An independent line set is said to be the maximal independent line set of a graph 'G' if no other edge of 'G' can be added to 'L'. L1 = {a, b} L2 = {{b, e}, {c, f}} L3 = {{a, e}, {b, c}, {d, f}} L4 = {{a, b}, {c, f}} L2 and L3 are maximal independent line sets/maximal matching. As for only these two subsets, there is no chance of adding any other edge which is not an adjacent. Hence these two subsets are considered as the maximal independent line sets. Maximum Independent Line Set A maximum independent line set of 'G' with maximum number of edges is called a maximum independent line set of 'G'. Number of edges in a maximum independent line set of G (β1) = Line independent number of G = Matching number of G L3 is the maximum independent line set of G with maximum edges which are not the adjacent edges in graph and is denoted by β1 = 3. Note − For any graph G with no isolated vertex, α1 + β1 = number of vertices in a graph = |V| Line covering number of Kn/Cn/wn, $$\alpha 1 = \lceil\frac{n}{2}\rceil\begin{cases}\frac{n}{2}\:if\:n\:is\:even \\\frac{n+1}{2}\:if\:n\:is\:odd\end{cases}$$ Line independent number (Matching number) = β1 = [n/2] α1 + β1 = n. Independent Vertex Set Let 'G' = (V, E) be a graph. 
A subset of 'V' is called an independent set of 'G' if no two vertices in 'S' are adjacent. Consider the following subsets from the above graphs − S1 = {e} S2 = {e, f} S3 = {a, g, c} S4 = {e, d} Clearly S1 is not an independent vertex set, because for getting an independent vertex set, there should be at least two vertices in the from a graph. But here it is not that case. The subsets S2, S3, and S4 are the independent vertex sets because there is no vertex that is adjacent to any one vertex from the subsets. Maximal Independent Vertex Set Let 'G' be a graph, then an independent vertex set of 'G' is said to be maximal if no other vertex of 'G' can be added to 'S'. Consider the following subsets from the above graphs. S2 and S3 are maximal independent vertex sets of 'G'. In S1 and S4, we can add other vertices; but in S2 and S3, we cannot add any other vertex. Maximum Independent Vertex Set A maximal independent vertex set of 'G' with maximum number of vertices is called as the maximum independent vertex set. Consider the following subsets from the above graph − Only S3 is the maximum independent vertex set, as it covers the highest number of vertices. The number of vertices in a maximum independent vertex set of 'G' is called the independent vertex number of G (β2). For the complete graph Kn, Vertex covering number = α2 = n−1 Vertex independent number = β2 = 1 You have α2 + β2 = n In a complete graph, each vertex is adjacent to its remaining (n − 1) vertices. Therefore, a maximum independent set of Kn contains only one vertex. Therefore, β2=1 and α2=|v| − β2 = n-1 Note − For any graph 'G' = (V, E) α2 + β2 = |v| If 'S' is an independent vertex set of 'G', then (V – S) is a vertex cover of G. Graph coloring is nothing but a simple way of labelling graph components such as vertices, edges, and regions under some constraints. In a graph, no two adjacent vertices, adjacent edges, or adjacent regions are colored with minimum number of colors. This number is called the chromatic number and the graph is called a properly colored graph. While graph coloring, the constraints that are set on the graph are colors, order of coloring, the way of assigning color, etc. A coloring is given to a vertex or a particular region. Thus, the vertices or regions having same colors form independent sets. Vertex Coloring Vertex coloring is an assignment of colors to the vertices of a graph 'G' such that no two adjacent vertices have the same color. Simply put, no two vertices of an edge should be of the same color. Chromatic Number The minimum number of colors required for vertex coloring of graph 'G' is called as the chromatic number of G, denoted by X(G). χ(G) = 1 if and only if 'G' is a null graph. If 'G' is not a null graph, then χ(G) ≥ 2. Note − A graph 'G' is said to be n-coverable if there is a vertex coloring that uses at most n colors, i.e., X(G) ≤ n. Region Coloring Region coloring is an assignment of colors to the regions of a planar graph such that no two adjacent regions have the same color. Two regions are said to be adjacent if they have a common edge. Take a look at the following graph. The regions 'aeb' and 'befc' are adjacent, as there is a common edge 'be' between those two regions. Similarly, the other regions are also coloured based on the adjacency. This graph is coloured as follows − The chromatic number of Kn is [n/2] Consider this example with K4. In the complete graph, each vertex is adjacent to remaining (n – 1) vertices. Hence, each vertex requires a new color. 
Hence the chromatic number of Kn = n. Applications of Graph Coloring Graph coloring is one of the most important concepts in graph theory. It is used in many real-time applications of computer science such as − Image capturing Image segmentation Processes scheduling A graph can exist in different forms having the same number of vertices, edges, and also the same edge connectivity. Such graphs are called isomorphic graphs. Note that we label the graphs in this chapter mainly for the purpose of referring to them and recognizing them from one another. Isomorphic Graphs Two graphs G1 and G2 are said to be isomorphic if − Their number of components (vertices and edges) are same. Their edge connectivity is retained. Note − In short, out of the two isomorphic graphs, one is a tweaked version of the other. An unlabelled graph also can be thought of as an isomorphic graph. There exists a function 'f' from vertices of G1 to vertices of G2 [f: V(G1) ⇒ V(G2)], such that Case (i): f is a bijection (both one-one and onto) Case (ii): f preserves adjacency of vertices, i.e., if the edge {U, V} ∈ G1, then the edge {f(U), f(V)} ∈ G2, then G1 ≡ G2. If G1 ≡ G2 then − |V(G1)| = |V(G2)| |E(G1)| = |E(G2)| Degree sequences of G1 and G2 are same. If the vertices {V1, V2, .. Vk} form a cycle of length K in G1, then the vertices {f(V1), f(V2),… f(Vk)} should form a cycle of length K in G2. All the above conditions are necessary for the graphs G1 and G2 to be isomorphic, but not sufficient to prove that the graphs are isomorphic. (G1 ≡ G2) if and only if (G1− ≡ G2−) where G1 and G2 are simple graphs. (G1 ≡ G2) if the adjacency matrices of G1 and G2 are same. (G1 ≡ G2) if and only if the corresponding subgraphs of G1 and G2 (obtained by deleting some vertices in G1 and their images in graph G2) are isomorphic. Which of the following graphs are isomorphic? In the graph G3, vertex 'w' has only degree 3, whereas all the other graph vertices has degree 2. Hence G3 not isomorphic to G1 or G2. Taking complements of G1 and G2, you have − Here, (G1− ≡ G2−), hence (G1 ≡ G2). Planar Graphs A graph 'G' is said to be planar if it can be drawn on a plane or a sphere so that no two edges cross each other at a non-vertex point. Every planar graph divides the plane into connected areas called regions. Degree of a bounded region r = deg(r) = Number of edges enclosing the regions r. deg(1) = 3 Degree of an unbounded region r = deg(r) = Number of edges enclosing the regions r. deg(R1) = 4 In planar graphs, the following properties hold good − In a planar graph with 'n' vertices, sum of degrees of all the vertices is − According to Sum of Degrees of Regions/ Theorem, in a planar graph with 'n' regions, Sum of degrees of regions is − n Σ i=1 deg(ri) = 2|E| Based on the above theorem, you can draw the following conclusions − In a planar graph, If degree of each region is K, then the sum of degrees of regions is − K|R| = 2|E| If the degree of each region is at least K(≥ K), then K|R| ≤ 2|E| If the degree of each region is at most K(≤ K), then K|R| ≥ 2|E| Note − Assume that all the regions have same degree. According to Euler's Formulae on planar graphs, If a graph 'G' is a connected planar, then |V| + |R| = |E| + 2 If a planar graph with 'K' components, then |V| + |R|=|E| + (K+1) Where, |V| is the number of vertices, |E| is the number of edges, and |R| is the number of regions. 
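A short sketch of Euler's formula as stated above; the cube-graph numbers are an illustrative example of mine, not taken from the tutorial's figures.

```python
def regions_from_euler(num_vertices, num_edges, components=1):
    """Connected planar graph: |V| + |R| = |E| + 2.
    With k components, the generalisation quoted above gives |V| + |R| = |E| + (k + 1)."""
    return num_edges + (components + 1) - num_vertices

def edges_from_region_degrees(region_degrees):
    """Sum of degrees of regions theorem: sum of deg(r_i) = 2|E|."""
    return sum(region_degrees) // 2

# A cube drawn in the plane: 8 vertices and 12 edges give 6 regions (5 bounded + 1 unbounded)
print(regions_from_euler(8, 12))                      # 6
print(edges_from_region_degrees([4, 4, 4, 4, 4, 4]))  # the six 4-sided regions give back 12 edges
```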
Edge Vertex Inequality If 'G' is a connected planar graph with degree of each region at least 'K' then, |E| ≤ k / (k-2) {|v| - 2} You know, |V| + |R| = |E| + 2 K.|R| ≤ 2|E| K(|E| - |V| + 2) ≤ 2|E| (K - 2)|E| ≤ K(|V| - 2) If 'G' is a simple connected planar graph, then |E| ≤ 3|V| − 6 |R| ≤ 2|V| − 4 There exists at least one vertex V •∈ G, such that deg(V) ≤ 5. If 'G' is a simple connected planar graph (with at least 2 edges) and no triangles, then |E| ≤ {2|V| – 4} Kuratowski's Theorem A graph 'G' is non-planar if and only if 'G' has a subgraph which is homeomorphic to K5 or K3,3. Homomorphism Two graphs G1 and G2 are said to be homomorphic, if each of these graphs can be obtained from the same graph 'G' by dividing some edges of G with more vertices. Take a look at the following example − Divide the edge 'rs' into two edges by adding one vertex. The graphs shown below are homomorphic to the first graph. If G1 is isomorphic to G2, then G is homeomorphic to G2 but the converse need not be true. Any graph with 4 or less vertices is planar. Any graph with 8 or less edges is planar. A complete graph Kn is planar if and only if n ≤ 4. The complete bipartite graph Km, n is planar if and only if m ≤ 2 or n ≤ 2. A simple non-planar graph with minimum number of vertices is the complete graph K5. The simple non-planar graph with minimum number of edges is K3, 3. Polyhedral graph A simple connected planar graph is called a polyhedral graph if the degree of each vertex is ≥ 3, i.e., deg(V) ≥ 3 ∀ V ∈ G. 3|V| ≤ 2|E| 3|R| ≤ 2|E| A graph is traversable if you can draw a path between all the vertices without retracing the same path. Based on this path, there are some categories like Euler's path and Euler's circuit which are described in this chapter. Euler's Path An Euler's path contains each edge of 'G' exactly once and each vertex of 'G' at least once. A connected graph G is said to be traversable if it contains an Euler's path. Euler's Path = d-c-a-b-d-e. Euler's Circuit In a Euler's path, if the starting vertex is same as its ending vertex, then it is called an Euler's circuit. Euler's Path = a-b-c-d-a-g-f-e-c-a. Euler's Circuit Theorem A connected graph 'G' is traversable if and only if the number of vertices with odd degree in G is exactly 2 or 0. A connected graph G can contain an Euler's path, but not an Euler's circuit, if it has exactly two vertices with an odd degree. Note − This Euler path begins with a vertex of odd degree and ends with the other vertex of odd degree. Euler's Path − b-e-a-b-d-c-a is not an Euler's circuit, but it is an Euler's path. Clearly it has exactly 2 odd degree vertices. Note − In a connected graph G, if the number of vertices with odd degree = 0, then Euler's circuit exists. Hamiltonian Graph A connected graph G is said to be a Hamiltonian graph, if there exists a cycle which contains all the vertices of G. Every cycle is a circuit but a circuit may contain multiple cycles. Such a cycle is called a Hamiltonian cycle of G. Hamiltonian Path A connected graph is said to be Hamiltonian if it contains each vertex of G exactly once. Such a path is called a Hamiltonian path. Hamiltonian Path− e-d-b-a-c. Euler's circuit contains each edge of the graph exactly once. In a Hamiltonian cycle, some edges of the graph can be skipped. For the graph shown above − Euler path exists – false Euler circuit exists – false Hamiltonian cycle exists – true Hamiltonian path exists – true G has four vertices with odd degree, hence it is not traversable. 
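The traversability verdicts above all follow from counting odd-degree vertices, which is easy to sketch; the Hamiltonian verdict for the same graph is discussed next. The example graphs are hypothetical, and a connected graph is assumed, as in the theorem.

```python
def euler_status(graph):
    """Classify a connected graph by its number of odd-degree vertices:
    0 -> Euler circuit exists, 2 -> Euler path but no circuit, otherwise not traversable."""
    odd = sum(1 for v in graph if len(graph[v]) % 2 == 1)
    if odd == 0:
        return "Euler circuit"
    if odd == 2:
        return "Euler path only"
    return "not traversable"

square = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}           # every degree even
path_graph = {1: [2], 2: [1, 3], 3: [2]}                         # two odd-degree endpoints
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}                   # four odd-degree vertices
print(euler_status(square), euler_status(path_graph), euler_status(star4))
```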
By skipping the internal edges, the graph has a Hamiltonian cycle passing through all the vertices. In this chapter, we will cover a few standard examples to demonstrate the concepts we already discussed in the earlier chapters. Find the number of spanning trees in the following graph. The number of spanning trees obtained from the above graph is 3. They are as follows − These three are the spanning trees for the given graph. Here the graphs I and II are isomorphic to each other; clearly, the number of non-isomorphic spanning trees is two. How many simple non-isomorphic graphs are possible with 3 vertices? There are 4 non-isomorphic graphs possible with 3 vertices. They are shown below. Let 'G' be a connected planar graph with 20 vertices and the degree of each vertex is 3. Find the number of regions in the graph. By the sum of degrees theorem, Σ deg(Vi) = 2|E| over the 20 vertices, so 20 × 3 = 2|E| and |E| = 30. By Euler's formula, 20 + |R| = 30 + 2, so |R| = 12. Hence, the number of regions is 12. What is the chromatic number of the complete graph Kn? In a complete graph, each vertex is adjacent to its remaining (n – 1) vertices. Hence, each vertex requires a new color, and the chromatic number of Kn is n. What is the matching number for the following graph? Number of vertices = 9. We can match only 8 vertices, so the matching number is 4. What is the line covering number for the following graph? Number of vertices = |V| = n = 7. Each edge covers at most two vertices, so the line covering number (α1) is at least ⌈7/2⌉ = 4. In the example graph, four edges are enough to cover all the vertices; hence, the line covering number is 4.
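A tiny sketch re-checking the arithmetic of the planar-graph example above (20 vertices, each of degree 3) using the sum-of-degrees theorem and Euler's formula; the function name is mine.

```python
def planar_regions_regular(num_vertices, degree):
    """Sum of degrees: n*k = 2|E|; Euler's formula: |R| = |E| + 2 - |V| (connected planar graph assumed)."""
    num_edges = num_vertices * degree // 2
    return num_edges + 2 - num_vertices

print(20 * 3 // 2)                      # 30 edges
print(planar_regions_regular(20, 3))    # 12 regions, as in the worked example
```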
BMC Ecology and Evolution Niche partitioning between sympatric wild canids: the case of the golden jackal (Canis aureus) and the red fox (Vulpes vulpes) in north-eastern Italy Elisa Torretta ORCID: orcid.org/0000-0002-4773-69771, Luca Riboldi1, Elena Costa2, Claudio Delfoco1, Erica Frignani2 & Alberto Meriggi1 BMC Ecology and Evolution volume 21, Article number: 129 (2021) Cite this article Two coexisting species with similar ecological requirements avoid or reduce competition by changing the extent of their use of a given resource. Numerous coexistence mechanisms have been proposed, but species interactions can also be aggressive; thus, generally a subordinate species modifies its realized niche to limit the probability of direct encounters with the dominant species. We studied niche partitioning between two sympatric wild canids in north-eastern Italy: the golden jackal and the red fox, which, based on competition theories, have a high potential for competition. We considered four main niche dimensions: space, habitat, time, and diet. We investigated three study areas monitoring target species populations from March 2017 to November 2018 using non-invasive monitoring techniques. Red fox presence was ascertained in every study area, while golden jackal presence was not ascertained in one study area, where we collected data regarding wolf presence. Considering the two target species, we observed partial diet partitioning based on prey size, with the golden jackal mainly feeding on wild ungulates and the red fox mainly feeding on small mammals. The two canids had an extensive temporal overlap along the diel cycle, having both predominant crepuscular and nocturnal activity patterns, but marked spatial partitioning and differential use of habitats. The golden jackal proved to be specialist concerning the habitat dimension, while the red fox resulted completely generalist: the former selected less human-modified habitats and avoided intensively cultivated lands, while the latter was present in all habitats, including intensively cultivated lands. The observed partitioning might be due partially to some ecological adaptations (e.g. specialist vs. generalist use of resources) and specific behaviours (e.g. cooperative vs. solitary hunting) and partially to the avoidance response of the red fox aimed at reducing the probability of direct encounters with the golden jackal. Sympatric species with similar ecological requirements can either coexist or competitively exclude each other depending on resources availability: the strength of the competition between them generally decreases with increased differentiated resources use ([1] and references therein). Considering carnivores, exploitation [2] and interference [3] have been identified as key mechanisms structuring the guild; the magnitude of interspecific aggressive behaviours is generally driven by relative differences in body sizes (i.e. aggressive behaviours are more frequent when the body mass ratio of the contenders ranges between 2 and 5.4), dietary overlap, predatory habits, and taxonomic similarity (i.e. aggressive behaviours are more frequent between species of the same family) [4, 5]. Because these interactions are generally asymmetric (subordinate vs. dominant), generally the subordinate species modify its realized niches by changing the extent of its use of resources. Competitive interactions among wild canids have been frequently recorded [3, 6] and have been widely investigated. 
In North America, for example, many researchers focused on the cascading interactions involving the wolf (Canis lupus), the coyote (Canis latrans), and foxes (Vulpes sp. and Urocyon cinereoargentus) ([7] and references therein, [8]). In Africa the complexity of the carnivores' guild promoted substantial prior researches focused on the interactions involving different wild canids, as the spotted hyena (Crocuta crocuta) and the African wild dog (Lycaon pictus), and other larger species, the lion (Panthera leo) above all (e.g. [9–12]). Regarding Eurasia, many studies dealing with canids interactions focused on a single (or a few) niche dimension ([13–17], but see [18]). Overall, two coexisting species with similar ecological requirements limit interspecific overlap by changing the extent of their use of a given resource to avoid or reduce competition. This process is known as niche partitioning [19]. Numerous coexistence mechanisms have been proposed, including spatial segregation, variations in habitat use, behavioural adaptations and altered activity periods or movements, trophic segregation and specialization [20]. To gain a significant understanding of the coexistence mechanisms between potentially competing species, more than a single dimension associated with an ecological niche should be considered [21, 22]. Therefore, we investigated multi-dimension niche partitioning between two sympatric wild canids: the golden jackal (Canis aureus) and the red fox (Vulpes vulpes). The golden jackal (7–15 kg) is a widespread species throughout southern Asia, the Middle East and south-eastern and central Europe [23], where it inhabits a wide variety of habitats in different bioclimatic areas: from semi-deserts and grasslands to forests, but also agricultural and semi-urban habitats [24–27]. The red fox (4–11 kg), being the most common mesocarnivore in the northern hemisphere, is widely distributed [28]; it is trophic and habitat generalist known for its opportunistic behaviour and adaptability to human-dominated landscapes [29–31]. Evidence of competition between the two species have been occasionally recorded: in Israel, for example, in areas where golden jackals became very abundant, the population size of red foxes decreased significantly, apparently because of exclusion by golden jackals (reported in [32]). Moreover, more recently, a mechanism of spatial segregation has been documented, in which red foxes avoided the core activity areas of golden jackals and restricted their activity to their peripheries [33]. These canids are sympatric in north-eastern Italy since the expansion of the golden jackal in this area started in 1984 [34]. We carried out an integrated research in which we considered golden jackal and red fox resource partitioning along different niche dimensions: (i) space, (ii) habitat, (iii) time, and (iv) diet. Considering multiple dimensions associated with an ecological niche is mandatory because partitioning and overlap along the main niche dimensions are counterbalancing; in other words, similarities along one dimension should imply dissimilarities along another one [19]. For each dimension, we evaluated golden jackal and red fox use of resources and their degree of overlap. 
According to the theories concerning the interference interactions and mechanisms of co-existence among carnivores [4, 6, 35] and the few evidences [33, 32] regarding the competition between these two species, we hypothesised that the coexistence between the golden jackal and the red fox would be favoured by the differentiation along one or more niche axes, which would relieve interspecific potential competition and facilitate coexistence between them, though their similar ecological requirements. We expected at least a partial partitioning between the two species at the spatial dimension because spatial segregation has already been documented between them [33, 32]. Further, we expected a partial partitioning at the trophic dimension, because of the differences in their body size [36]. Potentially connected with these dimensions, we expected also a partial partitioning along habitat dimension. Finally, in case of no partitioning over these dimensions (especially spatial and trophic), we predicted that temporal partitioning would have played a significant role in separating their ecological niches. This research was carried out in Friuli–Venezia Giulia region (north-eastern Italy), where we sampled three study areas covering a total surface of 750 km2 (Fig. 1). The Goritian Karst is located in the south-eastern part of the region and includes the Karst plateau and the surrounding intensively cultivated plain. The Magredi area is located in the western part of the region and it is characterized by the presence of subterranean rivers surrounded by an intensively cultivated landscape. Finally, the Tagliamento Valley is a typical mountain area located in the northern part of the region in the Alps (Table 1). The presence of both the golden jackal and the red fox was ascertained in every study area before the beginning of this research [37–40]. In particular, golden jackal reproduction was confirmed in the Karst since the early 1990s, while it was documented more recently (2010) in the Tagliamento Valley and the Magredi area [40]. According to the National Law 157/92, golden jackal hunting was forbidden, but red fox hunting was allowed in autumn–winter. Further details on the study areas are provided in Torretta et al. [41]. Location of the three study areas in Friuli–Venezia Giulia region Table 1 Main characteristics of the study areas We monitored golden jackal and red fox populations from March 2017 to November 2018; in particular, we carried out seasonal monitoring sessions (spring: March–May; summer: June–August; autumn: September–November; winter: December–February) during a first study period from March 2017 to February 2018 in each study area, whereas we carried out monthly monitoring sessions during a second study period from June to November 2018 in the Goritian Karst study area. We based data collection mainly on the recording of species indirect signs of presence. We adopted a Tessellation Stratified Sampling method [42, 43] subdividing each study area into 10 sample squares of 25 km2 (5 × 5 km) and randomly selecting three routes, among the existing foot-paths and dirt roads, within each square. During each monitoring session, we walked the selected routes to record species indirect signs of presence mainly corresponding to scats, footprints, and vocalizations; every sign of presence was autonomously evaluated by the researchers conducting the fieldwork and discordant or dubious records were discarded (Additional file 1: S1). 
We georeferred each sign of presence and recorded the type of vegetation where it was found. Scats were collected in polyethylene bags and stored for subsequent diet analyses. Besides the collection of signs of presence, we carried out camera trapping sessions to increase species detection. Thus, during each sampling session, we settled 10–13 camera traps (n = 8 MULTIPIR 12 HD; n = 2 IR-PLUS BF 110°; n = 5 Scout Guard SG520) in opportunistic sites, mainly located along foot-paths and dirt roads, for a minimum period of 5 days (min. sampling period per study area = 50 days during each sampling session). Camera traps were settled mainly on trees approximately 0.5–2.0 m above the ground and set to record time and date when triggered. We programmed cameras to record videos (60 s) during the 24 h with a minimum time delay between consecutive ones (0 s) [44]. We performed data analyses subdividing the total amount of observations into two seasons: a warm season, lasting from March to August (i.e., spring and summer), and a cold season, lasting from September to February (i.e., autumn and winter). Consequently, we considered nine sampling sessions within each season. The co-occurrence between species was evaluated using presence/absence data at different spatial scales: (i) a large spatial scale corresponding to the sample squares and (ii) a small spatial scale corresponding to the walked routes. We used the Sørensen similarity index: $${Ss}_{i,j}=\frac{{2a}_{ij}}{{2a}_{ij}+{b}_{ij}+{c}_{ij}}$$ where aij represents the number of sampled sites with the simultaneous presence of two species i and j, and bij and cij are the number of sampled sites with the presence of only one species. This index varies between 0 (maximum segregation) and 1 (maximum co-occurrence, i.e., both considered species are present in all sampled sites) [45]. Moreover, we delineated species utilization distributions non-parametrically through a probability density function using the kernel method [46]. We used a fixed estimator and the smoothing parameter selected by the process of least squares cross validation (LSCV) [47, 48] to obtain narrow kernels useful to reveal small-scale details of the data structure [49]. Consequently, we measured utilization distributions overlap through the Utilization Distribution Overlap Index (UDOI): $$UDOI = ~A_{{1,2}} \mathop {\iint }\limits_{{ - \infty }}^{\infty } UD_{1} \left( {x,y} \right)~ \times ~UD_{2} ~\left( {x,y} \right)~dxdy$$ which equals 0 for two ranges that do not overlap and equals 1 for two ranges that have complete overlap; values < 1 indicate less overlap relative to uniform space use, whereas values > 1 indicate that ranges have a non-uniform distribution and a high degree of overlap [50]. The analyses were performed using "adehabitatHR" package for R software [51]. Seven land cover variables were obtained from the habitat map of the Friuli–Venezia Giulia region (http://irdat.regione.fvg.it/WebGIS/): urban areas, intensively cultivated lands (mainly cereals and legumes), permanent crops (vineyards and fruit orchards), extensively cultivated lands (also including small cultivated land patches with different cultivation types and crops interspersed with natural or semi-natural areas), pastures and grasslands, shrublands, woodlands (broad-leaved and coniferous woodlands). 
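The two overlap measures introduced above, the Sørensen similarity index for presence/absence co-occurrence and the UDOI for utilization distributions, can be sketched as follows. The presence/absence vectors are purely illustrative, and the grid-based UDOI is a simplified stand-in for the kernel-based estimate the authors computed with the adehabitatHR package.

```python
import numpy as np

def sorensen(presence_i, presence_j):
    """Ss = 2a / (2a + b + c), computed from two boolean presence/absence vectors
    recorded over the same set of sampled sites."""
    presence_i, presence_j = np.asarray(presence_i, bool), np.asarray(presence_j, bool)
    a = np.sum(presence_i & presence_j)      # sites where both species occur
    b = np.sum(presence_i & ~presence_j)     # sites with species i only
    c = np.sum(~presence_i & presence_j)     # sites with species j only
    return 2 * a / (2 * a + b + c)

def udoi(ud1, ud2, cell_area):
    """Grid approximation of the UDOI: A(1,2) * sum over cells of UD1*UD2*cell_area,
    where A(1,2) is the area where both UDs are positive and each UD integrates to 1."""
    overlap_area = np.sum((ud1 > 0) & (ud2 > 0)) * cell_area
    return overlap_area * np.sum(ud1 * ud2) * cell_area

# Hypothetical presence/absence over 10 sample squares (not the study's data)
jackal = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]
fox    = [1, 0, 1, 1, 1, 1, 1, 0, 1, 1]
print(round(sorensen(jackal, fox), 2))
```

The udoi helper expects two density rasters defined over the same grid, each summing to one once multiplied by the cell area.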
Ecological Niche Factor Analysis (ENFA) [52, 53] was carried out using all seasonal occurrence points against the background habitats, expressed as relative percentages of the different land cover variables calculated within sample squares of 2.5 × 2.5 km. ENFA summarizes the variables into uncorrelated factors and extracts two measures of a species realized niche along two axes: the marginality (M), which describes how far the species optimum is from the average environmental conditions, and the specialization (S), which is an indication of niche breadth relative to the environmental background. M generally ranges from 0 to 1, although the value can exceed one; values > 1 indicate that the niche deviates more relative to the habitat background composition, in other words, the species has specific habitat preferences compared to the available environment. S ranges from 1 to infinite; values > 1 indicates some forms of niche specialization, in other words, a decreasing niche breadth. To facilitate comparisons and easily interpret niche breadth, we also calculated the tolerance (T) as the inverse of S [52]. ENFA was calculated using "CENFA" package for R software [54]. We estimated activity patterns non-parametrically through a probability density function using the kernel method [55] analysing data obtained by camera trap sampling. We considered as events only videos of the same species spaced 30 min to ensure capture independence [56–59]. We tested distribution uniformity using Watson's test (U2) [60]. We performed pairwise comparisons between golden jackal and red fox activity patterns by estimating the coefficient of overlap (Δ) [55, 61]. We considered Δ1 estimator as the smaller sample had less than 75 records in both seasons [62]. To test for the reliability of the index and obtain 95% confidence intervals, we performed a smoothed bootstrap generating 1000 resamples [62]. Then we compared seasonal golden jackal and red fox distributions through Watson's two-sample test (two-sample U2) to test for common distribution [60]. The activity pattern analyses were performed using "circular" and "overlap" packages for R software [62, 63]. Golden jackal and red fox diets were studied through scat analysis. We stored scats at − 20 °C for 30 days, then we analysed them to identify the consumed items from undigested remains: hairs, feathers, skulls, claws, and seeds. Each remain was identified by the comparison to a reference collection and atlas [64–67]. To describe the diet composition, the identified remains were grouped into nine food categories: (1) wild ungulates, (2) small mammals, (3) medium-sized mammals, (4) birds, (5) reptiles, (6) invertebrates, (7) fruit, (8) grasses, and (9) garbage. We estimated the proportion of consumed items for each scat and then we linked it to a percent volumetric class [68]. We assessed the adequacy of sample size with the Brillouin index (Hb; Additional file 1: S3). We tested for significance of seasonal variations of the consumed categories by nonparametric multivariate analysis of variance (NPMANOVA) [69]. Moreover, we verified seasonal variations within categories by nonparametric analysis of variance (Kruskal–Wallis test). 
We evaluated the trophic niche overlap between the golden jackal and the red fox through Pianka's index: $$O_{vs}=\frac{\sum_{i} p_{iv}\,p_{is}}{\sqrt{\sum_{i} p_{iv}^{2}\,\sum_{i} p_{is}^{2}}}$$ where $p_{iv}$ is the proportion of resource i out of the total resources used by the golden jackal (v), $p_{is}$ is the proportion of resource i out of the total resources used by the red fox (s), and i ranges from 1 to n, where n is the total number of food items considered. The value of the index ranges from 0 (no overlap) to 1 (full overlap) [70]. To estimate the 95% confidence intervals of the index distribution, we resampled the data 1000 times by the bootstrap method. Moreover, we tested the significance of between-species variations in the consumed categories across seasons by two-way nonparametric multivariate analysis of variance (NPMANOVA) [69], and within categories by nonparametric analysis of variance (Kruskal–Wallis test). Diet analyses were performed using R software and specific packages such as "stats" [71] and "npmv" [72]. During the study period we collected 277 records of golden jackal presence and 511 records of red fox presence. Considering the first study period (March 2017–February 2018), most of the golden jackal records were collected in the Goritian Karst study area (n = 105), while a few were collected in the Tagliamento Valley study area (n = 18); no record was collected in the Magredi study area (Table 2 and Additional file 1: Fig. 1 in S2). Conversely, the red fox was detected in every study area (Goritian Karst n = 153; Magredi n = 90; Tagliamento Valley n = 151) (Table 3 and Additional file 1: Fig. 2 in S2). Besides the target species, in the Magredi study area we recorded a few signs of wolf presence (n = 6) (Additional file 1: Fig. 3 in S2). Table 2 Records of golden jackal presence collected in Friuli–Venezia Giulia region from March 2017 to November 2018 Table 3 Records of red fox presence collected in Friuli–Venezia Giulia region from March 2017 to November 2018 Taking into consideration the aim of the research, we decided to exclude from the analyses the red fox data collected in the Magredi study area. Co-occurrence between the golden jackal and the red fox was overall limited both at the large spatial scale (warm season: Ss = 0.22; cold season: Ss = 0.13) and at the small spatial scale (warm season: Ss = 0.07; cold season: Ss = 0.07). Kernel analysis revealed that golden jackal utilization distributions were restricted to a few sample squares of the study areas, whereas red fox utilization distributions were widespread and covered most of the study areas (Fig. 2). The estimated utilization distributions showed considerable variation in size between the two seasons, both for the golden jackal (warm season: KDE95 [Kernel Density Estimate at 95%] = 92.9 km²; cold season: KDE95 = 44.9 km²) and for the red fox (warm season: KDE95 = 388.9 km²; cold season: KDE95 = 442.1 km²). Overall, the spatial overlap between the golden jackal and the red fox was low during both seasons (warm season: UDOI = 0.20; cold season: UDOI = 0.17). Utilization distributions of the golden jackal and the red fox in Friuli–Venezia Giulia region (March 2017–November 2018) We considered 141 presence points (warm season: n = 63; cold season: n = 78) for the golden jackal and 329 presence points (warm season: n = 113; cold season: n = 216) for the red fox.
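As a reference for Pianka's index and the bootstrap confidence interval described in the methods above, here is a minimal sketch (illustrative only — the published analyses were run in R, and the per-scat volume proportions below are invented):

    import numpy as np

    def pianka(p_v, p_s):
        # O = sum_i(p_iv * p_is) / sqrt(sum_i(p_iv^2) * sum_i(p_is^2))
        p_v = np.asarray(p_v, float)
        p_s = np.asarray(p_s, float)
        return float(np.sum(p_v * p_s) / np.sqrt(np.sum(p_v**2) * np.sum(p_s**2)))

    def pianka_bootstrap_ci(scats_v, scats_s, n_boot=1000, alpha=0.05, seed=1):
        # scats_* are (n_scats x n_categories) arrays of per-scat volume proportions;
        # scats are resampled with replacement within each species.
        rng = np.random.default_rng(seed)
        values = []
        for _ in range(n_boot):
            rv = scats_v[rng.integers(0, len(scats_v), len(scats_v))]
            rs = scats_s[rng.integers(0, len(scats_s), len(scats_s))]
            values.append(pianka(rv.mean(axis=0), rs.mean(axis=0)))
        lower, upper = np.quantile(values, [alpha / 2, 1 - alpha / 2])
        return float(lower), float(upper)

    # Invented per-scat proportions over four food categories.
    jackal_scats = np.array([[0.6, 0.2, 0.1, 0.1],
                             [0.7, 0.1, 0.1, 0.1],
                             [0.5, 0.3, 0.1, 0.1]])
    fox_scats = np.array([[0.1, 0.6, 0.1, 0.2],
                          [0.2, 0.5, 0.2, 0.1],
                          [0.1, 0.7, 0.1, 0.1]])
    print(round(pianka(jackal_scats.mean(axis=0), fox_scats.mean(axis=0)), 2))
    print(pianka_bootstrap_ci(jackal_scats, fox_scats, n_boot=200))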
The habitat of the golden jackal was rather different from the mean environmental conditions available within the study areas, with the ENFA marginality factor above 1 (warm season: M = 1.43; cold season: M = 1.42), indicating that the species' niche deviated from the available background environment. The species had a narrow habitat niche breadth, indicating highly specialized environmental requirements (warm season: S = 1.22 and T = 0.82; cold season: S = 1.14 and T = 0.88). Four and three significant ENFA factors explained 85.7% and 81.5% of the total variance during the warm and cold seasons, respectively (Table 4). The ENFA results indicated that the presence of the golden jackal was mainly linked to shrublands and to pastures and grasslands during the warm season, while it was mainly linked to shrublands, extensively cultivated lands, and pastures and grasslands during the cold season, as these variables had the highest coefficients on the marginality axis (Table 4 and Fig. 3). The habitat of the red fox did not deviate substantially from the mean environmental conditions available within the study areas, with the ENFA marginality well below 1 (warm season: M = 0.21; cold season: M = 0.14). The obtained values indicated a negligible tendency towards niche specialization (warm season: S = 1.02 and T = 0.98; cold season: S = 0.98 and T = 1.02). Accordingly, the coefficients on the marginality axis were rather low. Four significant ENFA factors explained 68.9% and 65.6% of the total variance during the warm and cold seasons, respectively (Table 5 and Fig. 4). Table 4 Variance explained by the most significant factors (Marg = Marginality; Spec = Specialization) in the Ecological Niche Factor Analysis (ENFA) for suitable habitat for the golden jackal in Friuli–Venezia Giulia region (March 2017–November 2018) Ecological Niche Factor Analysis (ENFA) for suitable habitat for the golden jackal in Friuli–Venezia Giulia region (March 2017–November 2018). X-axis corresponds to the marginality axis; Y-axis corresponds to the first axis of specialization. Arrow length indicates the magnitude with which each variable accounts for the variance on each of the two axes. The white and grey areas correspond to the minimum convex polygon enclosing all the projections of the available and used points, respectively. White circle indicates niche position (median marginality) relative to the average background environment (the plot origin) Table 5 Variance explained by the most significant factors (Marg = Marginality; Spec = Specialization) in an Ecological Niche Factor Analysis (ENFA) for suitable habitat for the red fox in Friuli–Venezia Giulia region (March 2017–November 2018) Ecological Niche Factor Analysis (ENFA) for suitable habitat for the red fox in Friuli–Venezia Giulia region (March 2017–November 2018). X-axis corresponds to the marginality axis; Y-axis corresponds to the first axis of specialization. Arrow length indicates the magnitude with which each variable accounts for the variance on each of the two axes. The white and grey areas correspond to the minimum convex polygon enclosing all the projections of the available and used points, respectively. White circle indicates niche position (median marginality) relative to the average background environment (the plot origin) During 1134 trapping days (warm season: n = 585; cold season: n = 549), we collected 178 videos recording golden jackal activity (warm season: n = 34; cold season: n = 144) and 118 videos recording red fox activity (warm season: n = 84; cold season: n = 34).
Both species had non-uniform patterns of activity over the 24 h, being active especially at night and at dawn and dusk. In particular, during the warm season both species showed a marked activity peak around midnight and a less pronounced activity peak at 06:00; during the cold season both species had a prolonged activity bout between dusk and dawn, but their main activity peaks diverged, with the golden jackal peaking at dawn (around 06:00) and the red fox peaking around 21:00 (Table 6 and Fig. 5). Nevertheless, temporal overlap between the golden jackal and the red fox was extensive during both seasons, as the coefficient of overlap was close to 1 (warm season: Δ1 = 0.77; cold season: Δ1 = 0.82), with no significant difference between the activity patterns of the two species (Table 6). Table 6 Activity patterns of the golden jackal and the red fox in Friuli–Venezia Giulia region (March 2017–November 2018): non-uniformity of distributions and species temporal overlap Activity patterns of the golden jackal and the red fox and interspecific temporal overlap in Friuli–Venezia Giulia region (March 2017–November 2018) Food habits of the golden jackal and the red fox in Friuli–Venezia Giulia region (March 2017–November 2018): mean percent volume (VM% ± SE) of consumed categories We analysed 53 golden jackal scats (warm season: n = 31; cold season: n = 22) and 90 red fox scats (warm season: n = 56; cold season: n = 34). The sample size was adequate to represent both the golden jackal and the red fox diet (Additional file 1: Fig. 1 in S3). Scat analyses detected nine food categories consumed by the golden jackal and seven food categories consumed by the red fox (Fig. 6 and Additional file 1: Table 1 in S4). The categories most consumed by the golden jackal were wild ungulates, followed by small mammals and medium-sized mammals; other seasonally important categories were birds and fruits (Fig. 6 and Additional file 1: Table 1 in S4). The golden jackal diet differed significantly between the seasons (NPMANOVA: F = 2.26; p = 0.024); in particular, the consumption of fruits (Kruskal–Wallis test: H = 7.91; df = 1; p = 0.005) and grasses (H = 7.59; df = 1; p = 0.006) was significantly higher during the cold season. The categories most consumed by the red fox were small mammals, followed by fruits and wild ungulates (Fig. 6 and Additional file 1: Table 1 in S4). The red fox diet also differed significantly between the seasons (F = 2.76; p = 0.028), as the consumption of fruits was higher during the cold season (H = 8.89; df = 1; p = 0.003). Diet overlap between the two canids was medium–high during both the warm season (Pianka's index: O = 0.68; CI = 0.42–0.99) and the cold season (O = 0.53; CI = 0.35–0.75), with no significant seasonal difference. Nevertheless, golden jackal and red fox diets differed significantly from each other in both seasons (F = 4.98; p < 0.0001; warm season: p = 0.004; cold season: p = 0.002). During the warm season the consumption of wild ungulates was significantly higher for the golden jackal (H = 13.80; df = 1; p = 0.001), whereas the consumption of invertebrates (H = 4.16; df = 1; p = 0.041), fruits (H = 8.92; df = 1; p = 0.003) and grasses (H = 5.04; df = 1; p = 0.025) was significantly higher for the red fox.
Similarly, during the cold season the consumption of wild ungulates (H = 6.03; df = 1; p = 0.014) and medium-sized mammals (H = 5.37; df = 1; p = 0.020) was significantly higher for the golden jackal, whereas the consumption of small mammals (H = 5.57; df = 1; p = 0.018) and fruits (H = 7.68; df = 1; p = 0.005) was significantly higher for the red fox (Fig. 6). Even though we chose our three study areas for this research based on the most recent known distribution range of the golden jackal, we found no evidence of golden jackal presence in the Magredi study area during our first study period; instead, interestingly, we found evidence of wolf presence. In particular, the stable presence of a pair of wolves was documented throughout 2017, while the first reproduction event was confirmed in 2018 [73]. The non-detection or displacement of the golden jackal in newly established wolf ranges, due to some top-down effect induced by wolves on golden jackals, has been documented several times in Europe [74] and supports the hypothesis that interspecific interactions between these large and meso-predators may be similar to those observed in North America between the wolf and the coyote [75, 76]. The obtained results suggest marked spatial partitioning between the golden jackal and the red fox. Spatial segregation is one of the key mechanisms regulating coexistence within the carnivore guild: species are sympatric across their range, but inverse relationships may be observed at local scales due to interspecific competition [20]. Accordingly, we observed not only a decreasing spatial overlap from the large to the small spatial scale between the two species, but also, considering the estimated utilization distributions, a spatial displacement between the species' core areas: in other words, the areas most frequented by the golden jackal overlapped the areas least frequented by the red fox. Therefore, the observed spatial partitioning may represent the response of the subordinate species (i.e., the red fox) to the dominant species (i.e., the golden jackal), trying to reduce the probability of direct encounters, namely a mechanism of dominant-predator avoidance. Accordingly, spatial segregation between these two species has been documented elsewhere [33]. Interestingly, Scheinin et al. [32] experimentally demonstrated that red foxes completely avoid direct encounters with golden jackals, although this flight behaviour entails the abandonment of a very rich food patch. It is plausible that the extreme forms of interference competition, namely interspecific killing and intraguild predation [35, 77], may occur between these species, similarly to what has been observed between coyotes and foxes ([8] and references therein). Evidence supporting this hypothesis may be the consumption of red foxes by golden jackals, which has been found in our study areas as well as in others [78]. Spatial partitioning is strongly related to habitat partitioning, another important mechanism promoting species coexistence. The golden jackal and the red fox are considered very adaptable species able to inhabit a wide range of habitats across the Eurasian continent [24, 26, 27, 29, 31]; however, our results suggest that the red fox may be a noticeably more pronounced habitat generalist than the golden jackal.
The main difference between the species lay in the magnitude of the marginality and specialization factors: the golden jackal showed higher marginality and specialization than the red fox for most of the considered land cover variables. Indeed, the red fox occupied a broader habitat niche, persisting in intensively cultivated areas and demonstrating higher tolerance to human-induced habitat modifications. The red fox occurred throughout the study areas, while the golden jackal was absent from approximately 50% of the red fox's range (Additional file 1: S2). The golden jackal proved to be a specialist concerning its habitat niche: shrublands, natural open areas, such as pastures and grasslands, and extensively cultivated lands were positively associated with the species' presence. These kinds of habitats provide adequate resources, such as abundant and diverse prey, as well as den and resting sites [41, 79]. Conversely, the intensive agricultural lands predominant in the plain zone of the Goritian Karst study area are unlikely to provide adequate resources for the golden jackal, because they are characterized by mono-specific fields of cereals and legumes that lead to a uniform landscape [41]. On the other hand, the red fox proved to be a generalist concerning its habitat niche: the species had no specific habitat preferences compared to the availability of the study areas. It is plausible that the presence of the habitat-specialist, but dominant, species in less human-modified habitats might have led the habitat-generalist, but subordinate, species to massively occupy intensively cultivated areas. In other words, the red fox behaved as the subordinate but superior exploitative competitor, and its adaptations to human-modified habitats may have enabled it to exploit areas unsuitable or suboptimal for the golden jackal [80]. Besides spatial and habitat partitioning, partitioning at the temporal scale may play a major role in relaxing competition between species with similar ecological requirements [59]. Conversely, we observed extensive temporal overlap along the diel cycle between the golden jackal and the red fox, both having predominantly crepuscular and nocturnal activity patterns. In any case, following the categories defined by Monterroso et al. [58], the two canids may be considered facultatively nocturnal species, as they showed occasional activity events during the daytime. Very similar patterns have been observed in other canids as well [56, 18, 81, 82] and may depend on either physiological periodicity or adaptation to the biological rhythms of their main prey species [22]. Being active at the same time but exploring different areas and habitats may lead sympatric species to infrequent encounters, and thus may represent an adequate mechanism to reduce potential competition [11, 83, 84]. Alternatively, the observed spatial and habitat partitioning may also be related to the partial diet overlap; in other words, the two species may use different areas for foraging. The golden jackal and the red fox are both generalist carnivores consuming a wide range of prey species across the Eurasian continent, from small mammals to wild ungulates; moreover, both species can alternatively behave as scavengers or active predators, even on larger prey species [13, 85–87]. Our research showed that in the Friuli–Venezia Giulia region the golden jackal and the red fox mainly shared three important prey categories: wild ungulates, small mammals, and medium-sized mammals.
However, the obtained results also underline that, despite the substantial diet overlap, the two species may have occupied distinct trophic niches, with the golden jackal mainly consuming wild ungulates and the red fox mainly consuming small mammals. The consumption of medium-sized mammals, the third shared food category, was of secondary importance for both canids. Generally, the food habits of both species were similar to those reported by previous studies, in particular those carried out in more natural and heterogeneous landscapes [88, 89]. Conversely, in human-modified landscapes, e.g. intensive agricultural lands, the food habits of the two species tend to converge towards an increased consumption of small mammals, mainly rodents [15, 16, 90]. Thus, in altered and highly uniform landscapes, where prey abundance and diversity may be limited, a larger dietary overlap between the golden jackal and the red fox is expected, leading to a greater potential competition for trophic resources. The observed trophic differentiation may have been made possible by the species' specific predation behaviours. Specific hunting behaviours, with the golden jackal capable of cooperative hunting and therefore of preying on larger species [16], may have produced the observed diet partitioning. This may be particularly true considering that the wild ungulate species most consumed by the golden jackal was the roe deer. This ungulate may represent an easy prey for jackals because of its small body size (15–35 kg) and its hiding and solitary behaviour, in particular at birth time and during fawn lactation [13]. Nevertheless, predation cannot exclude scavenging: within the carnivore guild, canids are among the most avid scavengers [91]. Thus, as an alternative explanation, the golden jackal may have excluded the red fox from wild ungulate carcasses, achieving exclusive access to such foraging subsidies. According to recent research related to the suppression vs. facilitation hypothesis, because carcasses increase the likelihood of encountering a competitor (the "fatal attraction" hypothesis [92]), the red fox may avoid scavenging when other resources (e.g., small mammals) are abundant [91]. The observed patterns of resource use by the golden jackal and the red fox in our research were generally consistent with the predictions of the niche partitioning hypothesis, which is expected to favour interspecific coexistence [6]. We found marked spatial and habitat partitioning between the two canids, but extensive temporal overlap along the diel cycle, both species having predominantly crepuscular and nocturnal activity patterns. The habitat analysis showed a high specialization of the golden jackal and a pronounced generalism of the red fox. Moreover, partial diet partitioning based on prey size emerged, with the golden jackal mainly feeding on wild ungulates and the red fox preferring small mammals. These results provide evidence that coexistence between the two canids was allowed by partial niche partitioning, despite the existing potential for competition [4]. Aggressive interactions between pairs of species are not constant, and it is plausible that the interactions between the golden jackal and the red fox may vary from tolerance to predation, similarly to those observed between the coyote and the red fox [93]. Based on the obtained results, we can deduce that the golden jackal and the red fox mainly segregated along the spatial, habitat, and trophic dimensions.
This partitioning may be partially due to some ecological adaptations, i.e. the specialization in habitat use of the golden jackal vs. the superior exploitative ability in human-modified habitats of the red fox, and specific behaviours, i.e. alternative hunting behaviours (cooperative vs. solitary hunting), but it may be partially due also to the avoidance behaviour of the red fox aimed at reducing the competition with the golden jackal. Indeed, the observed spatial partitioning most likely reduced the probability of direct encounters between the two canids, in particular in more risky circumstances (e.g. scavenging on large prey carcasses) and may represent the response of the subordinate but superior exploitative species to relax interference competition with the dominant one. This research contributes to our knowledge of interspecific interactions between potentially competing species providing useful new insights into the ecological and behavioural adaptability of the considered carnivore species. Such findings also provide basic knowledge on the ecology of a species, i.e. the golden jackal, hitherto poorly studied in Italy. All data analysed during this study are included in the supplementary materials to this published article. The datasets used and analysed during the current study are available from the corresponding author on reasonable request. Sévêque A, Gentle LK, López-Bao JV, Yarnell RW, Uzal A. Human disturbance has contrasting effects on niche partitioning within carnivore communities. Biol Rev. 2020;95:1689–705. Hayward MW, Kerley GIH. Prey preferences and dietary overlap amongst Africa's large predators. S Afr J Wildl Res. 2008;38:93–108. Palomares F, Caro TM. Interspecific killing among mammalian carnivores. Am Nat. 1999;153:492–508. Donadio E, Buskirk SW. Diet, morphology, and interspecific killing in carnivora. Am Nat. 2006;167:524–36. de Oliveira TG, Pereira JA. Intraguild predation and interspecific killing as structuring forces of carnivoran communities in South America. J Mamm Evol. 2014;21:427–36. Linnell JDC, Strand O. Interference interactions, co-existence and conservation of mammalian carnivores. Divers Distrib. 2000;6:169–76. Levi T, Wilmers CC. Wolves–coyotes–foxes: a cascade among carnivores. Ecology. 2012;93:921–9. Newsome TM, Ripple WJ. A continental scale trophic cascade from wolves through coyotes to foxes. J Anim Ecol. 2015;84:49–59. Cozzi G, Broekhuis F, McNutt JW, Turnbull LA, Macdonald DW, Schmid B. Fear of the dark or dinner by moonlight? Reduced temporal partitioning among Africa's large carnivores. Ecology. 2012;93:2590–9. Creel S, Creel NM. Limitation of African wild dogs by competition with larger carnivores. Conserv Biol. 1996;10:526–38. Dröge E, Creel S, Becker MS, M'soka J. Spatial and temporal avoidance of risk within a large carnivore guild. Ecol Evol. 2017;7:189–99. Darnell AM, Graf JA, Somers MJ, Slotow R, Szykman Gunther M. Space use of African wild dogs in relation to other large carnivores. PLoS ONE. 2014;9:e98846. Hayward MW, Porter L, Lanszki J, Kamler JF, Beck JM, Kerley GIH, et al. Factors affecting the prey preferences of jackals (Canidae). Mamm Biol. 2017;85:70–82. Patalano M, Lovari S. Food habits and trophic niche overlap of the wolf Canis lupus, L. 1758 and the red Fox Vulpes vulpes (L. 1758) in a Mediterranean mountain area. Revue d'écologie. 1993;48:279–94. Lanszki J, Heltai M, Szabó L. 
Feeding habits and trophic niche overlap between sympatric golden jackal (Canis aureus) and red fox (Vulpes vulpes) in the Pannonian ecoregion (Hungary). Can J Zool. 2006;84:1647–56. Lanszki J, Kurys A, Szabó L, Nagyapáti N, Porter LB, Heltai M. Diet composition of the golden jackal and the sympatric red fox in an agricultural area (Hungary). Folia Zool. 2016;65:310–22. Bassi E, Donaggio E, Marcon A, Scandura M, Apollonio M. Trophic niche overlap and wild ungulate consumption by red fox and wolf in a mountain area in Italy. Mamm Biol. 2012;77:369–76. Ferretti F, Pacini G, Belardi I, ten Cate B, Sensi M, Oliveira R, et al. Recolonizing wolves and opportunistic foxes: interference or facilitation? Biol J Linn Soc. 2021;132:196–210. Schoener TW. Resource partitioning in ecological communities. Science. 1974;185:27–39. Manlick PJ, Woodford JE, Zuckerberg B, Pauli JN. Niche compression intensifies competition between reintroduced American martens (Martes americana) and fishers (Pekania pennanti). J Mammal. 2017;98:690–702. Chesson P. Mechanisms of maintenance of species diversity. Annu Rev Ecol Syst. 2000;31:343–66. Barrull J, Mate I, Ruiz-Olmo J, Casanovas JG, Gosàlbez J, Salicrú M. Factors and mechanisms that explain coexistence in a Mediterranean carnivore assemblage: an integrated study based on camera trapping and diet. Mamm Biol. 2014;79:123–31. Hoffmann M, Arnold J, Duckworth JW, Jhala Y, Kamler JF, Krofel M. Canis aureus. 2018. The IUCN red list of threatened species. doi:https://0-doi-org.brum.beds.ac.uk/10.2305/IUCN.UK.2018-2.RLTS.T118264161A163507876.en. Šálek M, Červinka J, Banea OC, Krofel M, Ćirović D, Selanec I, et al. Population densities and habitat use of the golden jackal (Canis aureus) in farmlands across the Balkan Peninsula. Eur J Wildl Res. 2014;60:193–200. Koepfli K-P, Pollinger J, Godinho R, Robinson J, Lea A, Hendricks S, et al. Genome-wide evidence reveals that African and Eurasian golden jackals are distinct species. Curr Biol. 2015;25:2158–65. Trouwborst A, Krofel M, Linnell JDC. Legal implications of range expansions in a terrestrial carnivore: the case of the golden jackal (Canis aureus) in Europe. Biodivers Conserv. 2015;24:2593–610. Ranc N, Álvares F, Banea O, Berce T, Caganacci F, Červinka J, et al. The golden jackal in Europe: where to go next? In: Proceedings of the 2nd international jackal symposium. Marathon Bay, Attiki, Greece. 2018. p. 9. Hoffmann M, Sillero-Zubiri C. Vulpes vulpes. 2016. The IUCN red list of threatened species. doi:https://0-doi-org.brum.beds.ac.uk/10.2305/IUCN.UK.2021-1.RLTS.T23062A193903628.en. Macdonald DW. The ecology of carnivore social behaviour. Nature. 1983;301:379–84. Larivière S, Pasitschniak-Arts M. Vulpes vulpes. Mamm Species. 1996. https://0-doi-org.brum.beds.ac.uk/10.2307/3504236. Šálek M, Drahníková L, Tkadlec E. Changes in home range sizes and population densities of carnivore species along the natural to urban habitat gradient. Mamm Rev. 2015;45:1–14. Scheinin S, Yom-Tov Y, Motro U, Geffen E. Behavioural responses of red foxes to an increase in the presence of golden jackals: a field experiment. Anim Behav. 2006;71:577–84. Shamoon H, Saltz D, Dayan T. Fine-scale temporal and spatial population fluctuations of medium sized carnivores in a Mediterranean agricultural matrix. Landsc Ecol. 2017. https://0-doi-org.brum.beds.ac.uk/10.1007/s10980-017-0517-8. Lapini L, Perco F, Benussi E. Nuovi dati sullo sciacallo dorato (Canis aureus L., 1758) in Italia (Mammalia, Carnivora, Canidae). Gortania Atti Mus Friul Storia Nat. 
1993;14:233–40. Polis GA, Myers CA, Holt RD. The ecology and evolution of intraguild predation: potential competitors that eat each other. Annu Rev Ecol Syst. 1989;20:297–330. Gittleman JL. Carnivore body size: ecological and taxonomic correlates. Oecologia. 1985;67:540–54. Lapini L, Dall'Asta A, Dublo L, Spoto M, Vernier E. Materiali per una teriofauna dell'Italia nord-orientale (Mammalia, Friuli–Venezia Giulia). Gortania Atti Mus Friul Storia Nat. 1996;17:149–248. Lapini L, Molinari P, Dorigo L, Are G, Beraldo P. Reproduction of the golden jackal (Canis aureus moreoticus Geoffroy Saint Hilaire, 1835) in Julian pre-Alps, with new data on its range-expansion in the high-Adriatic hinterland (Mammalia, Carnivora, Canidae). Boll Mus Civ Storia Nat Venezia. 2009;60:169–86. Lapini L, Conte D, Zupan M, Kozlan L. Italian jackals 1984–2011: an updated review (Canis aureus: Carnivora, Canidae). Boll Mus Civ Storia Nat Venezia. 2011;62:219–32. Lapini L, Dreon AL, Caldana M, Luca M, Villa M. Distribuzione, espansione e problemi di conservazione di Canis aureus in Italia (Carnivora: Canidae). Quad Mus Civ Storia Nat Ferrara. 2018;6:89–96. Torretta E, Dondina O, Delfoco C, Riboldi L, Orioli V, Lapini L, et al. First assessment of habitat suitability and connectivity for the golden jackal in north-eastern Italy. Mamm Biol. 2020;100:631–43. Barabesi L, Franceschi S. Sampling properties of spatial total estimators under tessellation stratified designs. Environmetrics. 2011;22:271–8. Barabesi L, Fattorini L. Random versus stratified location of transects or points in distance sampling: theoretical results and practical considerations. Environ Ecol Stat. 2013;20:215–36. Ancrenaz M, Hearn AJ, Ross J, Sollmann R, Wilting A. Handbook for wildlife monitoring using camera-traps. BBEC II Secretariat. 2012. Sørensen T. A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons. Kong Dan Vidensk Selsk Biol Skr. 1948;5:1–34. Worton BJ. Kernel methods for estimating the utilization distribution in home-range studies. Ecology. 1989;70:164–8. Silverman BW. Density estimation for statistics and data analysis. Boca Raton: Chapman & Hall/CRC; 1998. Seaman DE, Millspaugh JJ, Kernohan BJ, Brundige GC, Raedeke KJ, Gitzen RA. Effects of sample size on kernel home range estimates. J Wildl Manag. 1999;63:739. Seaman DE, Powell RA. An evaluation of the accuracy of kernel density estimators for home range analysis. Ecology. 1996;77:2075–85. Fieberg J, Kochanny CO. Quantifying home-range overlap: the importance of the utilization distribution. J Wildl Manag. 2005;69:1346–59. Calenge C. Home range estimation in R: The adehabitatHR package. 2018. Hirzel AH, Hausser J, Chessel D, Perrin N. Ecological-Niche Factor Analysis: how to compute habitat-suitability maps without absence data? Ecology. 2002;83:2027–36. Basille M, Calenge C, Marboutin É, Andersen R, Gaillard J-M. Assessing habitat selection using multivariate statistics: some refinements of the ecological-niche factor analysis. Ecol Model. 2008;211:233–40. Rinnan D. CENFA: climate and ecological niche factor analysis. 2018. Ridout MS, Linkie M. Estimating overlap of daily activity patterns from camera trap data. J Agric Biol Environ Stat. 2009;14:322–37. Torretta E, Serafini M, Puopolo F, Schenone L. Spatial and temporal adjustments allowing the coexistence among carnivores in Liguria (N-W Italy). Acta Ethologica. 2016;19:123–32. Kelly MJ, Holub EL. 
Camera trapping of carnivores: trap success among camera types and across species, and habitat selection by species, on Salt Pond Mountain, Giles County, Virginia. Northeast Nat. 2008;15:249–62. Monterroso P, Alves PC, Ferreras P. Plasticity in circadian activity patterns of mesocarnivores in Southwestern Europe: implications for species coexistence. Behav Ecol Sociobiol. 2014;68:1403–17. Torretta E, Mosini A, Piana M, Tirozzi P, Serafini M, Puopolo F, et al. Time partitioning in mesocarnivore communities from different habitats of NW Italy: insights into martens' competitive abilities. Behaviour. 2017;154:241–66. Pewsey A, Neuhäuser M, Ruxton GD. Circular statistics in R. 1st ed. Oxford, New York: Oxford University Press; 2013. Linkie M, Ridout MS. Assessing tiger-prey interactions in Sumatran rainforests: tiger-prey temporal interactions. J Zool. 2011;284:224–9. Meredith M, Ridout M. Overview of the "overlap" package. 2014. Lund U, Agostinelli C, Agostinelli MC. Package "circular." Repos CRAN. 2017. Brunner H, Coman BJ. The identification of mammalian hair. Melbourne: Inkata Press; 1974. Teerink BJ. Hair of West-European mammals: atlas and identification key. Cambridge: Cambridge Univ. Press; 1991. De Marinis AM, Asprea A. Hair identification key of wild and domestic ungulates from southern Europe. Wildl Biol. 2006;12:305–20. Dove CJ, Koch S. Microscopy of feathers: A practical guide for forensic feather identification. J Am Soc Trace Evid Exam. 2010;1:15–7. Kruuk H, Parish T. Feeding specialization of the European badger Meles meles in Scotland. J Anim Ecol. 1981;50:773. Anderson MJ. A new method for non-parametric multivariate analysis of variance. Austral Ecol. 2001;26:32–46. Pianka ER. The structure of lizard communities. Annu Rev Ecol Syst. 1973;4:53–74. R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2020. https://www.r-project.org/. Burchett WW, Ellis AR, Harrar SW, Bathke AC. Nonparametric inference for multivariate data: the R Package npmv. J Stat Softw. 2017;76:1–18. Marucco F, Avanzinelli E, Bassano B, Bionda R, Bisi F, Calderola S, et al. La popolazione di lupo sulle Alpi Italiane 2014–2018. Relazione tecnica, Progetto LIFE 12 NAT/ IT/00080 WOLFALPS – Azione A4 e D1. 2018. Krofel M, Giannatos G, Ćirovič D, Stoyanov S, Newsome T. Golden jackal expansion in Europe: a case of mesopredator release triggered by continent-wide wolf persecution? Hystrix Ital J Mammal. 2017. https://0-doi-org.brum.beds.ac.uk/10.4404/hystrix-28.1-11819. Carbyn L. Coyote population fluctuations and spatial distribution in relation to wolf territories in Riding Mountain National Park, Manitoba. Can Field-Naturalist. 1982;96:176–183. Merkle J, Stahler DR, Smith DW. Interference competition between gray wolves and coyotes in Yellowstone National Park. Can J Zool. 2009;87:56–63. Lourenço R, Penteriani V, Rabaça JE, Korpimäki E. Lethal interactions among vertebrate top predators: a review of concepts, assumptions and terminology: lethal interactions among vertebrate top predators. Biol Rev. 2014;89:270–83. Raichev EG, Tsunoda H, Newman C, Masuda R, Georgiev DM, Kaneko Y. The reliance of the golden jackal (Canis aureus) on anthropogenic foods in winter in central Bulgaria. Mammal Study. 2013;38:19–27. Chourasia P, Mondal K, Sankar K, Qureshi Q. Den site selection by golden jackal (Canis aureus) in a semi arid forest ecosystem in western India. Bull Pure Appl Sci Zool. 2020;39a:160. Robinson QH, Bustos D, Roemer GW. 
The application of occupancy modeling to evaluate intraguild predation in a model carnivore system. Ecology. 2014;95:3112–23. Johnson WE, Franklin WL. Spatial resource partitioning by sympatric grey fox (Dusicyon griseus) and culpeo fox (Dusicyon culpaeus) in southern Chile. Can J Zool. 1994;72:1788–93. Lucherini M, Reppucci JI, Walker RS, Villalba ML, Wurstten A, Gallardo G, et al. Activity pattern segregation of carnivores in the High Andes. J Mammal. 2009;90:1404–9. Viota M, Rodríguez A, López-Bao JV, Palomares F. Shift in microhabitat use as a mechanism allowing the coexistence of victim and killer carnivore predators. Open J Ecol. 2012;02:115–20. Gómez-Ortiz Y, Monroy-Vilchis O, Castro-Arellano I. Temporal coexistence in a carnivore assemblage from central Mexico: temporal-domain dependence. Mammal Res. 2019;64:333–42. Díaz-Ruiz F, Delibes-Mateos M, García-Moreno JL, María López-Martín J, Ferreira C, Ferreras P. Biogeographical patterns in the diet of an opportunistic predator: the red fox Vulpes vulpes in the Iberian Peninsula. Mammal Rev. 2013;43:59–70. Soe E, Davison J, Süld K, Valdmann H, Laurimaa L, Saarma U. Europe-wide biogeographical patterns in the diet of an ecologically and epidemiologically important mesopredator, the red fox Vulpes vulpes: a quantitative review. Mammal Rev. 2017;47:198–211. Tsunoda H, Saito MU. Variations in the trophic niches of the golden jackal Canis aureus across the Eurasian continent associated with biogeographic and anthropogenic factors. J Vertebr Biol. 2020. https://0-doi-org.brum.beds.ac.uk/10.25225/jvb.20056. Radović A, Kovačić D. Diet composition of the golden jackal (Canis aureus L.) on the Pelješac Peninsula, Dalmatia, Croatia. Period Biol. 2010;112:219–24. Tsunoda H, Raichev EG, Newman C, Masuda R, Georgiev DM, Kaneko Y. Food niche segregation between sympatric golden jackals and red foxes in central Bulgaria. J Zool. 2017;303:64–71. Lanszki J, Heltai M. Food preferences of golden jackals and sympatric red foxes in European temperate climate agricultural area (Hungary). Mammalia. 2010. https://0-doi-org.brum.beds.ac.uk/10.1515/mamm.2010.005. Prugh LR, Sivy KJ. Enemies with benefits: integrating positive and negative interactions among terrestrial carnivores. Ecol Lett. 2020;23:902–18. Sivy KJ, Pozzanghera CB, Grace JB, Prugh LR. Fatal attraction? Intraguild facilitation and suppression among predators. Am Nat. 2017;190:663–79. Gese EM, Stotts TE, Grothe S. Interactions between coyotes and red foxes in Yellowstone National Park, Wyoming. J Mammal. 1996;77:377–82. We thank L. Lapini, M. de Luca, L. Kozlan, N. Cernetti, M. Pavanello, F. Cimenti, I. Conti, F. Palla, B. Deltin, L. Dreon, G. Capaldi, N. Cesco, L. Vatta, A. Marusich, C. Parolin, G. Colombo, S. Ferfolja for their useful suggestions and help during fieldwork. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Department of Earth and Environmental Sciences, University of Pavia, Via Ferrata 1, 27100, Pavia, Italy Elisa Torretta, Luca Riboldi, Claudio Delfoco & Alberto Meriggi Department of Chemistry, Life Sciences and Environmental Sustainability, University of Parma, Parco Area delle Scienze 11/a, 43124, Parma, Italy Elena Costa & Erica Frignani Elisa Torretta Luca Riboldi Elena Costa Claudio Delfoco Erica Frignani Alberto Meriggi ET and AM conceived the study. ET, LR, EC, CD, and EF collected the field data. ET, LR, EC, and CD carried out scat analyses. 
ET conducted all data analyses and drafted the manuscript, while AM supervised the data analyses. All authors read and approved the final manuscript. Correspondence to Elisa Torretta. This study complies with the guidelines or rules for animal care and use for scientific research. 12862_2021_1860_MOESM1_ESM.pdf Additional file 1: S1. Details on the sampling effort: collection of indirect signs of presence. S2. Details on collected data. S3. Diet analysis: adequacy of sample size. S4. Food habits of the golden jackal and the red fox. Torretta, E., Riboldi, L., Costa, E. et al. Niche partitioning between sympatric wild canids: the case of the golden jackal (Canis aureus) and the red fox (Vulpes vulpes) in north-eastern Italy. BMC Ecol Evo 21, 129 (2021). https://0-doi-org.brum.beds.ac.uk/10.1186/s12862-021-01860-3 Niche overlap Niche partitioning Camera trapping Scat analysis Utilization distributions Activity patterns Ecological Niche Factor Analysis (ENFA)
Ion chromatographic determination of chlorite and chlorate in chlorinated food using a hydroxide eluent. Kim, Dasom; Jung, Sungjin; Lee, Gunyoung; Yun, Sang Soon; Lim, Ho Soo; Kim, Hekap. p. 57. https://doi.org/10.5806/AST.2017.30.2.57 This study was conducted to develop an analytical technique for the determination of chlorite and chlorate concentrations in fresh-cut food and dried fish products by an ion chromatography/conductivity detection method using a hydroxide mobile phase. Deionized water was added to homogenized samples, which were then extracted by ultrasound extraction and centrifuged at high speed (8,500 rpm). Subsequently, a Sep-Pak tC18 cartridge was used to purify the supernatant. Chlorite and chlorate ions were separated using 20 mM KOH solution as the mobile phase and a Dionex IonPac AS27 column as the stationary phase. Ethylenediamine was used as a sample preservative, and dibromoacetate was added to adjust for the disparity in extraction efficiencies between the food samples. The method detection limits for chlorite and chlorate were estimated to be 0.2 mg/kg and 0.1 mg/kg, respectively, and the coefficients of determination ($r^2$), which denote the linearity of their calibration curves, were 0.9973 and 0.9987, respectively. The recovery rates for the two ions were 92.1 % and 96.3 %, with relative standard deviations of 7.47 % and 6.18 %, respectively. Although neither chlorite nor chlorate was detected in the food samples, the analytical technique developed in this study may potentially be used in the analysis of disinfected food products.

Determination of volatile compounds by headspace-solid phase microextraction - gas chromatography / mass spectrometry: Quality evaluation of Fuji apple. Lee, Yun-Yeol; Jeong, Moon-Cheol; Jang, Hae Won. p. 68. The volatile components in 'Fuji' apple were effectively determined by headspace solid-phase microextraction (HS-SPME) combined with gas chromatography-mass spectrometry (GC-MS). A total of 48 volatile components were identified and tentatively characterized based on the National Institute of Standards and Technology (NIST) MS spectral library and the Kovats GC retention index (RI). The harvested Fuji apples were divided into two groups, 1-methylcyclopropene (1-MCP)-treated and non-treated (control) samples, to find important indicators distinguishing the two groups. The major volatile components of both groups were 2-methylbutyl acetate, hexyl acetate, butyl 2-methylbutanoate, hexyl butanoate, hexyl 2-methylbutanoate, hexyl hexanoate and farnesene. No significant differences in these major compounds between 1-MCP-treated and non-treated apples were observed during 1 month of storage. Interestingly, the amount of off-flavors, including 1-butanol and butyl butanoate, in 1-MCP-treated apples decreased over 5 months and then increased after 7 months. However, non-treated apples did not show significant changes in off-flavors during 7 months of storage (p<0.05). The non-treated apples also contained higher levels of the two off-flavors than 1-MCP-treated apples. These two compounds, 1-butanol and butyl butanoate, can be used as indicators for the quality evaluation of Fuji apple.

Development of dry-origin latent footwear impression on non-porous and semi-porous surfaces using a 5-methylthioninhydrin and L-alanine complex. Hong, Sungwook; Kim, Yeounjeung; Park, Jihye; Lee, Hoseon. p. 75. 5-methylthioninhydrin (5-MTN) is an amino acid-sensitive reagent used for the development of latent fingermarks deposited on porous surfaces such as paper and wood.
The present study demonstrates that 5-MTN can also be used as a latent footwear impression enhancement reagent, by reacting with the trace multivalent metal ions that are the main components of the latent footwear impression. The 5-MTN and L-alanine complex (MTN-ALA) used for latent footwear impression development was prepared by mixing $4.5\times10^{-3}$ M 5-MTN (in methanol) and $4.5\times10^{-3}$ M L-alanine (in methanol) in a 1:1 ratio and keeping the mixture undisturbed at room temperature for 24 h. The latent footwear impressions were deposited on white and black non-porous surfaces (glass plate, polyethylene panel, polypropylene panel, acryl panel, polyvinyl chloride (PVC) panel, poly(methyl methacrylate) (PMMA) panel, acrylonitrile-butadiene-styrene (ABS) panel, tile) and a semi-porous surface (painted wood). The latent footwear impressions on these surfaces were treated with the MTN-ALA complex by spraying. The fluorescence of the footwear impressions (arising from the reaction between MTN-ALA and the metal ions) was observed under a 505 nm forensic light source with an orange barrier filter. Enhancement of the latent footwear impressions was achieved on the black surfaces without any blurring. However, the fluorescence (enhancement) of the footwear impressions was not observed on the white PVC, PMMA, and ABS surfaces, because the incident light reflected off the surface and interfered with observation. The sensitivity of MTN-ALA was superior to that of 2,2'-dipyridyl, a representative non-fluorescing footwear impression enhancement reagent, and similar to that of 8-hydroxyquinoline, a representative fluorescing footwear impression enhancement reagent.

Development of simple HPLC-UV method for discrimination of Adenophorae Radix. Vu, Thi Phuong Duyen; Kim, Kyung Tae; Pham, Yen; Bao, Haiying; Kang, Jong Seong. p. 82. Adenophorae Radix (AR) is a frequently used medicinal herb; because of its popularity, morphologically similar herbal products are often sold as substitutes. However, no analytical method to identify AR based on quantitative analysis is registered in the Korean, Japanese, or Chinese Pharmacopoeias. This study developed a simple HPLC method to discriminate between authentic AR and substitutes. Linoleic acid was used as a marker compound of AR. Our optimized HPLC-UV conditions included a mobile phase of 90 % acetonitrile under isocratic conditions and a flow rate of 1.0 mL/min at room temperature. The detection wavelength was set at 205 nm. Linoleic acid was detected at 13.5 minutes, for a total analysis time of 20 minutes. The standard herb of AR contained 0.025 % linoleic acid, while four authentic AR samples and eight substitutes contained 0.040~0.071 % and 0.004~0.014 %, respectively. Comparison of the linoleic acid concentrations of the samples to reference AR showed that, among the 12 samples, only four were authentic. Thus, our HPLC-UV method, along with the suggested content criterion for linoleic acid, can be used for the quick and accurate determination of whether herbal products are authentic AR or substitutes.

Thermodynamics of the binding of Substance P to lipid membranes. Lee, Woong Hyoung; Kim, Chul. p. 89. The thermodynamic functions for the binding of the peptide Substance P (SP) to the surface of lipid vesicles made of various types of lipids were obtained by isothermal titration calorimetry. The reaction enthalpies measured from the experiments were $-0.11$ to $-4.5\;\mathrm{kcal\,mol^{-1}}$.
The sizes of the lipid vesicles were measured with a dynamic light scattering instrument in order to examine the correlation between the reaction enthalpies and the vesicle sizes. The binding of SP to lipid vesicles with diameters of 37 to 108 nm was classified as either enthalpy-driven or entropy-driven according to the size of the vesicles. For the enthalpy-driven binding reaction, the experimental results for the DMPC/DMPG/DMPH and DMPC/DMPS/DMPH vesicles confirmed the significance of the electrostatic interactions between SP and the lipid molecules, as well as the importance of the hydrophobic interactions between the hydrophobic groups of SP and the lipid molecules.

Effect of centrifugation on tryptic protein digestion. Kim, Soohwan; Kim, Yeoseon; Lee, Dabin; Kim, Inyoung; Paek, Jihyun; Shin, Dongwon; Kim, Jeongkwon. p. 96. This study investigated the effect of centrifugation on tryptic digestion by applying different centrifugation speeds (6,000, 8,000, 10,000, 20,000, and $30,000\times g$) over various durations (0, 10, 20, 30, 40, 50, and 60 min) to digest two model proteins, cytochrome c and myoglobin. The intact proteins and resulting peptides were identified using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Centrifugation greatly improved the tryptic digestion efficiency of cytochrome c: an increase in either centrifugation speed or digestion duration significantly improved its digestion. However, centrifugation did not noticeably improve the digestion of myoglobin; 16 h of centrifuge-assisted tryptic digestion at $30,000\times g$ barely removed the myoglobin protein peak. Similar results were also obtained when using conventional tryptic digestion with gentle mixing. When acetonitrile (ACN) was added to make 10% ACN buffer solutions, the myoglobin protein peak disappeared after 6 h of digestion using both centrifuge-assisted and conventional tryptic digestion.

Identification of triacylglycerols in coix seed extract by preparative thin layer chromatography and liquid chromatography atmospheric pressure chemical ionization tandem mass spectrometry. Sim, Hee-Jung; Lee, Seul gi; Park, Na-Hyun; Kim, Youna; Cho, Hyun-Woo; Hong, Jongki. p. 102. Here we report a methodology for the identification of triacylglycerols (TAGs) and diacylglycerols (DAGs) in coix seed by preparative thin layer chromatography (prep-TLC) and non-aqueous reversed-phase liquid chromatography (NARP LC)-atmospheric pressure chemical ionization (APCI) tandem mass spectrometry (MS/MS). Lipid components were extracted from coix seed by reflux extraction using n-hexane for 3 h. TAGs and DAGs in the coix seed extract were effectively purified and isolated from matrix interferences by prep-TLC and then analyzed by LC-APCI-MS and MS/MS for identification. TAGs were effectively identified by taking into consideration their LC retention behavior, APCI-MS spectral patterns, and the MS/MS spectra of $[DAG]^+$ ions. In the MS/MS spectra of TAGs, diacylglycerol-like fragment $[DAG]^+$ ions were useful for identifying TAGs with isobaric fragment ions. Based on the established method, 27 TAGs and 8 DAGs were identified in the coix seed extract. Among them, 15 TAGs and 8 DAGs were observed in coix seed for the first time. Interestingly, some of the TAGs isolated by prep-TLC were partly converted into DAGs, probably through photolysis, during storage at room temperature.
Thus, the degradation of TAGs should be considered in the quality evaluation and in assessing the nutritional properties of coix seed. LC-APCI-MS/MS combined with prep-TLC will be a practical method for the precise analysis of TAGs and DAGs in other herbal plants.
Calculate Gradient (Partial Derivatives) of Bezier Curve

From this page I know that a Bezier curve of degree $N$ has a derivative which is a Bezier curve of degree $N-1$, and I know how to calculate the control points of it: Derivatives of a Bezier Curve. However, how would I get the partial derivatives of X and Y to calculate a gradient, when I have a multivariate quadratic curve such as this:

$X = f(t) = 3.0*(1-t)^2+2.0*(1-t)t+4.0*t^2$

$Y = g(t) = 9.0*(1-t)^2+1.0*(1-t)t+3.0*t^2$

where the above describe $(X,Y)$ points in a two-dimensional space.

Tags: partial-derivative, bezier-curve — asked by Alan Wolfe

Comments:
"A Bezier curve is not multivariate. So, there are no partial derivatives. Its derivative is computed as (dX/dt, dY/dt)." — fang, Aug 5 '15 at 21:15
"Bummer. Is there no (reasonably easy) way to calculate the gradient then?" — Alan Wolfe, Aug 5 '15 at 21:17
"The 'gradient' I know of is the partial derivatives of a multivariate scalar function. But a Bezier curve is actually a univariate vector function. So, I really don't know how to compute its gradient. The closest thing would be the derivative vector, which is computed as (dX/dt, dY/dt)." — fang, Aug 6 '15 at 0:31

Answer (by bubba):

When you say "gradient", I assume you mean the slope $dy/dx$. First you get the derivative vector: $$ \left( \frac{dx}{dt}, \frac{dy}{dt} \right) $$ and then $$ \frac{dy}{dx} = \frac{\frac{dy}{dt}}{\frac{dx}{dt}} $$ As you might expect, this formula has problems when $dx/dt=0$, because this means you have a vertical tangent vector, so infinite slope. From the general theory of Bezier curves, we know that the curve $$ f(t) = (1-t)^2 A + 2t(1-t) B + t^2 C $$ has derivative $$ \frac{df}{dt} = 2(1-t)(B-A) + 2t(C-B) $$ So, in your example $$ \frac{dX}{dt} = 2(1-t)(-1) + 2t(2) = 6t-2 $$ $$ \frac{dY}{dt} = 2(1-t)(-8) + 2t(2) = 20t - 16 $$ and so $$ \frac{dY}{dX} = \frac{\frac{dY}{dt}}{\frac{dX}{dt}} = \frac{20t-16}{6t-2} $$ Your reference to partial derivatives is confusing; partial derivatives make sense only when you have a function of several independent variables. In the case we're considering here, there is only a single variable, namely the parameter $t$.
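To sanity-check the algebra in the answer above, here is a short numeric sketch (assuming, as the answer does, that the control points are read directly from the question as A = (3, 9), B = (2, 1), C = (4, 3)):

    def quad_bezier_slope(A, B, C, t):
        # dY/dX of a quadratic Bezier at parameter t, via the derivative vector.
        dx = 2 * (1 - t) * (B[0] - A[0]) + 2 * t * (C[0] - B[0])
        dy = 2 * (1 - t) * (B[1] - A[1]) + 2 * t * (C[1] - B[1])
        if dx == 0:
            raise ZeroDivisionError("vertical tangent: dX/dt = 0 at this t")
        return dy / dx

    # Control points as interpreted in the answer.
    A, B, C = (3.0, 9.0), (2.0, 1.0), (4.0, 3.0)
    print(quad_bezier_slope(A, B, C, 0.5))   # (20*0.5 - 16) / (6*0.5 - 2) = -6.0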
This article is about the episode. For the fixed value, see Constant. For the orchestral piece, see "The Constant" (composition).

"The Constant" — Lost, Season 4, Episode 5 (Desmond-centric)

Starring: Naveen Andrews - Sayid Jarrah; Henry Ian Cusick - Desmond Hume; Jeremy Davies - Daniel Faraday; Emilie de Ravin - Claire Littleton*; Michael Emerson - Benjamin Linus*; Matthew Fox - Jack Shephard; Jorge Garcia - Hugo Reyes*; Josh Holloway - James Ford*; Daniel Dae Kim - Jin-Soo Kwon*; Yunjin Kim - Sun-Hwa Kwon*; Ken Leung - Miles Straume*; Evangeline Lilly - Kate Austen*; Rebecca Mader - Charlotte Lewis; Elizabeth Mitchell - Juliet Burke; Terry O'Quinn - John Locke*; Harold Perrineau - Michael Dawson* (* Did not appear in the episode.)

Guest starring: Anthony Azizi - Omar; Alan Dale - Charles Widmore; Kevin Durand - Martin Keamy; Jeff Fahey - Frank Lapidus; Fisher Stevens - George Minkowski; Sonya Walger - Penelope Widmore

Co-starring: Chris Barnes - Suited Guard; Edward Conery - Auctioneer; Chris Gibbon - Soldier; Darren Keefe - Billy; Graham McTavish - Sergeant

Uncredited: Marc Vann - Ray

"The Constant" is the fifth episode of Season 4 of Lost, and the seventy-seventh produced hour of the series as a whole. It was originally broadcast on February 28, 2008. The helicopter hits turbulence on its way to the freighter, and Desmond experiences unexpected side effects; as his consciousness travels in time he and a key character discover their "constants." The episode follows Desmond's consciousness in a continuous narrative.

2004 – Helicopter
Desmond panicking in the helicopter.
Desmond and Sayid travel on the helicopter to the freighter piloted by Frank. Desmond is looking at the picture of him and Penny. Frank is flying the route that Daniel gave him, and Sayid worries when their course leads straight into a thunderhead. They experience turbulence which becomes progressively worse and Frank struggles to keep to the correct bearing. The helicopter drifts from the 305 heading to a heading of 310 and Desmond begins experiencing flashes. ♪

1996 – Royal Scots Regiment
Desmond wakes up in a military barrack.
Desmond wakes in the Royal Scottish Regiment military barrack at Camp Millar (north of Glasgow, Scotland) as morning drill begins. The Sergeant-Major starts to shout at him because he did not respond promptly to the wake-up call. He demands an explanation and Desmond explains he was having a very vivid dream about being "in a helicopter, Sir,...and there was a storm, Sir, and...I...don't remember the rest, Sir." The Sergeant-Major comments that it was at least a military dream, but still mocks him.
As punishment for Desmond's lack of concentration the Sergeant-Major orders the entire platoon to get ready for the morning routine in double time, four minutes instead of the normal eight minutes. 2004 – Helicopter/Freighter Keamy talks to Desmond. Desmond wakes up in present time after his flash and is very disoriented. He struggles with his seatbelt and is panicking, appearing as if he wants to jump out of the helicopter. Sayid tries to calm him down, asking him if everything is okay. Desmond, puzzled and afraid, asks Sayid what his name is, not recognizing him. Back on the beach, Jack and Juliet are worrying about the helicopter, as they haven't heard from Desmond or Sayid for a day. They question Charlotte and Daniel as to why there is no news of their friends. Charlotte is indifferent even though, as Juliet points out, the freighter is only 40 miles away which is about 20 minutes of flying. Juliet's main concern is that neither Charlotte or Daniel seem at all concerned. Against Charlotte's wishes, Daniel admits that the perception of time on the Island might be different than the time experienced off the Island. He says that as long Frank uses the bearings that were given to him, the people on the helicopter should be fine. If not, there could be "side-effects." Desmond then pulls out of his pocket the picture of him and Penny, which appears to relax him. The thunderstorm clears, and the helicopter lands on the freighter. Keamy and Omar come running toward Frank, asking who Desmond and Sayid are and why he brought them with him. Frank explains that they are survivors from Flight 815. ♪ Keamy, in charge, says that he shouldn't have brought them on board. Desmond is distressed and starts yelling that he doesn't know Sayid and Frank. Sayid agrees to let Keamy and Omar take Desmond to the doctor, but Desmond has another flash before they can. Desmond stands in the yard while the rest of his platoon do crunches. Desmond is in the rain, standing while all the others in his platoon do crunches. The Sergeant-Major ridicules him again and makes the entire platoon run as a punishment for Desmond's erratic behavior. Afterward, Desmond speaks to one of his friends in the platoon, and tells him about the "dreams" he has been experiencing. His friend is in disbelief and asks if there was anyone he knew in the "dream". Desmond remembers that in the helicopter he was holding a photo of Penny. He hurries to a nearby phone booth to call her. However, just as he is about to go in, someone bumps into him, sarcastically thanking him for the extra work the Sergeant-Major made them do on the account of Desmond's disobedience. As Desmond reaches down to pick up the dropped coins, his consciousness returns to the freighter. 2004 – Freighter Sayid and Frank do a crucial deal on the freighter's deck. Keamy and Omar introduce themselves and bring Desmond down to the sick bay, only to lock him in. Desmond panics and starts pounding on the door and screams to be let out. He hears a voice behind him saying "You have it, too." Turning, he sees a man lying on a bed, strapped to it. The man asks him if "it is happening" to Desmond as well. Ray gives Minkowski drugs to calm him. Meanwhile, on the upper deck, Sayid surveys the freighter. He notices a closed-circuit camera on the railing beside him. Looking up to a higher deck, he sees Keamy and Frank arguing. As Frank comes down the stairs to meet with Sayid, Sayid asks him why they left at dusk but arrived in midday. Frank doesn't know. 
Sayid asks for the phone, which Frank will give him only if Sayid gives him the gun. Sayid gives Frank the gun and calls Jack. He tells Jack that Desmond doesn't seem to remember anything; Jack puts the phone on speaker. Daniel asks if Desmond has been subjected to an intense dose of radiation or electromagnetic energy. He then says that by coming out of the Island, some people "might get a little confused," but it is not amnesia. In the medical room, after Minkowski falls asleep, a doctor, Ray, enters. Minkowski wakes up and tells Ray that "this" will happen to everyone else once they go back to the Island. The doctor injects Minkowski with a sedative. The doctor then examines Desmond's eyes with a light, so he can "help him." In the middle of a phrase, Desmond has another flash. Desmond calls Penny in the military base. After picking up the coins on the ground, Desmond enters the phone booth to call Penny. Troubled by his call, Penny tells him that he shouldn't be calling her as he broke off with her and then joined the army. She adds that she has moved, and hangs up after telling him to not call her anymore. When Desmond tries to respond, the flash ends. Faraday talks to Desmond, while going over his notes. Sayid and Frank come down to the medical bay to bring Desmond the satellite phone, but Ray pushes the alarm button. Sayid quickly gives the phone to Desmond, because Daniel, who is on the other end, urgently needs to speak with him. Daniel asks Desmond "what year" he thinks he is in. Desmond answers that it is 1996 (eight years ago). Daniel then asks him where he is "supposed to be." When Desmond tells Daniel that he is supposed to be on the base, Daniel tells him to catch a train to Oxford to meet with his past self and gives Desmond some mechanical settings to relay, along with an extra phrase that Daniel assures him will convince him that the story is legitimate. When Omar and Keamy force their way in to the medical bay to stop the call, Desmond has another flash. 1996 – Oxford University Daniel and Desmond watch the rat experiment. Desmond visits Daniel at Oxford. The 1996 Faraday chastises a student and rants to himself. Desmond approaches and tells Daniel that he has been to the future where Daniel told him to find him in the past. Daniel is suspicious, feeling that his colleagues are playing a prank on him. Desmond recites the settings Daniel gave him: 2.342 and 11 Hz. Faraday's interest is piqued, but not sufficiently. Desmond then uses the final piece of information Daniel had given him on the phone: "I know about Eloise." Desmond goes with Daniel to a secret room, where Daniel does what "Oxford frowns upon." Daniel then asks Desmond if his future-self remembers this particular meeting between the two men, to which Desmond responds negatively. Desmond adds that "maybe you just forgot," which makes Daniel laugh. Daniel adds that one cannot change the future, as he puts on an anti-radiation vest. Desmond asks why he doesn't get one, but Daniel says that it is necessary only for prolonged exposure (as he does this kind of thing 20 times a day). Desmond asks why Daniel doesn't put something on his head, but Daniel again laughs at the comment. Daniel pulls out Eloise the rat from a cage and puts it in a maze. After calibrating a machine according to Desmond's instructions, Daniel turns it on to "unstick Eloise in time". The machine emits a bright purple ray on the rat and then stops. 
Quickly, Daniel takes off his anti-radiation vest and looks at the rat without doing anything yet, as "she is not back yet". When she is, Daniel removes the little door at the beginning of the maze to let the rat walk in it and find the exit. Without any hesitation, the rat quickly finds the exit of the complex maze. ♪ Daniel is extremely happy, but Desmond doesn't understand why this is incredible. Daniel answers that he built the maze the morning before, and he is not going to teach the rat how to run it until an hour later: he has sent her consciousness, her mind, in the future. Desmond then confronts Daniel as to why he sent him here if not to help him. He then adds that Daniel in the future is on an Island, to which Daniel replies: "Why would I go to an Island?" ♪ Desmond tries to "get back". Back in the bay, Keamy takes the phone from Desmond. He and Omar take Frank outside the room, because the captain wants to talk to him. Sayid adds that he too wants to talk to the captain, to whom Keamy answers sarcastically: "I'll be sure to let him know. In the meantime have a seat." ♪ Desmond tries to "get back" using the doctor's flashlight, but doesn't succeed. After Sayid calls Desmond by name, Minkowski suddenly reacts. He tells them that he is George Minkowski, the communication officer. Before they strapped him down to a bed all the calls to and from the boat went through him in the radio room. Every so often, a light flashed on his console, to indicate an incoming call. They were under strict orders never to answer these calls, which were from Desmond's girlfriend: Penelope Widmore. Faraday explains the importance of a constant to Desmond. Desmond wakes up in Daniel's Oxford room. Desmond replies to Daniel that he was in the "future" for about five minutes. Daniel seems somewhat astonished that Desmond was "gone" for nearly 75 minutes and tells him that the more time he goes back and forth, the harder it gets, as in his case the progression is exponential. Desmond then notices that the rat is dead, from a brain aneurysm. Desmond confronts Daniel about it, but Daniel says that he doesn't know if Desmond is going to die also. Eloise's brain short-circuited, she couldn't tell the difference between the past, the present or the future, as she didn't have anything to attach herself to: she did not have a constant. Daniel tells him about the need for Desmond to have a constant, something that is present in both times and that he sincerely cares about and can recognize. ♪ Desmond grabs the phone and calls his constant, Penny, but the number has been disconnected. Sayid and Desmond release Minkowski from his bed. When Desmond's consciousness returns to 2004, he knows he has to contact Penny. He tells Sayid, but Minkowski interrupts them to say that two days earlier, someone sabotaged all the equipment. All communication with the mainland has been lost. Minkowski could have fixed it, but he's "unstable." Sayid asks where the radio room is and frees Minkowski to show them the way. Sayid asks how they are going to get out of the room, but Minkowski notices that the door is open: they seem to have "a friend on this boat," as Minkowski says. Desmond notices that George has started to bleed from his nose as they prepare to leave. 1996 – Auction Charles Widmore at the auction. Charles Widmore is bidder #755 at a Southfield's auction for lot #2342, the journal of the Black Rock's first mate. The contents have not been published and are unknown to anyone outside the family of the seller, Tovard Hanso. 
Charles wins the auction for £380,000. Meanwhile, Desmond arrives and tries to get past the guard to talk to Widmore. After the auction ends, Widmore walks out and agrees to talk to Desmond. In the men's bathroom, Desmond says that he needs to get in touch with Penny immediately. He doesn't know how to reach her as her number has been disconnected. After Widmore talks about Desmond's cowardice and his "second thoughts", he ultimately gives him her address written on a card so that she can tell him herself that she hates him. As Widmore leaves the bathroom, he leaves the faucet on. Just as Desmond is about turn off the faucet, the flash ends. ♪ Minkowski dies, after not being able to find a constant. On the boat, Sayid, Minkowski, and Desmond are going to the sabotaged radio room. Minkowski says that "it" is happening faster and is getting harder. Desmond then asks George how it happened to him. Minkowski tells him that they were bored out of their minds, waiting for their orders, anchored "here," so he and Brandon, another crew member, decided to take the ship's tender to see the Island. But Brandon started acting erratic so they turned around. Soon after, Brandon died and Minkowski grew unstable. Sayid looks at the destroyed equipment and asks Minkowski who is responsible for the damage. George doesn't seem to know and adds that he feels sorry for the person when the captain finds out. He suddenly phases out again. Sayid uses his military expertise to repair the phone. Desmond notices a calendar that says the year is 2004. Sayid sees it also and notes that it's almost Christmas. As Desmond realizes the date he starts to bleed from his nose, and suddenly Minkowski has a fit and bleeds again. Minkowski's final 'trip' cost him his life, as he returned to 2004 he dies, his last words are I can't get back. 1996 – Penny's flat Penny gets the Christmas Eve call 8 years later. Desmond wakes in the bathroom, the sink overflowing with water. After refreshing his face, he goes to see Penny. She opens the door to a distraught Desmond and says she wants to have a "clean break" from Desmond, but he insists she listen to him, pleading for her phone number. After entering the flat, he tells her that he needs her phone number so he can call her in eight years (December 24, 2004). Ultimately even though she doesn't understand why he just can't call her tomorrow and after promising to leave if she gives him the number, she gives him Desmond her number: 7946 0893. She pushes him out the door while he pleads that she not change the number. "If anything goes wrong, Desmond Hume will be my constant". Desmond comes back and tells Sayid the London phone number. Sayid repairs the phone, dials it, and passes it to Desmond. Desmond engages in a joyous and tearful call with Penny where she tells him she has been looking for him for three years, also confirming to Desmond her conversation with Charlie from the Looking Glass. Penelope is wearing a ring on the ring finger of her right hand. She knows about the Island, has researched it, and is desperately trying to find him. They each say they love the other as the phone battery dies out. Desmond then thanks Sayid by name, now with his memory back. He is, he says, "perfect." ♪ On the beach, Daniel flips through his journal looking for an entry until he comes to a page that reads: "If anything goes wrong, Desmond Hume will be my constant." ♪ Damon Lindelof and Carlton Cuse call this "arguably" their favorite episode of Lost. 
It is also many viewers' favorite.[1] This is the first episode that doesn't use flashbacks or flashforwards. We experience Desmond's flashes as he does—chronologically through both time periods. This episode featured several elements which were clues from the alternate reality game Find 815: Penny's phone number is 7946 0893 in London. "020 7946 0893" was a Season 4 bonus clue in Find 815. (Find 815 clues/January 9) The 020 region code was not introduced until 22nd April 2000. Before this London had a code for Outer London (0181) and Central London (0171), and the local number had 7 digits, not 8. All 0171 numbers were converted to 020 7 on this date. So Penny's number would have originally been 0171 946 0893. Although this looks like a UK telephone number in London, it is an unassigned number (Ofcom specifies that numbers beginning with 020 7946 0 are for drama purposes [1]). Penny's address is 423 Cheyne Walk in London. "423 Cheyne Walk" was a Season 4 bonus clue in Find 815. (Find 815 clues/January 9) It is near where Desmond's photograph with Penny was taken. In addition, Widmore Industries has its offices in the same neighborhood. ("Flashes Before Your Eyes"). Cheyne Walk is a famous street in London known for its famous inhabitants. Residents have included Mick Jagger, Keith Richards, George Eliot, Dante Rossetti, and Henry James (who wrote The Turn of the Screw). "Queen's College, Department of Physics" and "Southfields" were Season 4 bonus clues in Find 815. (Find 815 clues/January 30) "Camp Millar" was a season 4 bonus clue in Find 815. (Find 815 clues/January 23) The calendar on the wall. The ledger that Charles Widmore buys in the auction is the same as the journal referenced by Oscar Talbot in Chapter 5 of Find 815. Talbot was working for a branch of the Widmore Corporation, and says that his employers had the journal. According to the calendar on the wall, as well as Desmond, the real-time events of this episode take place on Day 94 (Christmas Eve), two days after Sayid, Desmond and Frank left the Island. Southfield's, the organization holding the auction, is an anagram for "shifted soul." This might have been done to reflect the way that Desmond's mind, or soul, was shifted through time. (Anagrams) The information about Faraday's device is an example of the Bootstrap Paradox. It is given to Desmond by Faraday himself, who is only aware of it because Desmond told him in the past; thus, the information was never actually discovered. Another example of this would be Richard's compass. This episode is rated TV-PG-LV. This episode features only two main characters from Season 1, Jack and Sayid, the fewest of any episode until "Jughead". Also, this episode features only six main characters, tying with "The Man from Tallahassee", and falling behind "A Tale of Two Cities", "Not in Portland", and "Stranger in a Strange Land" (each of which feature only the same five) and "Dead Is Dead", which has only four. A podcast rehash for the episode was released on February 28, 2008. (Official Lost Podcast/February 28, 2008) The scenes in the military camp were filmed on the slopes of Diamond Head crater. [2] The dog seen at Oxford university when Desmond finds Faraday is the same dog used for the painting in Jacob's cabin. This is Lulu, who was the pet of episode director Jack Bender at the time (she now belongs to costume designer Roland Sanchez). The phone that Sayid connects to the battery is a standard Lineman's Handset (looks like a Harris TS22). 
The demolished audio mixer prop ([3]) that Sayid takes out of the rack to look at is actually a RadioShack SSM-1850 home mixer ([4]). Klerk-Technik graphic equalizer and Tascam DAT machine are also seen in the same picture. This episode was nominated for (but did not win) the 2009 Hugo Award for Best Dramatic Presentation, Short Form. This was the first time a Lost episode had been nominated for this prestigious fan-voted science fiction award since "Pilot, Part 1" and "Pilot, Part 2" were nominated in 2005. An audio commentary by Mark Goldman, Damon Lindelof, and Carlton Cuse for this episode is available on the Season 4 DVD. According to the audio commentary on the Season 4 DVD, the final scene was to involve Charlotte approaching Daniel with the bag containing the gas masks, directly setting up the events of the next episode. Due to the dramatic and emotional impact of the scenes preceding it, however, they decided that Daniel reading about Desmond in his journal was enough of a cliffhanger to end the episode. A Lost: On Location for this episode is available on the Season 4 DVD. Bloopers and continuity errors Parking fee collection machine outside Penny's townhouse. Outside Penny's house in 1996 Desmond passes a parking fee collection machine. The type features a solar panel on the top that powers it. These were not introduced nation-wide in the UK until 2002. On the blackboard in Daniel's office, Schrodinger's equation for the time evolution of a wave function is missing the Hamiltonian operator. The equation should read: $ \hat{H} \Psi = i \hbar \frac{\partial \Psi}{\partial t} $, instead of $ \Psi = i \hbar \frac{2 \Psi}{2 t} $. The helicopter is in flight to the freighter, but the RPM and oil pressure gauges, seen while viewing Daniel's map taped to the instrument panel, both read zero which indicate the engine is turned off. This may be a result of the Helo being struck by lightning on the way to the island. The Queen's College, Oxford, does not have a physics department. Instead, the University of Oxford has various physics sub-departments situated throughout the city. During the auction, bidders can be seen using laptops that did not exist in 1996. Instead, they are using models that were at least produced from 2004 onwards. The Season 4 soundtrack includes the following tracks from this episode: "Time and Time Again" Animals • Black and white • Character connections • Children • Coincidence • Death • Deceptions and cons • Dreams • Economics • Electromagnetism • Eyes • Fate versus free will • Games • Good and bad people • Imprisonment • Isolation • Leadership • Life and death • Literary works • Mirrors • Missing body parts • Nicknames • The Numbers • Pairings • Parapsychology • Parent issues • Pregnancies • Psychology • Rain • Redemption • Relationships • Religion • Revenge • Salvation • Secrets There are several references to the Numbers: Desmond's consciousness moves back and forth between 1996 and 2004 — 8 years apart. (The Numbers) Desmond's drill sergeant tells the recruits they have 4 minutes to "get in the yard" instead of the usual 8. (The Numbers) Penny lives at 423 Cheyne Walk (4-23 or 42-3). (The Numbers) The frequency that Faraday gives Desmond is 2.342 (23, 42). (The Numbers) The auction lot number of the Black Rock diary is 2342. (The Numbers) Faraday says that while Desmond was in a catatonic state in his room at Oxford, 75 minutes had passed. Desmond perceived the same amount of time as 5 minutes. The ratio of 75:5 is equivalent to 15:1. 
(The Numbers) The helicopter is marked with "N842M". (The Numbers) Both the adjustment numbers for Daniel's experiment and the Black Rock auction lot share the same number, 2342. (Coincidences) At the auction, Widmore is bidder number 755, the same numbers as the time ratio. (Coincidences) Charlotte earned her doctorate at Oxford. Daniel taught there. (Coincidences) (Character connections) Minkowski is unable to find a constant and dies as a result. On the other hand, Desmond is able to find a constant (Penelope) and apparently manages to avoid death. (Life and death) In 1996, Faraday helps Desmond by explaining to him the need for a constant. Faraday ends up on the Island with Desmond. (Character connections) The doctor, Ray, examines Desmond's eyes. (Eyes) Daniel asks about Desmond being in contact with electromagnetism, a force that seems to affect people who come and go off of the Island. (Electromagnetism) Desmond and his military colleagues are exercising in the rain. It's also raining when Desmond calls Penny from the booth. (Rain) Daniel and Charlotte initially withhold from the survivors the effects of traveling to and from the Island. (Secrets) Desmond thinks he was having a dream at the regiment. (Dreams and visions) Eloise, central to the plot, is actually a rat. (Animals) Minkowski is strapped to a bed in the sick bay. (Imprisonment) (direct references only) Art • Automobiles • Games • History • Literary works • Movies and TV • Music • Philosophy • Religion and ideologies • Science Charles Dickens: After the auctioning of the Black Rock ledger, some of Charles Dickens' belongings are placed up for bidding. Desmond himself has a deep relationship with Dickens' novels since the telling Our Mutual Friend is supposed to be the last novel he wants to read before he dies. (Literary works) Hawaiian language: As the helicopter approached the freighter, a sign near the landing pad indicated the name of the ship: Kahana. Kahana means the drawing of a line, cutting or turning point in the Hawaiian language. [source: Pukui & Elbert, Hawaiian Dictionary] (Science) Relativity: The majority of the notes on Daniel's chalkboard and notebook are (introductory) notes on Special Relativity and General Relativity, with a small amount of quantum mechanics scattered in. (Science) Special Relativity deals with linear contractions/dilations of space and time. General Relativity deals with the curvature of space and time. There have been evidences of both Special and General Relativity on the island - as Faraday pointed out (and his payload experiment showed), there is a contraction of time when you go to or leave the island. Faraday constant: Faraday wrote in his journal, "If anything goes wrong, Desmond Hume will be my constant." In physics and chemistry, the Faraday constant is the amount of electric charge per mole of electrons. The Faraday constant was named after British scientist Michael Faraday, and is widely used in calculations in electrochemistry. (Science) All Good Things...: The series finale of Star Trek: The Next Generation features Captain Picard in three different timelines. He becomes "unstuck" in time, like Desmond and Billy Pilgrim. Also, Desmond's trip to contact Daniel at Oxford is reminiscent of Picard's trip to visit Data at Cambridge. 
In the Season 4 DVD commentary Damon Lindelof confirmed that "All Good Things" was a big influence in writing "The Constant" episode, while Mark Goldman states that he took inspiration from the Star Trek episode's editing style in deciding how to piece together Desmond's time jumps. (Movies and TV) The Waste Lands: is the third book in Stephen King's, Dark Tower Series. The key used in this book is very similar to The Constant - an anchor existing in both realities that can cure madness caused by time travel. (Literary works) Slaughterhouse Five: Desmond confides his vivid dreams to his military friend Billy. Billy Pilgrim is the main character in Slaughterhouse Five, the second chapter of which begins with the narration, "Listen: Billy Pilgrim has come unstuck in time."; Daniel says he will unstick Eloise in time. (Literary works) Flowers For Algernon: In the short story, the death of the eponymous lab mouse foreshadows the possible death of Charlie, who was undergoing the same experiment. VALIS: The pink beam of light that lets Eloise the rat travel through time is a reference to a similar pink beam of light, believed by the main character to be a transmission from an alien satellite, that is featured in this book, which is shown in the episodes surrounding "the Constant." (Literary works) Comparative: Irony • Juxtaposition • Foreshadowing Plotting: Cliffhanger • Plot twist Stock characters: Archetype • Redshirt • Unseen character Story: Flashbacks • Flash-forwards • Flash sideways • Framing device • Regularly spoken phrases • Symbolism • Unreliable narrator This episode is the first to contain neither a flashback nor flashforward. Instead, Desmond's consciousness from 1996 is traveling between 1996 and 2004 within the context of the present time narrative. The Desmond-centric episode "Flashes Before Your Eyes" also has Desmond "time travel", but only in the context of a flashback that spans almost the entire episode. Minkowski says he can't get back. (Regularly spoken phrases) In 1996 Daniel wonders if his future self knows about his meeting with Desmond, to which Desmond says Daniel probably forgot. Daniel replies sarcastically, "Yeah, how would that happen?" Daniel would later suffer from some form of memory loss. (Foreshadowing) (Irony) In 1996 after Daniel covers his chest to protect himself against the radiation of his ray and explains to Desmond that he (Daniel) needs it because he is constantly exposed to it, he seems surprised when Desmond points that he should be covering his head as well. However, he just shrugs it off and proceeds with the experiment. This is probably foreshadowing the cause of Daniel's chronic loss of memory as seen in the future. (Foreshadowing) Minkowski says "We anchored here" about the freighter just while Desmond is trying to anchor himself during his time-traveling experiences. (Juxtaposition) The episode ends with Daniel reading in his journal "If anything goes wrong, Desmond Hume will be my constant." (Plot twist) (Cliffhanger) Storyline analysis A-Missions • Crimes • Economics • Leadership • O-Missions • Relationships • F-Missions • Rivalries • S-Missions Desmond's love for Penny anchors him in time. (Relationships) Sayid trades his gun for communication. (Economics) Desmond and Sayid land on the freighter to investigate it. (A-Missions) Daniel helps Desmond find his constant. (F-Missions) Episode connections Episode references Penny tells Desmond she realized he was alive and on an island when she spoke to Charlie. 
("Through the Looking Glass, Part 2") Daniel refers to meeting Desmond before the helicopter took off. ("The Economist") Frank tries to fly the helicopter on the bearing Daniel gave him. ("The Economist") Episode allusions The comm panel ("The Constant")/("Through the Looking Glass, Part 2") Sayid is asked to perform urgent electronic repairs under mysterious circumstances, and calmly does so without demanding a full explanation. ("Orientation") Daniel has a rat in a maze, with cheese. John Locke described the survivors pushing the button in the Swan as "rats in a maze, with no cheese". ("?") Daniel asks if Desmond had recently been exposed to high levels of radiation or electromagnetism. ("Live Together, Die Alone, Part 2") Frank is told by Daniel to follow a bearing of 305, which is a Northwest direction. Eko's stick bore the inscription "Lift up your eyes and look north - John 3:05", which Locke later followed as a bearing ("I Do") Ben told Michael and Walt to follow a bearing of 325 to find rescue. ("Live Together, Die Alone, Part 2") Ray puts a light in front of Desmond's eyes, which is also reminiscent of the title Desmond's previous time-travel episode. ("Flashes Before Your Eyes") In the radio room, there is the same communications panel that we can see in the Looking Glass communication room. ("Through the Looking Glass, Part 2") The only difference is a strange object on the right in the place of the orange LED. This machine seems to be in good condition in respect to the other in the same room. Desmond and Daniel have a conversation regarding Daniel's memory, which is shown to be affected in earlier episodes. ("Confirmed Dead") ("Eggtown") Do not answer the questions here. Keep the questions open-ended and neutral: do not suggest an answer. For fan theories about these unanswered questions, see: The Constant/Theories What is the nature of the time differential between the Island and the outside world? ABC Primetime Grid TIME Top 10 TV Episodes 2008 1. Lost, "The Constant" ↑ Esquire: The Lost Creators Come Clean 05/07/2014 Desmond Hume Portrayed by Henry Ian Cusick "Live Together, Die Alone" • "Flashes Before Your Eyes" • "Catch-22" • "The Constant" • "Jughead" • "Happily Ever After" "Everybody Loves Hugo" • "What They Died For" • "The End" Auctioneer • Barista • Bartender • Billy • Pierre Chang • Brother Campbell • David • Delivery man • Derek • Donovan • Eloise • Daniel Faraday • Eloise Hawking • Charlie Hume • Penelope Hume • Kelvin Inman • Jimmy Lennon • Man wearing red shoes • Brother Martin • Master Sergeant • Charlie Pace • Partridge • Photographer • Physics student • Receptionist • Ruth • Efren Salonga • Sergeant • Libby Smith • Soldier • Suited guard • Charles Widmore • Abigail Spencer • Theresa Spencer Assistant • Kate Austen • Arnie Bocklin • Clipboard guy • Ana Lucia Cortez • Doctor • Doyle • Stephanie Leifer • Lawyer • Sayid Jarrah • Benjamin Linus • Claire Littleton • John Locke • Mary Markey • Penelope Milton • George Minkowski • MRI tech • Charlie Pace • Nicholas Pepper • Hugo Reyes • Rhodes • Jack Shephard • Nurse Tyra • Ilana Verdansky • Charles Widmore • Daniel Widmore • Eloise Widmore Letters (Desmond) • Elizabeth • Letter (Penny) • Lightning rod • MacCutcheon whisky • Our Mutual Friend (book) • Photograph • Satellite phone • Our Mutual Friend (sailboat) Retrieved from "https://lostpedia.fandom.com/wiki/The_Constant?oldid=1113815"
CommonCrawl
How to show that $\frac{R_1R_2}{R_1+R_2}<\min(R_1,R_2)$ strictly using the AM-GM inequality?

I was reading about parallel circuits in Physics. The equivalent resistance of $n$ resistors in parallel is given by $\displaystyle\frac1{R_{eq}}=\frac{1}{R_1}+\frac{1}{R_2}+...+\frac{1}{R_n}$. I tried to prove that $R_{eq}$ will always be less than $R_1,R_2,...,R_n$. I tried to prove it for two resistors, where $R_{eq}=\frac{R_1R_2}{R_1+R_2}$. By applying AM-GM on $R_1,R_2$ we have $\frac{R_1+R_2}{4}\geq \frac{R_1R_2}{R_1+R_2}$. Now I have no idea how to show from here that $\min(R_1,R_2)\geq\frac{R_1R_2}{R_1+R_2}$, or how it can be extended to $n$ resistors. Thanks for any help!

inequality physics – tatan

FYI: The harmonic mean of a set of numbers $\{a_n\}$ is $H=\left(\sum_n \frac{1}{a_n}\right)^{-1}$. – Semiclassical Jun 3 '16 at 17:12
$H=n\Bigl(\sum_n\frac1{a_n}\Bigr)^{-1}$, more exactly. – Bernard Jun 3 '16 at 17:15
This is not mathematical physics. Edited tags. – user_of_math Jun 3 '16 at 17:27
$\displaystyle{{1 \over R_{\mathrm{eq}}} > {1 \over R_{k}}\,,\quad\forall\ k}$. – Felix Marin Jun 4 '16 at 5:22

That is more trivial. If $$ \frac{1}{R_{eq}}=\sum_{k=1}^{n}\frac{1}{R_k} $$ then obviously $$ \frac{1}{R_{eq}}> \frac{1}{R_k} $$ for any $k\in\{1,2,\ldots,n\}$, hence $R_{eq}< R_k$, so $$ R_{eq} < \min_{k} R_k.$$

@tatan: if $ u = a+b+c+\ldots $ and $a,b,c,\ldots >0$, then $u>a, u>b, u>c,\ldots$. If that isn't trivial... – Jack D'Aurizio Jun 3 '16 at 17:39
Thanks for your answer, but I was looking for AM-GM. – tatan Jun 3 '16 at 17:47

Suppose $1/R_1 = x$, $1/R_2=y$ and $1/R_{eq}=z$. We know that $z=x+y$ and $x>0,y>0$, therefore $z>x$ and $z>y$. Hence $1/R_{eq}>1/R_1$ and $1/R_{eq}>1/R_2$, and therefore $R_{eq}<R_1$ and $R_{eq}<R_2$. – Nikhil

Suppose we remove one of the resistors $R_k$ from the circuit. Then the new equivalent resistance will be $\left(\frac{1}{R_{eq}}-\frac{1}{R_k}\right)^{-1}>R_{eq}$, i.e. the equivalent resistance strictly increases. If we repeat this until only one resistor $R_f$ remains, then $R_f$ is the final equivalent resistance. Hence each individual resistor must have a larger resistance than the equivalent resistance of them in parallel. – Semiclassical
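As a quick numerical sanity check of the inequalities discussed above (an illustration added here, not part of the original thread), the short Python sketch below computes the parallel equivalent resistance for a few randomly chosen resistor sets and verifies both $R_{eq} < \min_k R_k$ and the two-resistor AM-GM bound $\frac{R_1R_2}{R_1+R_2}\le\frac{R_1+R_2}{4}$. The resistor values and function name are illustrative.

```python
import random

def parallel_resistance(resistors):
    """Equivalent resistance of resistors in parallel: 1/R_eq = sum_k 1/R_k."""
    return 1.0 / sum(1.0 / r for r in resistors)

random.seed(0)
for trial in range(5):
    n = random.randint(2, 6)
    rs = [random.uniform(0.5, 100.0) for _ in range(n)]
    r_eq = parallel_resistance(rs)

    # A sum of positive reciprocals exceeds each single reciprocal,
    # so the equivalent resistance lies below every individual resistance.
    assert r_eq < min(rs)

    # Two-resistor AM-GM bound: (R1 + R2)/4 >= R1*R2/(R1 + R2).
    r1, r2 = rs[0], rs[1]
    assert r1 * r2 / (r1 + r2) <= (r1 + r2) / 4.0

    print(f"trial {trial}: R_eq = {r_eq:.3f} < min R_k = {min(rs):.3f}")
```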
CommonCrawl
Mat. Sb.: Mat. Sb., 2000, Volume 191, Number 9, Pages 139–160 (Mi msb511) On a smooth quintic 4-fold I. A. Cheltsov Abstract: The birational geometry of an arbitrary smooth quintic 4-fold is studied using the properties of log pairs. As a result, a new proof of its birational rigidity is given and all birational maps of a smooth quintic 4-fold into fibrations with general fibre of Kodaira dimension zero are described. In the Addendum similar results are obtained for all smooth hypersurfaces of degree $n$ in $\mathbb P^n$ in the case of $n$ equal to 6, 7, or 8. DOI: https://doi.org/10.4213/sm511 Sbornik: Mathematics, 2000, 191:9, 1399–1419 UDC: 513.6 MSC: 14J35, 14E05 Received: 10.06.1999 and 19.01.2000 Citation: I. A. Cheltsov, "On a smooth quintic 4-fold", Mat. Sb., 191:9 (2000), 139–160; Sb. Math., 191:9 (2000), 1399–1419 \Bibitem{Che00} \by I.~A.~Cheltsov \paper On a smooth quintic 4-fold \jour Mat. Sb. \mathnet{http://mi.mathnet.ru/msb511} \crossref{https://doi.org/10.4213/sm511} \jour Sb. Math. \pages 1399--1419 \crossref{https://doi.org/10.1070/sm2000v191n09ABEH000511} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-0034354986} http://mi.mathnet.ru/eng/msb511 https://doi.org/10.4213/sm511 http://mi.mathnet.ru/eng/msb/v191/i9/p139 V. A. Iskovskikh, "Birational rigidity of Fano hypersurfaces in the framework of Mori theory", Russian Math. Surveys, 56:2 (2001), 207–291 I. A. Cheltsov, "Log canonical thresholds on hypersurfaces", Sb. Math., 192:8 (2001), 1241–1257 I. A. Cheltsov, J. Park, "Total log canonical thresholds and generalized Eckardt points", Sb. Math., 193:5 (2002), 779–789 A. V. Pukhlikov, "Birationally rigid Fano hypersurfaces", Izv. Math., 66:6 (2002), 1243–1269 Pukhlikov, AV, "Birational geometry of higher-dimensional Fano hypersurfaces", Doklady Mathematics, 66:1 (2002), 22 I. A. Cheltsov, "Non-rationality of the 4-dimensional smooth complete intersection of a quadric and a quartic not containing planes", Sb. Math., 194:11 (2003), 1679–1699 I. A. Cheltsov, "Anticanonical models of three-dimensional Fano varieties of degree 4", Sb. Math., 194:4 (2003), 617–640 de Fernex T., Ein L., Mustaţă M., "Bounds for log canonical thresholds with applications to birational rigidity", Math. Res. Lett., 10:2-3 (2003), 219–236 I. A. Cheltsov, "Birationally superrigid cyclic triple spaces", Izv. Math., 68:6 (2004), 1229–1275 I. A. Cheltsov, L. Votslav, "Nonrational Complete Intersections", Proc. Steklov Inst. Math., 246 (2004), 303–307 I. A. Cheltsov, "Double space with double line", Sb. Math., 195:10 (2004), 1503–1544 V. A. Iskovskikh, V. V. Shokurov, "Birational models and flips", Russian Math. Surveys, 60:1 (2005), 27–94 I. A. Cheltsov, "Birationally rigid Fano varieties", Russian Math. Surveys, 60:5 (2005), 875–965 I. A. Cheltsov, "Local inequalities and birational superrigidity of Fano varieties", Izv. Math., 70:3 (2006), 605–639 A. V. Pukhlikov, "Birationally rigid varieties. I. Fano varieties", Russian Math. Surveys, 62:5 (2007), 857–942 Cheltsov, I, "On nodal sextic fivefold", Mathematische Nachrichten, 280:12 (2007), 1344 Pukhlikov, AV, "Birational Geometry of Algebraic Varieties with a Pencil of Fano Cyclic Covers", Pure and Applied Mathematics Quarterly, 5:2 (2009), 641 de Fernex T., "Birationally Rigid Hypersurfaces", Invent. Math., 192:3 (2013), 533–566 A. V. Pukhlikov, "Birational geometry of higher-dimensional Fano varieties", Proc. Steklov Inst. Math., 288, suppl. 2 (2015), S1–S150 Suzuki F., "Birational rigidity of complete intersections", Math. 
Z., 285:1-2 (2017), 479–492 de Fernex T., "Birational Rigidity of Singular Fano Hypersurfaces", Ann. Scuola Norm. Super. Pisa-Cl. Sci., 17:3 (2017), 911–929
CommonCrawl
Would one actually find their doppelgänger in a "Googolplex Universe"?

Related: Infinite universe - Jumping to pointless conclusions

I've recently become a fan of Numberphile, and today I happened to watch their video regarding Googol and Googolplex. In the video, I found a rather confusing claim that I'm hoping someone here can help me sort out. It's first stated in the introduction, and then explained in more detail at around 4:10. The claim can be summed up as this: In a universe which is "a Googolplex meters across", if you would travel far enough, you would expect to eventually begin finding duplicates of yourself.

Getting deeper into the detail of this, Tony Padilla explains that this is because there is a finite number of possible quantum states which can represent the volume of space in which your body resides. That volume is given as roughly one cubic meter, and the number of possible states for that volume is estimated at $10^{10^{70}}$. This is obviously much less than the number of cubic meters within a "Googolplex Universe", and so the idea does make some sense.

But I believe the idea relies on a false premise. That premise would be that the universe as a whole is entirely comprised of random bits of matter. We can easily see that this is not true. The vast majority of our observed universe is occupied by the near-vacuum of space, and those volumes which are not empty are occupied by some fairly organized objects which all interact according to certain laws and patterns.

So, I'm curious to know two things:

Who originally proposed this idea? Is it something that perhaps Tony just came up with to include in the video, or is there a noted physicist or mathematician who actually wrote about this at some point?

Given the possibility that a universe similar to ours could exist and be one Googolplex across, would this actually be probable? Or, would the natural order and organization of the universe prevent this from being as likely as it might be in a more random universe?

EDIT (To resolve some comments)

A note regarding the size of the universe, in respect to this question. In the linked video, the following constraints are mentioned:

The number of particles in the universe is $10^{80}$ (stated at 1:38 in the video).
The number of the grains of sand that could fit within the universe is $10^{90}$.
The number of Planck volumes in the universe is $10^{183}$.
The size of the universe is $(10^{26}m.)^{3}$.

A few times in the video, Tony does qualify his statements by referring specifically to the observable universe. However, sometimes it's not quite so clear. So, to simplify the problem for the purposes of this question, let's assume:

Our universe is finite, and much less than a Googolplex in diameter.
The "Googolplex Universe" suggested in the video, and in question here, is also finite.

universe probability – Iszi
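To make the counting behind the video's claim concrete (an illustration added here, not part of the original question or answers), one can compare the exponents directly with exact integer arithmetic: the number of one-cubic-meter cells in a cube a googolplex meters across versus the quoted number of quantum states per cell. The specific figures ($10^{10^{70}}$ states, a $10^{10^{100}}$-meter cube) are taken from the question; the rest is bookkeeping, and, as the answers below note, the pigeonhole argument only guarantees that some state repeats, not that any particular configuration does.

```python
# Compare base-10 exponents only; the numbers themselves are far too large to write out.

# Number of distinct quantum states of a ~1 cubic-meter volume, per the video: 10**(10**70)
log10_states = 10**70

# A cube one googolplex (10**(10**100)) meters on a side contains
# (10**(10**100))**3 = 10**(3 * 10**100) one-cubic-meter cells.
log10_cells = 3 * 10**100

# Pigeonhole argument: more cells than possible states means at least two
# cells must share exactly the same quantum state.
assert log10_cells > log10_states

# The surplus is overwhelming: the exponent itself is about 3e30 times larger.
print("log10(#cells) / log10(#states) =", log10_cells // log10_states)
```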
Brandon Enright Nicolau Saker NetoNicolau Saker Neto I think this assertion would be more correct if it was something like: There are more 1 cubic-meters of matter units in a googolplex-meter-wide universe than there are possible quantum states of 1 cubic-meters of matter units, so at least one of those quantum states of 1 cubic-meters of matter occurs in more than one 1 cubic-meters of matter somewhere. In fact there are at least as many duplicates of some kind, to make up the difference between the number of possible states, and the number of total volumes. But, this does not imply that any particular configuration is duplicated. Heaven forbid there be more of me out there. Maybe a rather simple configuration accounts for 99.99% of all the duplications. RickRick $\begingroup$ Exactly, he's implicitly assuming all configurations are equally probably. There is no reason to believe that. $\endgroup$ – Tobias Brandt Aug 1 '14 at 21:41 Well, the universe has evidently produced you at least once, so we can certainly say the probability of producing you is nonzero. Of course, you're the end result of a very long chain of events, encompassing all sorts of cultural, biological, planetary, astrophysical, and cosmological processes. Ultimately (well, to the extent of our cosmological knowledge) your existence traces back to certain random quantum fluctuations during the inflationary era of the Big Bang, which acted as the seeds for galaxies to form, and so on. If we imagine that the universe extends far beyond the part we can see (the observable universe), at least $10^{10^{100}}$ meters in each direction, but has evolved from similar initial conditions everywhere, then it isn't too difficult to imagine that somewhere in that immensity, some inflation fluctuations occurred that were sufficiently similar to ours, to generate a galaxy sufficiently similar to the Milky Way, to generate a planet sufficiently similar to Earth, to be inhabited by beings sufficiently similar to us, that one of them is very similar to you. All of this would be done by exactly the same physical processes that produced you. It doesn't require that the universe be filled with a random distribution of all possible physical states; it only requires that whatever random processes contributed to your existence will randomly occur again, which they certainly will in a big enough universe. I won't attempt to quantify it, but if a googolplex meters is not enough, just make it bigger. :) As for the history of this idea, it seems quite similar to the notion of Poincaré recurrence, which dates back to 1890. That idea concerns repetition of states over time, due to arbitrarily-unlikely fluctuations occurring if you wait long enough, but it's straightforward to think also of repetition in space due to arbitrarily-unlikely fluctuations occurring if you have enough space for them to occur in. answered Aug 5 '14 at 1:54 Nathan ReedNathan Reed Another such argument was made by Jaume Garriga and Alexander Vilenkin, see here: A generic prediction of inflation is that the thermalized region we inhabit is spatially infinite. Thus, it contains an infinite number of regions of the same size as our observable universe, which we shall denote as O-regions. We argue that the number of possible histories which may take place inside of an O-region, from the time of recombination up to the present time, is finite. Hence, there are an infinite number of O-regions with identical histories up to the present, but which need not be identical in the future. 
Moreover, all histories which are not forbidden by conservation laws will occur in a finite fraction of all O-regions. The ensemble of O-regions is reminiscent of the ensemble of universes in the many-world picture of quantum mechanics. An important difference, however, is that other O-regions are unquestionably real.

But you can just consider all the useful information stored in your brain that you are aware of. Even if there exists someone who is not an exact physical copy, it can still be identical to you if whatever algorithm his/her brain is executing is the same as the one your brain is executing. This means that the local environment only needs to be approximately the same. A copy of me doesn't have to have the exact same number of hairs growing on his head, because I have never counted the exact number. The Earth's radius can be several meters more or less. The copy of the Milky Way galaxy does not need to contain the exact same number of stars. Only what I am aware of must be the same, and it's only the awareness that counts, not whether it is actually true, although the two things will be strongly correlated. – Count Iblis

So, what you're saying is, there will be some exact copies and some near-exact copies who won't even be able to realize that they aren't exact copies. – Iszi Sep 25 '15 at 20:32
@Iszi, that's indeed what I'm saying. Also, I believe (but have to admit that it's just my belief), that it's essential to think of ourselves as ensembles of nearly identical states instead of a sharply defined state. If you think of yourself as a machine in some well defined state, you get into philosophical problems that can be addressed better if you replace the machine by a set of machines in slightly different states. This makes the algorithm executed by the machines well defined, at the expense of not knowing exactly what state it is in, which it couldn't know anyway. – Count Iblis Sep 25 '15 at 20:42

If you mass 120 kg (264 lbs) and the average atomic weight in your substance is, we'll be generous, 25, then you contain $(120,000/25) \times (6.022 \times 10^{23})$ atoms or $2.9\times 10^{27}$ atoms. They can be arranged in $(2.9\times 10^{27})!$ ways. By Stirling's approximation,
$$ n! \approx \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n} \approx 10^{7.8\times 10^{28}}, $$
which is a big number, dwarfing string theory's $10^{500}$ acceptable vacua. How much time are you given to look? Then QM intrudes (Heisenberg compensators?). – Uncle Al
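A small log-domain computation (added here for illustration; the constants follow the last answer above, and the quoted value $10^{7.8\times 10^{28}}$ is the editor's evaluation of Stirling's formula) reproduces the order of magnitude of that factorial without overflow:

```python
import math

# Atom count for a 120 kg person with mean atomic weight ~25, as in the answer above.
n = (120_000 / 25) * 6.022e23            # roughly 2.9e27 atoms

# Stirling's approximation in the log domain avoids overflow:
# ln(n!) ~= n*ln(n) - n + 0.5*ln(2*pi*n)
ln_nfact = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
log10_nfact = ln_nfact / math.log(10)

print(f"n          = {n:.3e}")
print(f"log10(n!)  = {log10_nfact:.3e}")   # about 7.8e28
# So n! ~ 10**(7.8e28), far larger than 10**500 and with an exponent far beyond a googol.
```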
CommonCrawl
Search Results: 1 - 10 of 95148 matches for " Chia-Chen Chang " Page 1 /95148 Effectiveness of 4Ps Creativity Teaching for College Students: A Systematic Review and Meta-Analysis [PDF] Hsing-Yuan Liu, Chia-Chen Chang Creative Education (CE) , 2017, DOI: 10.4236/ce.2017.86062 Abstract: Although the creativity teaching claims benefit college students by increasing their problem-solving capacities and enhancing professional competencies. There are also the current academic gap between the teaching constructs and efficacy. This study has compared how these and other teaching strategies have evaluated the efficacy of creativity derived from the 4Ps model (person, process, press, and product). In a systematic search, we identified eleven articles published from 2000-2011. Moreover, this study classified the creativity teaching experiences and analyzed the effect size of its efficacy. The weighted mean effect size (ES) of above studies was 0.95, with a standard deviation of 1.59. The ES of personality on technology students was 1.18 (95% confidence interval [CI95] = 0.39 - 1.42), which was greater than that for education and medical students. Studies with more than 56 subjects were seen to have the highest efficacy. The ES of process on professional courses was 1.18 (CI95 = 0.47 - 1.89), and for press in the classroom base the ES was 1.0 (CI95 = 0.61 - 1.38). The ES for the product combined with the creativity survey was 1.22 (CI95 = -0.70 - 3.14). Spatially inhomogeneous phase in the two-dimensional repulsive Hubbard model Chia-Chen Chang,Shiwei Zhang Physics , 2008, DOI: 10.1103/PhysRevB.78.165101 Abstract: Using recent advances in auxiliary-field quantum Monte Carlo techniques and the phaseless approximation to control the sign/phase problem, we determine the equation of state in the ground state of the two-dimensional repulsive single-band Hubbard model at intermediate interactions. Shell effects are eliminated and finite-size effects are greatly reduced by boundary condition integration. Spin-spin correlation functions and structure factors are also calculated. In lattice sizes up to $16\times 16$, the results show signal for phase-separation. Upon doping, the system separates into one phase of density $n=1$ (hole-free) and the other at density $n_c$ ($\sim 0.9$). The long-range antiferromagnetic order is coupled to this process, and is lost below $n_c$. Quantum disordered phase near the Mott transition in the staggered-flux Hubbard model on a square lattice Chia-Chen Chang,Richard T. Scalettar Physics , 2012, DOI: 10.1103/PhysRevLett.109.026404 Abstract: We investigate ground state properties of the half-filled staggered-flux Hubbard model on a square lattice. Energy gaps to charge and spin excitations and magnetic as well as dimer orders are calculated as a function of interaction strength $U/t$ by means of constrained-path quantum Monte Carlo method. It is found that the system is a semi-metal at $U/t\lesssim 5.6$ and a Mott insulator with long-range antiferromagnetic order at $U/t \gtrsim 6.6$. In the range $5.6\lesssim U/t\lesssim 6.6$, the ground state is an correlated insulator where both magnetic and dimer orders are absent. Furthermore, spin excitation in the intermediate phase appears to be gapless, and the measured spin-spin correlation function exhibits power-law decaying behavior. The data suggest that the non-magnetic ground state is a possible candidate for the putative algebraic spin liquid. 
Spin and charge order in doped Hubbard model: long-wavelength collective modes Abstract: Determining the ground state properties of the two-dimensional Hubbard model has remained an outstanding problem. Applying recent advances in constrained path auxiliary-field quantum Monte Carlo techniques and simulating large rectangular periodic lattices, we calculate the long-range spin and charge correlations in the ground state as a function of doping. At intermediate interaction strengths, an incommensurate spin density wave (SDW) state is found, with antiferromagnetic order and essentially homogeneous charge correlation. The wavelength of the collective mode decreases with doping, as does its magnitude. The SDW order vanishes beyond a critical doping. As the interaction is increased, the holes go from a wave-like to a particle-like state, and charge ordering develops which eventually evolves into stripe-like states. Constructing and Evaluating an Intergenerational Academic Service-Learning Curriculum in Gerontology [PDF] Hsing-Yuan Liu, Chia-Chen Chang, Shu-Yuan Chao Abstract: Intergenerational Service-Learning has been documented to enhance student learning. Research indicates that students in healthcare professions view working with the geriatric population as a low priority due to negative stereotypes of the elderly. The purpose of this study was to develop a teaching model for establishing partnerships between academic programs and community services for students of gerontology and to evaluate the effect of an Intergeneration Service-Learning curriculum. This research adopted a qualitative approach to study the learning experiences of nursing students during an interdisciplinary community-based healthcare course. Data were collected by participant observation, students' written journal reflections, verbal presentation of their reflections, instructors' observations, and focus group discussion. It introduced the rationale, development process, content, and evaluation of the teaching model designed by the researcher. The effects of this curriculum not only were reported as a bridge across generations connecting youth and elders but also fostering positive attitude, improving the ability to care and enhancing commitment to elder people. This article provided an overview of Intergenerational Service-Learning teaching model that involved students in learning outside the traditional classroom and provided a needed service in the community. Otherwise, an important element of this teaching model was to infuse "reflection" in learning process of the nursing students by faulty, which would be appropriate for improving the quality of education and the findings of this study provided direction for the course design for gerontology. Semiconductor quantum dots in high magnetic fields: The composite-fermion view Gun Sang Jeon,Chia-Chen Chang,Jainendra K. Jain Physics , 2006, DOI: 10.1140/epjb/e2007-00060-4 Abstract: We review and extend the composite fermion theory for semiconductor quantum dots in high magnetic fields. The mean-field model of composite fermions is unsatisfactory for the qualitative physics at high angular momenta. 
Extensive numerical calculations demonstrate that the microscopic CF theory, which incorporates interactions between composite fermions, provides an excellent qualitative and quantitative account of the quantum dot ground state down to the largest angular momenta studied, and allows systematic improvements by inclusion of mixing between composite fermion Landau levels (called $\Lambda$ levels). Partially spin polarized quantum Hall effect in the filling factor range 1/3 < nu < 2/5 Chia-Chen Chang,Sudhansu S. Mandal,Jainendra K. Jain Abstract: The residual interaction between composite fermions (CFs) can express itself through higher order fractional Hall effect. With the help of diagonalization in a truncated composite fermion basis of low-energy many-body states, we predict that quantum Hall effect with partial spin polarization is possible at several fractions between $\nu=1/3$ and $\nu=2/5$. The estimated excitation gaps are approximately two orders of magnitude smaller than the gap at $\nu=1/3$, confirming that the inter-CF interaction is extremely weak in higher CF levels. Composite fermion theory of correlated electrons in semiconductor quantum dots in high magnetic fields Abstract: Interacting electrons in a semiconductor quantum dot at strong magnetic fields exhibit a rich set of states, including correlated quantum fluids and crystallites of various symmetries. We develop in this paper a perturbative scheme based on the correlated basis functions of the composite-fermion theory, that allows a systematic improvement of the wave functions and the energies for low-lying eigenstates. For a test of the method, we study systems for which exact results are known, and find that practically exact answers are obtained for the ground state wave function, ground state energy, excitation gap, and the pair correlation function. We show how the perturbative scheme helps resolve the subtle physics of competing orders in certain anomalous cases. Composite-fermion crystallites in quantum dots Physics , 2003, Abstract: The correlations in the ground state of interacting electrons in a two-dimensional quantum dot in a high magnetic field are known to undergo a qualitative change from liquid-like to crystal-like as the total angular momentum becomes large. We show that the composite-fermion theory provides an excellent account of the states in both regimes. The quantum mechanical formation of composite fermions with a large number of attached vortices automatically generates omposite fermion crystallites in finite quantum dots. A Fermion Crystal with Quantum Coherence Chia-Chen Chang,Gun Sang Jeon,Jainendra K. Jain Abstract: When two-dimensional electrons are subjected to a very strong magnetic field, they are believed to form a triangular Wigner crystal. We demonstrate that, in the entire crystal phase, this crystal is very well represented by a composite-fermion-crystal wave function, revealing that it is not a simple Hartree-Fock crystal of electrons but an inherently quantum mechanical crystal characterized by a non-perturbative binding of quantized vortices to electrons, which establishes a long range quantum coherence in it. It is suggested that this has qualitative consequences for experiment.
CommonCrawl
Springer for Research & Development Full-duplex decode-and-forward relaying with joint relay-antenna selection Mahsa Shirzadian Gilan1,2 na1 & Ha H. Nguyen ORCID: orcid.org/0000-0001-6481-04222 na1 EURASIP Journal on Wireless Communications and Networking volume 2020, Article number: 16 (2020) Cite this article This paper is concerned with wireless relay networks that employ K full-duplex (FD) decode-and-forward relays to help a source to communicate with a destination. Each FD relay is equipped with multiple antennas, some for receiving and some for transmitting. The paper considers joint relay-antenna selection schemes that are based on the instantaneous channel conditions for two cases of antenna configurations, namely fixed antenna configuration (FAC) and adaptive antenna configuration (AAC). Under FAC, the transmit and receive antennas at each relay are fixed, whereas in the case of AAC an antenna at a relay can be configured to be either a transmit or a receive antenna.In addition to equal power allocation between the source and selected relay, a power scaling approach to counteract the effect of residual self-interference is also examined. Closed-form expressions of the outage probability and average capacity are obtained and provide important insights on the system performance. The accuracy of the obtained expressions are corroborated by simulation results. In particular, it is shown that under FAC and without power scaling, the diversity order approaches K as the self-interference (SI) level gets smaller, while it approaches zero whenever the SI level is nonzero and the SNR increases without bound. Under FAC and with power scaling, the diversity order approaches K for any SI level. For the case of AAC and without power scaling, the diversity order approaches 2K for small SI level. When power scaling is applied in AAC, the diversity order approaches 2K at any SI level. Full-duplex (FD) communications allow simultaneous transmission and reception on the same frequency band, which theoretically achieves twice the spectral efficiency as compared to half-duplex (HD) communications [1]. A critical issue in FD communications is that the self-interfering signal from the FD transmitter is much stronger than the intended receiving signal. Thanks to advanced self-interference cancellation techniques developed in recent years, both in analog and digital signal processing, FD communications have been introduced in recent fifth-generation (5G) standard proposals as an appealing technique to significantly enhance the attainable spectral efficiency of communication systems [1–3]. Given that traditional HD relaying has been shown to greatly extend the coverage and/or significantly reduce power consumption in wireless networks, it is natural to consider FD communications in the context of wireless relay networks [4]. Although various self-interference (SI) cancellation schemes have been developed [5–7], residual self-interference always remains in practice due to imperfect cancellation. The residual interference is modeled as a Rayleigh distributed random variable in [8] and the outage performance of dual-hop FD relaying was analyzed accordingly. Reference [9] further extends [8] to a multihop FD relay system and takes into account the path loss factor. The authors show that, with effective self-interference cancellation, FD relaying outperforms HD relaying. 
To deal with severe self-interference, a sophisticated hybrid FD/HD relaying scheme was studied in [10], which adaptively switches between FD and HD modes based on the instantaneous SI level. Various FD relaying schemes were investigated in [11] that are based on the codeword expansion technique. Such a technique benefits from time diversity that is dependent on the efficiency of the SI cancellation. The authors in [12] consider an optimization problem to find the power and location of relays so that the effect of self-interference can be minimized. Performance of MIMO FD relaying in the presence of co-channel interference is analyzed in [13], in which closed-form expressions for outage probability and ergodic capacity are derived. Several antenna selection schemes have been investigated in [14, 15] to maximize the end-to-end performance of multiple-antenna amplify-and-forward (AF) relay systems. The authors in [16] examine a FD two-way relay network that is made up of one base station (BS), one FD AF relay, and one user, where the BS is equipped with massive antennas (massive MIMO). To reduce the complexity and cost of the BS, the authors propose a practical antenna selection scheme at the BS. They obtain closed-form expressions for the outage probability and average BER under Nakagami-m fading channels and demonstrate performance superiority of their proposed antenna selection scheme over the conventional scheme. When multiple FD relays are employed, relay selection is an efficient and simple approach to achieve spatial diversity as compared to other alternatives, such as distributed space-time coding. In [17], several relay selection schemes were proposed to optimize the end-to-end signal-to-interference-plus-noise ratio (SINR) in AF cooperative FD relay networks that take into account the residual self-interference. By analyzing the outage probability, the authors show that, although the effect of the self-interference is reduced, the residual self-interference is the main drawback of a FD relay system. All the works on FD relaying systems discussed above assume that the roles of the transmit and receive antennas are unchanged in the relaying process. When the channel link from the source to the relay's receive antenna and/or from the relay's transmit antenna to the destination is in deep fading, the system performance will be seriously degraded. Based on this observation, the authors in [18] consider a FD relaying system in which the antennas of each FD relay can be configured to transmit (Tx) or receive (Rx) the signal. In particular, the authors propose a joint relay and Tx/Rx antenna mode selection scheme (RAMS), where the optimal relay with its optimal Tx/Rx antenna configuration is selected jointly based on the instantaneous channel conditions. Only the optimal relay is active to forward the information from the source to the destination using the AF protocol. In doing so, the proposed scheme provides an additional dimension to the selection process, which introduces an extra degree of freedom compared to the conventional FD relay selection. The authors in [18] obtain the CDF of the end-to-end SINR for their proposed RAMS scheme, as well as closed-form expressions of the outage probability and ergodic capacity. In addition, they also propose an adaptive power allocation to mitigate the self-interference and reduce the error floor.
Regarding relay networks that employ multiple FD relays and the decode-and-forward (DF) protocol, the authors in [19] analyze the outage performance over Nakagami-m fading channels. More recently, performance of the FD system with DF relay selection is analyzed in [20], in which the authors demonstrate that the error floor in the high SNR regime can be mitigated. However, the scheme considered in [20] requires decoding at all relays and knowing whether the decoded symbols at the relays are correct. This implicitly assumes that decoding to bits is performed and that powerful CRC codes are used, which leads to higher complexity as well as reduced bandwidth efficiency. From the above discussion and particularly motivated by the work in [20], this paper considers a FD relay system that employs multiple FD relays and the DF protocol. As in [20], it is assumed that each FD relay is equipped with multiple antennas and two cases are examined. In the first case, referred to as fixed antenna configuration (FAC), the transmit and receive antennas at each relay are fixed. On the other hand, in the second case, called adaptive antenna configuration (AAC), an antenna at a relay can be configured to be either a transmit or a receive antenna, which implies that there are flexible connection switches between the antennas and the RF chains [20]. In either case, joint relay-antenna selection is performed based on the instantaneous channel conditions so that the minimum SINR via any relaying link is maximized. By taking into account the residual self-interference, outage performance of the considered joint relay-antenna selection is derived and ergodic capacity results are obtained. Furthermore, a power scaling approach is investigated to mitigate the outage error floor in the high SNR regime. It is pointed out that, in addition to considering the integration of FD communications with relaying technology and joint antenna-relay selection, it would be very interesting to also integrate energy harvesting into the system model considered in this paper. Indeed, energy-harvesting technology has been extensively studied in various communication systems, including relay-antenna selection in cooperative MIMO/NOMA networks [21], simultaneous wireless information and power transfer in dual-hop relaying networks [22], and UAV relay-assisted IoT networks (see Footnote 1) [23].

The remainder of the paper is organized as follows. Section 2 describes the methods used in the paper. Section 3 introduces the system model. Sections 4 and 5 present performance analysis of the FAC scheme without and with power scaling, respectively. Section 6 investigates performance of the AAC scheme. Section 7 provides numerical results. Finally, Section 8 concludes the paper.

The research methodology in this paper involves system modeling, theoretical analysis, and computer simulation. System modeling uses tools and insights from information and communication theories to develop simplified but meaningful mathematical models of the design problems. Theoretical analysis is carried out to provide valuable insights into possible design choices and intuitive understanding of the impacts of different design parameters on the system/network performance. To corroborate the theoretical results, simulation models are developed and implemented.

Figure 1 illustrates the FD relaying system considered in this paper. This system model is similar to that studied in [18] but with the major difference that DF relays are employed instead of AF relays.
Here, K FD relays assist one source (S) to communicate with one destination (D). It is assumed that, due to blockage and/or large distance separation between S and D, the direct link between them is not available. Each relay is equipped with Q antennas, among which M are designated as receive antennas and the remaining L=Q−M are transmit antennas. On the other hand, both source and destination are equipped with a single antenna. Such an assumption is applicable for scenarios when there are size, power and/or cost constraints to put multiple antennas on information-exchange devices (source and destination), while there are no such constraints (or much more relaxed) for the relay. The channel coefficients from S to antenna m (m=1,2,…,M) of relay i (i=1,2,…,K), Ri, and from antenna l (l=1,2,…,L) of Ri to D are denoted by hS,i(m) and hi,D(l), respectively. Moreover, the effect of residual self-interference (RSI) at relay Ri is represented using the residual self-interference channel \(I_{i,i}^{l\to m}\). All wireless channels are subjected to flat fading and additive white Gaussian noise (AWGN). As in [18], all links are assumed to be independent and characterized with Rayleigh fading. As a result, the squared amplitudes of channel fading coefficients are exponentially distributed. With the considered joint relay-antenna selection, suppose that relay Ri with antenna m for receiving and antenna l for transmitting is selected to assist data transmission from S to D. At time n, the source broadcasts its unit power symbol x(n) to all the relays. The received signal at Ri with receive antenna m and transmit antenna l can be written as $$ {}y_{{\textsf{S}},i}^{l\to m} = \sqrt {{P_{\textsf{S}}}} x(n)h_{{\textsf{S}},i}^{(m)} + \sqrt {{P_{\textsf{R}}}} \hat x(n - n_{0})I_{i,i}^{l\to m} + {w_{i}}(n), $$ where PS is the transmitted power at the source, \(\hat x(n - n_{0})\) is the decoded symbol at relay i and n0 indicates the processing delay at Ri, and wi(n) is an AWGN sample with zero mean and unit variance. In this paper, the information symbols belong to PSK constellation Ψ, i.e., x(n)∈Ψ, where \(\Psi \equiv \left \{\exp \left (j\frac {\pi (2k + 1)}{|\Psi |} \right),\; k = 0,\ldots, |\Psi | - 1\right \}\) and |Ψ| is the cardinality of Ψ. Therefore, the received SNR at each FD relay operating on receive antenna m and transmit antenna l can be written as $$ {\gamma_{{\textsf{S}},i}^{l \to m}} = \frac{{{P_{\textsf{S}}}{{\left| {h_{{\textsf{S}},i}^{(m)}} \right|}^{2}}}}{{{P_{\textsf{R}}}{{\left| {I_{i,i}^{l\to m}} \right|}^{2}} + 1}}. $$ The received signal at the destination is $$ y_{i,{\textsf{D}}}^{l \to m} = \sqrt {{P_{\textsf{R}}}} \hat x\left({n - n_{0}} \right)h_{i,{\textsf{D}}}^{(l)} + {w_{\textsf{D}}}(n), $$ where PR is the transmitted power at the selected relay and wD(n) is an AWGN sample with zero mean and unit variance. With PSK constellation, the received SNR at the destination is $$ \gamma_{i,{\textsf{D}}}^{l \to m} = {P_{\textsf{R}}}{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|^{2}}. $$ Under Rayleigh fading, the squared magnitudes |hS,i(m)|2,|hi,D(l)|2, and \({\left |I_{i,i}^{l\to m} \right |}^{2}\) are exponentially distributed random variables with parameters λS,i,λi,D, and λi,i, respectively. This means that the average power gains of these channels are 1/λS,i,1/λi,D, and 1/λi,i and they do not depend on the particular pair (l,m) of receive/transmit antennas at the selected relay. 
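As an illustration of the signal model above, the following short Python sketch (using NumPy and purely illustrative parameter values, which are assumptions rather than values taken from the paper) draws one set of Rayleigh-fading channel realizations for a single relay and evaluates the per-link SINRs in (2) and (4) for every transmit/receive antenna pair.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper).
P_S, P_R = 10.0, 10.0                     # transmit powers (noise variance is 1)
lam_S, lam_D, lam_I = 1.0, 1.0, 100.0     # exponential parameters; 1/lambda is the mean power gain
M, L = 2, 2                               # receive / transmit antennas at the relay

# Squared channel magnitudes are exponentially distributed under Rayleigh fading.
g_S = rng.exponential(1.0 / lam_S, size=M)        # |h_{S,i}^{(m)}|^2 for each Rx antenna m
g_D = rng.exponential(1.0 / lam_D, size=L)        # |h_{i,D}^{(l)}|^2 for each Tx antenna l
g_I = rng.exponential(1.0 / lam_I, size=(L, M))   # |I_{i,i}^{l->m}|^2, residual self-interference

# SINR at the relay, Eq. (2), and SNR at the destination, Eq. (4), for each (l, m) pair.
gamma_S = P_S * g_S[None, :] / (P_R * g_I + 1.0)  # shape (L, M)
gamma_D = np.tile(P_R * g_D[:, None], (1, M))     # shape (L, M); depends only on l

print("relay SINRs:\n", gamma_S)
print("destination SNRs:\n", gamma_D)
```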
This paper considers joint relay-antenna selection in order to maximize the minimum SINR via any relaying link connecting S and D. In particular, the joint relay-antenna selection schemes studied for the two cases of antenna configurations are as follows: Fixed antenna configuration (FAC): In this case, the sets of M Rx and L Tx antennas are fixed. The considered joint relay-antenna selection is as follows: $$ \{i,l,m\}= \arg\underset{i}{\max} \underset{\,\,\,\{l, m\}}{\max} \min \left\{{{\gamma_{{\textsf{S}},i}^{l \to m}},\gamma_{i,{\textsf{D}}}^{l\to m}} \right\} $$ Note that the number of all states for i,l,m is KLM. Adaptive antenna configuration (AAC): In this case the sets of Rx and Tx antennas are not fixed, but the optimal Rx and Tx antennas are selected jointly based on the instantaneous channel conditions. This means that each antenna of the FD relay is able to transmit or receive the signal and the relay node has flexible connection switches between the antennas and two RF chains (one for transmitting and one for receiving). The joint relay-antenna selection is performed as follows: $$\begin{array}{@{}rcl@{}} &\{i,k,{l_{k}}, {m_{k}}\} = \arg \underset{i}{\max} \underset{k}{\max} \underset{l_{k},m_{k}}{\max} \bigg\{\\ &\min \left\{{{\gamma_{{\textsf{S}},i}^{{l_{k}} \to {m_{k}}}},\gamma_{i,{\textsf{D}}}^{{l_{k}} \to {m_{k}}}} \right\},\min \left\{{{\gamma_{{\textsf{S}},i}^{{m_{k}} \to {l_{k}}}},\gamma_{i,{\textsf{D}}}^{{m_{k}} \to {l_{k}}}} \right\} \bigg\},\\ & k \in \mathcal{D} = \left\{1,2,\ldots, {Q \choose 2}\right\}, \end{array} $$ where \(\mathcal {D}\) is the set that contains all permutations to select two antennas among Q antennas at each FD relay. For permutation k, also called mode k, lk and mk denote the indices of the transmitting and receiving antennas at each FD relay. The size of \(\mathcal {D}\) is \({{Q \choose 2}}=\frac {Q(Q-1)}{2}\). Because there are two ways to use a pair of antennas for transmitting and receiving, the number of all states for max-min function (i,k) is KQ(Q−1). Performance analysis under FAC and without power scaling Let γi= min{γS,il→m,γi,Dl→m} be the minimum SINR of the ith relay link. Then the CDF of γi is calculated as $$\begin{array}{@{}rcl@{}} F_{\gamma_{i}}(x)&=&\Pr \left\{{{\gamma_{i}} < x} \right\} = 1- \Pr \left\{{{\gamma_{i}} > x} \right\}\\ &=& 1 - \Pr \left\{{\gamma_{{\textsf{S}},i}^{l \to m} > x} \right\}\Pr \left({\gamma_{i,{\textsf{D}}}^{l \to m} > x} \right). \end{array} $$ By substituting the following expressions $$\begin{array}{@{}rcl@{}} &\Pr \left\{{\gamma_{{\textsf{S}},i}^{l \to m}} > x \right\} = \exp \left({- \frac{{x{\lambda_{{\textsf{S}},i}}}}{{{P_{\textsf{S}}}}}} \right){\left({1 + \frac{{x{P_{\textsf{R}}}{\lambda_{{\textsf{S}},i}}}}{{{P_{\textsf{S}}}{\lambda_{i,i}}}}} \right)^{- 1}} \end{array} $$ $$\begin{array}{@{}rcl@{}} & \Pr \left\{{\gamma_{i,{\textsf{D}}}^{l \to m} > x} \right\} = \exp \left({- \frac{{\lambda_{i,{\textsf{D}}}}}{P_{\textsf{R}}}x} \right) \end{array} $$ into (7), one has $$ F_{\gamma_{i}}(x)= {1 - \frac{{\exp \left({- \left({\frac{{\lambda_{i,{\textsf{D}}}}}{{{P_{\textsf{R}}}}} + \frac{{\lambda_{{\textsf{S}},i}}}{{{P_{\textsf{S}}}}}} \right)x} \right)}}{{1 + x{\eta_{i}}}}}, $$ $$\begin{array}{@{}rcl@{}} {\eta_{i}} = \frac{{{\lambda_{{\textsf{S}},i}}{P_{\textsf{R}}}}}{{{\lambda_{i,i}}{P_{\textsf{S}}}}}=\frac{P_{\textsf{R}}/\lambda_{i,i}}{P_{\textsf{S}}/\lambda_{{\textsf{S}},i}}. 
\end{array} $$ The parameter ηi quantifies the amount of residual self-interference power as a percentage of the power received from the source at the selected relay. For example, if ηi=0.01, then the residual self-interference power is 1% of the power received from the source. In the high transmit power regime, PS and PR approach infinity, hence \(F_{\gamma _{i}}(x)\) in (10) approaches $$ F_{{\gamma_{i}}}^{(\infty)}(x) = {1 - \frac{1}{{1 + x{\eta_{i}}}}}. $$ The above expression shows that, in the high SNR regime, the distribution of the minimum SINR for any relay link only depends on the self-interference level at the relay node. This means that, in the high SNR regime, increasing the transmitted powers of the source and selected relay is ineffective to enhance the system performance. Outage probability In the joint relay-antenna selection scheme considered in this paper, the combination of relay and Rx/Tx antennas that yields the largest minimum SINR is selected for relaying information from S to D. Therefore, the outage probability of the network with K relays can be calculated as $$\begin{array}{@{}rcl@{}} {P_{\text{out}}}(x)& = \Pr \left(\max\{\gamma_{1},\ldots,\gamma_{K}\} < x \right)=\prod\limits_{i = 1}^{K} {\Pr \left\{ {{\gamma_{i}} < x} \right\}}\\ & = \prod\limits_{i = 1}^{K} {\left({1 - \frac{{\exp \left({- \left({\frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}} + \frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{S}}}} \right)x} \right)}}{1 + x{\eta_{i}}}}\right)} \end{array} $$ Consider equal transmitted power at the source and selected relay and define the average SNR as \(\bar \gamma =\frac {P_{\textsf {S}}}{\sigma _{w}^{2}}=\frac {{P_{\textsf {R}}}}{\sigma _{w}^{2}}=P_{\textsf {R}}\) where \({\sigma _{w}^{2}}=1\) is the variance of AWGN noise. The finite-SNR diversity order is defined as [18, 24] $$\begin{array}{@{}rcl@{}} d\left(\bar\gamma \right) = - \frac{\partial \ln {P_{\text{out}}}\left(\bar\gamma \right)}{\partial \ln \bar\gamma}, \end{array} $$ where \(P_{\text {out}}(\bar \gamma)\) is the outage probability of the FD relaying system at SNR \(\bar \gamma \). Given the expression of the outage probability in (13) and applying the recursive rule, the finite-SNR diversity order of the considered system can be shown to be $$\begin{array}{@{}rcl@{}} &d=\sum\limits_{i = 1}^{K} {\frac{x}{P_{\textsf{R}}}} \left({\lambda_{i,{\textsf{D}}} + \lambda_{{\textsf{S}},i}} \right)\frac{{\exp \left({- \left({\frac{\lambda_{i,{\textsf{D}}}}{{{P_{\textsf{R}}}}}+ \frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{S}}}} \right)x} \right)}}{{1 + x{\eta_{i}} - \exp \left({- \left({\frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}} + \frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{S}}}}\right)x} \right)}}. \end{array} $$ Furthermore, using Taylor series expansions, (15) can be approximated as $$ d \approx \sum\limits_{i = 1}^{K} {{{\frac{{1}}{{1 + {\kappa_{i}}}}}} } $$ where \({\kappa _{i}} = \frac {{{P_{\textsf {R}}}{\eta _{i}}}}{\lambda _{i,{\textsf {D}}} + \lambda _{{\textsf {S}},i}}\). In the case of small self-interference level, one has ηi→0 and d→K. Furthermore, as long as the residual self interference level is nonzero, i.e., ηi≠0, then in the high SNR regime, PR→∞ and κi→∞. This leads to a diversity order of zero, which is a direct consequence of the irreducible floor of the outage probability caused by the self-interference at the FD relay. 
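The outage expression (13) and the behavior discussed above are easy to confirm numerically. The following Monte Carlo sketch is only illustrative: the channel parameters are assumed values (unit-mean link gains and η = 0.01), and one receive and one transmit antenna per relay are used so that the simulated selection rule matches expression (13) term by term.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed, illustrative parameters (not taken from the paper).
K = 3
P_S = P_R = 10.0
lam_S = np.ones(K); lam_D = np.ones(K); lam_I = 100.0 * np.ones(K)  # gives eta_i = 0.01
x = 1.0          # SINR threshold
N = 200_000      # Monte Carlo trials

g_S = rng.exponential(1.0 / lam_S, size=(N, K))   # |h_{S,i}|^2
g_D = rng.exponential(1.0 / lam_D, size=(N, K))   # |h_{i,D}|^2
g_I = rng.exponential(1.0 / lam_I, size=(N, K))   # residual self-interference gains

gam_S = P_S * g_S / (P_R * g_I + 1.0)             # Eq. (2)
gam_D = P_R * g_D                                 # Eq. (4)
gamma = np.minimum(gam_S, gam_D)                  # per-relay bottleneck SINR
P_mc = np.mean(gamma.max(axis=1) < x)             # best relay selected, as in Eq. (5)

eta = lam_S * P_R / (lam_I * P_S)                 # Eq. (11)
P_cf = np.prod(1 - np.exp(-(lam_D / P_R + lam_S / P_S) * x) / (1 + x * eta))  # Eq. (13)

print(f"Monte Carlo outage: {P_mc:.4f}   closed form (13): {P_cf:.4f}")
```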
Average capacity The outage probability in (13) can be rewritten as [18]: $$ P_{\text{out}}(x) = \sum\limits_{\mathcal{A} \subset \mathcal{S}} {\prod\limits_{i \in \mathcal{A}} {\frac{{- \exp \left({- \left({\frac{{\lambda_{i,{\textsf{D}}}}}{P_{\textsf{R}}}+ \frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{S}}}} \right)x} \right)}}{1 + x{\eta_{i}}}}}. $$ In the above expression, \(\mathcal {S}=\{1,2,\ldots,K\}, \mathcal {A}\) denotes a subset of \(\mathcal {S}\) and the summation is over all possible subsets of \(\mathcal {S}\). Therefore, the ergodic capacity can be calculated as [18] $$ {}\begin{aligned} \bar C\! &= \frac{1}{{\ln 2}}\int_{0}^{\infty} {\frac{1 - {P_{\text{out}}}(x)}{1 +x }} dx \\ &=\! \sum\limits_{\scriptstyle \mathcal{A} \subset \mathcal{S} \atop \scriptstyle \mathcal{A} \ne \emptyset} \!{\frac{{- {{\left({- 1} \right)}^{|\mathcal{A}|}}}}{\ln 2}}\! \int_{0}^{\infty}\! {\frac{1}{{1 + x }}}\!\prod\limits_{i \in \mathcal{A}}\! {\frac{{\exp \left({- \left({\frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}} + \frac{\lambda_{{\textsf{S}},i}}{{{P_{\textsf{S}}}}}} \right)x} \right)}}{1 + {\eta_{i}}x }} dx\\ &=\! \sum\limits_{\scriptstyle \mathcal{A} \subset \mathcal{S} \atop \scriptstyle \mathcal{A} \ne \emptyset}\! {\frac{- {{\left({- 1} \right)}^{|\mathcal{A}|}}}{\ln 2}} \!\int_{0}^{\infty}\! {T\!\left(x \right)} \exp \!\left(\!{-\! \sum\limits_{i \in \mathcal{A}}\! {\left(\!{\frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}}\,+\, \frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{S}}}}\right)}x} \right)dx, \end{aligned} $$ where \(|\mathcal {A}|\) denotes the cardinality of set \(\mathcal {A}\) and $$ T\left(x \right) = \frac{1}{{\left({1 + x} \right)\prod\limits_{i \in \mathcal{A}} {\left({1 + {\eta_{i}}x} \right)} }}. $$ Using the residue theorem, the function T(x) can be written as $$ T(x) = \frac{a}{{1 + x }} + \sum\limits_{i \in \mathcal{A}} {\frac{b_{i}}{1 + {\eta_{i}}x}}, $$ where a and bi are given by $$\begin{array}{@{}rcl@{}} {}a = \frac{1}{\prod\limits_{i \in \mathcal{A}} {\left({1 - {\eta_{i}}} \right)}},\quad {b_{i}} = \frac{1}{\left({1 - \frac{1}{\eta_{i}}}\right)\prod\limits_{i' \in \mathcal{A},i' \ne i} {\left({1 - \frac{\eta_{i'}}{\eta_{i}}} \right)}}. \end{array} $$ By using the following identity [25], $$ {} \begin{aligned} &{\int_{0}^{\infty} {\frac{\mathrm{e}^{- \mu x}}{{\left({x + \nu} \right)}^{n}}dx}} =\frac{1}{\left({n - 1} \right)!} \sum\limits_{k = 1}^{n - 1}{\left(k - 1 \right)!\left(- \mu \right)}^{n - k - 1}\nu^{- k}-\\ &\frac{{\left({- \mu} \right)}^{n - 1}}{\left({n - 1} \right)!}{\mathrm{e}^{\nu \mu }}E_{i}\left({- \nu \mu} \right),\; n \ge 2,\; \left| {\arg (\nu)} \right| < \pi,\; \text{Re}\{\mu\} > 0, \end{aligned} $$ the average capacity is finally expressed as $$\begin{array}{@{}rcl@{}} {}\bar C = \sum\limits_{\scriptstyle \mathcal{A} \subset \mathcal{S} \atop \scriptstyle \mathcal{A} \ne \emptyset} {\frac{{- {{\left({- 1} \right)}^{|\mathcal{A}|}}}}{{\ln 2}}\left[ {a{\mathrm{e}^{\beta} }{E_{1}}\left(\beta \right) + \sum\limits_{i \in \mathcal{A}} {\frac{b_{i}}{\eta_{i}}{E_{1}}\left({\frac{\beta}{\eta_{i}}}\right)}} \right]} \end{array} $$ where \(\beta =\sum \limits _{i \in \mathcal {A}} {\left ({\frac {\lambda _{i,{\textsf {D}}}}{P_{\textsf {R}}} + \frac {\lambda _{{\textsf {S}},i}}{{P_{\textsf {S}}}}}\right)}\) and Ei(·) is the exponential integral function. 
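The closed-form capacity (23) can be cross-checked by numerically integrating the first line of (18), which only requires the outage expression (13). A minimal sketch, again with assumed, illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad

# Assumed, illustrative parameters (not taken from the paper).
K = 3
P_S = P_R = 10.0
lam_S = np.ones(K); lam_D = np.ones(K); lam_I = 100.0 * np.ones(K)
eta = lam_S * P_R / (lam_I * P_S)

def p_out(x):
    """Outage probability of the FAC scheme, Eq. (13)."""
    return np.prod(1 - np.exp(-(lam_D / P_R + lam_S / P_S) * x) / (1 + x * eta))

# Ergodic capacity via the first line of Eq. (18):
# C = (1/ln 2) * int_0^inf (1 - P_out(x)) / (1 + x) dx.
integral, _ = quad(lambda x: (1 - p_out(x)) / (1 + x), 0, np.inf)
print("Ergodic capacity (bits/s/Hz):", integral / np.log(2))
```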
When the link SNRs approach ∞, the exponential term in (18) becomes 1 and the average capacity can be evaluated as $$\begin{array}{@{}rcl@{}} {\bar C}^{(\infty)}&=& \sum\limits_{\scriptstyle \mathcal{A} \subset \mathcal{S} \atop \scriptstyle \mathcal{A} \ne \emptyset} {\frac{- {{\left({- 1} \right)}^{|\mathcal{A}|}}}{\ln 2}}\int_{0}^{\infty} {T\left(x \right)} dx \end{array} $$ Using the identity \(\int {\frac {1}{b + ax}} dx = \frac {1}{a}\ln \left | {ax + b} \right |\), the indefinite integral evaluates to: $$ {}g(x)= \int {T\left(x \right)} dx = a\ln \left| {1 + x} \right| + \sum\limits_{i \in \mathcal{A}} {\frac{b_{i}}{\eta_{i}}}\ln \left| {1 + {\eta_{i}}x} \right|+C. $$ Furthermore, by comparing T(x) in (19) and (20) at x→∞, it can be shown that \(a + \sum \limits _{i \in \mathcal {A}} {\frac {b_{i}}{\eta _{i}}} = 0\). Using this relationship, g(∞) is calculated as $$\begin{array}{*{20}l} {}g(\infty)&=\underset{x \to \infty}{\lim} \sum\limits_{i \in \mathcal{A}} {\frac{b_{i}}{\eta_{i}}} \left[ {\ln \left| {1 + {\eta_{i}}x} \right| - \ln \left| {1 + x} \right|} \right] \end{array} $$ $$\begin{array}{*{20}l} &= \underset{x \to \infty}{\lim} \sum\limits_{i \in \mathcal{A}} {\frac{b_{i}}{\eta_{i}}}\left[ {\ln \left| {\frac{{1 + {\eta_{i}}x}}{1 + x}}\right|} \right]= \sum\limits_{i \in \mathcal{A}} {\frac{b_{i}}{\eta_{i}}}\ln {\eta_{i}}. \end{array} $$ Finally, substituting (26) and g(0)=0 into (24) yields $$ {\bar C}^{(\infty)} = \sum\limits_{\scriptstyle \mathcal{A} \subset \mathcal{S} \atop \scriptstyle \mathcal{A} \ne \emptyset} {\frac{- {{\left({- 1} \right)}^{|\mathcal{A}|}}}{\ln 2}}\left[ {\sum\limits_{i \in \mathcal{A}} {\frac{b_{i}}{\eta_{i}}\ln \left({\eta_{i}}\right)}} \right]. $$ The above analysis reveals that there is a hard limit on the capacity even when the transmit powers at the source and selected relay are the same and increase without bound. Again, this is a direct consequence of the nonzero residual self interference. Performance analysis under FAC and with power scaling The analysis in the previous section for the case when equal power is assigned at the source and selected relay, i.e., PS=PR, shows that the diversity order is zero and there is a hard limit on the capacity in the high SNR regime. This section analyzes the system performance in which power scaling is performed at the source in order to overcome zero diversity order and remove the hard capacity limit. The power scaling considered here is similar to what investigated in [19], but it is pointed out that only relay selection, not joint relay-antenna selection, is examined in [19]. In particular, the source power is scaled according to the instantaneous channel between the source and selected relay and the transmit power of the relay as follows: $$ \hat{P}_{\textsf{S}}= P_{\textsf{R}}^{2} \left| {h_{{\textsf{S}},i}^{(m)}} \right|{~}^{2}. $$ With the above power scaling, the instantaneous SINR at relay i becomes $$ \hat{\gamma}_{{\textsf{S}},i}^{l \to m} = \frac{P_{\textsf{R}}^{2}\left|h_{{\textsf{S}},i}^{(m)}\right|{~}^{4}}{{P_{\textsf{R}}}\left|I_{i,i}^{l\to m}\right|{~}^{2} + 1}=\frac{Z}{Y+1}, $$ where \(Z = P_{\textsf {R}}^{2}\left |h_{{\textsf {S}},i}^{(m)}\right |{~}^{4}\) and \(Y = {P_{\textsf {R}}}\left |I_{i,i}^{l \to m}\right |{~}^{2}\). 
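Before working through the exact distribution below, a quick Monte Carlo sketch (single relay, single antenna pair, assumed parameter values) illustrates how the scaled source power in (29) changes the relay SINR of (30) relative to equal power allocation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed, illustrative parameters (not taken from the paper); one relay, one antenna pair.
P_R = 100.0
lam_S, lam_D, lam_I = 1.0, 1.0, 100.0
x = 1.0
N = 500_000

g_S = rng.exponential(1.0 / lam_S, N)   # |h_{S,i}|^2
g_D = rng.exponential(1.0 / lam_D, N)   # |h_{i,D}|^2
g_I = rng.exponential(1.0 / lam_I, N)   # residual self-interference gain

# Equal power allocation (P_S = P_R), as in Section 4.
gam_eq = np.minimum(P_R * g_S / (P_R * g_I + 1.0), P_R * g_D)

# Power scaling, Eq. (29): P_S_hat = P_R^2 |h_{S,i}|^2, which gives the SINR of Eq. (30).
gam_ps = np.minimum(P_R**2 * g_S**2 / (P_R * g_I + 1.0), P_R * g_D)

print("outage, equal power   :", np.mean(gam_eq < x))
print("outage, power scaling :", np.mean(gam_ps < x))
```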
The CDF of Z and the PDF of Y are given as follows: $$\begin{array}{*{20}l} F_{Z}(z) &= 1 - \exp \left(- \frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{R}}}\sqrt z \right) \end{array} $$ $$\begin{array}{*{20}l} f_{Y}(y) &= \frac{\lambda_{i,i}}{P_{\textsf{R}}}\exp \left(- \frac{\lambda_{i,i}}{P_{\textsf{R}}} y\right) \end{array} $$ In this case, the outage probability is derived as $$ P_{\text{out}}(x) = \prod\limits_{i = 1}^{K} F_{l,m,i}(x), $$ $$\begin{array}{*{20}l} {}F_{l,m,i}(x) &= 1 - \left(1 - F_{\hat{\gamma}_{{\textsf{S}},i}^{l \to m}}(x) \right)\left(1 - F_{\gamma^{l \to m}_{i,{\textsf{D}}}}(x)\right) \end{array} $$ $$\begin{array}{*{20}l} F_{\gamma_{i,{\textsf{D}}}^{l\to m}}(x) &= 1 - \exp \left(- \frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}}x\right) \end{array} $$ $$\begin{array}{*{20}l} F_{\hat{\gamma}_{{\textsf{S}},i}^{l \to m}}(x) &= \Pr \left({\frac{Z}{{Y + 1}} < x} \right) = \Pr \left({Z < \left({Y + 1} \right)x} \right) \\ &=\int_{0}^{\infty} {\Pr \left({Z < \left({Y + 1} \right)x} \right)} {f_{Y}}\left(y \right)dy\\ &= 1 - \frac{\lambda_{i,i}}{P_{\textsf{R}}}\int_{0}^{\infty} {\exp\! \left({\!- \frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{R}}}\sqrt x \sqrt {y + 1} - \frac{\lambda_{i,i}}{P_{\textsf{R}}}y} \right)} dy. \end{array} $$ The above integral can be evaluated in a closed form by making change of variable \(u=\sqrt {y+1}\) and using the following identity [25]: $$ {}\begin{aligned} \int_{1}^{\infty} x\exp &\left({- mx - n{x^{2}}} \right) dx\\ &= \frac{{\exp \left({- \left({m + n} \right)} \right)}}{{2n}}\\ &- \frac{\sqrt \pi m\exp \left({\frac{{m^{2}}}{4n}}\right)Q\left({\frac{{\sqrt 2m + 2\sqrt 2n}}{2\sqrt n }}\right)}{2{n^{\frac{3}{2}}}}. \end{aligned} $$ The final expression for Fl,m,i(x) is given as $$ {}\begin{aligned} {F_{l,m,i}}(x) &= 1 - \exp \left({- \left({\frac{\lambda_{{\textsf{S}},i}\sqrt x}{P_{\textsf{R}}} + \frac{\lambda_{i,{\textsf{D}}}x}{P_{\textsf{R}}}}\right)} \right) \\ &+ \frac{\exp {\left(\frac{\lambda_{i,i}}{P_{\textsf{R}}}+\frac{(\lambda_{{\textsf{S}},i})^{2} x}{4P_{\textsf{R}}{\lambda_{i,i}}}\right)}\exp \left({- \frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}}x} \right)\frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{R}}}\sqrt {\pi x}}{\sqrt {\frac{\lambda_{i,i}}{P_{\textsf{R}}}}}\\ &\quad Q\left({\frac{\frac{\sqrt 2\lambda_{{\textsf{S}},i}\sqrt x}{P_{\textsf{R}}} + 2\sqrt 2 \frac{\lambda_{i,i}}{P_{\textsf{R}}}}{2\sqrt {\frac{\lambda_{i,i}}{P_{\textsf{R}}}}}}\right) \end{aligned} $$ By using the inequality \(Q\left (x \right) \le \frac {1}{2}\exp \left ({- \frac {{{x^{2}}}}{2}} \right)\), an upper bound on the outage probability is obtained as $$ P_{\text{out}}(x) \le P^{(\text{UB})}_{\text{out}}(x)= {{\prod\limits_{i = 1}^{K} {{G_{l,m,i}}}(x)} } $$ where qi and Gl,m,i(x) are defined as $$\begin{array}{*{20}l} q_{i} &= \frac{{\lambda_{{\textsf{S}},i}\sqrt {\pi P_{\textsf{R}}} }}{2{P_{\textsf{R}}}\sqrt {\lambda_{i,i}}} \end{array} $$ $$\begin{array}{*{20}l} {}{G_{l,m,i}}(x) & = 1 - \exp \left({- \left({\frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{R}}}\sqrt x + \frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}}x} \right)} \right)\\ &\quad + {{q_{i}}}\sqrt x \exp \left({- \left({\frac{{\lambda_{{\textsf{R}},i}\sqrt x}}{P_{\textsf{R}}}+ \frac{\lambda_{i,{\textsf{D}}}x}{P_{\textsf{R}}}}\right)} \right). 
\end{array} $$ Furthermore, in the high SNR regime, qi→0 and the outage probability can be upper bounded as $$ {}P_{\text{out}}(x) \le \prod\limits_{i = 1}^{K} {\left(1 - \exp \left({- \left({\frac{\lambda_{{\textsf{R}},i}}{P_{\textsf{R}}}\sqrt x + \frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}}x} \right)} \right) \right)}. $$ Using the definition of finite-SNR diversity order in (14), the diversity order can be obtained as $$ d = \sum\limits_{i = 1}^{K} {\frac{\left({{\lambda_{{\textsf{S}},i}}\sqrt x + {\lambda_{i,{\textsf{D}}}}x} \right)\exp \left({- \left({\frac{{\lambda_{{\textsf{S}},i}}\sqrt x}{P_{\textsf{R}}}+ \frac{{{\lambda_{i,{\textsf{D}}}}x}}{P_{\textsf{R}}}}\right)} \right)}{{P_{\textsf{R}}}\left({1 - \exp \left({- \left({\frac{{{\lambda_{{\textsf{S}},i}}\sqrt x }}{{{P_{\textsf{R}}}}} + \frac{{\lambda_{i,{\textsf{D}}}}x}{P_{\textsf{R}}}}\right)} \right)} \right)}}. $$ In the high SNR regime, \(1 - \exp \left ({- \left ({\frac {{\lambda _{{\textsf {S}},i}}\sqrt x}{P_{\textsf {R}}} + \frac {{\lambda _{i,{\textsf {D}}}}x}{P_{\textsf {R}}}}\right)} \right) \approx \left ({\frac {{\lambda _{{\textsf {S}},i}}\sqrt x}{P_{\textsf {R}}}+ \frac {{\lambda _{i,{\textsf {D}}}}x}{P_{\textsf {R}}}} \right)\). Therefore, the full diversity order d=K can be achieved, regardless of the RSI level ηi. The above analysis is with respect to the relay power, i.e., the SNR \(\bar \gamma =\frac {P_{\textsf {R}}}{\sigma _{w}^{2}}\). Note that, depending on the instantaneous channel gain |hS,i(m)| 2, the instantaneous source power in (29) might be larger or smaller than PR. When the total power is constrained to be P, then $$\begin{array}{*{20}l} \frac{P_{\textsf{R}}^{2}}{\lambda_{{\textsf{S}},i}} + P_{\textsf{R}} &=& P \end{array} $$ $$\begin{array}{*{20}l} P_{\textsf{R}} &=& \frac{- {\lambda_{{\textsf{S}},i}} + \sqrt {\lambda_{{\textsf{S}},i}^{2} + 4P{\lambda_{{\textsf{S}},i}}}}{2} \end{array} $$ Therefore, the diversity order with respect to the SNR \(\bar \gamma =\frac {P}{\sigma _{w}^{2}}\) is obtained as $$ {}\frac{\partial \ln {P_{\text{out}}}}{\partial \ln P} = \frac{\partial \ln {P_{\text{out}}}}{\partial \ln P_{\textsf{R}}}\times \frac{\partial \ln P_{\textsf{R}}}{\partial P_{\textsf{R}}}\times \frac{\partial P_{\textsf{R}}}{\partial P}\times \frac{\partial P}{\partial \ln P} = \frac{K}{2}. $$ Thus, in this case the diversity order is twice smaller. Therefore, there is a tradeoff between the diversity order and how the source power is scaled with respect to the total power. The upper bound of outage probability in (39) can be written as $$\begin{array}{*{20}l} P^{(\text{UB})}_{\text{out}}(x)&=\!\sum\limits_{\scriptstyle \mathcal{A} \cup \mathcal{B} \subset S \atop {\scriptstyle \mathcal{A} \cup \mathcal{B} \ne \emptyset \atop \scriptstyle \mathcal{A} \cap \mathcal{B} = \emptyset}}\left[\prod\limits_{i \in \mathcal{A}}\left({- \exp \left({- \left({\frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{R}}}\sqrt x + \frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}}x} \right)} \right)} \right) \right.\\ & \quad\left.\times \prod\limits_{j \in \mathcal{B}}{\left({{q_{j}}\sqrt x \exp \left({- \left({\frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{R}}}\sqrt x + \frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}}x} \right)} \right)} \right)} \right]. 
\end{array} $$ The corresponding lower bound on the average capacity can then be computed as follows: $$\begin{array}{*{20}l} {\bar C}_{(\text{LB})} &= \frac{1}{\ln 2}\int_{0}^{\infty} {\frac{1 - P^{(\text{UB})}_{\text{out}}(x)}{1 + x }}dx \end{array} $$ $$\begin{array}{*{20}l} & = \sum\limits_{\scriptstyle \mathcal{A} \cup \mathcal{B} \subset S \atop {\scriptstyle \mathcal{A} \cup \mathcal{B} \ne \emptyset \atop \scriptstyle \mathcal{A} \cap \mathcal{B} = \emptyset }}\frac{- {{\left({- 1} \right)}^{|\mathcal{A}|}}\prod\limits_{{j} \in \mathcal{B}}{q_{j}}}{\ln 2}\int_{0}^{\infty} \frac{x^{\frac{|\mathcal{B}|}{2}}}{{1 + x}}\\ &\quad\exp \left(- \sum\limits_{{i} \in \mathcal{S}} \left({\frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}} x + \frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{R}}}\sqrt x} \right) \right) dx\\ \end{array} $$ where the summation is employed over all possible sets for \(\mathcal {A}\) and \(\mathcal {B}\) such that \(\mathcal {A} \cup \mathcal {B} \in \mathcal {S}, \mathcal {A} \cup \mathcal {B} \ne \emptyset \) and \(\mathcal {A} \cap \mathcal {B} = \emptyset \). In the high SNR regime, the lower bound of the average capacity can be written as $$ {\bar C}^{(\infty)}_{(\text{LB})} = \sum\limits_{\scriptstyle \mathcal{A} \cup \mathcal{B} \subset S \atop {\scriptstyle \mathcal{A} \cup \mathcal{B} \ne \emptyset \atop \scriptstyle \mathcal{A} \cap \mathcal{B} = \emptyset}} {\frac{{- {{\left({- 1} \right)}^{|\mathcal{A}|}}\prod\limits_{{j} \in \mathcal{B}} {{q_{{j}}}} }}{{\ln 2}}\int_{0}^{\infty} {\frac{{{x^{\frac{|\mathcal{B}|}{2}}}}}{{1 + x}}dx}} \to \infty $$ where \(|\mathcal {A}|\) and \(|\mathcal {B}|\) are the cardinalities of sets \(\mathcal {A}\) and \(\mathcal {B}\). As can be seen, the lower bound increases without bound. Thus, there is no capacity limit when power scaling is performed. Performance analysis under AAC: with and without power scaling Without power scaling Focusing on the case Q=2, define γi as follows: $$ {}\gamma_{i}\,=\,\max\! \left\{{\min\! \left\{{{\gamma_{{\textsf{S}},i}^{l \to m}},\gamma_{i,{\textsf{D}}}^{l \to m}} \right\}\!,\min\! \left\{{{\gamma_{{\textsf{S}},i}^{m \to l}},\gamma_{i,{\textsf{D}}}^{m \to l}} \right\}} \right\} $$ For notational convenience, let \({\tilde \lambda }_{{\textsf {S}},i}=\lambda _{{\textsf {S}},i}/P_{\textsf {S}}, {\tilde {\lambda }}_{i,{\textsf {D}}}=\lambda _{i,{\textsf {D}}}/P_{\textsf {R}}\), and \({\tilde \lambda }_{i,i}=\lambda _{i,i}/P_{\textsf {R}}\). In essence, \({\tilde \lambda }_{{\textsf {S}},i}, {\tilde \lambda }_{i,{\textsf {D}}}\), and \({\tilde \lambda }_{i,i}\) are the exponential parameters of scaled random variables PS|hS,i(m)|2,PR|hi,D(l)|2, and \({P_{\textsf {R}}}{{\left | {I_{i,i}^{l \to m}} \right |}^{2}}\), respectively. According to the including-excluding principle [18], the CDF of γi can be obtained as in (52). 
$$ \begin{aligned} F_{\gamma_{i}}(x) &=\Pr \left(\min \left\{{\frac{{P_{\textsf{S}}{{\left| {h_{{\textsf{S}},i}^{(m)}} \right|}^{2}}}}{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{l \to m}}} \right|}^{2}} + 1},{P_{R}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\}\right.\\ &\left.< x,\min \left\{{\frac{{P_{\textsf{S}}}{{\left| {h_{{\textsf{S}},i}^{(m)}} \right|}^{2}}}{{P_{R}}{{\left| {{I_{i,i}^{m \to l}}} \right|}^{2}} + 1},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\} < x \right) \\ &= \Pr \left(\min \left\{{\frac{{P_{\textsf{S}}}{{\left| {h_{{\textsf{S}},i}^{(m)}} \right|}^{2}}}{{P_{\textsf{R}}}{{\left| {I_{i,i}^{l \to m}} \right|}^{2}} + 1},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\}\right.\\ &\left.> x,\min \left\{{\frac{{{P_{\textsf{S}}}{{\left| {h_{{\textsf{S}},i}^{(m)}} \right|}^{2}}}}{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{m \to l}}} \right|}^{2}} + 1},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\} > x \right) \\ & \quad+1 - \Pr \left({\min \left\{{\frac{P_{\textsf{S}}{{\left| {h_{{\textsf{S}},i}^{(m)}} \right|}^{2}}}{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{l \to m}}} \right|}^{2}} + 1},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\} > x} \right)\\ & \quad- \Pr \left({\min \left\{ {\frac{{P_{\textsf{S}}}{{\left| {h_{{\textsf{S}},i}^{(m)}} \right|}^{2}}}{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{m \to l}}} \right|}^{2}} + 1},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\} > x} \right)\\ &= 1 - \frac{{2\exp \left({- \left({{\tilde\lambda_{{\textsf{S}},i}} + {\tilde\lambda_{i,{\textsf{D}}}}} \right)x} \right)}}{{1 + x{\eta_{i}}}}\\ & \quad + \frac{{\exp \left({- 2\left({{\tilde\lambda_{{\textsf{S}},i}} + {\tilde\lambda_{i,{\textsf{D}}}}} \right)x} \right)}}{{1 + 2{\eta_{i}}x}}. \end{aligned} $$ Next, define γmax=max{γ1,…,γK}. Then, the CDF of γmax is \(F_{\gamma _{\max }}(x)=\prod \limits _{i = 1}^{K} F_{\gamma _{i}}(x)\), which is also the outage performance of the system. Based on the definition of finite-SNR diversity order, it is obtained as in (53). $$ \begin{aligned} d &= \sum\limits_{i = 1}^{K} {\frac{{2x}}{P_{\textsf{R}}}}\left({{\lambda_{{\textsf{S}},i}} + {\lambda_{i,{\textsf{D}}}}}\right) \\ &\times\!\!\frac{\frac{1}{{1 + {\eta_{i}}x}}\left({\exp \left({- \left({\frac{\lambda_{{\textsf{S}},i}}{{P_{\textsf{S}}}} + \frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}}}\right)x} \right)} \right) \!- \frac{1}{{1 + 2{\eta_{i}}x}}\left({\exp \left({- 2\left({\frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{S}}} + \frac{\lambda_{i,{\textsf{D}}}}{{P_{\textsf{R}}}}}\right)x} \right)} \right)}{1 - \frac{2}{{1 + {\eta_{i}}x}}\left({\exp \left({- \left({\frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{S}}} + \frac{{{\lambda_{i,{\textsf{D}}}}}}{{{P_{\textsf{R}}}}}} \right)x} \right)} \right) \!+ \frac{1}{{1 + 2{\eta_{i}}x}}\left({\exp \left({- 2\left({\frac{\lambda_{{\textsf{S}},i}}{P_{\textsf{S}}} + \frac{\lambda_{i,{\textsf{D}}}}{P_{\textsf{R}}}}\right)x} \right)} \right)} \end{aligned} $$ By employing Taylor series expansion, the diversity order in this case can be approximated as $$ d \approx \sum\limits_{i = 1}^{K} {2\frac{1 + {\zeta_{i}}}{1 + 2{\zeta_{i}} + 2\zeta_{i}^{2}}}, $$ where \({\zeta _{i}} = \frac {{P_{\textsf {R}}}{\eta _{i}}}{{\lambda _{{\textsf {S}},i}} + {\lambda _{i,{\textsf {D}}}}}\). It can be seen that as ηi→0,ζi→0 and d→2K. 
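The AAC selection rule itself is straightforward to simulate. The sketch below (assumed, illustrative parameter values; Q = 2 antennas per relay) implements the two Tx/Rx orientations of (6) directly and compares the resulting outage with that of FAC; it simulates the selection rule rather than evaluating the closed form (52).

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed, illustrative parameters (not taken from the paper); Q = 2 antennas per relay.
K = 3
P_S = P_R = 10.0
lam_S, lam_D, lam_I = 1.0, 1.0, 100.0
x = 1.0
N = 200_000

# Per-antenna gains: a = source->relay, d = relay->destination, y = residual self-interference.
a = rng.exponential(1.0 / lam_S, (N, K, 2))
d = rng.exponential(1.0 / lam_D, (N, K, 2))
y = rng.exponential(1.0 / lam_I, (N, K, 2))   # one realization per Tx/Rx orientation

def link(rx_gain, tx_gain, si_gain):
    """Bottleneck SINR of one orientation (given receive and transmit antenna gains)."""
    gam_S = P_S * rx_gain / (P_R * si_gain + 1.0)
    gam_D = P_R * tx_gain
    return np.minimum(gam_S, gam_D)

# Orientation 1: antenna 0 receives, antenna 1 transmits; orientation 2: the reverse.
g1 = link(a[..., 0], d[..., 1], y[..., 0])
g2 = link(a[..., 1], d[..., 0], y[..., 1])

gamma_fac = g1                      # FAC: the Tx/Rx roles are fixed
gamma_aac = np.maximum(g1, g2)      # AAC: the better orientation is chosen, as in Eq. (6)

print("FAC outage:", np.mean(gamma_fac.max(axis=1) < x))
print("AAC outage:", np.mean(gamma_aac.max(axis=1) < x))
```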
Thus, by the fact that having adaptive configuration between the transmit and receive antennas, the number of effective channels between the source and K relays or between K relays and the destination becomes 2K, which explains the diversity order d→2K when the RSI approaches zero. Next, rewrite \(F_{\gamma _{\max }}(x)\) as $$ {}\begin{aligned} F_{\gamma_{\max}}(x) &=\sum\limits_{\scriptstyle \mathcal{A} \cup \mathcal{B} \subset S \hfill \atop \scriptstyle \mathcal{A} \cap \mathcal{B} = \emptyset \hfill} \left[\prod\limits_{i \in \mathcal{A}}\left({- 2\frac{{\exp \left({- \left({{{\tilde \lambda }_{{\textsf{S}},i}} + {{\tilde \lambda}_{i,{\textsf{D}}}}} \right)x} \right)}}{{1 + x{\eta_{i}}}}} \right)\right.\\ &\qquad\qquad\quad\left.\prod\limits_{j \in \mathcal{B}} {\left({\frac{{\exp \left({- 2\left({{{\tilde \lambda }_{{\textsf{S}},i}} + {{\tilde \lambda }_{i,{\textsf{D}}}}} \right)x} \right)}}{{1 + 2x{\eta_{i}}}}} \right)} \right] \end{aligned} $$ where \(\mathcal {A}, \mathcal {B}\), and \(\mathcal {S}\) were defined as before. Therefore, the average capacity is $$ {}\begin{aligned} \bar C &= \sum\limits_{\scriptstyle \mathcal{A} \cap \mathcal{B} = \emptyset \atop {\scriptstyle \mathcal{A} \cup \mathcal{B} \subset S \atop \scriptstyle \mathcal{A} \cup \mathcal{B} \ne \emptyset}} {\frac{{- {{\left({- 2} \right)}^{|\mathcal{A}|}}}}{{\ln 2}}} \int_{0}^{\infty} {H\left(x \right)} \\ &\quad\!\times\!\exp\! \left(\!{-\! \left({\sum\limits_{i \in \mathcal{A}}\! {\left({{\tilde\lambda_{{\textsf{S}},i}} \,+\, {\tilde\lambda_{i,{\textsf{D}}}}} \right) \,+\, 2\sum\limits_{j \in \mathcal{B}} {\left({{\tilde\lambda_{{\textsf{S}},j}} \,+\, {\tilde\lambda_{j,{\textsf{D}}}}} \right)}} } \right)\!x} \right)\!dx \\ &= \sum\limits_{\scriptstyle \mathcal{A} \cap \mathcal{B} = \emptyset \atop {\scriptstyle \mathcal{A} \cup \mathcal{B} \subset S \atop \scriptstyle \mathcal{A} \cup \mathcal{B} \ne \emptyset }} {\frac{{- {{\left({- 2} \right)}^{|\mathcal{A}|}}}}{{\ln 2}}} \left(a\exp \left(\mu \right) + \sum\limits_{i \in \mathcal{A}} \frac{{{b_{i}}}}{{{\eta_{i}}}}{E_{1}}\left({\frac{\mu }{{{\eta_{i}}}}} \right)\right.\\ &\qquad\qquad\qquad\qquad\qquad\left.+ \sum\limits_{j \in \mathcal{B}} {\frac{{{c_{j}}}}{{2{\eta_{j}}}}{E_{1}}\left({\frac{\mu }{{2{\eta_{j}}}}} \right)} \right)\\ \end{aligned} $$ where H(x), a, bi,cj, and μ are $$\begin{array}{*{20}l} {}H\left(x \right) &= \frac{1}{{\left({1 + x} \right)\prod\limits_{i \in \mathcal{A}} {\left({1 + {\eta_{i}}x} \right)\prod\limits_{j \in \mathcal{B}} {\left({1 + 2{\eta_{j}}x} \right)}} }} \\ & = \frac{a}{{1 + x}} + \sum\limits_{i \in \mathcal{A}} {\frac{{{b_{i}}}}{{1 + {\eta_{i}}x}}} + \sum\limits_{j \in \mathcal{B}} {\frac{{{c_{j}}}}{{1 + 2{\eta_{j}}x}}} \end{array} $$ $$\begin{array}{*{20}l} a &= \frac{1}{{\prod\limits_{i \in \mathcal{A}} {\left({1 - {\eta_{i}}} \right)\prod\limits_{j \in \mathcal{B}} {\left({1 - 2{\eta_{j}}} \right)}} }} \end{array} $$ $$\begin{array}{*{20}l} {b_{i}} & = \frac{1}{{\left({1 - \frac{1}{{{\eta_{i}}}}} \right)\prod\limits_{\scriptstyle i' \in \mathcal{A} \atop \scriptstyle i' \ne i} {\left({1 - \frac{{{\eta_{i'}}}}{{{\eta_{i}}}}} \right)\prod\limits_{j \in \mathcal{B}} {\left({1 - \frac{{2{\eta_{j}}}}{{{\eta_{i}}}}} \right)}} }} \end{array} $$ $$\begin{array}{*{20}l} {c_{j}} &= \frac{1}{{\left({1 - \frac{1}{{2{\eta_{j}}}}} \right)\prod\limits_{i \in \mathcal{A}} {\left({1 - \frac{{{\eta_{i}}}}{{2{\eta_{j}}}}} \right)\prod\limits_{\scriptstyle j' \in \mathcal{B} \atop \scriptstyle j' \ne j} {\left({1 - 
\frac{{{\eta_{j'}}}}{{{\eta_{j}}}}} \right)}} }} \end{array} $$ $$\begin{array}{*{20}l} \mu &= {\sum\limits_{i \in \mathcal{A}} {\left({{\tilde\lambda_{{\textsf{S}},i}} + {\tilde\lambda_{i,{\textsf{D}}}}} \right) + 2\sum\limits_{j \in \mathcal{B}} {\left({{\tilde\lambda_{{\textsf{S}},j}} + {\tilde\lambda_{j,{\textsf{D}}}}} \right)}} } \end{array} $$ With power scaling Similar to the case of fixed antenna configuration, when power scaling is employed in adaptive antenna configuration, define $$ {}\hat\gamma_{i}=\max \left\{{\min \left\{{{{\hat\gamma_{{\textsf{S}},i}^{l \to m}}},\gamma_{i,{\textsf{D}}}^{l \to m}} \right\},\min \left\{{{{\hat\gamma_{{\textsf{S}},i}^{m \to l}}},\gamma_{i,{\textsf{D}}}^{m \to l}} \right\}} \right\}, $$ where \(\hat \gamma _{{\textsf {S}},i}^{l \to m}\) is defined as in (30). Then the CDF of \(\hat \gamma _{i}\) can be derived similarly as done in (52) and the result is given in (63), where \(\vartheta = {\left | {{I_{i,i}^{l\to m}}} \right |{~}^{2}}\) implies a specific value of random variable \(\left | {{I_{i,i}^{l\to m}}} \right |{~}^{2}\). $$ \begin{aligned} F_{{\hat\gamma}_{i}}(x) &= \Pr \left({\min \left\{{\frac{\hat P_{\textsf{S}}}{{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{l \to m}}} \right|}^{2}} + 1}},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\}}\right.\\ &<\left.{x,\min \left\{{\frac{\hat P_{\textsf{S}}}{{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{m \to l}}} \right|}^{2}} + 1}},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\} < x} \right) \\ &= \Pr \left({\min \left\{{\frac{\hat P_{\textsf{S}}}{{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{l \to m}}} \right|}^{2}} + 1}},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\}}\right.\\ &\left.{> x,\min \left\{{\frac{\hat P_{\textsf{S}}}{{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{m \to l}}} \right|}^{2}} + 1}},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\} > x} \right) \\ &\quad+\!1 \,-\, \Pr \left({\min \left\{{\frac{\hat P_{\textsf{S}}}{{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{l \to m}}} \right|}^{2}} + 1}},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\} > x} \right) \\ &\quad - \Pr \left({\min \left\{{\frac{\hat P_{\textsf{S}}}{{{P_{\textsf{R}}}{{\left| {{I_{i,i}^{m \to l}}} \right|}^{2}} + 1}},{P_{\textsf{R}}}{{\left| {h_{i,{\textsf{D}}}^{(l)}} \right|}^{2}}} \right\} > x} \right)\\ &= 1 - 2\exp (- {\tilde\lambda_{i,{\textsf{D}}}}x)\int_{0}^{\infty} {\exp \left({- {\tilde\lambda_{i,i}}{\vartheta} - \sqrt {\left({{\vartheta}+ 1} \right)x} {\tilde\lambda_{{\textsf{S}},i}}} \right)} d\vartheta \\ &\quad+ \exp (- 2{\tilde\lambda_{i,{\textsf{D}}}}x)\int_{0}^{\infty} {\exp \left({- {\tilde\lambda_{i,i}}{\vartheta} - 2\sqrt {\left({{\vartheta} + 1} \right)x} {\tilde\lambda_{{\textsf{S}},i}}} \right)} d\vartheta \\ &= 1 - 2\exp \left({- \left({{\tilde\lambda_{i,{\textsf{D}}}x} + {\tilde\lambda_{{\textsf{S}},i}\sqrt{x}}} \right)} \right) \\ &\quad+ 4\frac{{\sqrt {\pi x} {\tilde\lambda_{{\textsf{S}},i}}\exp \left({{\frac{{\tilde\lambda_{{\textsf{S}},i}^{2} x}}{{4{\tilde\lambda_{i,i}}}}}+\tilde\lambda_{i,i}-\tilde\lambda_{i,{\textsf{D}}}x}\right)Q\left({\frac{{{\sqrt2\tilde\lambda_{{\textsf{S}},i}\sqrt x} + \sqrt2{\tilde\lambda_{i,i}}}}{{2\sqrt {{\tilde\lambda_{i,i}}} }}} \right)}}{{\sqrt {{\tilde\lambda_{i,i}}} }} \\ &\quad+ \exp \left({- 2\left({{\tilde\lambda_{i,{\textsf{D}}}}x + {\tilde\lambda_{{\textsf{S}},i}\sqrt{x}}} \right)} \right) \\ &\quad- 4\frac{{\sqrt {\pi x} 
{\tilde\lambda_{{\textsf{S}},i}}\exp \left({{\frac{{\tilde\lambda_{{\textsf{S}},i}^{2} x}}{{{\tilde\lambda_{i,i}}}}}+\tilde\lambda_{i,i}-2\tilde\lambda_{i,{\textsf{D}}}x}\right)\!Q\!\left(\! {\frac{{{\sqrt2\tilde\lambda_{{\textsf{S}},i}\sqrt x} + {\sqrt2\tilde\lambda_{i,i}}}}{{\sqrt {{\tilde\lambda_{i,i}}} }}} \right)}}{{\sqrt {{\tilde\lambda_{i,i}}} }} \end{aligned} $$ Finally, define \({\hat \gamma }_{\max }=\max \{{\hat \gamma }_{1},\ldots,{\hat \gamma }_{K}\}\). Then, the outage probability is simply calculated as $$ P_{\text{out}}(x)=F_{{\hat\gamma}_{\max}}(x) = \prod\limits_{i = 1}^{K} F_{{\hat\gamma}_{i}}(x). $$ In the high transmit power scenario, the outage probability can be written as $$ {}\begin{aligned} P_{\text{out}}\left(x \right) &\approx \prod\limits_{i = 1}^{K} \left(1 - 2\exp \left({- \left({\frac{{{\lambda_{{\textsf{S}},i}}\sqrt x }}{{{P_{\textsf{S}}}}} + \frac{{{\lambda_{i,{\textsf{D}}}}x}}{P_{\textsf{R}}}}\right)} \right)\right.\\ &\quad\left.+\exp \left({- 2\left({\frac{{{\lambda_{{\textsf{S}},i}}\sqrt x}}{P_{\textsf{S}}} + \frac{{\lambda_{i,{\textsf{D}}}}x}{P_{\textsf{R}}}} \right)} \right) \right) \end{aligned} $$ Based on the definition of finite-SNR diversity order, it is obtained as in (66). $$ \begin{aligned} d &\,=\, \sum\limits_{i = 1}^{K} {\frac{2}{{P_{\textsf{R}}}}}\! \left({{\lambda_{{\textsf{S}},i}}\sqrt x + {\lambda_{i,{\textsf{D}}}}x} \right) \\ &\quad\times\! \frac{{\exp \left({- \left({\frac{{\lambda_{{\textsf{S}},i}}\sqrt x}{{P_{\textsf{S}}}} + \frac{{{\lambda_{i,{\textsf{D}}}}x}}{{{P_{\textsf{R}}}}}} \right)} \right) - \exp \left({- 2\left({\frac{{{\lambda_{{\textsf{S}},i}}\sqrt x }}{{{P_{\textsf{S}}}}} + \frac{{{\lambda_{i,{\textsf{D}}}}x}}{{{P_{\textsf{R}}}}}} \right)} \right)}}{{1 - \exp \left({- \left({\frac{{{\lambda_{{\textsf{S}},i}}\sqrt x }}{{{P_{\textsf{S}}}}} + \frac{{{\lambda_{i,{\textsf{D}}}}x}}{{{P_{\textsf{R}}}}}} \right)} \right) + \exp \left({- 2\left({\frac{{\lambda_{{\textsf{S}},i}}\sqrt x}{{{P_{\textsf{S}}}}} + \frac{{\lambda_{i,{\textsf{D}}}x}}{{{P_{\textsf{R}}}}}} \right)} \right)} } \end{aligned} $$ Again, in the high transmit power scenario, employing the Taylor series expansion can easily show that d→2K. In this section, numerical results are given to corroborate the theoretical analysis carried out in previous sections. Without loss of generality, suppose that all channels have unit power gains, i.e., λS,i=λi,D=1,i=1,…,K. When no power scaling is performed, the transmit powers of the source and selected relay are set to be equal, i.e., PS=PR. Throughout this section, AWGN noise power is set to unity. Figure 2 plots the finite-SNR diversity orders of the proposed joint relay-antenna selection schemes under both cases of FAC and AAC and with different self-interference levels and K=3. Observe that when there is nonzero self-interference, the finite-SNR diversity order under either AAC or FAC approaches zero in the high transmit power region. In the low-to-medium transmit power region the diversity order under AAC is always greater than that under FAC. As expected, in the absence of the self-interference at relay nodes, the diversity order under AAC approaches 2K, whereas the diversity order under FAC is K. Finite-SNR diversity orders of the proposed joint relay-antenna selection schemes versus the transmit power with different self-interference levels for K=3 Figure 3 shows the outage performance versus the transmit power for the considered FD relay systems with K=2,3,5 and the self-interference level η=0.01. 
The performance is included under both cases of FAC and AAC. As can be seen, the outage probability obtained by simulation matches very well with the expressions in (13) and (52). It can also be seen from the figure that there is a performance floor which agrees with the theoretical analysis. Under the same self-interference condition, the outage performance under AAC outperforms that under FAC. Also, as the number of relays increases, the outage performance gets better. Outage performance versus transmit power for the FD relay system with different values of K and the self-interference level η=0.01 Figure 4 shows the outage probability under AAC with the proposed power scaling for different values of λi,i and when K=3. Recall that with the proposed power scaling, the transmit power of the source varies with the instantaneous CSI. For comparison, performance under AAC scheme with equal power allocation between the source and selected relay is also plotted. For all values of the self-interference, the system with power scaling achieves higher diversity order. Specifically, compared to the case of FAC, the case of AAC with equal power achieves twice the diversity order at low-to-medium SNRs and a much lower outage floor at high SNRs. This figure also shows that the outage probability under AAC and with power scaling changes very little with the increase of λi,i. This means that the proposed scheme is robust to the variation of self interference. In fact, the diversity order of the proposed scheme under AAC and with power scaling is equal to 2K even when λi,i changes. Therefore, the diversity order is not influenced by the self-interference under AAC and with the proposed power scaling. Furthermore, performance of the conventional FD system considering the availability of the direct link, proposed in [19], is also included in this figure. As can be seen, our considered FD system under AAC (with or without power scaling), even without the availability of the direct link, outperforms the system in [19]. Outage probability of the proposed joint relay-antenna selection versus transmit power with different values of λi,i and K=3 The ergodic capacity is shown in Fig. 5 under AAC (with and without power scaling) with K=2 and λi,i=100,500. It can be seen that when λi,i increases, the ergodic capacity under AAC improves. There is a capacity ceiling for the case of AAC without power scaling, whereas under AAC and with power scaling, capacity ceiling does not exist, which agrees with the analytical results obtained in previous sections. Ergodic capacity under AAC with K=2 and different values of λi,i Figure 6 compares the capacity performance of FD and HD relay systems obtained by simulation versus ηi. The transmit power of the source and relay is considered equal in both FD and HD systems. The performance of the FD relay system with AAC is simulated for two different SNRs. The simulation results indicate that when the self-interference is small, the capacity performance of the FD relay system outperforms that of the HD relay system. Also, as SNR becomes larger, the capacity performance of both FD and HD systems can be improved. Capacity performance of the FD and HD relay systems versus ηi Figure 7 compares outage performance between the cases of AAC and FAC versus the number of relays K for two different SNR values (0 dB, 5 dB) at η=0.01. It can be seen that the outage performance improves when K increases and the performance gain becomes larger when SNR increases. 
Therefore, in large-scale relay systems with a large number of relays, the outage probability quickly approaches zero. Moreover, the effect of varying K on the outage performance is very small when the SNR is very small, and there is very little difference in outage performance between the two cases at small values of SNR.

Outage performance comparison for AAC and FAC versus the number of relays K for different SNR values (0, 5 dB) at η=0.01

Finally, Fig. 8 depicts the outage performance versus ηi at SNR=10 dB for different values of K under FAC, AAC without power scaling, and AAC with power scaling. We can see that when ηi increases, the outage performance degrades. Also, the variation of K has a strong effect on the outage performance in all cases.

Outage performance versus ηi for different values of K: FAC, AAC, and AAC with power scaling

This paper has considered wireless relay networks that employ K full-duplex decode-and-forward relays to help a source to communicate with a destination. Joint relay-antenna selection schemes are proposed and analyzed for two cases of antenna configurations, namely fixed antenna configuration (FAC) and adaptive antenna configuration (AAC). Closed-form expressions of the outage probability and average capacity were derived and provide important insights on the system performance. In particular, under FAC and without power scaling, the diversity order approaches K as the self-interference level gets smaller, while it approaches zero whenever the SI level is nonzero and the SNR increases without bound. Under FAC and with power scaling, the diversity order approaches K for any SI level. For the case of AAC and without power scaling, the diversity order approaches 2K under small SI level. When power scaling is applied in AAC, the diversity order approaches 2K at any SI level. All the analytical results are validated by computer simulations.

Data sharing is available by emailing the first author ([email protected]).

Footnote 1: The abbreviations of NOMA, UAV, and IoT stand for "non-orthogonal multiple-access", "unmanned aerial vehicle", and "Internet of things", respectively.

Abbreviations: AAC: Adaptive antenna configuration; AF: Amplify-and-forward; AWGN: Additive white Gaussian noise; DF: Decode-and-forward; FAC: Fixed antenna configuration; FD: Full-duplex; HD: Half-duplex; IoT: Internet of things; NOMA: Non-orthogonal multiple-access; RAMS: Relay and Tx/Rx antenna mode selection scheme; RSI: Residual self-interference; Rx: Receive; SI: Self-interference; SINR: Signal-to-interference-plus-noise ratio; Tx: Transmit; UAV: Unmanned aerial vehicle

References
[1] Z. Zhang, K. Long, A. V. Vasilakos, L. Hanzo, Full-duplex wireless communications: Challenges, solutions, and future research directions. Proc. IEEE 104(7), 1369–1409 (2016).
[2] Z. Zhang, X. Chai, K. Long, A. Vasilakos, L. Hanzo, Full duplex techniques for 5G networks: Self-interference cancellation, protocol design, and relay selection. IEEE Commun. Mag. 53(5), 128–137 (2015).
[3] L. Li, H. Poor, L. Hanzo, Non-coherent successive relaying and cooperation: Principles, designs, and applications. IEEE Commun. Surv. Tut. 17(3), 1708–1737 (2015).
[4] A. Sabharwal, P. Schniter, D. Guo, D. W. Bliss, S. Rangarajan, R. Wichman, In-band full-duplex wireless: Challenges and opportunities. IEEE J. Sel. Areas Commun. 32(9), 1637–1652 (2014).
[5] T. Riihonen, S. Werner, R. Wichman, Mitigation of loopback self-interference in full-duplex MIMO relays. IEEE Trans. Sig. Process. 59(12), 5983–5993 (2011).
[6] E. Everett, M. Duarte, C. Dick, A. Sabharwal, in Proc. Asilomar Conf. Signals Syst. Comp.
Empowering full-duplex wireless communication by exploiting directional diversity, (2011), pp. 2002–2006. https://doi.org/10.1109/acssc.2011.6190376.
[7] H. Jin, V. Leung, Performance analysis of full-duplex relaying employing fiber-connected distributed antennas. IEEE Trans. Veh. Technol. 61(1), 146–160 (2013).
[8] T. Kim, A. Paulraj, in IEEE WCNC. Outage probability of amplify-and-forward cooperation with full duplex relay (IEEE, 2012), pp. 75–79. https://doi.org/10.1109/wcnc.2012.6214473.
[9] T. K. Baranwal, D. S. Michalopoulos, R. Schober, Outage analysis of multihop full duplex relaying. IEEE Commun. Lett. 11(1), 63–66 (2013).
[10] T. Riihonen, S. Werner, R. Wichman, Hybrid full-duplex/half duplex relaying with transmit power adaptation. IEEE Trans. Wireless Commun. 10(9), 3074–3085 (2011).
[11] I. Krikidis, H. Suraweera, S. Yang, K. Berberidis, Full-duplex relaying over block fading channel: A diversity perspective. IEEE Trans. Wirel. Commun. 11(12), 4524–4535 (2012).
[12] S. Li, K. Yang, M. Zhou, J. Wu, L. Song, Y. Li, H. Li, Full-duplex amplify-and-forward relaying: Power and location optimization. IEEE Trans. Veh. Technol. 66(9), 8458–8468 (2017).
[13] A. Almradi, A. K. Hamdi, MIMO full-duplex relaying in the presence of co-channel interference. IEEE Trans. Veh. Technol. 66(6), 4874–4885 (2017).
[14] H. A. Suraweera, I. Krikidis, C. Yuen, in Proc. IEEE ICC. Antenna selection in the full-duplex multi-antenna relay channel, (2013), pp. 4823–4828. https://doi.org/10.1109/icc.2013.6655338.
[15] H. Suraweera, I. Krikidis, G. Zheng, C. Yuen, P. J. Smith, Low-complexity end-to-end performance optimization in MIMO full-duplex relay systems. IEEE Trans. Wirel. Commun. 13(2), 913–927 (2014).
[16] B. Ji, Y. Li, Y. Meng, Y. Wang, K. Song, C. Han, H. Wen, L. Song, Performance analysis of two-way full-duplex relay with antenna selection under Nakagami channels. EURASIP J. Wirel. Commun. Netw. 2018 (2018). https://doi.org/10.1186/s13638-018-1283-2.
[17] H. Cui, M. Ma, L. Song, B. Jiao, Relay selection for two-way full duplex relay networks with amplify-and-forward protocol. IEEE Trans. Wirel. Commun. 13(7), 3768–3777 (2014).
[18] K. Yang, H. Cui, L. Song, Y. Li, Efficient full-duplex relaying with joint antenna-relay selection and self-interference suppression. IEEE Trans. Wirel. Commun. 14(7), 3991–4005 (2015).
[19] Y. Wang, Y. Xu, N. Li, W. Xie, K. Xu, X. Xia, Relay selection of full-duplex decode-and-forward relaying over Nakagami-m fading channels. IET Commun. 10(2), 170–179 (2016).
[20] Q. Li, S. Feng, X. Ge, G. Mao, L. Hanzo, On the performance of full-duplex multi-relay channels with DF relays. IEEE Trans. Veh. Technol. 66(10), 9550–9554 (2017).
[21] T. A. Le, H. Y. Kong, Energy harvesting relay-antenna selection in cooperative MIMO/NOMA network over Rayleigh fading. Wirel. Netw., 1–13 (2019).
[22] K. Song, B. Ji, C. Li, L. Yang, Outage analysis for simultaneous wireless information and power transfer in dual-hop relaying networks. Wirel. Netw. 25(2), 837–844 (2019).
[23] B. Ji, Y. Li, B. Zhou, C. Li, K. Song, H. Wen, Performance analysis of UAV relay assisted IoT communication network enhanced with energy harvesting. IEEE Access 7, 38738–38747 (2019). https://doi.org/10.1109/ACCESS.2019.2906088.
[24] R. Narasimhan, A. Ekbal, J. M. Cioffi, in IEEE ICC. Finite-SNR diversity-multiplexing tradeoff of space-time codes (IEEE, 2005), pp. 458–462. https://doi.org/10.1109/icc.2005.1494394.
[25] I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series, and Products, 8th ed. (Academic, New York, 2014).

Mahsa Shirzadian Gilan and Ha H. Nguyen contributed equally to this work.
Department of Electrical and Computer Engineering, University of Tehran, Tehran, Iran: Mahsa Shirzadian Gilan
Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada: Ha H. Nguyen

MSG carried out the main works of system model development, theoretical analysis and simulation, and writing of the manuscript. HHN contributed significantly to system model development, theoretical analysis and simulation, and writing of the manuscript. Both authors read and approved the final manuscript.

Correspondence to Ha H. Nguyen.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Gilan, M.S., Nguyen, H.H. Full-duplex decode-and-forward relaying with joint relay-antenna selection. J Wireless Com Network 2020, 16 (2020). doi:10.1186/s13638-019-1626-7

Received: 07 August 2018

Keywords: Antenna mode selection; Relay selection; Self-loop interference; DF relaying
The Evolution of Polynomials

Written by Colin+ in algebra.

It's always fascinating to see what's going on in textbooks of the olden days, and National Treasure @mathsjem recently found a beauty of its type. Look at those whences! Check out the subjunctives! It thrills the heart, doesn't it?1

What caught my attention, though, was evolution - in this context, taking square or cube (or, presumably, higher) roots of an expression.

I know! There's a whole chapter on each of them. Looks complicated... pic.twitter.com/DWPH4e96Ip
— Jo Morgan (@mathsjem) January 3, 2018

I haven't studied this method in detail - it looks similar in flavour to finding square roots by long division, which International Legend @colinthemathmo has discussed, at least for numbers. However, it got me thinking: how would I find the cube root of $c(x) = 27 + 108x + 90x^2 - 80x^3 - 60x^4 + 48x^5 - 8x^6$?

If we're allowed to assume that $c(x)$ is a perfect cube, then it's not tricky at all: it's a degree-6 polynomial, so its cube root must be quadratic; the constant term is clearly 3 and the $x^2$ coefficient is just as clearly -2. That means $c(x) = \br{ 3 + bx - 2x^2}^3$, for some value of $b$. Assuming $x$ is small enough that we can ignore $x^2$ terms and higher, expanding the bracket with the binomial expansion gives $c(x) \approx 27 + 27bx$, and $b=4$. Expanding the whole thing properly gives the same answer - which I leave as an exercise.

Binomial expansion

As an alternative to assuming the form of the solution, it's also possible to use the general binomial expansion (at least for small $x$) to figure out the cube root of $c(x)$. Suppose we write $c(x) = 27 + x\br{108 + 90x - 80x^2 + ...}$ and work out $\br{c(x)}^{\frac{1}{3}}$?

The general form for $\br{a+bx}^{\frac{1}{3}}$ is $a^{\frac{1}{3}} + \frac{1}{3}a^{-\frac{2}{3}}bx - \frac{1}{9}a^{-\frac{5}{3}}b^2x^2 + ...$

That looks a mess; however, $a$ behaves quite nicely here and simplifies the whole thing down to $3 + \frac{1}{27} bx - \frac{1}{3^{7}}b^2 x^2 + ...$

There's a bit of book-keeping to do here, because $b$ is a bit of a monster, but stick with it: $\frac{1}{27} bx = 4x + \frac{10}{3}x^2 + ...$, while $\frac{1}{3^7} b^2x^2 = \frac{108^2}{3^7} x^2 + ...$.

What's $\frac{108^2}{3^7}$? I don't know off the top of my head. I do know that $108 = 2^2 \times 3^3$, so $108^2 = 2^4 \times 3^6$ ...

A ninja! "It's 11,664, because $(100+k)^2 = 10,000 + 200k + k^2$."

As I was saying ... which leaves us with $\frac{16}{3}$. So, we have an answer -- assuming we only need to go up to the $x^2$ term -- of $3 + 4x + \br{\frac{10}{3}- \frac{16}{3}}x^2$, which works out to $3 + 4x - 2x^2$ again.

The only problem left is checking whether we really have gone far enough. One option is to cube the answer, but you don't really want to be doing that every time you extract a term. Presumably, something in the original method given in Jo's Victorian textbook checks for a remainder, but I don't really feel like wading through it to figure it out. Perhaps you, dear reader, can enlighten us all?
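For anyone who wants to play along at home, here is a short Python sketch (my own toy version, not the Victorian textbook's algorithm) that extracts the cube root of a perfect-cube polynomial term by term using exact rational arithmetic; run on the $c(x)$ above, it recovers $3 + 4x - 2x^2$.

```python
# Term-by-term "evolution" (cube-root extraction) for a perfect-cube polynomial,
# using exact fractions. A toy sketch, not the textbook method discussed above.
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [c0, c1, ...]."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_cube(p):
    return poly_mul(poly_mul(p, p), p)

def cube_root(c):
    """Return the coefficients of r(x) with r(x)^3 = c(x), assuming c is a perfect cube."""
    c = [Fraction(x) for x in c]
    deg = (len(c) - 1) // 3
    r = [Fraction(round(float(c[0]) ** (1 / 3)))]   # constant term: cube root of c0
    for k in range(1, deg + 1):
        # Adding r_k * x^k changes the x^k coefficient of r^3 by exactly 3 * r0^2 * r_k.
        partial = poly_cube(r + [Fraction(0)] * (deg + 1 - len(r)))
        r.append((c[k] - partial[k]) / (3 * r[0] ** 2))
    return r

c = [27, 108, 90, -80, -60, 48, -8]   # 27 + 108x + 90x^2 - 80x^3 - 60x^4 + 48x^5 - 8x^6
root = cube_root(c)
print(root)                            # fractions 3, 4, -2, i.e. 3 + 4x - 2x^2
assert poly_cube(root) == [Fraction(x) for x in c]
```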
If one were so inclined, one might even shiver in ecstasy. [↩]

Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.
In order to define what it means to have a relation between elements of sets, we have to provide a strict set-theoretical idea of putting the elements in a specific order. For instance, if we want to relate two elements with each other, we want to speak about "the first" (or "the left") and "the second" (or "the right") element. This leads us to the concept of an ordered pair. A strict, purely set-theoretical definition was first proposed by Kazimierz Kuratowski:

Definition: Ordered Pair, n-Tuple

Let \(X\) be a set and for $n\ge 1$, $i=1,\ldots n$ let \(x_i\in X\) be some of its elements. We set for $n=1$ the tuple $(x_1)$ to be the singleton:

\[\begin{array}{rcl} (x_1)&:=&\{x_1\} \end{array}\]

We define the ordered pair as

\[\begin{array}{rcl} (x_1,x_2)&:=&\{\{x_1,x_2\},\{x_1\}\} \end{array}\]

and in general, for $n > 2$, we define the ordered \(n\)-tuple (or just \(n\)-tuple) $(x_1,\ldots,x_n)$ recursively as the ordered pair of $(x_1,\ldots,x_{n-1})$ and $x_n$, formally

\[\begin{array}{rcl} (x_1,\ldots,x_n)&:=&((x_1,\ldots,x_{n-1}),x_n) \end{array}\]

We denote \(x_i\) as the \(i\)-th element of the $n$-tuple. Occasionally, $n$-tuples are also called lists. Sometimes, objects denoted by $(x_1,x_2,\ldots)$ might be said to have infinite length. By definition, an $n$-tuple has finite length, and such objects are not $n$-tuples. $n$-tuples differ from sets, since order matters and repetitions have meaning, while in sets, order and repetition are not important.

1. Proposition: Set-Theoretical Meaning of Ordered Tuples

This work was contributed under CC BY-SA 4.0 by: bookofproofs
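As a quick sanity check of the definition (this snippet is mine and not part of the original entry), the Kuratowski encoding can be modelled directly with Python frozensets; the assertions confirm that the encoding satisfies the defining property of ordered pairs and that order matters for n-tuples too.

```python
# Quick illustration (not part of the original entry): Kuratowski's set-theoretic
# encoding of ordered pairs and n-tuples, modelled with Python frozensets.
from itertools import product

def kpair(a, b):
    """Ordered pair (a, b) := {{a, b}, {a}} as in the definition above."""
    return frozenset({frozenset({a, b}), frozenset({a})})

def ktuple(*xs):
    """(x1) := {x1};  (x1, x2) := kpair(x1, x2);  (x1,...,xn) := ((x1,...,x_{n-1}), xn)."""
    if len(xs) == 1:
        return frozenset({xs[0]})
    if len(xs) == 2:
        return kpair(xs[0], xs[1])
    return kpair(ktuple(*xs[:-1]), xs[-1])

# The defining property of ordered pairs: (a, b) = (c, d)  iff  a = c and b = d.
for a, b, c, d in product(range(3), repeat=4):
    assert (kpair(a, b) == kpair(c, d)) == (a == c and b == d)

assert kpair(1, 1) == frozenset({frozenset({1})})   # (x, x) collapses to {{x}}
assert ktuple(1, 2, 3) != ktuple(3, 2, 1)           # order matters for n-tuples too
print(ktuple(1, 2, 3))
```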
Joseph Van Name

This account is temporarily suspended network-wide. The suspension period ends on Apr 22 '21 at 15:09.

Profile note: "Please follow the rules of this site. All of the recent downvoting against my posts is unjustified. Please upvote so the following questions are not deleted."
https://mathoverflow.net/questions/326841/examples-of-nilpotent-self-distributive-algebras?rq=1
https://mathoverflow.net/questions/327635/braided-lobsters
https://mathoverflow.net/questions/289275/is-the-variety-of-ternary-self-distributive-algebras-generated-by-its-finite-mem
https://mathoverflow.net/questions/321889/is-this-condition-sufficient-for-a-variety-to-be-reversible
https://mathoverflow.net/q/321836/22277

Location: Mount Vinson, Antarctica. Website: blueletterbible.com

Top posts:
163 Why do roots of polynomials tend to have absolute value close to 1?
72 What computational problems would be good proof-of-work problems for cryptocurrency mining?
33 Which powers of the closed unit interval are homeomorphic?
31 Which sets occur as boundaries of other sets in topological spaces?
27 First-order axiomatization of free groups
25 Are sums of sequences decidable?
21 Connectedness in the language of path-connectedness

Other posts (tags: cc.complexity-theory, circuit-complexity, reversible-computing, cr.crypto-security):
7 Can every efficiently computable permutation be written as the composition of two efficiently computable involutions? (Apr 21 '18)
6 Does there exist a polynomial time algorithm to determine whether an equation is a consequence of (x+y)*(y+z)=(x*y)+(y*z)? (Aug 23 '18)
5 When do cellular automata on non-abelian groups not offer a computational speed up? (Sep 13 '18)
5 Complexity of computing invertible functions with reversible circuits (Mar 1 '18)
4 Can an efficiently computable non-one way permutation be written as the composition of polynomially many easy to compute involutions? (May 22 '18)
4 Why don't the critters ever age? (Mar 19 '18)
3 Do there exists reversible gate sets of intermediate growth? (Sep 13 '18)
3 How many arithmetic and max operations does it take to compute Dynnikov's action of the braid groups on $\mathbb{Z}^{2n}$? (Sep 21 '18)
2 Is topological conventional computation possible? (Sep 23 '18)
2 Circuit complexity of group actions (Sep 29 '18)
Endoscopic retrograde cholangiopancreatography and endoscopic ultrasound endoscope reprocessing: Variables impacting contamination risk
Ashley M. Ayres, Julia Wozniak, Jose O'Neil, Kimberly Stewart, John St. Leger, A. William Pasculle, Casey Lewis, Kevin McGrath, Adam Slivka, Graham M Snyder
Journal: Infection Control & Hospital Epidemiology, First View
Published online by Cambridge University Press: 16 January 2023, pp. 1-5
To evaluate variables that affect risk of contamination for endoscopic retrograde cholangiopancreatography and endoscopic ultrasound endoscopes. Observational, quality improvement study. University medical center with a gastrointestinal endoscopy service performing ∼1,000 endoscopic retrograde cholangiopancreatography and ∼1,000 endoscopic ultrasound endoscope procedures annually. Duodenoscope and linear echoendoscope sampling (from the elevator mechanism and instrument channel) was performed from June 2020 through September 2021. Operational changes during this period included a switch from standard reprocessing with high-level disinfection plus ethylene oxide gas sterilization (HLD-ETO) to double high-level disinfection (dHLD) (June 16, 2020–July 15, 2020), and a change of duodenoscopes to a disposable-tip model (March 2021). The frequency of contamination for the co-primary outcomes was characterized by calculated risk ratios. The overall pathogenic contamination rate was 4.72% (6 of 127). Compared to duodenoscopes, linear echoendoscopes had a contamination risk ratio of 3.64 (95% confidence interval [CI], 0.69–19.1). Reprocessing using HLD-ETO was associated with a contamination risk ratio of 0.29 (95% CI, 0.06–1.54). Linear echoendoscopes undergoing dHLD had the highest risk of contamination (2 of 18, 11.1%), and duodenoscopes undergoing HLD-ETO had the lowest risk of contamination (0 of 53, 0%). Duodenoscopes with a disposable tip had a 0% contamination rate (0 of 27). We did not detect a significant reduction in endoscope contamination using HLD-ETO versus dHLD reprocessing. Linear echoendoscopes have a risk of contamination similar to that of duodenoscopes. Disposable tips may reduce the risk of duodenoscope contamination.

The average laboratory samples a population of 7,300 Amazon Mechanical Turk workers
Neil Stewart, Christoph Ungemach, Adam J. L. Harris, Daniel M. Bartels, Ben R. Newell, Gabriele Paolacci, Jesse Chandler
Journal: Judgment and Decision Making / Volume 10 / Issue 5 / September 2015
Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with those of the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.
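As a rough illustration of the capture-recapture idea behind that estimate (the counts below are invented, and the paper's actual analysis uses many sessions across several laboratories rather than a simple two-wave design), here is a minimal Chapman estimator:

```python
# Hedged sketch: a two-sample capture-recapture (Chapman) estimate of pool size.
# The worker counts below are hypothetical, not the paper's data.
def chapman_estimate(n1, n2, m):
    """n1, n2: distinct workers seen in each of two waves; m: workers seen in both."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    return n_hat, var ** 0.5

n_hat, se = chapman_estimate(n1=2000, n2=1800, m=450)   # hypothetical counts
print(f"estimated pool size ~ {n_hat:.0f} (SE ~ {se:.0f})")
```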
A survey study of the association between active utilization of National Healthcare Safety Network resources and central-line–associated bloodstream infection reporting Caitlin M. Adams Barker, Kathleen O. Stewart, Miriam C. Dowling-Schmitt, Michael S. Calderwood, Justin J. Kim Published online by Cambridge University Press: 15 July 2022, pp. 1-3 In this survey of 41 hospitals, 18 (72%) of 25 respondents reporting utilization of National Healthcare Safety Network resources demonstrated accurate central-line–associated bloodstream infection reporting compared to 6 (38%) of 16 without utilization (adjusted odds ratio, 5.37; 95% confidence interval, 1.16–24.8). Adherence to standard definitions is essential for consistent reporting across healthcare facilities. Implementation of SARS-CoV-2 Monoclonal Antibody Infusion Sites at Three Medical Centers in the United States: Strengths and Challenges Assessment to Inform COVID-19 Pandemic and Future Public Health Emergency Use Anastasia S. Lambrou, John T. Redd, Miles A. Stewart, Kaitlin Rainwater-Lovett, Jonathan K. Thornhill, Lynn Hayes, Gina Smith, George M. Thorp, Christian Tomaszewski, Adolphe Edward, Natalia Elías Calles, Mark Amox, Steven Merta, Tiffany Pfundt, Victoria Callahan, Adam Tewell, Helga Scharf-Bell, Samuel Imbriale, Jeffrey D. Freeman, Michael Anderson, Robert P. Kadlec Journal: Disaster Medicine and Public Health Preparedness , First View Published online by Cambridge University Press: 14 January 2022, pp. 1-11 Monoclonal antibody therapeutics to treat coronavirus disease (COVID-19) have been authorized by the US Food and Drug Administration under Emergency Use Authorization (EUA). Many barriers exist when deploying a novel therapeutic during an ongoing pandemic, and it is critical to assess the needs of incorporating monoclonal antibody infusions into pandemic response activities. We examined the monoclonal antibody infusion site process during the COVID-19 pandemic and conducted a descriptive analysis using data from 3 sites at medical centers in the United States supported by the National Disaster Medical System. Monoclonal antibody implementation success factors included engagement with local medical providers, therapy batch preparation, placing the infusion center in proximity to emergency services, and creating procedures resilient to EUA changes. Infusion process challenges included confirming patient severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) positivity, strained staff, scheduling, and pharmacy coordination. Infusion sites are effective when integrated into pre-existing pandemic response ecosystems and can be implemented with limited staff and physical resources. The ASKAP Variables and Slow Transients (VAST) Pilot Survey Australian SKA Pathfinder Tara Murphy, David L. Kaplan, Adam J. Stewart, Andrew O'Brien, Emil Lenc, Sergio Pintaldi, Joshua Pritchard, Dougal Dobie, Archibald Fox, James K. Leung, Tao An, Martin E. Bell, Jess W. Broderick, Shami Chatterjee, Shi Dai, Daniele d'Antonio, Gerry Doyle, B. M. Gaensler, George Heald, Assaf Horesh, Megan L. Jones, David McConnell, Vanessa A. Moss, Wasim Raja, Gavin Ramsay, Stuart Ryder, Elaine M. Sadler, Gregory R. Sivakoff, Yuanming Wang, Ziteng Wang, Michael S. Wheatland, Matthew Whiting, James R. Allison, C. S. Anderson, Lewis Ball, K. Bannister, D. C.-J. Bock, R. Bolton, J. D. Bunton, R. Chekkala, A. P Chippendale, F. R. Cooray, N. Gupta, D. B. Hayman, K. Jeganathan, B. Koribalski, K. Lee-Waddell, Elizabeth K. Mahony, J. Marvil, N. M. McClure-Griffiths, P. 
Mirtschin, A. Ng, S. Pearce, C. Phillips, M. A. Voronkov Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021 Published online by Cambridge University Press: 12 October 2021, e054 The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified. First Aid Practices for Injured Children in Rural Ghana: A Cluster-Random Population-Based Survey Adam Gyedu, Barclay Stewart, Easmon Otupiri, Peter Donkor, Charles Mock Journal: Prehospital and Disaster Medicine / Volume 36 / Issue 1 / February 2021 Published online by Cambridge University Press: 01 December 2020, pp. 79-85 Print publication: February 2021 The majority of injury deaths occur outside health facilities. However, many low- and middle-income countries (LMICs) continue to lack efficient Emergency Medical Services (EMS). Understanding current first aid practices and perceptions among members of the community is vital to strengthening non-EMS, community-based prehospital care. Study Objective: This study sought to determine caregiver first aid practices and care-seeking behavior for common household child injuries in rural communities in Ghana to inform context-specific interventions to improve prehospital care in LMICs. A cluster-randomized, population-based household survey of caregivers of children under five years in a rural sub-district (Amakom) in Ghana was conducted. Caregivers were asked about their practices and care-seeking behaviors should children sustain injuries at home. Common injuries of interest were burns, laceration, choking, and fractures. Multiple responses were permitted and reported practices were categorized as: recommended, low-risk, or potentially harmful to the child. Logistic regression was used to examine the association between caregiver characteristics and first aid practices. Three hundred and fifty-seven individuals were sampled, representing 5,634 caregivers in Amakom. Mean age was 33 years. Most (79%) were mothers to the children; 68% had only completed basic education. 
Most caregivers (64%-99%) would employ recommended first aid practices to manage common injuries, such as running cool water over a burn injury or tying a bleeding laceration with a piece of cloth. Nonetheless, seven percent to 56% would also employ practices which were potentially harmful to the child, such as attempting manual removal of a choking object or treating fractures at home without taking the child to a health facility. Reporting only recommended practices ranged from zero percent (burns) to 93% (choking). Reporting only potentially harmful practices ranged from zero percent (burns) to 20% (fractures). Univariate regression analysis did not reveal consistent associations between various caregiver characteristics and the employment of recommended only or potentially harmful only first aid practices. Caregivers in rural Ghanaian communities reported using some recommended first aid practices for common household injuries in children. However, they also employed many potentially harmful practices. This study highlights the need to increase context-appropriate, community-targeted first aid training programs for rural community populations of LMICs. This is important as the home-based care provided for injured children in these communities might be the only care they receive. The Rapid ASKAP Continuum Survey I: Design and first results D. McConnell, C. L. Hale, E. Lenc, J. K. Banfield, George Heald, A. W. Hotan, James K. Leung, Vanessa A. Moss, Tara Murphy, Andrew O'Brien, Joshua Pritchard, Wasim Raja, Elaine M. Sadler, Adam Stewart, Alec J. M. Thomson, M. Whiting, James R. Allison, S. W. Amy, C. Anderson, Lewis Ball, Keith W. Bannister, Martin Bell, Douglas C.-J. Bock, Russ Bolton, J. D. Bunton, A. P. Chippendale, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, N. Gupta, Douglas B. Hayman, Ian Heywood, C. A. Jackson, Bärbel S. Koribalski, Karen Lee-Waddell, N. M. McClure-Griffiths, Alan Ng, Ray P. Norris, Chris Phillips, John E. Reynolds, Daniel N. Roxby, Antony E. T. Schinckel, Matt Shields, Chenoa Tremblay, A. Tzioumis, M. A. Voronkov, Tobias Westmeier Published online by Cambridge University Press: 30 November 2020, e048 The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$ 15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$ made over a 288-MHz band centred at 887.5 MHz. Assessment of emotions and behaviour by the Developmental Behaviour Checklist in young people with neurodevelopmental CNVs Adam C. Cunningham, Jeremy Hall, Stewart Einfeld, Michael J. Owen, IMAGINE-ID consortium, Marianne B. M. 
van den Bree
Journal: Psychological Medicine / Volume 52 / Issue 3 / February 2022
A number of genomic conditions caused by copy number variants (CNVs) are associated with a high risk of neurodevelopmental and psychiatric disorders (ND-CNVs). Although these patients also tend to have cognitive impairments, few studies have investigated the range of emotion and behaviour problems in young people with ND-CNVs using measures that are suitable for those with learning difficulties. A total of 322 young people with 13 ND-CNVs across eight loci (mean age: 9.79 years, range: 6.02–17.91, 66.5% male) took part in the study. Primary carers completed the Developmental Behaviour Checklist (DBC). Of the total, 69% of individuals with an ND-CNV screened positive for clinically significant difficulties. Young people from families with higher incomes (OR = 0.71, CI = 0.55–0.91, p = .008) were less likely to screen positive. The rate of difficulties differed depending on ND-CNV genotype (χ² = 39.99, p < 0.001), with the lowest rate in young people with 22q11.2 deletion (45.7%) and the highest in those with 1q21.1 deletion (93.8%). Specific patterns of strengths and weaknesses were found for different ND-CNV genotypes. However, ND-CNV genotype explained no more than 9–16% of the variance, depending on DBC subdomain. Emotion and behaviour problems are common in young people with ND-CNVs. The ND-CNV specific patterns we find can provide a basis for more tailored support. More research is needed to better understand the variation in emotion and behaviour problems not accounted for by genotype.

Empirical Evidence of Long-Distance Dispersal in Miscanthus sinensis and Miscanthus × giganteus
Lauren D. Quinn, David P. Matlaga, J. Ryan Stewart, Adam S. Davis
Journal: Invasive Plant Science and Management / Volume 4 / Issue 1 / March 2011
Many perennial bioenergy grasses have the potential to escape cultivation and invade natural areas. We quantify dispersal, a key component in invasion, for two bioenergy candidates: Miscanthus sinensis and M. × giganteus. For each species, approximately 1 × 10⁶ caryopses dispersed anemochorously from a point source into traps placed in annuli near the source (0.5 to 5 m; 1.6 to 16.4 ft) and in arcs (10 to 400 m) in the prevailing wind direction. For both species, most caryopses (95% for M. sinensis and 77% for M. × giganteus) were captured within 50 m of the source, but a small percentage (0.2 to 3%) were captured at 300 m and 400 m. Using a maximum-likelihood approach, we evaluated the degree of support in our empirical dispersal data for competing functions to describe seed-dispersal kernels. Fat-tailed functions (lognormal, Weibull, and gamma (Γ)) fit dispersal patterns best for both species overall, but because M. sinensis dispersal distances were significantly affected by wind speed, curves were also fit separately for dispersal distances in low, moderate, and high wind events. Wind speeds shifted the M. sinensis dispersal curve from a thin-tailed exponential function at low speeds to fat-tailed lognormal functions at moderate and high wind speeds. M. sinensis caryopses traveled farther in higher wind speeds (low, 30 m; moderate, 150 m; high, 400 m). Our results demonstrate the ability of Miscanthus caryopses to travel long distances and raise important implications for potential escape and invasion of fertile Miscanthus varieties from bioenergy cultivation.

Light Response of Native and Introduced Miscanthus sinensis Seedlings
David P. Matlaga, Lauren D. Quinn, Adam S. Davis, J.
Ryan Stewart Journal: Invasive Plant Science and Management / Volume 5 / Issue 3 / September 2012 The Asian grass Miscanthus sinensis (Poaceae) is being considered for use as a bioenergy crop in the U.S. Corn Belt. Originally introduced to the United States for ornamental plantings, it escaped, forming invasive populations. The concern is that naturalized M. sinensis populations have evolved shade tolerance. We tested the hypothesis that seedlings from within the invasive U.S. range of M. sinensis would display traits associated with shade tolerance, namely increased area for light capture and phenotypic plasticity, compared with seedlings from the native Japanese populations. In a common garden experiment, seedlings of 80 half-sib maternal lines were grown from the native range (Japan) and 60 half-sib maternal lines from the invasive range (U.S.) under four light levels. Seedling leaf area, leaf size, growth, and biomass allocation were measured on the resulting seedlings after 12 wk. Seedlings from both regions responded strongly to the light gradient. High light conditions resulted in seedlings with greater leaf area, larger leaves, and a shift to greater belowground biomass investment, compared with shaded seedlings. Japanese seedlings produced more biomass and total leaf area than U.S. seedlings across all light levels. Generally, U.S. and Japanese seedlings allocated a similar amount of biomass to foliage and equal leaf area per leaf mass. Subtle differences in light response by region were observed for total leaf area, mass, growth, and leaf size. U.S. seedlings had slightly higher plasticity for total mass and leaf area but lower plasticity for measures of biomass allocation and leaf traits compared with Japanese seedlings. Our results do not provide general support for the hypothesis of increased M. sinensis shade tolerance within its introduced U.S. range compared with native Japanese populations. By Mitchell Aboulafia, Frederick Adams, Marilyn McCord Adams, Robert M. Adams, Laird Addis, James W. Allard, David Allison, William P. Alston, Karl Ameriks, C. Anthony Anderson, David Leech Anderson, Lanier Anderson, Roger Ariew, David Armstrong, Denis G. Arnold, E. J. Ashworth, Margaret Atherton, Robin Attfield, Bruce Aune, Edward Wilson Averill, Jody Azzouni, Kent Bach, Andrew Bailey, Lynne Rudder Baker, Thomas R. Baldwin, Jon Barwise, George Bealer, William Bechtel, Lawrence C. Becker, Mark A. Bedau, Ernst Behler, José A. Benardete, Ermanno Bencivenga, Jan Berg, Michael Bergmann, Robert L. Bernasconi, Sven Bernecker, Bernard Berofsky, Rod Bertolet, Charles J. Beyer, Christian Beyer, Joseph Bien, Joseph Bien, Peg Birmingham, Ivan Boh, James Bohman, Daniel Bonevac, Laurence BonJour, William J. Bouwsma, Raymond D. Bradley, Myles Brand, Richard B. Brandt, Michael E. Bratman, Stephen E. Braude, Daniel Breazeale, Angela Breitenbach, Jason Bridges, David O. Brink, Gordon G. Brittan, Justin Broackes, Dan W. Brock, Aaron Bronfman, Jeffrey E. Brower, Bartosz Brozek, Anthony Brueckner, Jeffrey Bub, Lara Buchak, Otavio Bueno, Ann E. Bumpus, Robert W. Burch, John Burgess, Arthur W. Burks, Panayot Butchvarov, Robert E. Butts, Marina Bykova, Patrick Byrne, David Carr, Noël Carroll, Edward S. Casey, Victor Caston, Victor Caston, Albert Casullo, Robert L. Causey, Alan K. L. Chan, Ruth Chang, Deen K. Chatterjee, Andrew Chignell, Roderick M. Chisholm, Kelly J. Clark, E. J. Coffman, Robin Collins, Brian P. Copenhaver, John Corcoran, John Cottingham, Roger Crisp, Frederick J. Crosson, Antonio S. 
Cua, Phillip D. Cummins, Martin Curd, Adam Cureton, Andrew Cutrofello, Stephen Darwall, Paul Sheldon Davies, Wayne A. Davis, Timothy Joseph Day, Claudio de Almeida, Mario De Caro, Mario De Caro, John Deigh, C. F. Delaney, Daniel C. Dennett, Michael R. DePaul, Michael Detlefsen, Daniel Trent Devereux, Philip E. Devine, John M. Dillon, Martin C. Dillon, Robert DiSalle, Mary Domski, Alan Donagan, Paul Draper, Fred Dretske, Mircea Dumitru, Wilhelm Dupré, Gerald Dworkin, John Earman, Ellery Eells, Catherine Z. Elgin, Berent Enç, Ronald P. Endicott, Edward Erwin, John Etchemendy, C. Stephen Evans, Susan L. Feagin, Solomon Feferman, Richard Feldman, Arthur Fine, Maurice A. Finocchiaro, William FitzPatrick, Richard E. Flathman, Gvozden Flego, Richard Foley, Graeme Forbes, Rainer Forst, Malcolm R. Forster, Daniel Fouke, Patrick Francken, Samuel Freeman, Elizabeth Fricker, Miranda Fricker, Michael Friedman, Michael Fuerstein, Richard A. Fumerton, Alan Gabbey, Pieranna Garavaso, Daniel Garber, Jorge L. A. Garcia, Robert K. Garcia, Don Garrett, Philip Gasper, Gerald Gaus, Berys Gaut, Bernard Gert, Roger F. Gibson, Cody Gilmore, Carl Ginet, Alan H. Goldman, Alvin I. Goldman, Alfonso Gömez-Lobo, Lenn E. Goodman, Robert M. Gordon, Stefan Gosepath, Jorge J. E. Gracia, Daniel W. Graham, George A. Graham, Peter J. Graham, Richard E. Grandy, I. Grattan-Guinness, John Greco, Philip T. Grier, Nicholas Griffin, Nicholas Griffin, David A. Griffiths, Paul J. Griffiths, Stephen R. Grimm, Charles L. Griswold, Charles B. Guignon, Pete A. Y. Gunter, Dimitri Gutas, Gary Gutting, Paul Guyer, Kwame Gyekye, Oscar A. Haac, Raul Hakli, Raul Hakli, Michael Hallett, Edward C. Halper, Jean Hampton, R. James Hankinson, K. R. Hanley, Russell Hardin, Robert M. Harnish, William Harper, David Harrah, Kevin Hart, Ali Hasan, William Hasker, John Haugeland, Roger Hausheer, William Heald, Peter Heath, Richard Heck, John F. Heil, Vincent F. Hendricks, Stephen Hetherington, Francis Heylighen, Kathleen Marie Higgins, Risto Hilpinen, Harold T. Hodes, Joshua Hoffman, Alan Holland, Robert L. Holmes, Richard Holton, Brad W. Hooker, Terence E. Horgan, Tamara Horowitz, Paul Horwich, Vittorio Hösle, Paul Hoβfeld, Daniel Howard-Snyder, Frances Howard-Snyder, Anne Hudson, Deal W. Hudson, Carl A. Huffman, David L. Hull, Patricia Huntington, Thomas Hurka, Paul Hurley, Rosalind Hursthouse, Guillermo Hurtado, Ronald E. Hustwit, Sarah Hutton, Jonathan Jenkins Ichikawa, Harry A. Ide, David Ingram, Philip J. Ivanhoe, Alfred L. Ivry, Frank Jackson, Dale Jacquette, Joseph Jedwab, Richard Jeffrey, David Alan Johnson, Edward Johnson, Mark D. Jordan, Richard Joyce, Hwa Yol Jung, Robert Hillary Kane, Tomis Kapitan, Jacquelyn Ann K. Kegley, James A. Keller, Ralph Kennedy, Sergei Khoruzhii, Jaegwon Kim, Yersu Kim, Nathan L. King, Patricia Kitcher, Peter D. Klein, E. D. Klemke, Virginia Klenk, George L. Kline, Christian Klotz, Simo Knuuttila, Joseph J. Kockelmans, Konstantin Kolenda, Sebastian Tomasz Kołodziejczyk, Isaac Kramnick, Richard Kraut, Fred Kroon, Manfred Kuehn, Steven T. Kuhn, Henry E. Kyburg, John Lachs, Jennifer Lackey, Stephen E. Lahey, Andrea Lavazza, Thomas H. Leahey, Joo Heung Lee, Keith Lehrer, Dorothy Leland, Noah M. Lemos, Ernest LePore, Sarah-Jane Leslie, Isaac Levi, Andrew Levine, Alan E. Lewis, Daniel E. Little, Shu-hsien Liu, Shu-hsien Liu, Alan K. L. Chan, Brian Loar, Lawrence B. Lombard, John Longeway, Dominic McIver Lopes, Michael J. Loux, E. J. Lowe, Steven Luper, Eugene C. Luschei, William G. 
Lycan, David Lyons, David Macarthur, Danielle Macbeth, Scott MacDonald, Jacob L. Mackey, Louis H. Mackey, Penelope Mackie, Edward H. Madden, Penelope Maddy, G. B. Madison, Bernd Magnus, Pekka Mäkelä, Rudolf A. Makkreel, David Manley, William E. Mann (W.E.M.), Vladimir Marchenkov, Peter Markie, Jean-Pierre Marquis, Ausonio Marras, Mike W. Martin, A. P. Martinich, William L. McBride, David McCabe, Storrs McCall, Hugh J. McCann, Robert N. McCauley, John J. McDermott, Sarah McGrath, Ralph McInerny, Daniel J. McKaughan, Thomas McKay, Michael McKinsey, Brian P. McLaughlin, Ernan McMullin, Anthonie Meijers, Jack W. Meiland, William Jason Melanson, Alfred R. Mele, Joseph R. Mendola, Christopher Menzel, Michael J. Meyer, Christian B. Miller, David W. Miller, Peter Millican, Robert N. Minor, Phillip Mitsis, James A. Montmarquet, Michael S. Moore, Tim Moore, Benjamin Morison, Donald R. Morrison, Stephen J. Morse, Paul K. Moser, Alexander P. D. Mourelatos, Ian Mueller, James Bernard Murphy, Mark C. Murphy, Steven Nadler, Jan Narveson, Alan Nelson, Jerome Neu, Samuel Newlands, Kai Nielsen, Ilkka Niiniluoto, Carlos G. Noreña, Calvin G. Normore, David Fate Norton, Nikolaj Nottelmann, Donald Nute, David S. Oderberg, Steve Odin, Michael O'Rourke, Willard G. Oxtoby, Heinz Paetzold, George S. Pappas, Anthony J. Parel, Lydia Patton, R. P. Peerenboom, Francis Jeffry Pelletier, Adriaan T. Peperzak, Derk Pereboom, Jaroslav Peregrin, Glen Pettigrove, Philip Pettit, Edmund L. Pincoffs, Andrew Pinsent, Robert B. Pippin, Alvin Plantinga, Louis P. Pojman, Richard H. Popkin, John F. Post, Carl J. Posy, William J. Prior, Richard Purtill, Michael Quante, Philip L. Quinn, Philip L. Quinn, Elizabeth S. Radcliffe, Diana Raffman, Gerard Raulet, Stephen L. Read, Andrews Reath, Andrew Reisner, Nicholas Rescher, Henry S. Richardson, Robert C. Richardson, Thomas Ricketts, Wayne D. Riggs, Mark Roberts, Robert C. Roberts, Luke Robinson, Alexander Rosenberg, Gary Rosenkranz, Bernice Glatzer Rosenthal, Adina L. Roskies, William L. Rowe, T. M. Rudavsky, Michael Ruse, Bruce Russell, Lilly-Marlene Russow, Dan Ryder, R. M. Sainsbury, Joseph Salerno, Nathan Salmon, Wesley C. Salmon, Constantine Sandis, David H. Sanford, Marco Santambrogio, David Sapire, Ruth A. Saunders, Geoffrey Sayre-McCord, Charles Sayward, James P. Scanlan, Richard Schacht, Tamar Schapiro, Frederick F. Schmitt, Jerome B. Schneewind, Calvin O. Schrag, Alan D. Schrift, George F. Schumm, Jean-Loup Seban, David N. Sedley, Kenneth Seeskin, Krister Segerberg, Charlene Haddock Seigfried, Dennis M. Senchuk, James F. Sennett, William Lad Sessions, Stewart Shapiro, Tommie Shelby, Donald W. Sherburne, Christopher Shields, Roger A. Shiner, Sydney Shoemaker, Robert K. Shope, Kwong-loi Shun, Wilfried Sieg, A. John Simmons, Robert L. Simon, Marcus G. Singer, Georgette Sinkler, Walter Sinnott-Armstrong, Matti T. Sintonen, Lawrence Sklar, Brian Skyrms, Robert C. Sleigh, Michael Anthony Slote, Hans Sluga, Barry Smith, Michael Smith, Robin Smith, Robert Sokolowski, Robert C. Solomon, Marta Soniewicka, Philip Soper, Ernest Sosa, Nicholas Southwood, Paul Vincent Spade, T. L. S. Sprigge, Eric O. Springsted, George J. Stack, Rebecca Stangl, Jason Stanley, Florian Steinberger, Sören Stenlund, Christopher Stephens, James P. Sterba, Josef Stern, Matthias Steup, M. A. Stewart, Leopold Stubenberg, Edith Dudley Sulla, Frederick Suppe, Jere Paul Surber, David George Sussman, Sigrún Svavarsdóttir, Zeno G. Swijtink, Richard Swinburne, Charles C. Taliaferro, Robert B. 
Talisse, John Tasioulas, Paul Teller, Larry S. Temkin, Mark Textor, H. S. Thayer, Peter Thielke, Alan Thomas, Amie L. Thomasson, Katherine Thomson-Jones, Joshua C. Thurow, Vzalerie Tiberius, Terrence N. Tice, Paul Tidman, Mark C. Timmons, William Tolhurst, James E. Tomberlin, Rosemarie Tong, Lawrence Torcello, Kelly Trogdon, J. D. Trout, Robert E. Tully, Raimo Tuomela, John Turri, Martin M. Tweedale, Thomas Uebel, Jennifer Uleman, James Van Cleve, Harry van der Linden, Peter van Inwagen, Bryan W. Van Norden, René van Woudenberg, Donald Phillip Verene, Samantha Vice, Thomas Vinci, Donald Wayne Viney, Barbara Von Eckardt, Peter B. M. Vranas, Steven J. Wagner, William J. Wainwright, Paul E. Walker, Robert E. Wall, Craig Walton, Douglas Walton, Eric Watkins, Richard A. Watson, Michael V. Wedin, Rudolph H. Weingartner, Paul Weirich, Paul J. Weithman, Carl Wellman, Howard Wettstein, Samuel C. Wheeler, Stephen A. White, Jennifer Whiting, Edward R. Wierenga, Michael Williams, Fred Wilson, W. Kent Wilson, Kenneth P. Winkler, John F. Wippel, Jan Woleński, Allan B. Wolter, Nicholas P. Wolterstorff, Rega Wood, W. Jay Wood, Paul Woodruff, Alison Wylie, Gideon Yaffe, Takashi Yagisawa, Yutaka Yamamoto, Keith E. Yandell, Xiaomei Yang, Dean Zimmerman, Günter Zoller, Catherine Zuckert, Michael Zuckert, Jack A. Zupko (J.A.Z.) Edited by Robert Audi, University of Notre Dame, Indiana Book: The Cambridge Dictionary of Philosophy Published online: 05 August 2015 Print publication: 27 April 2015, pp ix-xxx By Rob Atkinson, Joyce Chia, Nina J Crimm, G E Dal Pont, Christopher Decker, David G Duff, John Emerson, Jonathan Garton, Matthew Harding, Fiona Martin, Myles Mcgregor-Lowndes, Alison Mckenna, Debra Morris, Ann O'Connell, Adam Parachin, Hubert Picarda, Miranda Stewart, Elizabeth Turnour, Matthew Turnour, Laurence H Winer Edited by Matthew Harding, University of Melbourne, Ann O'Connell, University of Melbourne, Miranda Stewart, University of Melbourne Book: Not-for-Profit Law Print publication: 08 May 2014, pp viii-ix Squamous cell carcinoma of the thyroid gland: primary or secondary disease? M I Syed, M Stewart, S Syed, S Dahill, C Adams, D R McLellan, L J Clark Journal: The Journal of Laryngology & Otology / Volume 125 / Issue 1 / January 2011 Published online by Cambridge University Press: 18 October 2010, pp. 3-9 Print publication: January 2011 To review the aetiopathogenesis, clinical characteristics, immunohistochemical profile, prognosis and treatment options for primary thyroid squamous cell carcinoma, and to compare it with squamous cell carcinoma metastatic to the thyroid, thus providing the reader with a framework for differentiating primary and secondary disease. Review of English language literature from the past 25 years. Search strategy: A search of the Medline, Embase and Cochrane databases (April 1984 to April 2009) was undertaken to enable a comprehensive review. After applying strict criteria for the diagnosis of primary thyroid squamous cell carcinoma, 28 articles were identified reporting 84 cases. When reviewing secondary thyroid squamous cell carcinoma, we only analysed cases of squamous cell carcinoma metastatic to the thyroid gland, and found 28 articles reporting 78 cases. It is possible to differentiate between primary and secondary thyroid squamous cell carcinoma, on the basis of combined evidence from clinical examination and endoscopic, pathological and radiological evaluation. 
Such differentiation is important, as the prognosis for primary squamous cell carcinoma is uniformly poor irrespective of treatment, and the most suitable option may be supportive therapy. Treatment for secondary squamous cell carcinoma of the thyroid varies with the site and extent of spread of the primary tumour. Evaluating Visual Criteria for Identifying Carbon- and Iron-Based Pottery Paints from the Four Corners Region Using SEM-EDS Joe D. Stewart, Karen R. Adams Journal: American Antiquity / Volume 64 / Issue 4 / October 1999 Print publication: October 1999 Paint types on black-on-white pottery in the prehistoric American Southwest have had significance for both chronological and sociocultural interpretations. Visual attributes have formed the basis for distinguishing carbon- and mineral-based paints on ancient black-on-white pottery in the American Southwest for over 60 years. In this study, an SEM-EDS (scanning electron microscope-energy dispersive X-ray spectrometer) system was first used to make an independent objective determination of the mineral or non-mineral paint present on 15 Mesa Verde White Ware sherds. Then, a group of 19 people (including experienced archaeologists and newly trained individuals) examined and classified the paint on these sherds, achieving an overall accuracy of 84.2 percent. This group also ranked in priority order the visual attributes they felt were most useful in determining pottery paint type: nature of edges (fuzzy, sharp), absorption (soaks in, sits on top), luster (shiny, dull), color range (black-gray-blue; black-brown-reddish), flakiness (doesn't flake off, flakes off), thickness (thin, thick), and surface polish (polish striations visible through paint, striations not visible through paint). In each case, the attribute applicable to carbon-based paint is listed first. The most difficult sherds for the group to identify displayed attributes of both carbon and mineral paints. A category for "mixed" paint type, already in use by archaeologists, is a reasonable third category for labeling sherd paint, as long as it does not become a "catch-all" category. For problematic sherds, the SEM-EDS can be used to characterize paint type, then the visual attributes adjusted to improve investigator accuracy in paint type determination. Responses in wool and live weight when different sources of dietary protein are given to pregnant and lactating ewes D. G. Masters, C. A. Stewart, G. Mata, N. R. Adams Journal: Animal Science / Volume 62 / Issue 3 / June 1996 Wool growth, staple strength and fibre diameter are reduced during pregnancy and lactation. This may be due to the increased requirement for protein for foetal growth, udder development and milk production causing a lack of amino acids for wool. Responses in wool production, ewe live weight, lamb birth weight and growth, plasma amino acids and levels of cortisol, insulin and growth hormone were measured when different sources of protein were offered. Either lupin seed (L), fish meal (F) or formaldehyde-treated egg white (E) were included in an oaten hay-based diet offered during the final 3 weeks of pregnancy and first 3 weeks of lactation. Provision of diets containing E or F resulted in significant (P < 0·001) increases in wool growth and trends towards increased staple strength (4 to 6 N/ktex) and clean fleece weights (0·17 to 0·38 kg) compared with the sheep given L. Feeding the E diet increased the concentration of cystine in plasma and sulphur in wool in late pregnancy. 
Feeding the F diet increased the concentrations of arginine, histidine, lysine and threonine in plasma in early lactation. Ewes given E had higher circulating insulin and increased insulin resistance, compared with sheep given L, on 2 of the 4 days of sampling during pregnancy and lactation during the treatment period. There were no treatment effects on lamb birth weight or growth but ewes given the E diet were significantly (P < 0·05, 3·3 kg) heavier than the ewes given L after 3 weeks of lactation. The results indicate that a lack of protein available for absorption in the small intestine causes reduced wool growth during late pregnancy and early lactation. Wool growth is more sensitive to a reduced protein supply than foetal growth, maternal weight or milk production.
Aleksandrov–Čech homology and cohomology

spectral homology and cohomology

Homology and cohomology theories which satisfy all Steenrod–Eilenberg axioms (except, possibly, the exactness axiom) and a certain continuity condition. The Aleksandrov–Čech homology groups (modules) $H_n(X,A;G)$ [1], [2] are defined as the inverse limit $\lim_\leftarrow H_n(\alpha,\alpha';G)$ over all open coverings $\alpha$ of the space $X$; here $\alpha$ signifies not only the covering, but also its nerve, and $\alpha'$ is the subcomplex in $\alpha$ that is the nerve of the restriction of $\alpha$ on the closed set $A$ (cf. Nerve of a family of sets). The possibility of passing to the limit is ensured by the existence of simplicial projections $(\beta,\beta')\to(\alpha,\alpha')$ defined, up to an homotopy, by the inclusion of $\beta$ in $\alpha$. The Aleksandrov–Čech cohomology groups $H^n(X,A;G)$ are defined as the direct limit $\lim_\to H^n(\alpha,\alpha';G)$.

The homology groups satisfy all the Steenrod–Eilenberg axioms except for the exactness axiom. All axioms are valid for cohomology, and, partly for this reason, cohomology is often more useful. The exactness axiom is also valid for homology on the category of compacta if $G$ is a compact group or field. In addition, Aleksandrov–Čech homology and cohomology groups have the property of continuity: For $X=\lim_\leftarrow X_\lambda$ the homology (cohomology) groups in addition are equal to the respective limit of the homology (cohomology) groups of the compacta $X_\lambda$. The Aleksandrov–Čech theory is the only theory satisfying the Steenrod–Eilenberg axioms (with the exception indicated above) and this condition of continuity.

On the category of paracompact spaces, the usual characterization by mappings into Eilenberg–MacLane spaces is valid for cohomology; while the cohomology itself is equivalent to the cohomology defined in sheaf theory. Cohomology may also be defined as cohomology of some cochain complex, which makes it possible to operate with sheaves of cochains. Similar ideas, applied to homology, are contained in the homology theory originating from N. Steenrod, A. Borel and others, which satisfies all axioms including the exactness axiom (but the property of continuity is lost).

Aleksandrov–Čech homology and cohomology, including the above modification, are employed in homological problems in the theory of continuous mappings, in the theory of transformation groups (a connection with quotient spaces), in the theory of generalized manifolds (in particular, in various duality relations), in the theory of analytic spaces (e.g. in defining the fundamental classes of homology), in homological dimension theory, etc.

References
[1] P.S. Aleksandroff [P.S. Aleksandrov], "Untersuchungen über Gestalt und Lage abgeschlossener Mengen beliebiger Dimension", Ann. of Math. (2), 30 (1929), pp. 101–187
[2] E. Čech, "Théorie générale de l'homologie dans un espace quelconque", Fund. Math., 19 (1932), pp. 149–183
[3] N.E. Steenrod, S. Eilenberg, "Foundations of algebraic topology", Princeton Univ. Press (1966)
[4] E.G. Sklyarenko, "Homology theory and the exactness axiom", Russian Math. Surveys, 24:5 (1969), pp. 91–142; Uspekhi Mat. Nauk, 24:5 (1969), pp. 87–140

Comments: Often one speaks also of Čech cohomology instead of Aleksandrov–Čech cohomology.

This article was adapted from an original article by E.G. Sklyarenko (originator), which appeared in Encyclopedia of Mathematics (ISBN 1402006098).
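A small illustration, added here for orientation and not part of the original encyclopedia entry: take a covering $\alpha=\{U_0,U_1\}$ of a space $X$ with $U_0\cap U_1\neq\emptyset$. Its nerve is the $1$-simplex with vertices $U_0$ and $U_1$. With constant coefficients in $G$, a $0$-cochain of this nerve is a pair $(c_0,c_1)\in G\times G$, a $1$-cochain is a single element $c_{01}\in G$, and the coboundary is
\[(\delta c)_{01}=c_1-c_0,\]
so for this single covering $H^0(\alpha;G)\cong G$ and $H^1(\alpha;G)=0$. The groups $H^n(X,A;G)$ of the entry are obtained only after passing to the direct limit of such nerve cohomologies over all open coverings and their refinements.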
Does the electromagnetic field "spin"?

Due to electron "spin", a small magnetic field is produced. Maxwell's equations imply that magnetic fields are due to changes in electric fields. Is the magnetic field produced then because the electric field is "spinning" with the "spinning" electron, in the quantum sense of "spinning", and this change in electric field is generating the magnetic field? Can one generalize to say that the magnetic field thus would spin when a magnet is spun?

electromagnetism photons quantum-spin electrons – asked by Kenshin

– Maxwell's equations state that a varying electric field produces a magnetic field, but not necessarily that every magnetic field is produced by a varying electric field. The spin of a particle produces (for not straightforward reasons) a magnetic dipole, which produces a magnetic field independent of charge motion. – Bzazz Jan 31 '13 at 16:01
– Thanks. Do you know how to calculate the magnetic field produced then without Maxwell's equations? – Kenshin Jan 31 '13 at 23:14
– Thinking about "spin" always leaves me dizzy. – Hot Licks Dec 28 '17 at 19:38

If by "spin" you mean rotate around its axis, like the earth does every 24 h, then it is incorrect that the electric field of a point particle spins. A point particle doesn't have dimensions, so it doesn't have an axis to rotate around, and thus no magnetic field is produced. The property of "spin" of elementary particles is not caused by their rotation. Now, the magnetic fields are different because there are no magnetic monopoles, so a magnetic field does rotate when the magnet is spun. – Prastt

– How do you explain the magnetic field produced by "electron spin" if not by a change in electric field? – Kenshin Jan 31 '13 at 14:59
– Spin is an intrinsic property, as far as we know, just as its mass is. It is associated with an angular momentum, but it is quantized, and the group of symmetries is not the same as the group of symmetries of spinning macroscopic objects. The latter is SO(3) and the former SU(2). The difference, roughly, is that in the second one, after a rotation of 360 degrees, you get to the initial state with a minus sign. – Prastt Jan 31 '13 at 15:06
– I understand that. I am wondering how the magnetic field results from this "spin". – Kenshin Jan 31 '13 at 15:06
– You cannot obtain the magnetic field as you would in the classical case, because charge-zero particles also have spin, so you cannot suppose that the charge is distributed in a tiny sphere and work out the magnetic field. – Prastt Jan 31 '13 at 15:46
– Well, the quantum dynamical description of the electron (spin-1/2 particles) is given by Dirac's equation. When you couple this with an external electromagnetic field (vector potential) you get a gauge theory. The equations of motion in that theory include a term which is the interaction energy of the magnetic field with the intrinsic magnetic moment of the particle. That term predicts the correct magnetic moment of an electron. I'm not going to repeat the calculation here because it is rather lengthy, but you can check a quantum mechanics book I guess. – Prastt Feb 1 '13 at 5:34

It is certainly possible for an electromagnetic field to spin, and this can be demonstrated by producing such a field in a cavity resonator.
The electromagnetic field in a resonant cavity is normally stationary but varying in amplitude, so that the changing magnitude of the electric field produces a varying-amplitude magnetic field, which in turn produces the varying electric field. When a spinning mode is produced, the amplitudes of the electric and magnetic fields are constant, but it is their rotation about the axis of the cavity which produces the time variation required to sustain the fields. The field equations for the spinning electromagnetic field can be derived from those of the conventional stationary field. They satisfy Maxwell's equations, and the Poynting vector can be shown to point in the direction of field spin. Computer modelling of the propagation of the fields using the FDTD (Finite Difference Time Domain) technique clearly shows them to be spinning. A practical experiment has also been done to confirm that measurements of the spinning electromagnetic fields are as predicted. Further details are available at http://mike2017.000webhostapp.com/ which includes field plots of the spinning fields.

Mike Allan

Spin corresponds to quantized angular momentum. However, a substantial fraction of the spin angular momentum of an electron is included in its surrounding electromagnetic field, where a nonzero Poynting vector does exist everywhere outside of its spin axis. This electron-bound Poynting vector corresponds to electromagnetic energy-momentum density circulating around the electron. The local magnetic field at a given point is given by the electron's dipole field, while the electrostatic field results from the Coulomb field of a point-like charge [1]. Please also note that neither an electrostatic field nor a magnetostatic field can rotate like a rigid body. This misconception would contradict Maxwell's and relativistic electrodynamics. See Spinning magnets and Jehle's model of the electron. Blackbody Blacklight

Realist753

Your statement "Maxwell's equations imply that magnetic fields are due to changes in electric fields." is not complete. A corrected statement is that Maxwell's equations imply that magnetic fields are due to changes in electric fields AND due to currents (which can be stationary):

$$ \nabla\times\mathbf{H} = \mathbf{J}+\partial\mathbf{D}/\partial t $$

As you can see, the magnetic field $\mathbf{H}$ has two "sources": the $\partial\mathbf{D}/\partial t$ part is due to varying electric fields as you said (where $\mathbf{D}$ is the electric displacement), but $\mathbf{J}$ is the part due to the free currents. This is why a coiled wire with a constant current running through it creates a magnetic field (without need of changing electric fields). In the case of the electron spin, this goes beyond my areas of knowledge, but according to my limited understanding of quantum mechanics, the magnetic field comes from the stationary particle current associated with the wave-function of the electron. So it is similar to the magnetic field arising from a coil of wire with a current. As a further related note: interestingly, Maxwell's equations apply to any inertial frame, so you could argue that an observer moving with respect to the electron will see a changing electric field (because the electron is moving), and this will create a magnetic field which apparently does not exist for a stationary observer.
This is because different observers will not agree on the electric and magnetic fields separately; however, they will agree on the existence of an electromagnetic tensor (which includes the electric and magnetic fields as its "parts"), and they will agree on the physical effects produced by it.

Francisco Rodríguez Fortuño
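As a small numerical illustration of the steady-current term $\mathbf{J}$ in the curl equation above (a sketch; the loop radius, current and turn count are made-up values, and the expression is the standard result for the field at the center of a circular current loop), no changing electric field is involved and the field is sustained entirely by the current:

import math

mu0 = 4 * math.pi * 1e-7                        # vacuum permeability, T*m/A

def loop_center_field(current_a, radius_m, turns=1):
    # Flux density at the center of a circular loop carrying a steady current.
    return mu0 * turns * current_a / (2 * radius_m)

print(loop_center_field(1.0, 0.05))             # ~1.3e-5 T for a single 1 A loop, 5 cm radius
print(loop_center_field(1.0, 0.05, turns=100))  # ~1.3e-3 T for a 100-turn coil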
A* search algorithm

Algorithm used for pathfinding and graph traversal. Worst-case performance: O(|E|) = O(b^d); worst-case space complexity: O(|V|) = O(b^d).

A* (pronounced "A-star") is a graph traversal and path search algorithm, which is often used in many fields of computer science due to its completeness, optimality, and optimal efficiency.[1] One major practical drawback is its O(b^d) space complexity, as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms which can pre-process the graph to attain better performance,[2] as well as memory-bounded approaches; however, A* is still the best solution in many cases.[3]

Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968.[4] It can be seen as an extension of Edsger Dijkstra's 1959 algorithm. A* achieves better performance by using heuristics to guide its search.

A* was invented by researchers working on Shakey the Robot's path planning. A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser algorithm[5] for Shakey's path planning.[6] Graph Traverser is guided by a heuristic function h(n), the estimated distance from node n to the goal node: it entirely ignores g(n), the distance from the start node to n. Bertram Raphael suggested using the sum, g(n) + h(n).[6] Peter Hart invented the concepts we now call admissibility and consistency of heuristic functions. A* was originally designed for finding least-cost paths when the cost of a path is the sum of its costs, but it has been shown that A* can be used to find optimal paths for any problem satisfying the conditions of a cost algebra.[7]

The original 1968 A* paper[4] contained a theorem stating that no A*-like algorithm[8] could expand fewer nodes than A* if the heuristic function is consistent and A*'s tie-breaking rule is suitably chosen. A "correction" was published a few years later[9] claiming that consistency was not required, but this was shown to be false in Dechter and Pearl's definitive study of A*'s optimality (now called optimal efficiency), which gave an example of A* with a heuristic that was admissible but not consistent expanding arbitrarily more nodes than an alternative A*-like algorithm.[10]

Description

A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until its termination criterion is satisfied. At each iteration of its main loop, A* needs to determine which of its paths to extend.
It does so based on the cost of the path and an estimate of the cost required to extend the path all the way to the goal. Specifically, A* selects the path that minimizes

f(n) = g(n) + h(n)

where n is the next node on the path, g(n) is the cost of the path from the start node to n, and h(n) is a heuristic function that estimates the cost of the cheapest path from n to the goal. A* terminates when the path it chooses to extend is a path from start to goal or if there are no paths eligible to be extended. The heuristic function is problem-specific. If the heuristic function is admissible, meaning that it never overestimates the actual cost to get to the goal, A* is guaranteed to return a least-cost path from start to goal.

Typical implementations of A* use a priority queue to perform the repeated selection of minimum (estimated) cost nodes to expand. This priority queue is known as the open set or fringe. At each step of the algorithm, the node with the lowest f(x) value is removed from the queue, the f and g values of its neighbors are updated accordingly, and these neighbors are added to the queue. The algorithm continues until a removed node (thus the node with the lowest f value out of all fringe nodes) is a goal node.[a] The f value of that goal is then also the cost of the shortest path, since h at the goal is zero in an admissible heuristic.

The algorithm described so far gives us only the length of the shortest path. To find the actual sequence of steps, the algorithm can be easily revised so that each node on the path keeps track of its predecessor. After this algorithm is run, the ending node will point to its predecessor, and so on, until some node's predecessor is the start node.

As an example, when searching for the shortest route on a map, h(x) might represent the straight-line distance to the goal, since that is physically the smallest possible distance between any two points. For a grid map from a video game, using the Manhattan distance or the octile distance becomes better depending on the set of movements available (4-way or 8-way).

If the heuristic h satisfies the additional condition h(x) ≤ d(x, y) + h(y) for every edge (x, y) of the graph (where d denotes the length of that edge), then h is called monotone, or consistent. With a consistent heuristic, A* is guaranteed to find an optimal path without processing any node more than once and A* is equivalent to running Dijkstra's algorithm with the reduced cost d'(x, y) = d(x, y) + h(y) − h(x).

Pseudocode

The following pseudocode describes the algorithm:

function reconstruct_path(cameFrom, current)
    total_path := {current}
    while current in cameFrom.Keys:
        current := cameFrom[current]
        total_path.prepend(current)
    return total_path

// A* finds a path from start to goal.
// h is the heuristic function. h(n) estimates the cost to reach goal from node n.
function A_Star(start, goal, h)
    // The set of discovered nodes that may need to be (re-)expanded.
    // Initially, only the start node is known.
    // This is usually implemented as a min-heap or priority queue rather than a hash-set.
    openSet := {start}

    // For node n, cameFrom[n] is the node immediately preceding it on the cheapest path from start
    // to n currently known.
    cameFrom := an empty map

    // For node n, gScore[n] is the cost of the cheapest path from start to n currently known.
    gScore := map with default value of Infinity
    gScore[start] := 0

    // For node n, fScore[n] := gScore[n] + h(n). fScore[n] represents our current best guess as to
    // how short a path from start to finish can be if it goes through n.
    fScore := map with default value of Infinity
    fScore[start] := h(start)

    while openSet is not empty
        // This operation can occur in O(1) time if openSet is a min-heap or a priority queue
        current := the node in openSet having the lowest fScore[] value
        if current = goal
            return reconstruct_path(cameFrom, current)

        openSet.Remove(current)
        for each neighbor of current
            // d(current,neighbor) is the weight of the edge from current to neighbor
            // tentative_gScore is the distance from start to the neighbor through current
            tentative_gScore := gScore[current] + d(current, neighbor)
            if tentative_gScore < gScore[neighbor]
                // This path to neighbor is better than any previous one. Record it!
                cameFrom[neighbor] := current
                gScore[neighbor] := tentative_gScore
                fScore[neighbor] := gScore[neighbor] + h(neighbor)
                if neighbor not in openSet
                    openSet.add(neighbor)

    // Open set is empty but goal was never reached
    return failure

Remark: In this pseudocode, if a node is reached by one path, removed from openSet, and subsequently reached by a cheaper path, it will be added to openSet again. This is essential to guarantee that the path returned is optimal if the heuristic function is admissible but not consistent. If the heuristic is consistent, when a node is removed from openSet the path to it is guaranteed to be optimal so the test 'tentative_gScore < gScore[neighbor]' will always fail if the node is reached again.

Illustration of A* search for finding path from a start node to a goal node in a robot motion planning problem. The empty circles represent the nodes in the open set, i.e., those that remain to be explored, and the filled ones are in the closed set. Color on each closed node indicates the distance from the start: the greener, the closer. One can first see the A* moving in a straight line in the direction of the goal, then when hitting the obstacle, it explores alternative routes through the nodes from the open set.

See also: Dijkstra's algorithm

Example

An example of an A* algorithm in action where nodes are cities connected with roads and h(x) is the straight-line distance to the target point. Key: green: start; blue: goal; orange: visited.

The A* algorithm also has real-world applications. In this example, edges are railroads and h(x) is the great-circle distance (the shortest possible distance on a sphere) to the target. The algorithm is searching for a path between Washington, D.C. and Los Angeles.

Implementation details

There are a number of simple optimizations or implementation details that can significantly affect the performance of an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations. If ties are broken so the queue behaves in a LIFO manner, A* will behave like depth-first search among equal cost paths (avoiding exploring more than one equally optimal solution).

When a path is required at the end of the search, it is common to keep with each node a reference to that node's parent. At the end of the search these references can be used to recover the optimal path. If these references are being kept then it can be important that the same node doesn't appear in the priority queue more than once (each entry corresponding to a different path to the node, and each with a different cost).
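To make the pseudocode above concrete, here is a minimal runnable Python version (a sketch, not taken from the article itself): it uses heapq as the priority queue and, instead of an explicit decrease-priority operation, simply pushes a node again and skips stale entries when they are popped. The graph interface, edge weights and heuristic values are illustrative.

import heapq
import itertools

def reconstruct_path(came_from, current):
    # Walk predecessor links back to the start node.
    path = [current]
    while current in came_from:
        current = came_from[current]
        path.append(current)
    return path[::-1]

def a_star(neighbors, d, h, start, goal):
    # neighbors(n): iterable of successors of n
    # d(a, b): weight of edge a -> b;  h(n): heuristic estimate from n to the goal
    g = {start: 0}
    came_from = {}
    counter = itertools.count()                 # tie-breaker so the heap never compares nodes
    open_heap = [(h(start), next(counter), start)]
    while open_heap:
        f, _, current = heapq.heappop(open_heap)
        if current == goal:
            return g[current], reconstruct_path(came_from, current)
        if f > g[current] + h(current):         # stale entry; a cheaper path was found later
            continue
        for nb in neighbors(current):
            tentative = g[current] + d(current, nb)
            if tentative < g.get(nb, float("inf")):
                came_from[nb] = current
                g[nb] = tentative
                heapq.heappush(open_heap, (tentative + h(nb), next(counter), nb))
    return None                                 # open set exhausted without reaching the goal

# Toy graph: edge weights and heuristic estimates are made up (the heuristic is admissible here).
edges = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
h_est = {"A": 2, "B": 1, "C": 1, "D": 0}
cost, path = a_star(lambda n: edges[n], lambda a, b: edges[a][b],
                    lambda n: h_est[n], "A", "D")
print(cost, path)                               # 3 ['A', 'B', 'C', 'D']

Passing a heuristic that always returns 0 makes this behave like Dijkstra's algorithm, in line with the special case discussed below.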
A standard approach here is to check if a node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are changed to correspond to the lower cost path. A standard binary heap based priority queue does not directly support the operation of searching for one of its elements, but it can be augmented with a hash table that maps elements to their position in the heap, allowing this decrease-priority operation to be performed in logarithmic time. Alternatively, a Fibonacci heap can perform the same decrease-priority operations in constant amortized time.

Special cases

Dijkstra's algorithm, as another example of a uniform-cost search algorithm, can be viewed as a special case of A* where h(x) = 0 for all x.[11][12] General depth-first search can be implemented using A* by considering that there is a global counter C initialized with a very large value. Every time we process a node we assign C to all of its newly discovered neighbors. After each single assignment, we decrease the counter C by one. Thus the earlier a node is discovered, the higher its h(x) value. Both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an h(x) value at each node.

Properties

Termination and Completeness

On finite graphs with non-negative edge weights A* is guaranteed to terminate and is complete, i.e. it will always find a solution (a path from start to goal) if one exists. On infinite graphs with a finite branching factor and edge costs that are bounded away from zero (d(x, y) > ε > 0 for some fixed ε), A* is guaranteed to terminate only if there exists a solution.

Admissibility

A search algorithm is said to be admissible if it is guaranteed to return an optimal solution. If the heuristic function used by A* is admissible, then A* is admissible. An intuitive "proof" of this is as follows: When A* terminates its search, it has found a path from start to goal whose actual cost is lower than the estimated cost of any path from start to goal through any open node (the node's f value). When the heuristic is admissible, those estimates are optimistic (not quite; see the next paragraph), so A* can safely ignore those nodes because they cannot possibly lead to a cheaper solution than the one it already has. In other words, A* will never overlook the possibility of a lower-cost path from start to goal and so it will continue to search until no such possibilities exist.

The actual proof is a bit more involved because the f values of open nodes are not guaranteed to be optimistic even if the heuristic is admissible. This is because the g values of open nodes are not guaranteed to be optimal, so the sum g + h is not guaranteed to be optimistic.

Optimal efficiency

Algorithm A is optimally efficient with respect to a set of alternative algorithms Alts on a set of problems P if for every problem P in P and every algorithm A′ in Alts, the set of nodes expanded by A in solving P is a subset (possibly equal) of the set of nodes expanded by A′ in solving P.
The definitive study of the optimal efficiency of A* is due to Rina Dechter and Judea Pearl.[10] They considered a variety of definitions of Alts and P in combination with A*'s heuristic being merely admissible or being both consistent and admissible. The most interesting positive result they proved is that A*, with a consistent heuristic, is optimally efficient with respect to all admissible A*-like search algorithms on all ″non-pathological″ search problems. Roughly speaking, their notion of non-pathological problem is what we now mean by ″up to tie-breaking″. This result does not hold if A*'s heuristic is admissible but not consistent. In that case, Dechter and Pearl showed there exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A* on some non-pathological problems. Optimal efficiency is about the set of nodes expanded, not the number of node expansions (the number of iterations of A*'s main loop). When the heuristic being used is admissible but not consistent, it is possible for a node to be expanded by A* many times, an exponential number of times in the worst case.[13] In such circumstances Dijkstra's algorithm could outperform A* by a large margin. Bounded relaxation[edit] A* search that uses a heuristic that is 5.0(=ε) times a consistent heuristic, and obtains a suboptimal path. While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. To compute approximate shortest paths, it is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Oftentimes we want to bound this relaxation, so that we can guarantee that the solution path is no worse than (1 + ε) times the optimal solution path. This new guarantee is referred to as ε-admissible. There are a number of ε-admissible algorithms: Weighted A*/Static Weighting.[14] If ha(n) is an admissible heuristic function, in the weighted version of the A* search one uses hw(n) = ε ha(n), ε > 1 as the heuristic function, and perform the A* search as usual (which eventually happens faster than using ha since fewer nodes are expanded). The path hence found by the search algorithm can have a cost of at most ε times that of the least cost path in the graph.[15] Dynamic Weighting[16] uses the cost function f ( n ) = g ( n ) + ( 1 + ε w ( n ) ) h ( n ) {\displaystyle f(n)=g(n)+(1+\varepsilon w(n))h(n)} , where w ( n ) = { 1 − d ( n ) N d ( n ) ≤ N 0 otherwise {\displaystyle w(n)={\begin{cases}1-{\frac {d(n)}{N}}&d(n)\leq N\\0&{\text{otherwise}}\end{cases}}} , and where d ( n ) {\displaystyle d(n)} is the depth of the search and N is the anticipated length of the solution path. Sampled Dynamic Weighting[17] uses sampling of nodes to better estimate and debias the heuristic error. A ε ∗ {\displaystyle A_{\varepsilon }^{*}} .[18] uses two heuristic functions. The first is the FOCAL list, which is used to select candidate nodes, and the second hF is used to select the most promising node from the FOCAL list. Aε[19] selects nodes with the function A f ( n ) + B h F ( n ) {\displaystyle Af(n)+Bh_{F}(n)} , where A and B are constants. If no nodes can be selected, the algorithm will backtrack with the function C f ( n ) + D h F ( n ) {\displaystyle Cf(n)+Dh_{F}(n)} , where C and D are constants. AlphA*[20] attempts to promote depth-first exploitation by preferring recently expanded nodes. 
AlphA* uses the cost function f α ( n ) = ( 1 + w α ( n ) ) f ( n ) {\displaystyle f_{\alpha }(n)=(1+w_{\alpha }(n))f(n)} , where w α ( n ) = { λ g ( π ( n ) ) ≤ g ( n ~ ) Λ otherwise {\displaystyle w_{\alpha }(n)={\begin{cases}\lambda &g(\pi (n))\leq g({\tilde {n}})\\\Lambda &{\text{otherwise}}\end{cases}}} , where λ and Λ are constants with λ ≤ Λ {\displaystyle \lambda \leq \Lambda } , π(n) is the parent of n, and ñ is the most recently expanded node. Complexity[edit] The time complexity of A* depends on the heuristic. In the worst case of an unbounded search space, the number of nodes expanded is exponential in the depth of the solution (the shortest path) d: O(bd), where b is the branching factor (the average number of successors per state).[21] This assumes that a goal state exists at all, and is reachable from the start state; if it is not, and the state space is infinite, the algorithm will not terminate. The heuristic function has a major effect on the practical performance of A* search, since a good heuristic allows A* to prune away many of the bd nodes that an uninformed search would expand. Its quality can be expressed in terms of the effective branching factor b*, which can be determined empirically for a problem instance by measuring the number of nodes generated by expansion, N, and the depth of the solution, then solving[22] N + 1 = 1 + b ∗ + ( b ∗ ) 2 + ⋯ + ( b ∗ ) d . {\displaystyle N+1=1+b^{*}+(b^{*})^{2}+\dots +(b^{*})^{d}.} Good heuristics are those with low effective branching factor (the optimal being b* = 1). The time complexity is polynomial when the search space is a tree, there is a single goal state, and the heuristic function h meets the following condition: | h ( x ) − h ∗ ( x ) | = O ( log ⁡ h ∗ ( x ) ) {\displaystyle |h(x)-h^{*}(x)|=O(\log h^{*}(x))} where h* is the optimal heuristic, the exact cost to get from x to the goal. In other words, the error of h will not grow faster than the logarithm of the "perfect heuristic" h* that returns the true distance from x to the goal.[15][21] The space complexity of A* is roughly the same as that of all other graph search algorithms, as it keeps all generated nodes in memory.[23] In practice, this turns out to be the biggest drawback of A* search, leading to the development of memory-bounded heuristic searches, such as Iterative deepening A*, memory bounded A*, and SMA*. Applications[edit] A* is often used for the common pathfinding problem in applications such as video games, but was originally designed as a general graph traversal algorithm.[4] It finds applications in diverse problems, including the problem of parsing using stochastic grammars in NLP.[24] Other cases include an Informational search with online learning.[25] Relations to other algorithms[edit] What sets A* apart from a greedy best-first search algorithm is that it takes the cost/distance already traveled, g(n), into account. 
Some common variants of Dijkstra's algorithm can be viewed as a special case of A* where the heuristic h ( n ) = 0 {\displaystyle h(n)=0} for all nodes;[11][12] in turn, both Dijkstra and A* are special cases of dynamic programming.[26] A* itself is a special case of a generalization of branch and bound.[27] Variants[edit] Anytime A*[28] Block A* Field D* Fringe Saving A* (FSA*) Generalized Adaptive A* (GAA*) Incremental heuristic search Iterative deepening A* (IDA*) Jump point search Lifelong Planning A* (LPA*) New Bidirectional A* (NBA*)[29] Simplified Memory bounded A* (SMA*) Theta* A* can also be adapted to a bidirectional search algorithm. Special care needs to be taken for the stopping criterion.[30] Breadth-first search Depth-first search Any-angle path planning, search for paths that are not limited to move along graph edges but rather can take on any angle ^ Goal nodes may be passed over multiple times if there remain other nodes with lower f values, as they may lead to a shorter path to a goal. ^ Russell, Stuart J. (2018). Artificial intelligence a modern approach. Norvig, Peter (4th ed.). Boston: Pearson. ISBN 978-0134610993. OCLC 1021874142. ^ Delling, D.; Sanders, P.; Schultes, D.; Wagner, D. (2009). "Engineering Route Planning Algorithms". Algorithmics of Large and Complex Networks: Design, Analysis, and Simulation. Lecture Notes in Computer Science. 5515. Springer. pp. 11个$7–139. CiteSeerX 10.1.1.164.8916. doi:10.1007/978-3-642-02094-0_7. ISBN 978-3-642-02093-3. ^ Zeng, W.; Church, R. L. (2009). "Finding shortest paths on real road networks: the case for A*". International Journal of Geographical Information Science. 23 (4): 531–543. doi:10.1080/13658810801949850. S2CID 14833639. ^ a b c Hart, P. E.; Nilsson, N. J.; Raphael, B. (1968). "A Formal Basis for the Heuristic Determination of Minimum Cost Paths". IEEE Transactions on Systems Science and Cybernetics. 4 (2): 100–107. doi:10.1109/TSSC.1968.300136. ^ Doran, J. E.; Michie, D. (1966-09-20). "Experiments with the Graph Traverser program". Proc. R. Soc. Lond. A. 294 (1437): 235–259. Bibcode:1966RSPSA.294..235D. doi:10.1098/rspa.1966.0205. ISSN 0080-4630. S2CID 21698093. ^ a b Nilsson, Nils J. (2009-10-30). The Quest for Artificial Intelligence (PDF). Cambridge: Cambridge University Press. ISBN 9780521122931. ^ Edelkamp, Stefan; Jabbar, Shahid; Lluch-Lafuente, Alberto (2005). "Cost-Algebraic Heuristic Search" (PDF). Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI): 1362–1367. ^ "A*-like" means the algorithm searches by extending paths originating at the start node one edge at a time, just as A* does. This excludes, for example, algorithms that search backward from the goal or in both directions simultaneously. In addition, the algorithms covered by this theorem must be admissible and "not more informed" than A*. ^ Hart, Peter E.; Nilsson, Nils J.; Raphael, Bertram (1972-12-01). "Correction to 'A Formal Basis for the Heuristic Determination of Minimum Cost Paths'" (PDF). ACM SIGART Bulletin (37): 28–29. doi:10.1145/1056777.1056779. ISSN 0163-5719. S2CID 6386648. ^ a b Dechter, Rina; Judea Pearl (1985). "Generalized best-first search strategies and the optimality of A*". Journal of the ACM. 32 (3): 505–536. doi:10.1145/3828.3830. S2CID 2092415. ^ a b De Smith, Michael John; Goodchild, Michael F.; Longley, Paul (2007), Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software Tools, Troubadour Publishing Ltd, p. 344, ISBN 9781905886609 . 
^ a b Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 214, ISBN 9781430232377 . ^ Martelli, Alberto (1977). "On the Complexity of Admissible Search Algorithms". Artificial Intelligence. 8 (1): 1–13. doi:10.1016/0004-3702(77)90002-9. ^ Pohl, Ira (1970). "First results on the effect of error in heuristic search". Machine Intelligence. 5: 219–236. ^ a b Pearl, Judea (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley. ISBN 978-0-201-05594-8. ^ Pohl, Ira (August 1973). "The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic weighting and computational issues in heuristic problem solving" (PDF). Proceedings of the Third International Joint Conference on Artificial Intelligence (IJCAI-73). 3. California, USA. pp. 11–17. ^ Köll, Andreas; Hermann Kaindl (August 1992). "A new approach to dynamic weighting". Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92). Vienna, Austria. pp. 16–17. ^ Pearl, Judea; Jin H. Kim (1982). "Studies in semi-admissible heuristics". IEEE Transactions on Pattern Analysis and Machine Intelligence. 4 (4): 392–399. doi:10.1109/TPAMI.1982.4767270. PMID 21869053. S2CID 3176931. ^ Ghallab, Malik; Dennis Allard (August 1983). "Aε – an efficient near admissible heuristic search algorithm" (PDF). Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83). 2. Karlsruhe, Germany. pp. 789–791. Archived from the original (PDF) on 2014-08-06. ^ Reese, Bjørn (1999). "AlphA*: An ε-admissible heuristic search algorithm". Archived from the original on 2016-01-31. Retrieved 2014-11-05. Cite journal requires |journal= (help) ^ a b Russell, Stuart; Norvig, Peter (2003) [1995]. Artificial Intelligence: A Modern Approach (2nd ed.). Prentice Hall. pp. 97–104. ISBN 978-0137903955. ^ Russell, Stuart; Norvig, Peter (2009) [1995]. Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. p. 103. ISBN 978-0-13-604259-4. ^ Klein, Dan; Manning, Christopher D. (2003). A* parsing: fast exact Viterbi parse selection. Proc. NAACL-HLT. ^ Kagan E. and Ben-Gal I. (2014). "A Group-Testing Algorithm with Online Informational Learning" (PDF). IIE Transactions, 46:2, 164-184. Cite journal requires |journal= (help) ^ Ferguson, Dave; Likhachev, Maxim; Stentz, Anthony (2005). A Guide to Heuristic-based Path Planning (PDF). Proc. ICAPS Workshop on Planning under Uncertainty for Autonomous Systems. ^ Nau, Dana S.; Kumar, Vipin; Kanal, Laveen (1984). "General branch and bound, and its relation to A∗ and AO∗" (PDF). Artificial Intelligence. 23 (1): 29–58. doi:10.1016/0004-3702(84)90004-3. ^ Hansen, Eric A., and Rong Zhou. "Anytime Heuristic Search." J. Artif. Intell. Res.(JAIR) 28 (2007): 267-297. ^ Pijls, Wim; Post, Henk "Yet another bidirectional algorithm for shortest paths" In Econometric Institute Report EI 2009-10/Econometric Institute, Erasmus University Rotterdam. Erasmus School of Economics. ^ "Efficient Point-to-Point Shortest Path Algorithms" (PDF). Cite journal requires |journal= (help) from Princeton University Nilsson, N. J. (1980). Principles of Artificial Intelligence. Palo Alto, California: Tioga Publishing Company. ISBN 978-0-935382-01-3. 
Clear visual A* explanation, with advice and thoughts on path-finding
Variation on A* called Hierarchical Path-Finding A* (HPA*)
What would the equilibrium temperature be at the poles in a world without seasonality?

Inspired by: How does Antarctica stay frozen?

If the Earth was in a fixed solstice state - northern winter and southern summer (e.g. the axis obliquity rotated with the Earth's orbit), what would the equilibrium temperature be at each pole? Assume daily cycle and convection and so on operate as usual, just that it's permanent midnight in the Arctic, and permanent midday in the Antarctic.

Bonus questions: Would the equilibrium temperatures be significantly different if it was northern summer and southern winter (e.g. would land masses and orography have much effect, relative to the solar imbalance)?

hypothetical poles axial-obliquity seasons

naught101

You would have to run a climate model while forcing the sunlight from the solstice state. – Communisty Sep 19 '18 at 11:22

If we take "Assume daily cycle and convection and so on operate as usual" to mean that all heat transport from/to the pole remains as it is today, then we can do a back-of-the-envelope calculation. This calculation will at least give you an order-of-magnitude answer, and we can then consider everything that would affect the result.

Currently both poles radiate more energy than they receive through direct solar irradiation. This is because they are colder than the rest of the planet. Therefore, there is a net heat transport from the equatorial latitudes to the poles. That heat travels on atmospheric and oceanic currents. This figure nicely summarizes this radiation balance:

Figure from here ©The COMET Program

If we consider that the heat transport depicted at the bottom of the figure remains the same, the only change in the energy budget would be from solar irradiation, and once the poles start warming the only way to release the additional heat and reach equilibrium would be through infrared radiation (black body radiation). The poles receive zero energy on the equinox and the winter solstice and 12.64 kWh/m$^2$ per day on the summer solstice. Therefore a rough yearly average would be 3.16 kWh/m$^2$ per day. So, if we were locked in a summer solstice state you would receive four times more energy, and none if locked in a winter solstice state.

Summer solstice locked pole

Let's consider the summer solstice case first. Call $E_1$ the current amount of outgoing radiation from the pole, and $T_1$ the current temperature. The Stefan–Boltzmann law states that

$E_1 = \sigma T_1^4$

Analogously, if $E_2$ and $T_2$ represent the summer-solstice-locked state, you have $E_2 = \sigma T_2^4$. Considering that

$E_2 = 4 E_1$

you can write that

$4 = \frac{T_2^4}{T_1^4}$

$T_2 = 4^{1/4}\, T_1 \approx 1.41\, T_1$

Therefore the temperature would increase by 41%. It doesn't sound like much, but considering that the temperatures above are in Kelvin, it is a lot! In fact, with the current mean temperature at the north pole of -16 °C (257 K), it would go up to +89 °C!! And the south pole would go from the current -49 °C (224 K) to +42 °C. This diverges from any possible reality, because with 89 °C at the north pole there would not be a net influx of energy from the equatorial latitudes as we are assuming. Instead, atmospheric and oceanic currents would take a lot of that heat down south and the temperature would equilibrate at a much lower value. Note that in the above calculation I've ignored the energy flux due to advection in atmospheric/oceanic currents.
I did so because that flux is about five times smaller than the flux from solar radiation, so it won't change the results much and ignoring it simplifies the math. You can do the calculation considering those flows if you want.

Winter solstice locked pole

In the case of a pole locked to a winter solstice, there would be no incoming solar radiation at all. Therefore, all incoming energy would be transported by currents from equatorial latitudes. In that way, we can see in the above figure that the north pole receives about 150 W/m$^2$, and the south pole about 100 W/m$^2$. The equilibrium temperature would be reached when they radiate the same amount. So, using the Stefan–Boltzmann law again we can do some rough calculations. Let's call the south pole temperature $T_{SP}$ and the north pole temperature $T_{NP}$. Then:

$T_{SP} = \left(\frac{E_{SP}}{\sigma}\right)^{1/4} = \left(\frac{100 \,W/m^2}{5.67\times 10^{-8}}\right)^{1/4} = 205\ K = -68°C$

And for the north pole

$T_{NP} = \left(\frac{E_{NP}}{\sigma}\right)^{1/4} = \left(\frac{150 \,W/m^2}{5.67\times 10^{-8}}\right)^{1/4} = 227\ K = -46°C$

But again, with such cold temperatures the actual inflow of energy would be greater and the equilibrium temperature would not be that extreme. In any case, that condition would be favourable to the growth of a massive ice sheet in the corresponding polar area, in the same way I describe in this question and answer.

As a final thought: for this condition to happen in real life you would need something analogous to tidal locking to lock the period of Earth's precession (currently about 26,000 years) to exactly one year. And I don't know of any mechanism that could achieve that.

Camilo Rada
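A quick numerical check of the black-body estimates above (a sketch using only the fluxes quoted in the answer; SIGMA is the Stefan–Boltzmann constant):

SIGMA = 5.67e-8                       # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp_kelvin(absorbed_flux_w_m2):
    # Temperature at which a black body radiates exactly the absorbed flux.
    return (absorbed_flux_w_m2 / SIGMA) ** 0.25

print(equilibrium_temp_kelvin(100))   # ~205 K, winter-locked south pole
print(equilibrium_temp_kelvin(150))   # ~227 K, winter-locked north pole
print(4 ** 0.25)                      # ~1.41, temperature factor for the summer-locked pole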
CADD-Splice—improving genome-wide variant effect prediction using deep learning-derived splice scores Philipp Rentzsch1,2, Max Schubach1,2, Jay Shendure3,4 & Martin Kircher ORCID: orcid.org/0000-0001-9278-54711,2 Splicing of genomic exons into mRNAs is a critical prerequisite for the accurate synthesis of human proteins. Genetic variants impacting splicing underlie a substantial proportion of genetic disease, but are challenging to identify beyond those occurring at donor and acceptor dinucleotides. To address this, various methods aim to predict variant effects on splicing. Recently, deep neural networks (DNNs) have been shown to achieve better results in predicting splice variants than other strategies. It has been unclear how best to integrate such process-specific scores into genome-wide variant effect predictors. Here, we use a recently published experimental data set to compare several machine learning methods that score variant effects on splicing. We integrate the best of those approaches into general variant effect prediction models and observe the effect on classification of known pathogenic variants. We integrate two specialized splicing scores into CADD (Combined Annotation Dependent Depletion; cadd.gs.washington.edu), a widely used tool for genome-wide variant effect prediction that we previously developed to weight and integrate diverse collections of genomic annotations. With this new model, CADD-Splice, we show that inclusion of splicing DNN effect scores substantially improves predictions across multiple variant categories, without compromising overall performance. While splice effect scores show superior performance on splice variants, specialized predictors cannot compete with other variant scores in general variant interpretation, as the latter account for nonsense and missense effects that do not alter splicing. Although only shown here for splice scores, we believe that the applied approach will generalize to other specific molecular processes, providing a path for the further improvement of genome-wide variant effect prediction. One of the key steps involved in the regulation of eukaryotic gene expression is RNA splicing, the transformation of transcribed pre-mRNA into translatable mRNA through the removal of intronic sequences. While variations of this process have been described [1], the principal mechanism of RNA splicing is that the branchpoint located in the spliced intron binds to the 5′-donor site (relative to the intron), forming a lariat intermediate. The 3′-donor site binds to the acceptor and connects the two exons, thereby releasing the intron. At some genes, multiple acceptor or donor sites compete, such that multiple different alternative transcripts can be formed from one gene, i.e., alternative splicing [2]. Various studies show that more than 90% [3, 4] of genes with multiple exons undergo alternative splicing, i.e., not all exons are included in every transcript. For each exon or exon segment, the quantity "percent spliced-in" (psi) is defined as the relative fraction of transcripts this segment is included in [5]. Exons with high psi values are associated with stronger conservation and depletion of loss-of-function variation [6]. The dynamics of both canonical and alternative splicing can be influenced or disrupted by genomic sequence variation. 
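For concreteness, the percent-spliced-in value of an exon segment is often computed from RNA-seq counts as (one standard formulation, not specific to the data sets discussed here):

$$\psi = \frac{\text{reads supporting inclusion of the segment}}{\text{reads supporting inclusion} + \text{reads supporting exclusion}}$$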
Variants disrupting splicing are established contributors to rare genetic disease and more generally variants modulating splicing substantially contribute to phenotypic variation with respect to common traits and disease risk [7,8,9,10]. However, splicing is just one of many biological processes that can be impacted by genetic variants, with others including protein function, distal and proximal regulation of cell type-specific transcription, transcript stability, and DNA replication. Given millions of variants in a human genome [11] and myriad molecular processes through which each variant might act, pinpointing the genetic changes causal for a specific phenotype down to a set or single variant remains difficult. To address this, the field increasingly relies on automated approaches to prioritize causal variants. While some predictors specialize on certain variant categories (e.g., synonymous [12] or missense effects [13, 14]) or classes (e.g., SNVs [15] or InDels [16, 17]), others take features from different biological processes into account and enable variant interpretation across the genome. Both process-specific and genome-wide approaches to variant effect prediction have distinct advantages, and it has been challenging to reconcile them into a maximally effective approach. A number of genome-wide scores predict variant effects from sequence alone [18, 19]; most however use annotations and genomic features defined based on experimental assays, simulations, and statistical analyses thereof [12, 20,21,22]. A common approach is to train machine learning classifiers to distinguish between two defined classes of variants (e.g., pathogenic and benign) using selected features. Such models can be trained via various techniques of machine learning, e.g., logistic regression, boosting trees, support vector machines, or deep learning. A general variant scoring tool that we previously developed is Combined Annotation Dependent Depletion [20, 23] (CADD), a logistic regression model that is trained on more than 15 million evolutionary derived variants (proxy-benign) and a matching set of simulated variants (proxy-deleterious). This approach has advantages over using known sets of pathogenic and benign variants. Firstly, the CADD training set is much larger, covering diverse genomic regions and even rare feature annotations. Secondly, it does not suffer from the many different ascertainment effects that come with historic and on-going selection [24] of small but well-characterized variant sets. Therefore, it leverages a high number of features and does not easily overfit. While existing variant effect prediction scores already proved very helpful in detecting deleterious mutations genome-wide, multiple studies showed limited specificity for predicting splice-altering variants [10, 25, 26]. Even though conservation scores like PhastCons [27] or PhyloP [28], a major feature of many effect predictions, are better than random in intronic regions [26], specialized scores show improved performance and are necessary to successfully predict splice variants residing within exonic regions. There are a number of specialized scores for predicting splice changes [29], trained using different types of machine learning [30], including decision tree [31,32,33,34], probabilistic [35], and kmer-based [36, 37] models. The first generation of splicing scores, like MaxEntScan [35], focuses on the immediate neighborhood of splice junctions, as most splicing variants have been found in these regions [30]. 
In the last few years, more distal splicing regulatory elements have been taken into account [31, 32, 34]. Recently, deep neural networks (DNNs) achieved good results on predicting splice variants genome-wide. While the idea of using neural networks for splice predictions is more than two decades old [38], the first tool to leverage the recent progress in deep learning technology was SPANR/Spidex [39], which is trained on experimentally observed exon skipping events and predicts exon inclusion percentages based on genomic features. Instead of using predefined features, two recent tools (MMSplice [40] and SpliceAI [41]) are limited to genomic sequence as input for their prediction. In order to study a large number of RNA splice-altering variants, Cheung et al. [26] developed a highly parallel reporter assay, called Multiplexed Functional Assay of Splicing using Sort-seq (MFASS, Fig. 1). The MFASS experiment used a minigene reporter assay to investigate 27,000 human population variants obtained from ExAC [42] for their impact on RNA splicing. In their analysis, the authors note that while immediate splice site variants are most important, many variants further away in the intronic and exonic sequence lead to deviation from the reference splicing behavior [26]. Due to its high number of exonic and intronic variants from over 2000 different exons tested, this data set represents a comprehensive resource for benchmarking splicing predictions. Here we present a computational analysis that leverages the MFASS data set. First, we assess several machine learning methods that score variant splicing effects. Next, we integrate the two best performing approaches into our genome-wide variant prioritization tool CADD. Finally, we show that the refined CADD model "CADD-Splice" has substantially improved performance for predicting splicing and multiple other variant categories. As process-specific information should generally improve variant prioritization, our results underline the importance of developing and integrating process-specific scores. Benchmarking available splice predictions on the MFASS data set. We use the Multiplexed Functional Assay of Splicing using Sort-seq (MFASS) data set to benchmark different available splice effect predictors. MFASS studied splicing effects of more than 27,000 human exonic and intronic variants by creating a synthetic library of the respective exons (or nearest exon for intronic variants) between two GFP exons. The genome integrated sequences are transcribed and it is observed how much each exon is spliced in or out of the reporter mRNAs through RNA-seq. Changes in the percent spliced-in (psi) between reference and alternative sequence alleles are used to identify splice disrupting variants (sdv). We analyze how well different machine learning models distinguish between sdv and no-sdv variants MFASS reporter assay data set of splicing effects The MFASS [26] data set was downloaded from GitHub (https://github.com/KosuriLab/MFASS/). The data set was split into intronic (n = 13,603) and exonic (n = 14,130) variants as defined by Cheung et al. [26]. Further, the data set was split into splice-disrupting variants (sdv, n = 1050) and variants that do not disrupt splicing (no-sdv, n = 26,683) based on whether the psi ratio of the tested exon changed by more than 0.5 (Δpsi > 0.5). We explored additional thresholds at 0.7, 0.3, and 0.1, as well as using only variants with Δpsi > 0.5 for the sdv set and variants with Δpsi < 0.1 for the no-sdv set. 
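The sdv/no-sdv split described above amounts to thresholding the change in percent spliced-in; a minimal sketch in Python/pandas (file and column names are illustrative, not taken from the MFASS release):

import numpy as np
import pandas as pd

# One row per tested variant with its measured change in percent spliced-in (delta_psi).
variants = pd.read_csv("mfass_variants.tsv", sep="\t")   # hypothetical file name
variants["class"] = np.where(variants["delta_psi"].abs() > 0.5, "sdv", "no-sdv")
print(variants["class"].value_counts())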
In performance comparisons, the number of variants is slightly reduced as only variants were included for which all tested scores are defined. Psi values were downloaded in natural scale with the MFASS data set. Predictors of splice effects dbscSNV v1.1 scores [33] were downloaded at https://sites.google.com/site/jpopgen/dbNSFP. The dbscSNV random forest model is shown in performance comparisons. CADD started integrating the two dbscSNV models (random forest and AdaBoost) in version 1.4. Hexamere HAL [37] scores were generated using HAL model scripts from Kipoi [43]. HAL scores including percent spliced-in (psi) were downloaded with the MFASS data set, originally obtained via the HAL website http://splicing.cs.washington.edu/ for exon skipping variants by the MFASS authors [26]. S-CAP [32] (v1.0) scores were downloaded from http://bejerano.stanford.edu/scap/. All eight S-CAP scores were combined into one score by taking the maximum per variant. Where specifically indicated and per S-CAP definition, variants without precalculated score were imputed as benign (S-CAP score = 0). Spidex [39] (v1.0, noncommercial) scores were downloaded from http://assets.deepgenomics.com/spidex_public_noncommercial_v1_0.tar. MMSplice [40] scores were generated via the script (v1.0.2) installed from pypi. The exon-intron boundaries were provided as GTF gene annotation file downloaded from Ensembl [44] v95. The script provides model scores of the sequence with reference allele and with alternative allele for five submodels (acceptor, acceptor intron, exon, donor, and donor intron). The script also provides the composite linear models' delta_logit_psi and pathogenicity that summarize the five submodels in one metric. delta_logit_psi scores were used in performance comparisons. Pre-scored SpliceAI [41] v1.3 scores were downloaded from Illumina BaseSpace. For larger InDels unavailable from precomputed scores, the variant scores were computed via an adapted version of the SpliceAI scripts version 1.3 (https://github.com/Illumina/SpliceAI/) that is able to integrate scores from pre-scored files in order to enable faster scoring. In comparisons of SpliceAI with other scores, all four SpliceAI models were combined into a single score by using the maximum score for a variant. A combined score of MMSplice and SpliceAI, MMAI, was defined for evaluation on the MFASS data set. To give equal weight to both MMSplice and SpliceAI, scores were divided by their respective standard deviation across all MFASS variants (MMSplice 0.5291, SpliceAI 0.1206) and the normalized scores added. For SpliceAI, the maximum score across all SpliceAI submodels was used and for MMSplice delta_logit_psi. Similarly, MMAIpsi was defined by including normalized "percent spliced-in" as measured for the reference allele in the MFASS data set (standard deviation of 0.0622 across all MFASS variants). We explored "proportion expressed across transcripts" (pext) [6] (version February 27, 2019) as a predictor of splice site importance. Values were downloaded from the gnomAD server and archived for reproducibility at https://doi.org/10.5281/zenodo.4447230. For intronic variants, the pext value of the closest exon is used. Integration of SpliceAI and MMSplice features in CADD SpliceAI and MMSplice (see above) were adapted as features into CADD. For SpliceAI, all four SpliceAI submodels for 10 kb sequence windows were integrated as separate annotations. 
In both training data set and final scoring, predicted splice gains at annotated splice sites and predicted splice loss outside of annotated splice sites were set to 0 (for donor and acceptor sites). This was previously described for SpliceAI [41] and has been referred to as masking. We relied on precomputed SpliceAI scores as genome-wide scoring from sequence was too computationally expensive. Since models require the reference base of a variant to match the human reference, variants of the proxy-benign CADD training data set (human-derived variants) were scored with reference and alternative alleles reversed. To adjust for this, gain and loss model scores were swapped for donor and acceptor, and masking was applied after the swap as described above. For MMSplice, all five submodels were integrated as separate annotations. MMSplice provides only scores for variants where the reference matches the genome reference. In case of the proxy-deleterious class of simulated variants as well as in scoring applications of the CADD model, the reference score was subtracted from the alternative score, as described by the authors. In the proxy-benign class, the alternative score was subtracted from the reference score. For all MMSplice submodels, positive score differences were set to 0. For variants annotated with multiple different consequence predictions as annotated by Ensembl VEP, both MMSplice and SpliceAI scores were limited to the consequence of the same gene. All variants not annotated by MMSplice or SpliceAI were imputed as 0. All nine MMSplice annotations and SpliceAI submodels for 10 kb sequence windows were further included in a feature cross with the consequence annotation (see "Summary of CADD v1.6 models" below). ClinVar pathogenic vs. gnomAD common variants ClinVar [45] was downloaded from https://ftp.ncbi.nlm.nih.gov/pub/clinvar/ (April 20, 2020). "pathogenic" variants were selected from the database based on the assignment of "Variant Clinical Significance", excluding variants with multiple assignments. gnomAD [46] variants (version 2.1.1, 229 million single nucleotide variants from 15,708 whole genome sequenced individuals) were downloaded from https://gnomad.broadinstitute.org/. Variants were filtered based on filters set by the gnomAD authors, i.e., only variants passing quality filters were considered. InDel variants longer than 50 bp were not considered. Common variants from gnomAD with minor allele frequency (MAF) greater than 0.05 were used as a "benign set" compared to "pathogenic" ClinVar variants. In order to score GRCh37 variants with CADD GRCh38 models, variants were lifted to GRCh38 using CrossMap [47], excluding variants that did not lift back to the same GRCh37 coordinates. 12 out of 68,491 pathogenic ClinVar variants and 2300 out of 165,881 common gnomAD variants could not be lifted reciprocally between genome builds and were excluded. Variant types were annotated using Ensembl VEP [48] and CADD's broader consequence assignments. ClinVar likely pathogenic vs. low frequency gnomAD variants SNVs from ClinVar (see above) assigned clinical significance "likely-pathogenic" (incl. Variants assigned the two terms "likely-pathogenic" and "pathogenic") were tested. We chose to also look at these variants in a separate test data set, as these are less frequently used for training of variant classifiers, reducing the likelihood of inflated performance estimates. 
The "likely-pathogenic" variants are compared to 300,000 randomly picked SNVs from gnomAD (see above) with minor allele frequency below 0.05 and an allele count above 1. Enrichment of gnomAD variants To look at score enrichments, gnomAD variants (see above) were assigned to three bins as frequent (MAF > 0.001), rare (MAF < 0.001, allele count > 1) and singleton (allele count = 1). In order to compare between different CADD versions, score percentiles were used as variant ranks. Variant types were annotated using Ensembl VEP [48] and CADD's broader consequence category. Enrichments per category were calculated as percentiles for all variants of the same category and dividing the number of observed variants above this threshold per bin by the number expected from random drawing. To estimate variance, 1000 bootstrap iterations were performed of which the 95% confidence interval is shown. Changes in CADD since version 1.4/1.5 Several minor changes compared to CADD v1.4/v1.5 were implemented as outlined in the CADD v1.6 release notes [49]. This includes annotation fixes in the GRCh38 version of CADD, specifically GERP [50] scores where an integer overflow was corrected, and Ensembl Regulatory Build [51] where the hierarchical assignment of different element categories was unstable if more than one category was reported per variant. Another issue specific to CADD v1.4/v1.5 was fixed, where highly conserved coding variants could be scored as UTR of overlapping gene annotations. Further, "unknown" was removed from the categorical consequence levels as this included only two variants in the entire training set. These variants (classified by VEP as coding sequence variants without further specification) were reassigned to the "synonymous" consequence category. Summary of CADD v1.6 models A full list of annotations included in CADD-Splice is summarized in Additional file 1: Table S1 for GRCh37 and in Additional file 1: Table S2 for GRCh38. The CADD-Splice (CADD GRCh37-v1.6) model has a total of 1029 features derived from 102 annotations. Two hundred twenty-two features Xi derive from 90 numerical annotations and one-hot-encoding of 12 categorical/Boolean annotations. Fourteen Boolean indicators Wi express whether a given feature/feature group (out of cDNApos, CDSpos, protPos, aminoacid_substitution, targetScan, mirSVR, Grantham, PolyPhenVal, SIFTval, Dist2Mutation, chromHMM, dbscSNV_ada, dbscSNV_rf, and SpliceAI) is undefined. Pairs of 12 base substitutions and 189 amino acid substitutions possible to create with SNVs correspond to another 201 features. Further, 16 different variant consequence categories and a set D consisting of the 37 annotations bStatistic, cDNApos, CDSpos, Dst2Splice, GerpN, GerpS, mamPhCons, mamPhyloP, minDistTSE, minDistTSS, priPhCons, priPhyloP, protPos, relcDNApos, relCDSpos, relProtPos, verPhCons, verPhyloP, Dist2Mutation, freq100, freq1000, freq10000, rare100, rare1000, rare10000, sngl100, sngl1000, sngl10000, SpliceAI_accgain, SpliceAI_accloss, SpliceAI_dongain, SpliceAI_donloss, MMSplice_acceptorIntron, MMSplice_acceptor, MMSplice_donorIntron, MMSplice_donor and MMSplice_exon are used to create a set of 592 consequence interactions. 
The full model, fitted using the logistic regression implementation in scikit-learn, is:

$$ \beta_0 + \sum_{i=1}^{222} \beta_i X_i + \sum_{i=1}^{4} \sum_{j=1}^{3} \gamma_{ij} \, \mathbf{1}_{\{i\text{-th Ref category and } j\text{-th Alt category},\ i \neq j\}} + \sum_{i=1}^{189} \delta_i \, \mathbf{1}_{\{i\text{-th amino acid exchange possible in SNV}\}} + \sum_{i=1}^{14} \tau_i W_i + \sum_{i=1}^{16} \sum_{j \in D} \alpha_{ij} \, \mathbf{1}_{\{i\text{-th Consequence category}\}} X_j $$

For CADD GRCh38-v1.6, the number of total features is 1028, derived from 120 annotations. The hyperparameter optimization strategy was unchanged from CADD v1.4 [20]. The full list of data sets used to develop CADD-Splice is provided in Additional file 1: Table S3. More information on model training (including a script for loading data matrices and training in scikit-learn) is available at https://cadd.gs.washington.edu/training.

Sequence-based models perform best for splice effect prediction

Using the MFASS data set split into splice-disrupting variants (sdv, total n = 1050) and not-disrupting variants (no-sdv, n = 26,683, Fig. 1), we compared the performance of several recent splicing effect predictors (i.e., dbscSNV [15], HAL [37], MMSplice [40], S-CAP [32], SPANR [39], and SpliceAI [41]) and a selection of species conservation measures (Fig. 2a, Additional file 1: Fig. S1). We found that the relative performance of different scores does not depend on the Δpsi threshold of 0.5 that was used to define sdv and no-sdv (Additional file 1: Fig. S2). We discovered that the original MFASS publication [26] inverted some scores such as PhyloP and PhastCons and that, when corrected, those scores perform better than random guessing at predicting splice effects. However, the predictive power of the species conservation measures for exonic variants is limited, because most exonic variants are in the highest conservation bin (Fig. 2b). The performance of species conservation measures on intronic variants is similar to previous versions of CADD, while the performance of all methods is generally better and less variable for introns (Fig. 2c). Of the tested splicing effect predictors, SpliceAI and MMSplice, both DNNs based solely on genomic sequence, showed the best overall performance (Fig. 2a), with areas under the Precision-Recall Curve (auPRC) of 0.328 (SpliceAI) and 0.361 (MMSplice).

Precision-Recall performance of classifying intronic and exonic MFASS variants. Different machine learning models were used to separate splice-disrupting variants from those without a splice effect. Shown are (a) all variants in MFASS that were scored by all splice effect predictors, (b) only exonic and (c) only intronic variants. Generally, specialized splice effect predictors, such as MMSplice, SPANR, and SpliceAI, perform better than the more general CADD, both on exonic and intronic variants. We observe the best performance by combining MMSplice and SpliceAI with the percent spliced-in (psi) value of the reference allele in a linear combination (MMAIpsi). Such a model, however, is assay-specific and circular with respect to the MFASS class definitions.
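For readers who want to reproduce this kind of comparison, the auPRC values can be approximated with scikit-learn's average precision. The snippet below is only a sketch with simulated scores; real MFASS labels and predictor outputs would replace the placeholders, and the equal weighting of the two scores mirrors the equally weighted combination (MMAI) evaluated below:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# y_true: 1 for splice-disrupting (sdv), 0 for no-sdv, per MFASS variant.
# spliceai, mmsplice: per-variant predictor scores, assumed rescaled to [0, 1]
# so that an equal weighting is meaningful. Values here are placeholders.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
spliceai = np.clip(y_true * 0.4 + rng.random(1000) * 0.6, 0, 1)
mmsplice = np.clip(y_true * 0.3 + rng.random(1000) * 0.7, 0, 1)

for name, score in [("SpliceAI", spliceai),
                    ("MMSplice", mmsplice),
                    ("MMAI (equal weights)", 0.5 * spliceai + 0.5 * mmsplice)]:
    # average precision is used here as an estimate of the auPRC
    print(name, round(average_precision_score(y_true, score), 3))
```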
A new CADD-Splice model, integrating MMSplice and SpliceAI as features, outperforms previous CADD models Despite their similar performance on the MFASS data set, Spearman's correlation between SpliceAI and MMSplice scores is only around 0.6. We speculate that this is due to the different model architectures. MMSplice is a convolutional neural network that was trained on data from a large massively parallel reporter assay library [37] of random sequences and takes into account 75 bp of sequence up and downstream of a known splice junction for splice donors and splice acceptors. This is in contrast to SpliceAI that, as a deep residual network, takes advantage of a much larger sequence window of 10 kbp and was trained on RNA expression data from different individuals and tissues in GTEx. We further speculated that as both scores were derived very differently, they may complement each other. Thus, we evaluated an equally weighted linear combination of the two scores (MMAI) on the MFASS data set, which indeed reached a better auPRC of 0.416 (Fig. 2). Percent spliced-in improves prediction only on the MFASS data set Cheung et al. [26] showed that HAL [37] achieves the best performance on exons (auPRC 0.274, Additional file 1: Fig. S1B). However, the hexamer sequence-based model of HAL also uses psi of the reference allele as an additional assay-derived source of information. Unfortunately, the derived measure Δpsi between reference and alternative allele was used to separate sdv and no-sdv variants. psi of the reference alone separates sdv from no-sdv variants (Additional file 1: Fig. S1B, auPRC of psi 0.143, HAL with psi 0.274, HAL without psi 0.175) and interpretation of the increased performance needs to consider the underlying circularity. Adding psi in the linear combination of MMSplice and SpliceAI (MMAIpsi) gives an auPRC of 0.472 (Fig. 2a). This combination outperforms all other models on exons and much better precision is especially achieved for high recall thresholds (Fig. 2b). Using HAL without psi does result in the same performance as MMSplice (auPRC 0.175, Additional file 1: Fig. S1B), but application of HAL is by design limited to exons, which is why we chose MMSplice over HAL for a combined score. As an assay derived measure, MFASS psi values cannot be used to predict splicing effects genome-wide, which would be a prerequisite for including them as an unbiased feature in variant prediction. While measures of psi can be derived for any RNA-Seq data set [5, 52] and are predictive of specific cell-types [53], CADD would require an organismal summary of all cell types and developmental stages. While this became available after our study [54], we explored a close proxy of psi, the proportion expressed across transcripts (pext [6]) score. pext is based on RNAseq transcript assemblies and quantifies the expression of each base in an exon in relation to the whole gene. However, neither does pext separate sdv and no-sdv variants very well (Additional file 1: Fig. S1A, auPRC of 0.058 vs 0.143 for psi) nor do we find separation of splicing variants in the CADD training set based on its value. While better equivalents may be considered, we speculate that psi values as measured in MFASS are very assay dependent. Extending CADD's splice model The performance of CADD version 1.3 compared to CADD v1.4 on the MFASS data set is very different, with auPRC increasing from 0.063 (v1.3) to 0.108 (v1.4). 
Up to version 1.3, CADD contained only distance information of canonical splice sites within 20 bp of variants. This had changed in CADD v1.4, where, among other annotations, dbscSNV [33] features were integrated. The dbscSNV scores are two ensemble predictors of variant splice effects around canonical splice sites (− 3 to + 8 at the 5′ splice site and − 12 to + 2 at the 3′ splice site). By splitting the MFASS data set into two sets of variants (with and without dbscSNV scores available), we found that the improvement in splice effect prediction between CADD v1.3 and CADD v1.4 was entirely dependent on this addition of dbscSNV (Additional file 1: Fig. S3). The limited distance range of the dbscSNV scores further explains why intronic variants CADD v1.4 perform similarly to PhastCons scores (which like other conservation metrics are integrated into CADD). Based on the previous results, we added MMSplice and SpliceAI submodels as features and trained a new CADD model 'CADD-Splice'. For MMSplice, the exon-intron boundaries required were obtained from Ensembl [44] v95 transcript models. We note that genome-wide computation of large DNNs, such as SpliceAI, can be computationally very expensive and that we therefore use pre-scored files. Nevertheless, we think that keeping features up-to-date with the latest gene annotation is crucial for providing unbiased variant scores for all genomic variants. Using purely sequence-based models such as DNNs is advantageous as scores can be updated with new gene annotations or even genome builds without retraining the model. In preparation for integrating the DNN scores into our model, we analyzed their score distributions in the two classes of the CADD training set. We found that masking SpliceAI submodels (as recommended by the authors) benefited the annotation, as unmasked scores (i.e., splicing loss outside of existing sites and splicing gain for already existing sites) did not show class specificity (Additional file 1: Fig. S4). Similarly for the MMSplice submodels, we did not observe a depletion in the human-derived variants for positive scores (Additional file 1: Fig. S5). We therefore prepared all scores accordingly before training the model. All MMSplice and SpliceAI features were learned with positive coefficients in the CADD-Splice model, which indicates that increased scores in the splice models are associated with increased deleteriousness in the combined model. CADD model improvements are highly specific to splicing effects The new model, labeled CADD-Splice in all figures, shows an increased auPRC of 0.185 on the entire MFASS data set (compared to 0.108 above), with better performance on both exonic and intronic variants (Fig. 2). Still, the overall performance (across variant types) is very similar to the latest version of CADD (v1.4-GRCh37, Additional file 1: Fig. S6A) with a Spearman correlation between CADD-Splice and CADD v1.4 of 0.995 for 100,000 SNV drawn randomly from throughout the genome. Larger score changes are found for variants around known splice sites, as apparent from an increased depletion of high CADD scores for "frequent" variants (gnomAD [46] MAF > 0.1%) and an enrichment of gnomAD singletons in splice regions (Fig. 3). In the splice site proximal regions, this enrichment/depletion effect increases from CADD v1.3 over v1.4 to CADD-Splice. However, for canonical splice sites, changes are within the 95% confidence interval of the CADD-Splice measures. 
This can also be observed for other variant categories such as intronic variants or other coding mutations (Additional file 1: Fig. S7).

Increased enrichment for rare variants at high CADD scores. CADD assigns higher scores with decreasing population frequency, despite allele frequency not being included in the model. Here, depletion and enrichment of variants is grouped by frequency and CADD score percentiles, with CADD-Splice outperforming previous versions. At high CADD scores, frequent (MAF > 0.001) and rare (allele count > 1) variants are depleted and singletons (observed once in gnomAD) enriched. For variants in canonical splice sites (left), the difference is mostly within the bootstrapped 95%-confidence interval, but CADD-Splice significantly outperforms previous versions within 20 bp of splice sites (right).

In order to validate CADD-Splice on known disease-causing mutations, we used curated pathogenic variants from ClinVar and compared the area under the Receiver Operating Characteristic (auROC, Fig. 4). Rather than using curated benign variants with their respective ascertainment biases, we used common (MAF > 0.05) variants from gnomAD as controls. We observe that, on intronic variants (auROC 0.957) and splice site variants (auROC 0.978), CADD-Splice outperforms not only previous versions of CADD (GRCh37-v1.4: auROC 0.879 intronic and 0.938 splice site) but also the specialized scores MMSplice (0.886 and 0.970) and SpliceAI (0.869 and 0.959). For other variant categories, like synonymous and missense variants (Additional file 1: Fig. S6B-C), we observe small positive changes in model performance, probably due to a mixture of splicing-related and unrelated changes in the model.

Improved performance of CADD for separating common and known pathogenic variants. The CADD-Splice model has a higher auROC than previous CADD versions and specialized splice scores in distinguishing between pathogenic variants from ClinVar and common variants (MAF > 0.05) from gnomAD, for both splice site variants (left) and intronic variants (right).

In addition to the previous test, we compared likely-pathogenic variants from ClinVar to rare population variants (MAF < 0.05, allele count > 1) from gnomAD (Additional file 1: Fig. S8). The comparison replicates the previous results in the different variant categories, while highlighting the best performance of CADD on the complete variant set. This test scenario allows comparison to specialized splicing scores like S-CAP and SPANR, whose training sets partially overlap the ClinVar pathogenic set (Additional file 1: Fig. S9). While SPANR does not perform better than CADD in any of the comparisons, S-CAP outperforms CADD on canonical splice site variants (Additional file 1: Fig. S9B) and intronic SNVs (Additional file 1: Fig. S9D). However, precomputed S-CAP scores are missing for about 9% (5980 out of 66,608) of splicing-related variants in this test set (Additional file 1: Fig. S9B-D). When interpreting missing variants as benign rather than excluding them from all comparisons (Additional file 1: Fig. S9E-H), the score's performance drops substantially and results only in an improved performance for canonical splice sites (Additional file 1: Fig. S9F). Finally, we trained a CADD model using the same features and parameters as for CADD-Splice on genome build GRCh38, extending the previously described GRCh38 models [20].
In the comparison of pathogenic variants from ClinVar to common gnomAD variants, analogous to the GRCh37 model, this new CADD model (GRCh38-v1.6) scores similar to the previous model (GRCh38-v1.5) while outperforming it on splice site variants and intronic SNVs (Additional file 1: Fig. S10). When analyzing genomes in research or clinical applications for phenotype causal variants, the affected molecular process is usually unknown. Therefore, genomic scores need to integrate knowledge across different processes in order to rank variants across different variant types, e.g., amino acid substitutions, truncating variants, and splicing alterations. However, to our knowledge, existing predictors scoring all types of genomic variants do not specifically take RNA splicing effects into account, as evident by their limited performance on specialized data sets [10, 25, 26]. Here we demonstrate that deep learning frameworks of splicing effects can improve the performance of existing genome-wide variant effect prediction solutions. Specifically, we show that the integration of deep learning derived scores from MMSplice and SpliceAI into the general variant effect predictor CADD enables splice effect prediction with high accuracy. We benchmarked available splice predictions on the experimental MFASS data set and on known disease causing mutations from ClinVar. Even though MFASS does not cover some types of variants like gain-of-function mutations and deep intronic variants, it is a very valuable data set for splicing prediction and the most comprehensive data set for experimental splice-site effects today. We were able to show that existing splice models work well in predicting splice effects, provided that tools use the genomic context of each variant and not the assay-specific sequence design as input for the prediction. It further benefits methods when they are not only available as a precomputed score but provided as software that can be run genome-wide and independent of genome build and other annotations. We note that performance of all methods differs between exonic and intronic sequence (as expected due to different levels of constraints), as well as with distance to the canonical splice site. Even CADD v1.3, which uses only a 20-bp distance to canonical splice sites, has high precision in distinguishing pathogenic variants at canonical splice sites and shows reasonable performance for intronic variants. Based on the results of the benchmark sets, it is also unknown how far we can generalize observations for intronic variants that are more than 40 bp away from a known splice junction as such variants are not included in the MFASS data set and are rarely discovered from disease studies [30]. Of note, our findings contradict the original MFASS publication [26] that found HAL among the best performing predictors. We show that including psi as a feature provides an assay-specific predictive advantage and that without this feature, HAL's performance is comparable to MMSplice and SpliceAI. While part of this observation is probably due to biases of the assay, i.e., that certain exons are more frequently integrated in the reporter construct than others, some of it could be a biological signal. More specifically, it could be argued that prevalent splice junctions (high psi) are less susceptible to disruption than less prevalent ones where multiple alternatives are generated. 
It has been previously observed that mutation effects scale non-monotonically with the inclusion level of an exon, with mutations having a maximum effect at a predictable intermediate inclusion level [55]. It was suggested that competition between alternative splice sites is sufficient to cause this non-linear relationship. We thought about integrating this in our model but could not determine a sensible feature. For example, the pext score, which we investigated as a genome-wide and organismal psi substitute, did not capture splice effect size. We note that for individual cases, the joint analysis of DNA and RNA samples has proven very effective to identify and prioritize splice or regulatory variants underlying differentially expressed genes [10, 56, 57]. However, due to the tissue- and cell-type specificity of such events, informative transcriptome data is limited by the availability of the relevant RNA samples. We suggest that a combination of variant prioritization and RNA data could be very effective, and future work should explore this. For example, computational predictions could motivate the collection of relevant tissues or the establishment of cell lines from which RNA transcript data would be used to validate an actual splicing effect. We found it very important to distinguish variants creating new splice junctions from those disrupting existing ones. SpliceAI is a prime example, as it specifically distinguishes between splice gain and loss at a particular position. Since we did not detect a depletion of predicted splice gain mutations at existing sites (and vice versa loss at non-existing sites), we were able to mask scores and to achieve a better signal to noise ratio. While MMSplice does not distinguish between gain and loss, it achieves a similar effect from integrating knowledge about the sequence of the associated donor or acceptor from the opposite site of a splice junction. This also underscores the importance of the annotation of existing splice junctions. Given that general variant classifiers such as CADD include annotations from many different sources, developers have to make sure that features are not inherently biased due to how they were generated. We are hopeful that community standards such as the upcoming Matched Annotation from NCBI and EMBL-EBI (MANE) project together with a rise of sequence-based models that can be more easily adapted to new annotations will help to produce more stable, reproducible, and better predictors. It is clear that the significance of individual genes for specific diseases [58, 59] is not well-represented in organismal and genome-wide models of variant effects such as CADD. Existing gene and transcript specific information may therefore aid variant prioritization. For example, information about the specific phenotype (including pathways, gene interactions, or affected tissues) is potentially of high relevance. This may also motivate a more naive and inclusive approach of integrating annotations into genome-wide models. However, integrating gene and transcript-specific measures like essentiality, protein interactions and network centrality, or specificity of expression could impair the discovery of less well-studied disease genes due to observation biases [24]. To include annotations in genome-wide models, they are preferentially base-pair/substitution level resolution, available for all instances of an effect class, and do not have major biases. 
Thus, even though other information is useful for a final variant ranking, we are skeptical of integrating broad-scale annotations that prioritize variants based on their location in specific genomic regions. We show that process-specific DNN models are superior for identifying splice altering variants if the only possible variant effect is a splice effect. However, typically this prior knowledge is not available and variants need to be ranked across effect classes. In such a heterogeneous variant setup, a general pathogenicity predictor, like CADD, that integrates many different features, works better than the specialized splice scores in identifying pathogenic variants. The outperformance of the specialized scores is even observed when comparisons are limited to splice proximal or intronic variants. We speculate that this is due to a combination of the annotated categorical variant effects and features of species conservation. This suggests that variant prioritization can generally be improved by integrating process-specific information like splice scores. We believe that this is universal and outlines the importance of developing process-specific scores for regulatory sequences, UTRs, or non-coding RNA species. The GRCh37 model CADD-Splice, as well as the GRCh38 model, have been released as CADD v1.6. On our website cadd.gs.washington.edu, we provide precomputed scores for all genomic SNVs, scoring of SNVs and InDels via online submission, and link to the script repository that can be used for offline scoring. Online variant scoring, as well as prescored files of all SNVs and selected InDels for the different versions of CADD, including CADD-Splice (released as CADD v1.6) and the used annotations are available for all noncommercial purposes at https://cadd.gs.washington.edu. Scripts for offline scoring are available at https://github.com/kircherlab/CADD-scripts [60]. The CADD v1.6 training data set is available at https://cadd.gs.washington.edu/training, including basic code for model training. All external data sets used are available under the locations specified in the Methods. Further information on the analyses is available on request. auPRC: Area under the precision-recall curve auROC: Area under the receiver operating characteristic DNN: Deep neural network Percent spliced-in sdv: Splice disrupting variant Sibley CR, Blazquez L, Ule J. Lessons from non-canonical splicing. Nat Rev Genet. 2016;17:407–21. https://doi.org/10.1038/nrg.2016.46. Baralle FE, Giudice J. Alternative splicing as a regulator of development and tissue identity. Nat Rev Mol Cell Biol. 2017;18:437–51. https://doi.org/10.1038/nrm.2017.27. Wang ET, Sandberg R, Luo S, et al. Alternative isoform regulation in human tissue transcriptomes. Nature. 2008;456:470–6. https://doi.org/10.1038/nature07509. Pan Q, Shai O, Lee LJ, et al. Deep surveying of alternative splicing complexity in the human transcriptome by high-throughput sequencing. Nat Genet. 2008;40:1413–5. https://doi.org/10.1038/ng.259. Katz Y, Wang ET, Airoldi EM, Burge CB. Analysis and design of RNA sequencing experiments for identifying isoform regulation. Nat Methods. 2010;7:1009–15. https://doi.org/10.1038/nmeth.1528. Cummings BB, Karczewski KJ, Kosmicki JA, et al. Transcript expression-aware annotation improves rare variant interpretation. Nature. 2020;581:452–8. https://doi.org/10.1038/s41586-020-2329-2. Melé M, Ferreira PG, Reverter F, et al. The human transcriptome across tissues and individuals. Science. 2015;348:660–5. 
https://doi.org/10.1126/science.aaa0355. Li YI, van de Geijn B, Raj A, et al. RNA splicing is a primary link between genetic variation and disease. Science. 2016;352:600–4. https://doi.org/10.1126/science.aad9417. Scotti MM, Swanson MS. RNA mis-splicing in disease. Nat Rev Genet. 2016;17:19–32. https://doi.org/10.1038/nrg.2015.3. Li X, Kim Y, Tsang EK, et al. The impact of rare variation on gene expression across tissues. Nature. 2017;550:239–43. https://doi.org/10.1038/nature24267. Auton A, Abecasis GR, Altshuler DM, et al. A global reference for human genetic variation. Nature. 2015;526:68–74. https://doi.org/10.1038/nature15393. Buske OJ, Manickaraj A, Mital S, et al. Identification of deleterious synonymous variants in human genomes. Bioinforma Oxf Engl. 2013;29:1843–50. https://doi.org/10.1093/bioinformatics/btt308. Vaser R, Adusumalli S, Leng SN, et al. SIFT missense predictions for genomes. Nat Protoc. 2016;11:1–9. https://doi.org/10.1038/nprot.2015.123. Adzhubei I, Jordan DM, Sunyaev SR (2013) Predicting functional effect of human missense mutations using PolyPhen-2. Curr Protoc Hum Genet chapter 7:Unit7.20. doi: https://doi.org/10.1002/0471142905.hg0720s76. Liu X, Wu C, Li C, Boerwinkle E. dbNSFP v3.0: a one-stop database of functional predictions and annotations for human nonsynonymous and splice-site SNVs. Hum Mutat. 2016;37:235–41. https://doi.org/10.1002/humu.22932. Hu J, Ng PC. Predicting the effects of frameshifting indels. Genome Biol. 2012;13:R9. https://doi.org/10.1186/gb-2012-13-2-r9. Pagel KA, Pejaver V, Lin GN, et al. When loss-of-function is loss of function: assessing mutational signatures and impact of loss-of-function genetic variants. Bioinformatics. 2017;33:i389–98. https://doi.org/10.1093/bioinformatics/btx272. Zhou J, Troyanskaya OG. Predicting effects of noncoding variants with deep learning-based sequence model. Nat Methods. 2015;12:931–4. https://doi.org/10.1038/nmeth.3547. di Iulio J, Bartha I, Wong EHM, et al. The human noncoding genome defined by genetic diversity. Nat Genet. 2018;50:333–7. https://doi.org/10.1038/s41588-018-0062-7. Rentzsch P, Witten D, Cooper GM, et al. CADD: predicting the deleteriousness of variants throughout the human genome. Nucleic Acids Res. 2019;47:D886–94. https://doi.org/10.1093/nar/gky1016. Ionita-Laza I, McCallum K, Xu B, Buxbaum JD. A spectral approach integrating functional genomic annotations for coding and noncoding variants. Nat Genet. 2016;48:214–20. https://doi.org/10.1038/ng.3477. Shihab HA, Rogers MF, Gough J, et al. An integrative approach to predicting the functional effects of non-coding and coding sequence variation. Bioinforma Oxf Engl. 2015;31:1536–43. https://doi.org/10.1093/bioinformatics/btv009. Kircher M, Witten DM, Jain P, et al. A general framework for estimating the relative pathogenicity of human genetic variants. Nat Genet. 2014;46:310–5. https://doi.org/10.1038/ng.2892. Stoeger T, Gerlach M, Morimoto RI, Amaral LAN. Large-scale investigation of the reasons why potentially important genes are ignored. Plos Biol. 2018;16:e2006643. https://doi.org/10.1371/journal.pbio.2006643. Mather CA, Mooney SD, Salipante SJ, et al. CADD score has limited clinical validity for the identification of pathogenic variants in noncoding regions in a hereditary cancer panel. Genet Med. 2016;18:1269–75. https://doi.org/10.1038/gim.2016.44. Cheung R, Insigne KD, Yao D, et al. A multiplexed assay for exon recognition reveals that an unappreciated fraction of rare genetic variants cause large-effect splicing disruptions. 
Mol Cell. 2019;73:183–94. https://doi.org/10.1016/j.molcel.2018.10.037. Siepel A, Bejerano G, Pedersen JS, et al. Evolutionarily conserved elements in vertebrate, insect, worm, and yeast genomes. Genome Res. 2005;15:1034–50. https://doi.org/10.1101/gr.3715005. Pollard KS, Hubisz MJ, Rosenbloom KR, Siepel A. Detection of nonneutral substitution rates on mammalian phylogenies. Genome Res. 2010;20:110–21. https://doi.org/10.1101/gr.097857.109. Jian X, Boerwinkle E, Liu X. In silico tools for splicing defect prediction - a survey from the viewpoint of end-users. Genet Med Off J Am Coll Med Genet. 2014;16:497. https://doi.org/10.1038/gim.2013.176. Anna A, Monika G. Splicing mutations in human genetic disorders: examples, detection, and confirmation. J Appl Genet. 2018;59:253–68. https://doi.org/10.1007/s13353-018-0444-7. Mort M, Sterne-Weiler T, Li B, et al. MutPred splice: machine learning-based prediction of exonic variants that disrupt splicing. Genome Biol. 2014;15:R19. https://doi.org/10.1186/gb-2014-15-1-r19. Jagadeesh KA, Paggi JM, Ye JS, et al. S-CAP extends pathogenicity prediction to genetic variants that affect RNA splicing. Nat Genet. 2019;51:755–63. https://doi.org/10.1038/s41588-019-0348-4. Jian X, Boerwinkle E, Liu X. In silico prediction of splice-altering single nucleotide variants in the human genome. Nucleic Acids Res. 2014;42:13534–44. https://doi.org/10.1093/nar/gku1206. Soemedi R, Cygan KJ, Rhine CL, et al. Pathogenic variants that alter protein code often disrupt splicing. Nat Genet. 2017;49:848–55. https://doi.org/10.1038/ng.3837. Yeo G, Burge CB. Maximum entropy modeling of short sequence motifs with applications to RNA splicing signals. J Comput Biol J Comput Mol Cell Biol. 2004;11:377–94. https://doi.org/10.1089/1066527041410418. Ke S, Shang S, Kalachikov SM, et al. Quantitative evaluation of all hexamers as exonic splicing elements. Genome Res. 2011;21:1360–74. https://doi.org/10.1101/gr.119628.110. Rosenberg AB, Patwardhan RP, Shendure J, Seelig G. Learning the sequence determinants of alternative splicing from millions of random sequences. Cell. 2015;163:698–711. https://doi.org/10.1016/j.cell.2015.09.054. Reese MG, Eeckman FH, Kulp D, Haussler D. Improved splice site detection in genie. J Comput Biol J Comput Mol Cell Biol. 1997;4:311–23. https://doi.org/10.1089/cmb.1997.4.311. Xiong HY, Alipanahi B, Lee LJ, et al. RNA splicing. The human splicing code reveals new insights into the genetic determinants of disease. Science. 2015;347:1254806. https://doi.org/10.1126/science.1254806. Cheng J, Nguyen TYD, Cygan KJ, et al. MMSplice: modular modeling improves the predictions of genetic variant effects on splicing. Genome Biol. 2019;20:48. https://doi.org/10.1186/s13059-019-1653-z. Jaganathan K, Panagiotopoulou SK, McRae JF, et al. Predicting splicing from primary sequence with deep learning. Cell. 2019;176:414–6. https://doi.org/10.1016/j.cell.2018.12.015. Lek M, Karczewski KJ, Minikel EV, et al. Analysis of protein-coding genetic variation in 60,706 humans. Nature. 2016;536:285–91. https://doi.org/10.1038/nature19057. Avsec Ž, Kreuzhuber R, Israeli J, et al. The Kipoi repository accelerates community exchange and reuse of predictive models for genomics. Nat Biotechnol. 2019;37:592–600. https://doi.org/10.1038/s41587-019-0140-0. Aken BL, Ayling S, Barrell D, et al. The Ensembl gene annotation system. Database. 2016, 2016:baw093. https://doi.org/10.1093/database/baw093. Landrum MJ, Lee JM, Benson M, et al. 
ClinVar: improving access to variant interpretations and supporting evidence. Nucleic Acids Res. 2018;46:D1062–7. https://doi.org/10.1093/nar/gkx1153. Karczewski KJ, Francioli LC, Tiao G, et al. The mutational constraint spectrum quantified from variation in 141,456 humans. Nature. 2020;581:434–43. https://doi.org/10.1038/s41586-020-2308-7. Zhao H, Sun Z, Wang J, et al. CrossMap: a versatile tool for coordinate conversion between genome assemblies. Bioinformatics. 2014;30:1006–7. https://doi.org/10.1093/bioinformatics/btt730. McLaren W, Gil L, Hunt SE, et al. The Ensembl variant effect predictor. Genome Biol. 2016;17:122. https://doi.org/10.1186/s13059-016-0974-4. Rentzsch P, Kircher M. CADD v1.6 release notes; 2020. https://cadd.gs.washington.edu/static/ReleaseNotes_CADD_v1.6.pdf. Davydov EV, Goode DL, Sirota M, et al. Identifying a high fraction of the human genome to be under selective constraint using GERP++. Plos Comput Biol. 2010;6:e1001025. https://doi.org/10.1371/journal.pcbi.1001025. Zerbino DR, Wilder SP, Johnson N, et al. The Ensembl regulatory build. Genome Biol. 2015;16:56. https://doi.org/10.1186/s13059-015-0621-5. Shen S, Park JW, Huang J, et al. MATS: a Bayesian framework for flexible detection of differential alternative splicing from RNA-Seq data. Nucleic Acids Res. 2012;40:e61. https://doi.org/10.1093/nar/gkr1291. Park E, Pan Z, Zhang Z, et al. The expanding landscape of alternative splicing variation in human populations. Am J Hum Genet. 2018;102:11–26. https://doi.org/10.1016/j.ajhg.2017.11.002. Ling JP, Wilks C, Charles R, et al. ASCOT identifies key regulators of neuronal subtype-specific splicing. Nat Commun. 2020;11:137. https://doi.org/10.1038/s41467-019-14020-5. Baeza-Centurion P, Miñana B, Schmiedel JM, et al. Combinatorial genetics reveals a scaling law for the effects of mutations on splicing. Cell. 2019;176:549–563.e23. https://doi.org/10.1016/j.cell.2018.12.010. Anderson D, Baynam G, Blackwell JM, Lassmann T. Personalised analytics for rare disease diagnostics. Nat Commun. 2019;10:1–8. https://doi.org/10.1038/s41467-019-13345-5. Mohammadi P, Castel SE, Cummings BB, et al. Genetic regulatory variation in populations informs transcriptome analysis in rare disease. Science. 2019;366:351–6. https://doi.org/10.1126/science.aay0256. Havrilla JM, Pedersen BS, Layer RM, Quinlan AR. A map of constrained coding regions in the human genome. Nat Genet. 2019;51:88. https://doi.org/10.1038/s41588-018-0294-6. Abramovs N, Brass A, Tassabehji M. GeVIR is a continuous gene-level metric that uses variant distribution patterns to prioritize disease candidate genes. Nat Genet. 2020;52:35–9. https://doi.org/10.1038/s41588-019-0560-2. Rentzsch P, Schubach M, Shendure J. Martin Kircher kircherlab/CADD-scripts: CADD version 1.6. GitHub. 2021. https://doi.org/10.5281/zenodo.4446709. We thank current and previous members of the Kircher and Shendure laboratories for helpful discussions and suggestions. Specifically, we would like to acknowledge input from Daniela Witten, Greg Cooper, James Lawlor, Kimberly Insigne and Sriram Kosuri, Jun Cheng and Julien Gagneur, Birte Kehr, and Manuel Holtgrewe. Computation has been performed on the HPC for Research cluster of the Berlin Institute of Health. This work was supported by the National Cancer Institute (NCI) grant number 1R01CA197139 (JS) and the Berlin Institute of Health (MK, MS, PR). Open Access funding enabled and organized by Projekt DEAL. 
Charité - Universitätsmedizin Berlin, 10117, Berlin, Germany Philipp Rentzsch, Max Schubach & Martin Kircher Berlin Institute of Health (BIH), 10178, Berlin, Germany Brotman Baty Institute for Precision Medicine, University of Washington, Seattle, WA, 98195, USA Jay Shendure Department of Genome Sciences, University of Washington, Seattle, WA, 98195, USA Philipp Rentzsch Max Schubach Martin Kircher All authors designed the study. PR prepared and analyzed the data. PR and MK wrote the software. All authors wrote the manuscript. All authors read and approved the submitted manuscript. Correspondence to Martin Kircher. Supplementary Materials include Supplemental Figures (Fig. S1-S10) and Supplemental Tables (Tables S1-S3). Rentzsch, P., Schubach, M., Shendure, J. et al. CADD-Splice—improving genome-wide variant effect prediction using deep learning-derived splice scores. Genome Med 13, 31 (2021). https://doi.org/10.1186/s13073-021-00835-9 Received: 29 June 2020
Harvard Publications Case Study Solution Harvard Publications is the country's largest and most important publication representing the science of economics, which it publishes on a salary basis. The publishing house provides a comprehensive textbook on economics, specializing in economics and other fields of study. As a publisher, Harvard also publishes useful articles covering these fields of study. It also publishes material from its textbooks, among which are works by scholars of various fields. Historical context Here are some (source) evidence of the extent to which Harvard has published books centered on science, both for the period 1900–1950 and in recent years, before and after World War II. From 1900 to 1939, Harvard was publishing 17 volumes a year (mostly on short-course topics). By 1969, Harvard was publishing 13 volumes a year (mostly on intermediate course topics). From 1950 to 1972, Harvard continued to publish two volumes a year (mostly on intermediate course topics). In both of these periods, the relationship between history and science went somewhat more toward the early 20th century, primarily as a result of increasing emphasis on science, and especially on the publication of books dealing with economics (e. g., the history of Germany). However, historical context has remained a fundamental concern of researchers looking to distinguish between science and history around the world today. This is given due to the continuing evolution of the subject from day to day over and since at least the 1700s, from a practice begun in a relatively recent era in the mid-19th century just prior to World War I, and during the late 1950s to 1960s; through the late 1970s, this continues today. Much work has been devoted to solving this problem, of course, but for this report, it is most useful to concentrate on one very specific topic for which Harvard is most well-placed: the subject of contemporary economics. Harcourt's book is largely based on standard biographical material; in this context, it raises a number of topics: 'the role of the New Deal in economics, the influence of U. S. export policies and national policies on other developed economies for the 1980s; the rise of the American stimulus package as a whole; the impact of a strong antiwar protest at the time of World War II and the decline in violent demonstrations among police officers. This chapter draws on extensively existing biographical material; in this case an extensive survey of the historical background to recent economic policies. 'The New Deal: The Development of the American Economy in the Post-World War World and in the 1930s'; 'The Progressive Era as Toward Development'; (hereafter) 'The New Deal: New Action to Move Forward'; 'The New Deal: Foreign Trade, The War Years, World War II, and the Rise of the World Wide Fund'; 'The New Deal: Germany Against the New Deal'; the way that Germany, as a much smaller economy (and possibly more sophisticated one) took on the currency of Britain and the United States (as the Socialists did originally, they now represented Britain's leading industrial product); how Germany got particularly great economic freedom given only a handful of major industrial structures, its way at the time of World War I, was poor, not by U.S. standards, and probably only for the big 5-4% (in terms of the European equivalent). This book has a fascinating new chapter so often cited as an argument for the New Deal is extremely instructive, and one that looks at theHarvard Publications JOSEPH H. 
HEWITT: AN APRETHLOAT HAS TO OFFER AN ABSENCE of this letter, for which I give to him the two-page letter. JOSEPH WRIGHT: FOR COMPLAINT TO HIM IF I WANT HIM TO TELL THE MAGNETIC TRUTH. JOSEPH SLOUD: FOR AWARE OF THIS letter I will need your written permission to make a copy in due time The letter will be published on the book's official website in less than 6 months. If you decide to send me the original it will have to be published in twice-a-week format. Unless it's received in twice-a-week format it won't be available for sale until the next book. With this letter, you'll find yourself in the deep of an abyss, an abyss where the problem lies so deep that none of us can get a handle on it. But what exactly is the problem? The problem that this letter raises is one of censorship, whether it is freedom of speech, freedom of association, autonomy, or freedom of expression. If censorship isn't oppressive then an agenda that requires a steady and transparent adherence to its structure will be. It's best if politicians do not interfere with the process. These politicians therefore have the power in this country to monitor private companies, collect taxes, or both. But without a democratic process this won't have been a problem. Censorship is a problem, but it is not a source of security. Yes, you may view as such anything you like. But there is no privacy about it that you can feel comfortable discussing now outside of company. And one can't control the practice of censorship. It's a technique that's been in the practice for many years. "As long as I look for power above myself, I don't care about my own power behind it. " When you speak of "personal" I take it seriously. Many critics say that "personal" is taken for granted, it as a right. But consider that there are literally thousands of people every day, and the people who believe in it have the talent, the potential, and the ability. The average person would have nothing to be afraid of, or afraid of being labeled criminal, or of being forced to behave out of fear of potentially undermining their own power, due to this in no way. The fear is something that many of us cannot afford to ignore: trust just grows. More than any problem that I've run across in my interview, there is a problem. There are some large groups of people out with the right beliefs. It can be hard to distinguish between that group and someone else. It can be hard to explain that part of what we really expect others to think, that we often have to sit around busy with their work and their desire to be in the front lines or in the spotlight. The reasons for this are very different. Although it's easy to find fault with some or see this website of it, they are very different. One has to find the truth within oneself. They are often hard to find. In the case of a politician, or of a journalist or editor, a fear that the rules of industry could drag on for a while is almost no security to me. But sometimes the fear is contagious. What is the other problem? We do get a lot of hate speech, because we tend not to admit that it is very bad policy, but it is not necessary yet. But remember, we may not like how we treat the American people at the moment. It seems to us that they are less valuable when you compare or even contrast that measure to the score they get for their party. I see that as a form of intimidation. When this is presented to people who are concerned about power, it's all a bad joke. 
A lot of us are concerned about power, a lot of problems, be it democracy or maybe socialism. I would have to live with all the facts before I would stop using such language. But there is a change in the way we act. We move to see and to think about the problem as a whole. We face new issues, new problems that take different forms in different parts of the globe. For example, the World Bank thinks that the right of the Chinese public to assert their sovereignty over their lands and people inHarvard Publications, Cambridge University Press, Toronto, Canada, 1964; _Folklore_, 2d edition, London, 1967. John B. Morris, _Principles of the Mathematical Sciences_, 3rd edition (Cambridge: Polity Press, 1964); reprinted in John B. Morris, ed., _A History of the Mathematical Sciences_, 2nd edition (Cambridge: Polity Press, 1967). 3. _Homory Mathians_, 2nd edition © © John B. Morris, s. v., 1964, ed., vol. II, London, 1967. ## **2 INTRODUCTION** In the interest of increasing the number of mathematical concepts, each of these is described in its own own peculiar way. The aim of the discussion of these subcategories is to provide a clear definition of a real number that is being compared with the number of real numbers and is more appropriate to mathematics. The details of the construction are not presented in this work by some of the authors with whom they conduct this discussion. This paper is based on a preliminary synthesis of which I have been indebted since 1966 on the problem of classification in differential geometry of algebras [II] and algebras of polynomials [III]. I have called my own 'classification' an enigma [IV] based on all known examples of real numbers except those where it is possible to choose a real number as our objective. All the other problems are considered the same. Several new definitions have been made which will give you a conceptual definition for this new class of models of algebras. Apart from the primary references to the cases I have named, many more basic proofs have also been put forth Get More Info my books dealing with discrete algebras. For example, Harveer in _Polar Controblems of Differential Geometry_ [1862–90] gives a non-trivial model of Schur and its associated Lie algebras, obtaining the famous Selberg–Shafarevich homomorphism of differential geometry gives some important results on homology [SCHUR] and on Schur–Shafarevich homotopy groups for the complex geometry arising in differential geometry. These results have also been obtained using differential geometry [SCHUR] and Schur–shafarevich homotopy groups [SCHUR] arising in differential geometry of homology, and the last of these results is based on this final result for the discrete algebras. These also apply to you could try here discrete points of a discrete group–invariant embedding. Furthermore, _Brugel's geometric Theorem_ [ _Gromogen_ of _Oeconomĭka_ [1930] in DUSSIA: _Geometric Methods in Algebra_ ]. Here p 2) which you could try here worked on, gives the first description yet of a group–invariant embedding between two discrete space types. When the embedding is of the form this article which is defined under the action of $q-$commutator algebra, the elements of $\mathrm{Ker}[(n)]$ fall off to the left in a polynomial ring $(\mathbb{C}\setminus q(\mathbb{Z})|\mathbb{C})^n$. 
As this is a group–invariant embedding, there exists a left inverse proper embedding (by the action of $U\times U$ with $U\subset U\times \mathbb{Z}$), which shows that $\dot\mathrm{Ker}[n] \cong \mathrm{Ker}[(n)]_U$ for any element $n\in \mathbb{Z}^{2n}$ of degree $2n$. This is the name of the following example: Here, we are given an embedding $\alpha\in \mathbb{C}\setminus q(\mathbb{Z})$ that is given by $$\alpha=b_4\times b_6\times b_4\times b_9.$$ This embedding is a group–invariant embedding of a discrete space type $G$.
Expectation Maximisation Algorithm: E Step

I was going through the Wikipedia article on this algorithm. I got the hang of it by understanding what happens in the case of a Gaussian mixture model (a kind of soft clustering, compared to k-means). But I am stuck on some of the basics of its derivation in the E step. \begin{align*} Q(\theta|\theta^{(t)}) &= \mathbb{E}_{\mathbf{Z}|\mathbf{X};\theta^{(t)}}[\log L(\theta;\mathbf{x},\mathbf{Z})]\\ &= \mathbb{E}_{\mathbf{Z}|\mathbf{X};\theta^{(t)}}[\log \prod_{i=1}^n f({x}_i,{Z}_i;\theta)]\\ &= \mathbb{E}_{\mathbf{Z}|\mathbf{X};\theta^{(t)}}[\sum_{i=1}^n \log f({x}_i,{Z}_i;\theta)]\\ &= \sum_{i=1}^n \mathbb{E}_{\mathbf{Z}|\mathbf{X};\theta^{(t)}}[\log f({x}_i,{Z}_i;\theta)]\\ &= \sum_{i=1}^n \sum_{j=1}^2 \mathbb{P}(Z_i=j|X=x_i,\theta^{(t)}) \log f({x}_i,j;\theta)\end{align*} How did they jump from step 4 to step 5 above? I know what an expectation is, but how should I interpret an expectation with respect to some distribution? Can someone explain?

machine-learning expectation-maximization Jay Patel

I would simply say that the expected value is always dependent on the distribution it is computed over. – Michael Chernick Nov 27 '16 at 14:20

The expectation operator $E[f(X,Y,\ldots)]$ takes the expected value of the thing inside the square brackets over the (joint) distribution of that thing. If that thing is discrete, the expectation comes down to taking a sum over all possible values $f(X,Y,\ldots)$ can have, of that value times the probability of that value. So, $$ \text{E}[f(X,Y)] = E_{X,Y}[f(X,Y)] = \sum_{i}\sum_{j} f(x_i,y_j)\text{Prob}[X=x_i,Y=y_j] $$ The "default" expansion is to average out over the joint distribution of all random variables mentioned inside the square brackets. However, sometimes it is useful to use another distribution if additional information about $X$ and $Y$ is available. For example, suppose you know that $Y=y$; then you may be interested in the expected value of $f(X,Y)$ conditional on $Y=y$. Several notations exist: \begin{align*} \text{E}_{X|Y=y}[f(X,Y)] & = \text{E}[f(X,Y)|Y=y]\\ & = \sum_{i} f(x_i,y)\text{Prob}[X=x_i|Y=y]\\ & \neq \sum_{i} f(x_i,y)\text{Prob}[X=x_i] = \text{E}[f(X,y)]\\ \end{align*} In these situations where an expectation is taken over a distribution other than the default one (i.e. the joint distribution of all involved random variables), my advice is to always revert back to the probability notation by doing the expansion, especially if you are in doubt what the $E[\ldots]$ really means. In your specific case, the fourth line contains the expectation of a log-likelihood where the expectation is taken over the distribution of $\mathbf{Z}$ conditional on $\mathbf{X}=\mathbf{x}$ and on the parameter $\theta$ being $\theta^{(t)}$. The latter is not a random variable of course, but it is also something that "modifies" the probability distribution used in the expansion. Incidentally, on the fourth line, it would have been better to write $\text{E}_{Z_i|\mathbf{X};\theta^{(t)}}[\ldots]$ instead of $\text{E}_{\mathbf{Z}|\mathbf{X};\theta^{(t)}}[\ldots]$.
StijnDeVuyst

The proper writing of the sequence of equations is \begin{align*} Q(\theta|\theta^{(t)}) &= \mathbb{E}_{\mathbf{Z}|\mathbf{X};\theta^{(t)}}[\log L(\theta;\mathbf{x},\mathbf{Z})]\\ &= \mathbb{E}_{\mathbf{Z}|\mathbf{X};\theta^{(t)}}[\log \prod_{i=1}^n f({x}_i,{Z}_i;\theta)]\\ &= \mathbb{E}_{\mathbf{Z}|\mathbf{X};\theta^{(t)}}[\sum_{i=1}^n \log f({x}_i,{Z}_i;\theta)]\\ &= \sum_{i=1}^n \mathbb{E}_{\mathbf{Z}|\mathbf{X};\theta^{(t)}}[\log f({x}_i,{Z}_i;\theta)]\\ &= \sum_{i=1}^n \sum_{j=1}^2 \mathbb{P}(Z_i=j|X=x_i,\theta^{(t)}) \log f({x}_i,j;\theta)\end{align*} where capital letters like $X_i$ denote random variables and lower case letters like $x_i$ their realisation, and bold font symbols like $\mathbf{X}$ vectors. Hence, in the first row, $\mathbf{x}$ is a vector and a realisation of the random vector $\mathbf{X}$, and $\mathbf{Z}$ is a random variable with distribution $\mathbb{P}(\mathbf{Z}=\mathbf{z}|\mathbf{X}=\mathbf{x},\theta^{(t)})$, conditional on the realisation of $\mathbf{X}$ and parameterised by the value of $\theta$ equal to $\theta^{(t)}$; in the second row, $\mathbf{x}$ is decomposed as $\mathbf{x}=(x_1,\ldots,x_n)$ and $\mathbf{Z}$ is decomposed as $\mathbf{Z}=(Z_1,\ldots,Z_n)$, and the joint density of $(\mathbf{X},\mathbf{Z})$ is the product of the densities of the pairs $(X_i,Z_i)$ as they are assumed iid; the third row is a property of the logarithmic function; the fourth row follows by linearity of the expectation, which is now also an expectation in $Z_i$ only, conditional on the realisation of ${X}_i$ and parameterised by the value of $\theta$ equal to $\theta^{(t)}$; the fifth row is obtained by the definition of the expectation in probability, which is the sum of the possible values of the variate weighted by their probabilities of occurrence, namely $\mathbb{P}(Z_i=j|X=x_i,\theta^{(t)})$.

Xi'an

The expectation of a distribution isn't really fundamentally different from the expected value of a variable. Sticking with the variables as they are defined in the example you linked to, you're interested in the likelihood over some vector of parameters $\theta$. The likelihood over this vector depends on the values of certain hidden variables $Z$, as well as some observed data $X$. The problem, which EM tries to solve, is that $\theta$ depends on $Z$, and $Z$ depends on $\theta$. To solve this chicken-and-egg problem, EM does the following:

1. Initialize $\theta$ to some arbitrary value $\theta^{(t)}$
2. Define the probability distribution over possible states of $Z$ given the current parameter setting, i.e. $p(Z|X;\theta^{(t)})$
3. Compute the expectation of the log-likelihood function over $\theta$ under $p(Z|X;\theta^{(t)})$
4. Update $\theta^{(t)}$ to the most likely value under this expectation (the maximization step)
5. Iterate steps 2-4 until convergence

The step you're asking about is step 3, and you can think of this as computing the "average log-likelihood" given all possible states of the hidden variables $Z$. That is, given each possible state of $Z$, you figure out the log-likelihood function over $\theta$, you multiply this log-likelihood function by the probability of $Z$ (given the data and your current setting of $\theta^{(t)}$), and then you sum over all these log-likelihood functions weighted by their probabilities.
So in short, just as you can think of an expected value as the "average value" of a random variable that you get when summing or integrating over a distribution over that variable, you can think of an expectation of a distribution as the "average shape" of that distribution when you sum or integrate over the distributions over other variables that the target distribution depends on.

Ruben van Bergen
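To make the jump from step 4 to step 5 concrete, here is a small numerical sketch (my own illustration, not taken from the question or the answers above) for a two-component univariate Gaussian mixture. It computes the responsibilities $\mathbb{P}(Z_i=j|X=x_i,\theta^{(t)})$ in the E step and then evaluates $Q(\theta|\theta^{(t)})$ for a candidate $\theta$ exactly as in the fifth line of the derivation:

```python
import numpy as np
from scipy.stats import norm

x = np.array([0.2, 1.5, 3.7, 4.1, -0.3])   # observed data
pi = np.array([0.5, 0.5])                   # current mixing weights (theta_t)
mu = np.array([0.0, 4.0])                   # current component means
sigma = np.array([1.0, 1.0])                # current component std deviations

# E step: responsibilities r[i, j] = P(Z_i = j | x_i, theta_t)
joint = pi * norm.pdf(x[:, None], mu, sigma)        # f(x_i, Z_i = j; theta_t)
r = joint / joint.sum(axis=1, keepdims=True)

# Q(theta | theta_t) for any candidate theta:
#   sum_i sum_j r[i, j] * log f(x_i, j; theta)
def Q(pi_new, mu_new, sigma_new):
    logf = np.log(pi_new) + norm.logpdf(x[:, None], mu_new, sigma_new)
    return np.sum(r * logf)

print(r)                    # one row of responsibilities per data point
print(Q(pi, mu, sigma))     # Q evaluated at the current parameters
```

The M step would then maximize `Q` over the candidate parameters (for a Gaussian mixture this has a closed form), and the two steps are repeated until convergence.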
Potential energy is the energy of position

The concept of potential energy (PE) can be a little difficult to wrap your mind around. Kinetic energy (KE) is easier: Things that are moving obviously have energy. A baseball can knock you on the head (and hurt), a moving train could flatten you, and so on. But an object (stationary or moving) also has a kind of energy just by virtue of where it is located with respect to other objects in the universe, and that may seem a little weird at first. The first, most obvious example is that of gravitational potential energy. A baseball on flat ground has nowhere to go, but a baseball lifted in the air or located on a hill has the potential to fall or roll – to move under the influence of gravity. That movement is an expression of KE that used to be stored as PE. Forces are the reality that create the concept of potential energy. Without forces, there would be no potential energy. The examples below illustrate the concept of the force field, a sort of map of how invisible forces between objects change as their relative position changes. The force field, the forces and the potential energy are related. Here are a few examples of how potential energy is created by doing work. An object is raised against the force of gravity. A spring is stretched or compressed from its natural length. The string of a bow is pulled, flexing the limbs of the bow. The elastic bands of a slingshot are stretched backward before a shot. Air is pressurized so that it can expand against a piston and operate the air brakes on a truck. The level of water is raised behind a dam so that it can fall a long distance and generate electricity. A potential energy difference is created in a battery by charging it.

Gravitational potential energy

Gravity is a force that we can model mathematically very well using the universal law of gravitation, $$F_g = \frac{G m_1 m_2}{r^2}$$ where G is the gravitational constant, $$G = 6.674 \times 10^{-11} \frac{m^3}{kg \cdot s^2},$$ $m_1$ and $m_2$ are the masses of two objects, and $r$ is the distance between them in meters. Any object with mass exerts a gravitational force on any other object with mass. Planets and stars, of course, are very large (massive), thus they exert the largest forces. The gravitational force produced by any thing on another thing is inversely proportional to the square of the distance between them (see box below). That means that if I double my distance from the center of Earth, the gravitational attraction that Earth and I exert on each other is reduced to ¼ its original value. In the diagram, the circles represent spheres of constant force. The closer the spacing between circles, the greater the force. In this way, the potential energy has the look of a topographic map: the closer the curves, the steeper the slope of the hill. According to the universal law of gravitation, the gravitational force between two objects is $$F = \frac{G m_1 m_2}{r^2},$$ where $G$ is the gravitational constant, $m_1$ and $m_2$ are the masses of the two objects and $r$ is the distance between them. If our initial distance is 1, then $F = G m_1 m_2.$ If we double that distance to 2, then because $2^2 = 4$, the force becomes $F = \frac{1}{4} G m_1 m_2.$

Gravitational potential "well"

This image is a representation of the gravitational potential energy around a planet. The planet would lie at the center of the "well".
At far distances where the gravitational force is weak, the potential well is not steep, and the force is small. Near the planet the surface becomes much steeper. The force felt by an object attracted to the planet is proportional to the steepness of this potential. In fact, force is precisely the slope of this 3-D curve in a given direction. If the graph represents the potential energy function, then its steepness (its derivative in calculus) at any point gives the gravitational force at that point. Source: AllenMcC., Wikipedia Commons

Albert Einstein showed that space (or "space-time") can actually be viewed as a 2-dimensional surface like this, and that the masses of the planets and stars warp that surface as shown in the figure. Objects then "fall" into that potential energy "well." The pendulum is a great example of how gravitational potential energy works. Here we can toss aside the warping of space-time and just think of gravity as an invisible downward force. The pendulum works solely because of the force of gravity. When the weight of the pendulum is raised, we do work to give it potential energy that is proportional to the height. At the endpoints of the arc of the pendulum weight, there is a moment as it changes direction when its velocity is zero, so it has no kinetic energy at all. At those points, all of its energy is potential energy. At the bottom of the swing, when the weight points toward the center of Earth, it has no potential energy relative to where it started. It is, however, moving as fast as it will ever move, all of its PE having been converted to KE. Then the process begins all over again: as the weight rises, it loses speed (and thus KE) and gains gravitational PE. The only thing that slows the whole thing down is a little bit of friction with air molecules that eventually will cause the pendulum to stop. Potential energy is a storage mode for kinetic energy. As a pendulum moves from side to side, all of its KE is converted to PE momentarily, only to be converted back to KE, and so on.

Elastic potential energy

Many devices can store elastic potential energy and release it as kinetic energy: a spring, a rubber band, an archer's bow, a catapult ... The animation on the right shows a compressed spring at rest. In its compressed state, the spring stores elastic potential energy. The downward force it exerts on the hanging mass is proportional to the amount of compression (see Hooke's law). Once the mass reaches the middle of its travel, and is at its equilibrium (unstretched / uncompressed) length, it will then begin to be stretched, storing more potential energy in that way. The force exerted on the hanging mass is greatest at either end, less in the middle, thus the mass has its greatest elastic potential energy when the spring is most compressed or most stretched. Of course, it's also possible to over-stretch a spring, ruining its energy storage properties. My students do that with Slinkies™. I don't know why. The potential energy of a spring is given by $$PE = \frac{1}{2} kx^2$$ where k is a constant that depends on the properties of the particular spring and x is how far it is stretched or compressed from its equilibrium length.

Electrostatic potential energy

Charged objects, like electrons (-) and protons (+), exert invisible forces on one another. These are called electrostatic forces. Like gravity, these are non-contact forces (one object doesn't have to touch another in order to exert a force on it).
But unlike gravity, electrostatic forces can be attractive or repulsive. (Gravity is always attractive — no one ever just gets ejected off the planet while walking down the street.) Here is a sketch of oppositely-charged particles in close approach. The circles represent "lines of force." Just like in our gravity example above, the closer together the lines of force, the more force exerted on and by the particles. This is always the case with oppositely-charged particles: The electrostatic force causes oppositely-charged particles to attract. If we looked at these lines of force in three dimensions (think of them again as a topographic map), the red circles might extend above the page, and the blue ones below — the difference between positive and negative forces. The lines of force for like-charged particles, such as two electrons or two protons, look like this. These particles repel one another. In 3-D, these lines of force would both form mountains below the plane of the page. The force between two charged particles is given by Coulomb's law: $$F = \frac{k \, q_1 q_2}{r^2},$$ where k is a constant called the Coulomb constant ($k = 8.99 \times 10^9$ N·m²·C⁻²), q1 and q2 are the two charges (in Coulombs, C), and r is the distance between them in meters. Notice how similar Coulomb's law is to the universal law of gravitation. Both are inverse-square laws because the force is inversely proportional to the square of the distance between the particles. Remember that with an inverse-square law, the force drops off with the square of the increased distance. For example, if the distance is doubled, the force falls to ¼ of its previous value.

Calculating gravitational PE & the units of potential energy

Gravitational potential energy (PE) is easy to calculate, and we can use it to get an idea of the usefulness of PE. It turns out that the potential energy gained by raising a mass, m, to a height h is exactly equal to the amount of work needed to lift it there, which is the force (mg, where g = 9.8 m·s⁻² is the acceleration of gravity) multiplied by the distance moved (h). $$PE = m g h$$

Units of PE, KE, work

The unit of potential energy (PE), kinetic energy (KE) or work is the Joule. $$ 1 \: \text{Joule (J)} = 1 \: \frac{Kg \cdot m^2}{s^2}$$

How much potential energy does a 46 g golf ball have if elevated to the level of the observation deck (381 m) of the Empire State Building? Given that the kinetic energy of the ball will be $KE = \frac{1}{2} mv^2$ just before it hits the street after being dropped, how fast will the ball be falling, and is this reasonable? Solution: First, it's easy to calculate the gravitational potential energy of the ball: $$ \begin{align} PE &= mgh \\[5pt] &= (0.046 \, Kg)\left( 9.81 \, \frac{m}{s^2} \right) (381 \, m) \\[5pt] &= 172 \, J \end{align}$$ Now energy is conserved, so we expect that all of this potential energy will be converted to kinetic energy by the time the ball is at street level. To calculate that velocity, first rearrange $KE = \frac{1}{2} mv^2$: $$ \begin{align} 2 \, KE &= mv^2 \: \: \color{#E90F89}{\leftarrow \: \text{multiply by 2}} \\[5pt] \frac{2KE}{m} &= v^2 \: \: \color{#E90F89}{\leftarrow \: \text{divide by m}} \\[5pt] v &= \sqrt{\frac{2 \, KE}{m}} \: \: \color{#E90F89}{\leftarrow \: \text{square root}} \\[5pt] \end{align}$$ Now we have $$v = \sqrt{\frac{2 (172 \, J)}{0.046 \, Kg}} = 86 \, \text{m/s}$$ Well, that's a very fast speed – about 192 mi./h. That's not too likely. (A quick numerical check of these numbers is sketched below.)
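Here is that quick check, a few lines that simply reproduce the golf-ball numbers from the worked solution above (the constants are the same ones used there):

```python
import math

m = 0.046      # kg, golf ball
g = 9.81       # m/s^2, acceleration of gravity
h = 381.0      # m, Empire State Building observation deck

pe = m * g * h                  # potential energy at the top
v = math.sqrt(2 * pe / m)       # speed if all of that PE becomes KE

print(round(pe), "J")           # -> about 172 J
print(round(v), "m/s")          # -> about 86 m/s
print(round(v * 2.237), "mi/h") # roughly the ~192 mi/h quoted above (1 m/s ≈ 2.237 mi/h)
```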
The ball would encounter increasing air resistance on its way down as it banged into air molecules. These collisions would slow it down, eventually resulting in a steady velocity ("terminal velocity") of about 32 m/s.

A certain recurve bow takes 80 lbs. of force to pull an arrow back by about 70 cm. How much potential energy will be stored in the bow before the arrow is shot? (1 lb. = 4.448 Newtons). Solution: First, recall that pounds is not a unit of mass, it is a unit of force, so converting to Newtons of force makes sense. We know that the potential energy stored in the bow will be equal to the work done on it, which is: $$ \begin{align} \require{cancel} w = PE &= F·d \\[5pt] &= 80 \cancel{lbs.} \left( \frac{4.448 \, N}{1 \, \cancel{lb}} \right)(0.70 \, m) \\[5pt] &= 249 \, J \end{align}$$ Work is done on the bow to store potential energy in its flexible arms. When the arrow is released, that PE is converted into kinetic energy of the bow arms and string, which, in turn, propels the arrow forward. Most of the PE ends up in the arrow, propelling it with high velocity. Like in example 1, you could calculate an upper limit on the speed of the arrow. Give it a try; assume that a typical target-practice arrow weighs 19 g. Answer: $ v \approx 162 \, m/s$

How much potential energy does a 50 Kg cart have (relative to being on level ground) after having been pushed up a 5˚ ramp to a height of 5 m? Would there be any difference in PE gained if the cart was simply lifted to a height of 5 m? Solution: The PE is $$ \begin{align} PE &= mgh \\[5pt] &= (50 \, Kg)\left( 9.81 \frac{m}{s^2} \right)(5 \, m) \\[5pt] &= 2,452 \, J \: \text{ or } \: 2.45 \, kJ. \end{align}$$ The potential energy would be the same if the cart was just lifted straight up to a height of 5 m. Gravitational PE is what is known as a "state function": it depends only on the initial and final conditions (here the initial and final heights), not on how the system got from one to the other. The work done against gravity is likewise path-independent, because gravity is a conservative force.

Force & potential energy and calculus

Force and potential energy (the potential energy function) are related through the derivative in calculus. Force is (minus) the slope of a one-dimensional potential function (like a mass on a spring). Force can also be a slope in any direction or a gradient on a multi-dimensional potential energy surface. In the one-dimensional case, $$F(x) = -\frac{d}{dx} V(x),$$ where $V(x)$ is the potential energy function (often just called the "potential function" or "the potential"). Likewise, the total work done against a force F in moving an object from a to b (which is the potential energy gained) is the negative of the integral of the force over the distance: $$w = \Delta PE = -\int_a^b \, F(x) \, dx$$

Example: Spring potential

The simplest potential function representing the stretching and compressing of a spring is the harmonic potential, $$V(x) = \frac{1}{2} kx^2,$$ where k, the spring constant, is a characteristic of the spring, its composition and how it is formed. The potential is just a parabola like this one, in which k has been set to 1. Under this potential, the restoring force of the spring is negative when the spring is stretched and positive when it is compressed, by convention. The magnitude of the force increases linearly with the amount of stretching or compressing, while the stored energy increases quadratically.
We calculate the force at any point by taking (minus) the derivative of the potential at that point: $$F(x) = -\frac{d}{dx} V(x) = -kx$$ The red arrows in the potential plot are tangents at three select points showing negative and positive restoring forces, and the horizontal tangent at the natural length of the spring, where there is no restoring force. xaktly.com by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012-2014, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to [email protected].
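To round off the spring example above, here is a minimal numerical sketch (with the illustrative choice k = 1 used in the plot) confirming that minus the slope of $V(x) = \frac{1}{2}kx^2$ reproduces the restoring force $F = -kx$:

```python
import numpy as np

k = 1.0                            # illustrative spring constant, as in the plot above

def V(x):
    return 0.5 * k * x**2          # harmonic (spring) potential

def F_exact(x):
    return -k * x                  # restoring force: negative when stretched (x > 0)

# Numerical derivative of the potential: F = -dV/dx
x = np.linspace(-2, 2, 401)
F_numeric = -np.gradient(V(x), x)

print(np.max(np.abs(F_numeric - F_exact(x))))   # tiny, apart from small endpoint error
print(F_exact(1.0), F_exact(-1.0), F_exact(0))  # -1.0 (stretched), 1.0 (compressed), 0.0
```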
CommonCrawl
Pullback attractors of FitzHugh-Nagumo system on the time-varying domains DCDS-B Home Global boundedness in higher dimensions for a fully parabolic chemotaxis system with singular sensitivity December 2017, 22(10): 3671-3689. doi: 10.3934/dcdsb.2017148 Area preserving geodesic curvature driven flow of closed curves on a surface Miroslav KolÁŘ 1,, , Michal BeneŠ 1, and Daniel ŠevČoviČ 2, Department of Mathematics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Trojanova 13, Prague, 12000, Czech Republic Department of Applied Mathematics and Statistics, Faculty of Mathematics, Physics and Informatics, Comenius University, Mlynská Dolina, 842 48, Bratislava, Slovakia * Corresponding author: Miroslav Kolář Received December 2016 Revised March 2017 Published April 2017 Fund Project: The first author is supported by the grant No. 14-36566G of the Czech Science Foundation and by the grant No. 15-27178A of Ministry of Health of the Czech Republic. Full Text(HTML) Figure(6) / Table(5) We investigate a non-local geometric flow preserving surface area enclosed by a curve on a given surface evolved in the normal direction by the geodesic curvature and the external force. We show how such a flow of surface curves can be projected into a flow of planar curves with the non-local normal velocity. We prove that the surface area preserving flow decreases the length of the evolved surface curves. Local existence and continuation of classical smooth solutions to the governing system of partial differential equations is analysed as well. Furthermore, we propose a numerical method of flowing finite volume for spatial discretization in combination with the Runge-Kutta method for solving the resulting system. Several computational examples demonstrate variety of evolution of surface curves and the order of convergence. Keywords: Geodesic curvature driven flow, surface area preserving flow, Hölder smooth solutions, flowing finite volume method. Mathematics Subject Classification: Primary:35K57, 35K65, 65N40, 65M08;Secondary:53C80. Citation: Miroslav KolÁŘ, Michal BeneŠ, Daniel ŠevČoviČ. Area preserving geodesic curvature driven flow of closed curves on a surface. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10) : 3671-3689. doi: 10.3934/dcdsb.2017148 S. Allen and J. Cahn, A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening, Acta Metall., 27 (1979), 1085-1095. doi: 10.1016/0001-6160(79)90196-2. Google Scholar S. Angenent, Nonlinear analytic semiflows, Proc. R. Soc. Edinb., Sect. A, 115 (1990), 91-107. doi: 10.1017/S0308210500024598. Google Scholar M. Beneš, Diffuse-interface treatment of the anisotropic mean-curvature flow, Appl. Math., 48 (2003), 437-453. doi: 10.1023/B:APOM.0000024485.24886.b9. Google Scholar M. Beneš, M. Kimura, P. Pauš, D. Ševčovič, T. Tsujikawa and S. Yazaki, Application of a curvature adjusted method in image segmentation, Bull. Inst. Math. Acad. Sinica (N. S.), 3 (2008), 509-523. Google Scholar M. Beneš, J. Kratochvíl, J. Křišt'an, V. Minárik and P. Pauš, A parametric simulation method for discrete dislocation dynamics, Eur. Phys. J. ST, 177 (2009), 177-192. Google Scholar M. Beneš, S. Yazaki and M. Kimura, Computational studies of non-local anisotropic Allen-Cahn equation, Math. Bohemica, 136 (2011), 429-437. Google Scholar L. Bronsard and B. Stoth, Volume-preserving mean curvature flow as a limit of a nonlocal Ginzburg-Landau equation, SIAM J. Math. Anal., 28 (1997), 769-807. 
doi: 10.1137/S0036141094279279. Google Scholar J. W. Cahn and J. E. Hilliard, Free energy of a nonuniform system. Ⅲ. Nucleation of a two-component incompressible fluid, J. Chem. Phys., 31 (1959), 688-699. doi: 10.1002/9781118788295.ch5. Google Scholar M. C. Dallaston and S. W. McCue, A curve shortening flow rule for closed embedded plane curves with a prescribed rate of change in enclosed area Proc. R. Soc. A 472 (2016), 20150629, 15 pp. doi: 10.1098/rspa.2015.0629. Google Scholar K. Deckelnick, Parametric mean curvature evolution with a Dirichlet boundary condition, J. Reine Angew. Math., 459 (1995), 37-60. doi: 10.1515/crll.1995.459.37. Google Scholar I. C. Dolcetta, S. F. Vita and R. March, Area preserving curve shortening flows: From phase separation to image processing, Interfaces Free Bound., 4 (2002), 325-343. doi: 10.4171/IFB/64. Google Scholar J. Escher and G. Simonett, The volume preserving mean curvature flow near spheres, Proc. Amer. Math. Soc., 126 (1998), 2789-2796. doi: 10.1090/S0002-9939-98-04727-3. Google Scholar S. Esedoḡlu, S. Ruuth and R. Tsai, Threshold dynamics for high order geometric motions, Interfaces Free Bound., 10 (2008), 263-282. doi: 10.4171/IFB/189. Google Scholar M. Gage, On an area-preserving evolution equation for plane curves, Contemp. Math., 51 (1986), 51-62. doi: 10.1090/conm/051/848933. Google Scholar M. Henry, D. Hilhorst and M. Mimura, A reaction-diffusion approximation to an area preserving mean curvature flow coupled with a bulk equation, Discrete Contin. Dyn. Syst. Ser. S, 4 (2011), 125-154. doi: 10.3934/dcdss.2011.4.125. Google Scholar M. Kolář, M. Beneš and D. Ševčovič, Computational analysis of the conserved curvature driven flow for open curves in the plane, Math. Comput. Simulation, 126 (2016), 1-13. Google Scholar C. Kublik, S. Esedoḡlu and J. A. Fessler, Algorithms for area preserving flows, SIAM J. Sci. Comput., 33 (2011), 2382-2401. doi: 10.1137/100815542. Google Scholar A. Lunardi, Abstract quasilinear parabolic equations, Math. Ann., 267 (1984), 395-416. doi: 10.1007/BF01456097. Google Scholar [19] I. V. Markov, Crystal Growth for Beginners: Fundamentals of Nucleation, Crystal Growth, and Epitaxy, 2 edition, World Scientific Publishing Company, 2004. Google Scholar J. McCoy, The surface area preserving mean curvature flow, Asian J. Math., 7 (2003), 7-30. doi: 10.4310/AJM.2003.v7.n1.a2. Google Scholar K. Mikula and D. Ševčovič, Evolution of plane curves driven by a nonlinear function of curvature and anisotropy, SIAM J. Appl. Math., 61 (2001), 1473-1501. doi: 10.1137/S0036139999359288. Google Scholar K. Mikula and D. Ševčovič, Computational and qualitative aspects of evolution of curves driven by curvature and external force, Comput. Vis. Sci., 6 (2004), 211-225. doi: 10.1007/s00791-004-0131-6. Google Scholar K. Mikula and D. Ševčovič, A direct method for solving an anisotropic mean curvature flow of plane curves with an external force, Math. Methods Appl. Sci., 27 (2004), 1545-1565. doi: 10.1002/mma.514. Google Scholar P. Pauš, M. Beneš, M. Kolář and J. Kratochvíl, Dynamics of dislocations described as evolving curves interacting with obstacles, Model. Simul. Mater. Sc., 24 (2016), 035003. Google Scholar J. Rubinstein and P. Sternberg, Nonlocal reaction-diffusion equations and nucleation, IMA J. Appl. Math., 48 (1992), 249-264. doi: 10.1093/imamat/48.3.249. Google Scholar D. Ševčovič, Qualitative and quantitative aspects of curvature driven flows of planar curves, in Topics on Partial Differential Equations, Jindřich Nečas Cent. Math. 
Model. Lect. Notes, 2, Matfyzpress, Prague, 2007, 55-119. Google Scholar D. Ševčovič and S. Yazaki, Evolution of plane curves with a curvature adjusted tangential velocity, Japan. J. Ind. Appl. Math., 28 (2011), 413-442. doi: 10.1007/s13160-011-0046-9. Google Scholar D. Ševčovič and S. Yazaki, Computational and qualitative aspects of motion of plane curves with a curvature adjusted tangential velocity, Math. Methods Appl. Sci., 35 (2012), 1784-1798. doi: 10.1002/mma.2554. Google Scholar S. Yazaki, On the tangential velocity arising in a crystalline approximation of evolving plane curves, Kybernetika, 43 (2007), 913-918. Google Scholar Figure 1. Illustration of a curve $\mathcal{G}_t$ on a given surface $\mathcal{M}$ and its projection $\Gamma_t$ to plane Figure Options Download as PowerPoint slide Figure 2. Discretization of a segment of a curve by flowing finite volumes Figure 3. Left: the initial curve $\mathcal{G}_{ini}$ (dashed) and the final curve $\mathcal{G}_T$ at $T = 10$ (solid) and several intermediate curves $\mathcal{G}_t$ (dotted). The underlying surface $\mathcal{M}$ is plotted in gray color. Right: time evolution of the projected planar curves $\Gamma_t$ (see Example 1) Figure 4. Left: the initial curve $\mathcal{G}_{ini}$ (dashed) and the final curve $\mathcal{G}_T$ at $T = 30$ (solid). The underlying surface $\mathcal{M}$ is plotted in gray color. Right: time evolution of the projected planar curves $\Gamma_t$ (see Example 2) Figure 5. Left: the initial curve $\mathcal{G}_{ini}$ (dashed) and the final curve $\mathcal{G}_T$ at $T = 8$ (solid) are presented. The surface $\mathcal{M}$ is plotted in gray color. Right: time evolution of the projected planar curves $\Gamma_t$ (see Example 3) Figure 6. Left: the initial curve $\mathcal{G}_{ini}$ (dashed) and the final curve $\mathcal{G}_T$ at $T = 15$ (solid) are shown. The surface $\mathcal{M}$ is plotted in gray. Right: Time evolution of the projected planar curves $\Gamma_t$ (see Example 4) Table 1. Settings of computational examples Ex. $\mathbf{X}_{ini}, u \in [0,1]$ $\varphi$ 1 $\mathbf{X}_{ini} = (\frac14 + r(u) \cos(2 \pi u), -\frac14 + r(u) \sin(2 \pi u))^T$ $\varphi(x,y) = \sqrt{4 - x^2 - y^2}$ 2 $\mathbf{X}_{ini} = (\cos(2 \pi u), \frac1{10} + \sin(2 \pi u))^T$ $\varphi(x,y) = y^2$ 3 $\mathbf{X}_{ini} = (\cos(2 \pi u), \frac15 + \sin(2 \pi u))^T$ $\varphi(x,y) = \sin(\pi y)$ 4 $\mathbf{X}_{ini} = (\frac12 \cos(2 \pi u), \sin(2 \pi u))^T$ $\varphi(x,y) = x^2 - y^4$ Table 2. Table of EOCs for Example 1 $M$ $error_{max}$ EOC $error_{L1}$ EOC 100 $3.2397 \cdot 10^{-2}$ - $3.2516 \cdot 10^{-2}$ - 200 $8.2467 \cdot 10^{-3}$ 1.9740 8.2767 $\cdot 10^{-3}$ 1.9740 200 $3.7049 \cdot 10^{-4}$ 1.9993 $3.7092 \cdot 10^{-4}$ 2.0002 Dimitra Antonopoulou, Georgia Karali. A nonlinear partial differential equation for the volume preserving mean curvature flow. Networks & Heterogeneous Media, 2013, 8 (1) : 9-22. doi: 10.3934/nhm.2013.8.9 Marie Henry, Danielle Hilhorst, Masayasu Mimura. A reaction-diffusion approximation to an area preserving mean curvature flow coupled with a bulk equation. Discrete & Continuous Dynamical Systems - S, 2011, 4 (1) : 125-154. doi: 10.3934/dcdss.2011.4.125 Jun Li, Qi Wang. Flow driven dynamics of sheared flowing polymer-particulate nanocomposites. Discrete & Continuous Dynamical Systems, 2010, 26 (4) : 1359-1382. doi: 10.3934/dcds.2010.26.1359 Bendong Lou. Spiral rotating waves of a geodesic curvature flow on the unit sphere. Discrete & Continuous Dynamical Systems - B, 2012, 17 (3) : 933-942. 
CommonCrawl
A Printed Organic Circuit System for Wearable Amperometric Electrochemical Sensors Rei Shiwaku1, Hiroyuki Matsui1, Kuniaki Nagamine1, Mayu Uematsu1, Taisei Mano1, Yuki Maruyama1, Ayako Nomura1, Kazuhiko Tsuchiya1, Kazuma Hayasaka1, Yasunori Takeda ORCID: orcid.org/0000-0001-6976-27491, Takashi Fukuda2, Daisuke Kumaki1 & Shizuo Tokito1 Scientific Reports volume 8, Article number: 6368 (2018) Cite this article Wearable sensor device technologies, which enable continuous monitoring of biological information from the human body, are promising in the fields of sports, healthcare, and medical applications. Further thinness, light weight, flexibility and low-cost are significant requirements for making the devices attachable onto human tissues or clothes like a patch. Here we demonstrate a flexible and printed circuit system consisting of an enzyme-based amperometric sensor, feedback control and amplification circuits based on organic thin-film transistors. The feedback control and amplification circuits based on pseudo-CMOS inverters were successfuly integrated by printing methods on a plastic film. This simple system worked very well like a potentiostat for electrochemical measurements, and enabled the quantitative and real-time measurement of lactate concentration with high sensitivity of 1 V/mM and a short response time of a hundred seconds. Wearable sensor devices enabling continuous real-time monitoring and analysis of biological information from the human body are promising in the fields of sports, healthcare, and medical applications1,2,3. A variety of biomarkers in body fluids, such as perspiration and saliva, can be continuously detected by a wearable electrochemical sensing system, which enables in situ analysis of physiological signals4,5. An enzyme-based amperometric sensor is one of the most important electrochemical sensors owing to the high selectivity and the connectivity to information technologies. So far, the quantitative measurement of metabolites in human fluids, including lactate6, glucose7, and uric acid8, have been carried out using enzyme-based amperometric sensors and conventional potentiostat9 systems. A potentiostat is the electronic hardware which controls the three-electrode cell for electrochemical experiments and possesses two functions: (1) maintaining the potential of the working electrode (WE) at a constant level with respect to the reference electrode (RE) by adjusting the current at a counter electrode (CE), and (2) converting the current at the working electrode to the voltage by a transimpedance amplifier with a high gain. Hence, both the enzyme-based amperometric sensors and the potentiostat need to be integrated in the wearable devices. Organic thin-film transistors (OTFTs) have potential for realizing ultra-thin, lightweight10, and flexible11 circuit components of the potentiostat for wearable sensor devices owing to their advantages such as the small Young's modulus, biocompatibility, and the processability of direct printing onto plastic films. Printability is an attraction of OTFTs because organic materials can be dissolved in organic solvents, which enables roll-to-roll manufacture of large-area devices on flexible substrates12,13. So far, OTFTs have been utilized as amplifiers for potentiometric electrochemical sensors, which is called extended-gate type OTFTs14. 
Although the potentiometric measurement is applicable to enzymatic sensors, it exhibits an irreversible and slow response in several minutes, which is not suitable to real-time sensing with wearable devices. On the other hand, the amperometric measurement based on enzymatic sensors exhibits a reversible and fast response in several tens of seconds4. OTFTs have never been utilized for amperometric sensors since the three-electrode cell requires integrated circuits rather than the simple extended-gate type OTFTs. Here we demonstrate a novel flexible and printed organic circuit system for wearable amperometric electrochemical sensors, implemented with two OTFT-based negative-feedback inverters. The inverters employed pseudo-CMOS design for obtaining rail-to-rail operation and low output impedance, and consisted of the OTFTs based on a blend of a small molecular p-type semiconductor and polystyrene (PS) for the active layer. The first inverter was utilized to maintain the potential at the working electrode (WE) at a constant level with respect to the reference electrode (RE). The second inverter was utilized to convert the current at the WE into voltage with a tunable gain of 106–107 V/A. A lactate sensor with a lactate oxidase membrane was used as the WE. Real-time, quantitative measurement of lactate concentration was successfully demonstrated by the developed system, showing the response time of a hundred seconds and the sensitivity of 1 V/mM in a lactate concentration range of 0–0.5 mM. Three-Electrode Circuit System for Amperometric Analysis Figure 1 displays the configuration of the developed system for amperometric sensing, based on three components: a three-electrode electrochemical cell, a feedback control unit, and a detection unit. The lactate sensor in the three-electrode cell has an immobilized lactate oxidase on the electrode for selective detection of lactate in body fluids. The current that is generated from the lactate sensor electrode (working electrode: WE) by enzyme reaction varies linearly with the concentration of lactate in the cell (at low concentration region), according to the Michaelis-Menten equation. The detection unit, which comprises an inverter and a resistor, converts current (IIN) to voltage (VOUT) by a transimpedance amplifier with predefined gain. Assuming that the open-loop gain of the inverter is high enough, VOUT is given by $${V}_{{\rm{O}}{\rm{U}}{\rm{T}}}={V}_{{\rm{M}}}-R{I}_{{\rm{I}}{\rm{N}}}.$$ A three-electrode circuit system for wearable amperometric electrochemical sensors. The system is composed of a feedback control unit, a detection unit, and a three-electrode cell with an amperometric electrochemical sensor (e.g. lactate sensor) as a working electrode. Here VM is the switching voltage of the inverter. In the detection unit, the potential of the input terminal, namely the WE, is kept at VM by low input impedance. The feedback control unit comprises an inverter only, and is used to keep the potential of the reference electrode (RE) at VM. This unit ensures, at the same time, that there is no current flow through the RE, which is necessary for the RE to function as a potential standard properly. Either CMOS, PMOS or NMOS inverters can be used for this system as long as they exhibit high open-loop gain, small variation in VM, and low output impedance. Fabrication and Characterization of Lactate Sensor The photograph and schematic structure of the lactate-sensitive working electrode is shown in Fig. 2a,b. 
Prussian blue (PB)15 was chosen as a mediator, and a PB-carbon graphite composite paste16 was coated onto the inkjet-printed Ag electrode surface. A fluoropolymer bank layer was formed at a periphery of the PB-carbon to define the sensing area. The interconnection part of the Ag film was also encapsulated by a fluoropolymer for prevention of Ag/liquid contact17. Finally, a blend of lactate oxidase (LOX) and chitosan for the enzyme immobilization was coated onto the mediator layer18. Figure 2c represents the principle of lactate sensing. The immobilized LOX selectively oxidizes lactate, and generates pyruvate and H2O2. The H2O2 then oxidize the PB from the reduced state (PBred) into the oxidized state (PBox). PBox accepts an electron from carbon graphite and return to PBred. These reactions continuously occur under the presence of lactate, and the electric current flows in the direction from the electrolyte to the Ag electrode. Figure 2e shows the cyclic voltammogram of the lactate sensor electrode in a three-electrode cell (Fig. 2d) with a commercial potentiostat. The peak of reduction current was observed at a potential of +0.07 V vs. Ag/AgCl reference, approximately corresponding to the values from previous reports19. Figure 2f shows the amperometric responses at a potential of 0 V vs. Ag/AgCl. When concentrated lactate solution was added into the cell, the current changed stepwise in several tens of seconds. The current amplitude was proportional to the lactate concentration with the sensitivity of 2 µA/mM in the range of 0–1 mM (inset in Fig. 2f). The average sensitivity of 5 samples was 1.9 ± 0.2 µA/mM (maximum: 2.2 µA/mM, minimum: 1.7 µA/mM). The sensitivity was stable under five repetitive measurements (see Supplementary Figure S1). Structure and electrochemical characteristics of the lactate sensor. (a) Photograph of the fabricated lactate sensor electrode. The sensing area was 15 mm2. (b) Schematic diagram of the lactate sensor electrode. (c) The principle of lactate sensing. (d) Schematic representation of amperometric measurement with a commercial potentiostat. (e) Cyclic voltammogram of the lactate sensor in phosphate-buffered saline (PBS). Scan rate was 20 mV/s. (f) The amperometric responses of the lactate sensor. Potential of the lactate sensor electrode was set to 0 V vs. Ag/AgCl reference electrode. Structure and Electrical Properties of OTFT Devices Figure 3a displays the photograph and schematic illustration of the fabricated OTFT devices. All layers except for the gate dielectric were formed by printing processes at process temperatures below 120 °C. The electrodes were fabricated by inkjet printing of a silver nanoparticle ink. A fluoropolymer bank layer, semiconducting layer, and encapsulation layer were printed by a dispenser equipment. In the same way as the lactate sensor electrodes, the OTFT devices were fabricated on flexible poly(ethylene naphthalate) (PEN) films owing to the low process temperature. A fluoropolymer bank layer was used to define the channel width precisely and also to control the crystal growth of the organic semiconductor, which leads to the uniform morphology20. A blend solution of 2,7-dihexyl-dithieno[2,3-d;2′,3′-d′]benzo[1,2-b;4,5-b′]dithiophene (DTBDT-C6)21 and polystyrene (PS) was used as the active layer to obtain high mobility and uniform electrical performances22,23. Figure 3b shows the transfer characteristics of the OTFT in a saturation regime. 
The mobility of 1.3 cm2/Vs, threshold voltage of −0.25 V, and subthreshold slope of 100 mV/dec were obtained at a low supply voltage of −4 V. According to the output curve in the linear regime shown in Fig. 3c, the semiconductor/electrode contact was ohmic rather than schottky. Structure and electrical properties of the printed organic semiconductor devices. (a) Photograph (top) and schematic structure (bottom) of the OTFTs. (b) Transfer curves and (c) output curves of the OTFT. (d) Circuit diagram and (e) optical microscope image of the pseudo-CMOS inverter. (f) Static input-output characteristics of the inverter. Output voltage (VOUT) and small-signal gain (|dVOUT/dVIN|) as a function of input voltage (VIN). (g) Circuit diagram of the current-to-voltage converter. (h) VOUT and (i) VIN of the current-to-voltage converter as a function of IIN. Value of resistance was set to 1–10 MΩ. Figure 3d,e shows the schematic structure and optical microscope image of the inverter circuit with pseudo-CMOS design configurable with p-type TFTs only24. The pseudo-CMOS inverter comprises a depletion-load inverter25 and an output stage for rail-to-rail output and low output impedance. Figure 3f shows the static input-output characteristics of the inverter. Two supply voltages of VC = 3 V and VDD = 4 V were applied to adjust the switching voltage of the inverter (VM). Finally, VM of 2.6 V and open-loop gain of 50 were obtained. Figure 3g represents a transimpedance amplifier (I-V converter) based on a negative feedback inverter. Assuming that the input-output characteristics of the inverter in the vicinity of VM is expressed as VOUT = VM − Aopen(VIN − VM), the relation between VOUT and IIN is represented by the following equation: $${V}_{{\rm{O}}{\rm{U}}{\rm{T}}}={V}_{{\rm{M}}}-\frac{R}{1+\frac{1}{{A}_{{\rm{o}}{\rm{p}}{\rm{e}}{\rm{n}}}}}{I}_{{\rm{I}}{\rm{N}}}.$$ (For details, see Supplementary Figure S2). Consequently, transimpedance gain is given by: $$\frac{d{V}_{{\rm{O}}{\rm{U}}{\rm{T}}}}{d{I}_{{\rm{I}}{\rm{N}}}}=-\frac{R}{1+\frac{1}{{A}_{{\rm{o}}{\rm{p}}{\rm{e}}{\rm{n}}}}},$$ where Aopen is the open-loop gain of the inverter. In the case of a sufficiently high open-loop gain, Aopen >> 1, the transimpedance gain is given by −R. The gain in this work was deviated from −R by 2%. The gain of 106–107 V/A was obtained at R of 1–10 M Ω (Fig. 3h). The voltage at the input terminal, VIN, was maintained at VM in the linear regime of VOUT, which indicates low input impedance of the transimpedance amplifier (Fig. 3i). For instance, VIN was stable against IIN of ±1 µA at R of 1 MΩ. For a wide range of IIN, reduction of the output impedance of the inverter should be required. Output impedance of the inverter was 0.1 and 0.3 MΩ at forward and reverse current, respectively. Application of The Organic Circuit System to Lactate Sensor The amperometric sensing system was demonstrated using the two pseudo-CMOS inverters on the same substrate, shown in Fig. 4a,b. Figure 4c shows VOUT, potential of the RE (VRE), potential of the WE (VWE), and IIN estimated by Eq. 2. Responding to the addition of lactate into the cell, VOUT changed stepwise in a hundred seconds at lactate concentration of 0–0.5 mM. The obtained sensitivity after the transimpedance amplification was as high as 1 V/mM. 
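As a rough illustration of Eqs. (2) and (3), the following sketch evaluates the negative-feedback transimpedance stage with the values reported above (VM ≈ 2.6 V, open-loop gain ≈ 50, R = 1–10 MΩ). It is a simplified numerical model of the circuit behaviour, not the authors' design, and the function names are ours.

```python
def transimpedance_out(i_in, r=1e6, v_m=2.6, a_open=50.0):
    """Output voltage of the negative-feedback inverter (Eq. 2 in the text)."""
    return v_m - (r / (1.0 + 1.0 / a_open)) * i_in

def gain(r=1e6, a_open=50.0):
    """Transimpedance gain dV_OUT/dI_IN (Eq. 3): approaches -R for large open-loop gain."""
    return -r / (1.0 + 1.0 / a_open)

# With A_open = 50 the gain deviates from -R by about 2%, as stated above.
print(gain(1e6) / -1e6)           # -> ~0.98

print(transimpedance_out(1e-6))   # 1 uA input at R = 1 MOhm: output ~1 V below V_M
print(transimpedance_out(0.0))    # no input current: output sits at V_M = 2.6 V
```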
The high output voltage of several hundreds of mV can be easily read by analog-to-digital converters for further data processing, logging and wireless communication in near future26 Regardless of the concentration of lactate, the VRE and VWE were maintained at 2.74 V and 2.87 V (close to VM of the inverters), respectively. During the measurement period of 2000 seconds, the variations in each voltage were less than 0.01 V. The stability of the VRE and VWE indicates that the feedback control unit is working very well in this system. Quantitative measurement of lactate concentration in the three-electrode electrochemical cell using the printed organic circuit system. (a) Optical microscope image of the inverter pair. (b) Circuit diagram of the system. Control voltage (VC) and supply voltage (VDD) of both feedback control inverter and detection inverter was set to 3 V and 4 V, respectively. (c) Output voltage (VOUT), potential of the working electrode (VWE) and reference electrode (VRE), and estimated input current (IIN), obtained from the organic circuit system. Concentrated lactate solution was added every 300 seconds. (d) Amperometric responses from a commercial potentiostat. The potential of the working electrode was set to 0.13 V vs. Ag/AgCl. (e) Comparison of the absolute values of the change of current (|ΔI|) from a commercial potentiostat and the organic circuit system. (f) Comparison of VOUT from the organic circuit system at VWE of 0 and 0.13 V vs. VRE. The concentration of lactate was changed from 0 to 0.1 mM. The green arrow means the timing of dropping the concentrated lactate solution into the cell. At last, we discuss the influence of the voltage difference, VWE − VRE = 0.13 V, which was equivalent to the difference in VM of the two inverters (see Supplementary Fig. S3). According to the estimated IIN, the sensitivity of the developed lactate sensing system before the transimpedance amplification was 1 µA/mM, which was approximately half of that of a commercial potentiostat in Fig. 2f. In addition, the response time was two or three times longer than that with a potentiostat. The deteriorations in the sensitivity in current and response time were attributed to the difference in VM of the inverters. The VWE should be close to or less than VRE for the rapid redox reaction of PB according to the cyclic voltammogram of the lactate sensor electrode (Fig. 2e). The present difference of VWE − VRE = 0.13 V was higher than the reduction potential of 0.07 V of the PB mediator. This is because the difference caused between the cases with a commercial potentiostat and the developed organic circuit system. To check the consistency of the results in the two cases, the amperometric responses of the lactate sensor were measured at a potential of 0.13 V vs. Ag/AgCl using a commercial potentiostat (Fig. 4d). The sensitivity of 0.85 µA/mM was obtained from a commercial potentiostat (at a potential = 0.13 V vs. Ag/AgCl), which was approximately equivalent to that from the developed organic circuit system, as shown in Fig. 4e. In addition, when the mismatch between VWE and VRE in the developed system was elaboratively reduced to zero by tuning VC of the inverter of the feedback control unit (see Supplementary Figure S4), the sensitivity of the lactate sensor was improved to the same level in Fig. 2f (at a potential = 0 V vs. Ag/AgCl) as shown in Figure 4f. It also improved the response time from a hundred seconds to several tens of seconds. 
These results indicate that the variation in VM of the inverters should be minimized for further improving of the reproducibility, accuracy, and response time of the amperometric sensing system. However, a major issue in printed OTFTs is the relatively large variations in their electrical properties12,13. Towards reducing variations in device performances, the formation process for organic semiconducting layers should be reconsidered along with that for uniform channel dimensions. Thus far, by employing the inkjet-printed electrodes and dispenser-printed banks (Fig. 3a), the standard deviation of the channel width and length were ±8 µm and ±2 µm, respectively. Furthermore, as a result of (1) controlling the DTBDT-C6 crystal growth direction20, (2) blending DTBDT-C6 and polystyrene22, and (3) annealing of the semiconducting layers, the standard deviation of the threshold voltage of the OTFTs was less than 0.03 V23. These methods mentioned above were also significant for improving the mobility or subthreshold slope (see supplementary Figures S5–S8). By employing these methods, the maximum variation in VM of the inverters in this work was 0.15 V23, which is relatively small in the field of printed electronics. Nevertheless, further improvement of uniformity in VM is still required. If the difference in VM can be reduced to less than several tens of mV, the organic semiconductor devices will be acceptable for their potential applications to amperometric electrochemical sensing. In conclusion, we have developed a novel flexible and printed organic circuit system based on two negative-feedback inverters for wearable amperometric sensors with a three-electrode cell. The inverter was a pseudo-CMOS design for rail-to-rail operation and low output impedance, and consisted of only p-channel OTFTs employing a blend of a small molecular semiconductor, 2,7-dihexyl-dithieno[2,3-d;2′,3′-d′]benzo[1,2-b;4,5-b′]dithiophene, and polystyrene for the active layer as previously reported. The transimpedance amplifier based on the inverter with negative feedback exhibited a high linearity and tunable gain of 106–107 V/A. Input voltage was maintained at switching voltage of the inverter (VM) in the linear regime of output voltage. For a wide range of input current, reduction of the output impedance of the inverter should be required. A lactate sensor with an enzymatic membrane was used as the amperometric sensor. Quantitative measurement of lactate was successfully demonstrated by the developed system, showing the response time of a hundred seconds and the sensitivity of 1 V/mM at lactate concentration of 0–0.5 mM. The reduction of the variation in VM to several tens of mV was found to be a significant requirement for maximizing the performance (sensitivity and response time) of the amperometric sensor with a prussian blue mediator, which is still an open question. Satisfying these requirements, namely the reduction of output impedance and VM variation, allow organic semiconductor devices to have potential for realizing extremely thin and lightweight wearable devices for in situ monitoring of metabolites in body fluids. Materials and Preparation of the Chitosan and Lactate Oxidase Solution Chitosan solution (0.1 wt%, pH 5.4) was prepared by dissolving chitosan (Junsei Chemical) in HCl aqueous solution and stirring for 30 minutes. Lactate oxidase solution (1.0 UN/µl) was prepared by dissolving lactate oxidase (Toyobo, 85.6 UN/mg) in phosphate-buffered saline (Nacalai Tesque, 0.1 M, pH 7.4). 
The solutions were sealed and stored at 4 °C. Fabrication of the Lactate Sensors 125-µm-thick polyethylene naphthalate (PEN) films (Teijin, Teonex) were used as substrates without cleaning process. A 100-nm-thick Ag electrode was formed by inkjet-printing a silver nanoparticle ink (Harima Chemicals, NPS-JL) on the PEN substrates, followed by an annealing process of 120 °C for 30 minutes in the air. Then, a carbon graphite ink including prussian blue (Gwent Group) was coated on the printed Ag electrode, followed by an annealing process of 60 °C for 30 minutes in the air. In order to define the sensing area, a fluoropolymer solution (5 wt%, DuPont, Teflon AF1600) in Fluorinert (3M, FC-43) was printed as a bank onto the substrate except the sensing area by a dispenser, followed by an annealing process of 60 °C for 30 minutes in the air. 10 µl of the chitosan solution and 1.4 µl of the lactate solution was mixed. Then, 10 µl of the mixed solution was drop-casted onto the area defined by the fluoropolymer bank, followed by a drying process of 30 °C for 3 h in the air. The sensor electrodes were dipped in phosphate-buffered saline (PBS) and stored at 4 °C. Fabrication of the Organic Semiconductor Devices The process flow is shown in Supplementary Figure S9. 125-µm-thick polyethylene naphthalate 125-µm-thick polyethylene naphthalate (PEN) films (Teonex, Teijin) were used as substrates without cleaning process. A silver nanoparticle ink in hydrocarbon-based solution (NPS-JL, Harima Chemicals) was printed as gate electrodes using an inkjet printer (Dimatix DMP2831, Fujifilm) with 10 pL nozzles. During the inkjet printing process, the substrates and cartridge were kept at 50 and 35 °C, respectively. The substrates were then heated at 120 °C for 30 minutes in the air to sinter the silver nanoparticles. A 150-nm-thick parylene (KISCO, diX-SR) gate dielectric layer was then formed by chemical vapor deposition. Source and drain electrodes were subsequently printed and sintered in the same manner as the gate electrodes. Fluoropolymer (1 wt%, Teflon AF1600, DuPont) in Fluorinert (FC-43, 3M) bank layers (200 nm thick) were then printed using a dispenser system (Image Master 350 PC, MUSASHI Engineering) at a pattering speed of 20 mm s−1 and with a discharge pressure of 6 kPa. During the dispensing process, the plate and nozzle temperatures were kept at 60 and 30 °C, respectively. To apply the self-assembled monolayer (SAM) treatment to source and drain electrodes, the substrates were immersed in a 3 × 10−2 mol/L 2-propanol solution of pentafluorobenzenethiol (Tokyo Chemical Industry) for 5 minutes at room temperature and rinsed with pure 2-propanol. The SAM treatment changed the work function of the printed silver electrodes from 4.7 to 5.4 eV, which reduces the contact resistance. A solution of DTBDT-C6 (0.9 wt%, Tosoh) and polystyrene (0.3 wt%, MW ≈ 280,000, Sigma-Aldrich) in toluene was then printed onto the area defined by the bank layer by the dispenser system at a patterning speed of 20 mm s−1 and discharge pressure of 1 kPa, while keeping the stage and nozzle temperatures at 30 °C, followed by an anneal at 100 °C in the air for 15 minutes to remove the solvent. Finally, an encapsulation layer of Teflon was printed by the dispenser system at 30 °C, with a pattering speed of 8 mm s−1 with a discharge pressure of 6 kPa. The substrates were stored at room temperature in the air for three hours to remove the solvent. 
Characterization of the Lactate Sensors The amperometric measurements were carried out using an electrochemical analyzer (BAS, model ALS612E). Before the measurement, the sensor electrode was soaked in PBS for at least 1 hour. During the measurements, the subject solution was stirred at 400 rpm. Characterization of the Organic Semiconductor Devices The capacitance of the dielectric was measured using an LCR meter (NF, ZM2376). The electrical characteristics of the OTFTs and inverter circuits were measured using a semiconductor parameter analyzer (Keithley, model 4200A-SCS). All electrical measurements were carried out in the air. Optical microscope images of the devices were obtained using a digital microscope (Keyence, model VHX-5000). Measurement of Lactate with the Organic Circuit System Voltage supplying and measurement was carried out using a semiconductor parameter analyzer (Keithley, model 4200A-SCS). The inverters were connected with a resistance, a reference electrode (Ag/AgCl), counter electrode (Platinum) and a lactate sensor electrode via coaxial cables covered with aluminum foil to reduce noise. The conversion system was biased (VC = 3 V, VDD = 4 V) for 5 minutes before the measurement to stabilize the operation. Kim, D.-H. et al. Epidermal electronics. Science 333, 838 (2011). ADS CAS Article PubMed Google Scholar McAlpine, M. C., Ahmad, H., Wang, D. & Heath, J. R. Highly ordered nanowire arrays on plastic substrates for ultrasensitive flexible chemical sensors. Nat. Mater. 6, 379 (2007). ADS CAS Article PubMed PubMed Central Google Scholar Xu, S. et al. Soft microfluidic assemblies of sensors, circuits, and radios for the skin. Science 344, 70 (2014). Gao, W. et al. Fully integrated wearable sensor arrays for multiplexed in situ perspiration analysis. Nature 529, 509 (2016). Lee, H. et al. Wearable/disposable sweat-based glucose monitoring device with multistage transdermal drug delivery module. Sci. Adv. 3, e1601314 (2017). ADS Article PubMed PubMed Central Google Scholar Kim, J. et al. Non-invasive mouthguard biosensor for continuous salivary monitoring of metabolites. Analyst 139, 1632 (2014). Bandodkar, A. J. et al. Tattoo-based noninvasive glucose monitoring: a proof-of-concept study. Anal. Chem. 87, 394 (2015). Kim, J. et al. Wearable salivary uric acid mouthguard biosensor with integrated wireless electronics. Biosens. Bioelectron. 74, 1061 (2015). Wang, W.-S., Kuo, W.-T., Huang, H.-Y. & Luo, C.-H. Wide dynamic range CMOS potentiostat for amperometric chemical sensor. Sensors 10, 1782 (2010). Kaltenbrunner, M. et al. T. An ultra-lightweight design for imperceptible plastic electronics. Nature 499, 458 (2013). Sekitani, T., Zschieschang, U., Klauk, H. & Someya, T. Flexible organic transistors and circuits with extreme bending stability. Nat. Mater. 9, 1015 (2010). Fukuda, K. et al. Fully-printed high-performance organic thin-film transistors and circuitry on one-micron-thick polymer films. Nat. Commun. 5, 4147 (2014). Pierre, A. et al. All-printed flexible organic transistors enabled by surface tension-guided blade coating. Adv. Mater. 26, 5722 (2014). Minami., T. et al. A novel OFET-based biosensor for the selective and sensitive detection of lactate levels. Biosens. Bioelectron. 74, 45 (2015). Karyakin, A. A. Prussian blue and its analogues: electrochemistry and analytical applications. Electroanalysis 13, 813 (2001). Moscone, D., D'Ottavi, D., Compagnone, D. & Palleschi, G. 
Construction and analytical characterization of prussian blue-based carbon paste electrodes and their assembly as oxidase enzyme sensors. Anal. Chem. 71, 4932 (1999). Hart, A. L., Turner, A. P. F. & Hopcroft, D. On the use of screen- and ink-jet printing to produce amperometric enzyme electrodes for lactate. Biosens. Bioelectron. 11, 263 (1996). Wei, X., Zhang, M. & Gorski, W. Coupling the lactate oxidase to electrodes by ionotropic gelation of biopolymer. Anal. Chem. 75, 2060 (2003). Ricci, F., Amine, A., Palleschi, G. & Moscone, D. Prussian blue based screen printed biosensors with improved characteristics of long-term lifetime and pH stability. Biosens. Bioelectron. 18, 165 (2003). Fukuda, K. et al. Printed organic transistors with uniform electrical performance and their application to amplifiers in biosensors. Adv. Electron. Mater. 1, 1400052 (2015). Gao, P. et al. Dithieno[2,3‐d;2′,3′‐d′]benzo[1,2‐b;4,5‐b′]dithiophene (DTBDT) as semiconductor for high‐performance, solution‐processed organic field‐effect transistors. Adv. Mater. 21, 213 (2009). Shiwaku, R. et al. Printed 2 V-operating organic inverter arrays employing a small-molecule/polymer blend. Sci. Rep. 6, 34723 (2016). Shiwaku, R. et al. Printed organic inverter circuits with ultralow operating voltages. Adv. Electron. Mater. 3, 1600557 (2017). Huang, T.-C. et al. Pseudo-CMOS: a design style for low-cost and robust flexible electronics. IEEE Trans. Electron Devices 58, 141 (2011). Nausieda, I. et al. Dual threshold voltage organic thin-film transistor technology. IEEE Trans. Electron Devices 57, 3027 (2010). Abrar, M. A., Dong, Y., Lee, P. K. & Kim, W. S. Bendable electro-chemical lactate sensor printed with silver nano particles. Sci. Rep. 6, 30565 (2016). This study was partly supported by Center of Innovation (COI) Program, the Japan Science and Technology Agency (JST) and Leading Initiative for Excellent Young Researchers (LEADER) program, the Japan Society for the Promotion of Science (JSPS). We thank Mr. C. Shepherd and Prof. T. Shiba for their technical support and valuable discussions. Research Center for Organic Electronics (ROEL), Yamagata University, 4-3-16 Jonan, Yonezawa, Yamagata, 992-8510, Japan Rei Shiwaku, Hiroyuki Matsui, Kuniaki Nagamine, Mayu Uematsu, Taisei Mano, Yuki Maruyama, Ayako Nomura, Kazuhiko Tsuchiya, Kazuma Hayasaka, Yasunori Takeda, Daisuke Kumaki & Shizuo Tokito Functional Polymers Research Laboratory, Tosoh Corporation, 1-8 Kasumi, Yokkaichi, Mie, 510-8540, Japan Takashi Fukuda Rei Shiwaku Hiroyuki Matsui Kuniaki Nagamine Mayu Uematsu Taisei Mano Yuki Maruyama Ayako Nomura Kazuhiko Tsuchiya Kazuma Hayasaka Yasunori Takeda Daisuke Kumaki Shizuo Tokito R.S., H.M., K.N. and S.T. designed the research and experiments. R.S., M.U., T.M., Y.M., A.N. and K.T. performed fabrication and characterization of lactate sensors. R.S., K.H., Y.T., T.F. and D.K. performed fabrication and characterization of organic semiconductor devices. All the authors prepared figures and wrote the manuscript. Correspondence to Hiroyuki Matsui, Kuniaki Nagamine or Shizuo Tokito. Shiwaku, R., Matsui, H., Nagamine, K. et al. A Printed Organic Circuit System for Wearable Amperometric Electrochemical Sensors. Sci Rep 8, 6368 (2018). https://doi.org/10.1038/s41598-018-24744-x Received: 13 December 2017 DOI: https://doi.org/10.1038/s41598-018-24744-x Electrochemical multi-analyte point-of-care perspiration sensors using on-chip three-dimensional graphene electrodes Meike Bauer Lukas Wunderlich Antje J. 
Baeumner Analytical and Bioanalytical Chemistry (2021) Laser-induced synthesis of carbon-based electrode materials for non-enzymatic glucose detection Vladimir S. Andriianov Vasily S. Mironov Ilya I. Tumkin Optical and Quantum Electronics (2020) Top 100 in Physics
CommonCrawl
TR10-062 | 7th April 2010 14:01 Logspace Versions of the Theorems of Bodlaender and Courcelle TR10-062 Authors: Michael Elberfeld, Andreas Jakoby, Till Tantau Publication: 11th April 2010 00:23 Bodlaender, Courcelle, Logspace, monadic second-order logic, partial k-trees, tree width
Bodlaender's Theorem states that for every $k$ there is a linear-time algorithm that decides whether an input graph has tree width $k$ and, if so, computes a width-$k$ tree decomposition. Courcelle's Theorem builds on Bodlaender's Theorem and states that for every monadic second-order formula $\phi$ and for every $k$ there is a linear-time algorithm that decides whether a given logical structure $\mathcal A$ of tree width at most $k$ satisfies $\phi$. We prove that both theorems still hold when "linear time" is replaced by "logarithmic space." The transfer of the powerful theoretical framework of monadic second-order logic and bounded tree width to logarithmic space allows us to settle a number of both old and recent open problems in the logspace world.
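The abstract takes the notion of a width-$k$ tree decomposition as given. As a purely illustrative aside (this is in no way the logspace algorithm of the report), the sketch below checks whether a proposed tree decomposition of a graph satisfies the usual bag conditions and, if so, reports its width; all names and the example graph are made up.

```python
def decomposition_width_if_valid(graph_edges, nodes, bags, tree_edges):
    """Check the tree-decomposition bag conditions; return the width or None.

    graph_edges: iterable of (u, v) edges of the original graph
    nodes:       iterable of graph vertices
    bags:        dict mapping decomposition-tree node -> set of graph vertices
    tree_edges:  iterable of (a, b) edges of the decomposition tree
                 (assumed to really form a tree)
    """
    # 1. Every vertex appears in some bag.
    if not all(any(v in bag for bag in bags.values()) for v in nodes):
        return None
    # 2. Every graph edge is contained in some bag.
    if not all(any({u, v} <= bag for bag in bags.values()) for (u, v) in graph_edges):
        return None
    # 3. For each vertex, the tree nodes whose bags contain it form a connected subtree.
    adj = {t: set() for t in bags}
    for a, b in tree_edges:
        adj[a].add(b); adj[b].add(a)
    for v in nodes:
        holding = {t for t, bag in bags.items() if v in bag}
        start = next(iter(holding))
        seen, stack = {start}, [start]
        while stack:
            t = stack.pop()
            for s in adj[t] & holding:
                if s not in seen:
                    seen.add(s); stack.append(s)
        if seen != holding:
            return None
    return max(len(bag) for bag in bags.values()) - 1  # width = largest bag size - 1

# A 4-cycle has tree width 2; an illustrative width-2 decomposition:
print(decomposition_width_if_valid(
    graph_edges=[(1, 2), (2, 3), (3, 4), (4, 1)],
    nodes=[1, 2, 3, 4],
    bags={'a': {1, 2, 3}, 'b': {1, 3, 4}},
    tree_edges=[('a', 'b')],
))  # -> 2
```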
The method of energy channels for nonlinear wave equations
Carlos E. Kenig, Department of Mathematics, University of Chicago, 5734 S University Ave, Chicago, IL, 60637-1514, USA
Received October 2018; Revised October 2018; Published June 2019
Fund Project: The author is supported in part by NSF Grants DMS–1265249, DMS–1463746 and DMS–1800082.
Abstract: This is a survey of some recent results on the asymptotic behavior of solutions to critical nonlinear wave equations.
Keywords: Nonlinear wave equations, asymptotic behavior, energy channels, traveling waves, soliton resolution.
Mathematics Subject Classification: Primary: 35L70; Secondary: 35L15.
Citation: Carlos E. Kenig. The method of energy channels for nonlinear wave equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12): 6979-6993. doi: 10.3934/dcds.2019240
Honesty in signalling games is maintained by trade-offs rather than costs
Szabolcs Számadó (ORCID: orcid.org/0000-0003-2204-9705), István Zachar, Dániel Czégel & Dustin J. Penn
BMC Biology volume 21, Article number: 4 (2023)

Signal reliability poses a central problem for explaining the evolution of communication. According to Zahavi's Handicap Principle, signals are honest only if they are costly at the evolutionary equilibrium; otherwise, deception becomes common and communication breaks down. Theoretical signalling games have proved to be useful for understanding the logic of signalling interactions. Theoretical evaluations of the Handicap Principle are difficult, however, because finding the equilibrium cost function in such signalling games is notoriously complicated. Here, we provide a general solution to this problem and show how cost functions can be calculated for any arbitrary, pairwise asymmetric signalling game at the evolutionary equilibrium. Our model clarifies the relationship between signalling costs at equilibrium and the conditions for reliable signalling. It shows that these two terms are independent in both additive and multiplicative models, and that the cost of signalling at the honest equilibrium has no effect on the stability of communication. Moreover, it demonstrates that honest signals at the equilibrium can have any cost value, even negative, being beneficial for the signaller independently of the receiver's response at equilibrium and without requiring further constraints. Our results are general and we show how they apply to seminal signalling models, including Grafen's model of sexual selection and Godfray's model of parent-offspring communication. Our results refute the claim that signals must be costly at the evolutionary equilibrium to be reliable, as predicted by the Handicap Principle and so-called 'costly signalling' theory. Thus, our results raise serious concerns about the handicap paradigm. We argue that the evolution of reliable signalling is better understood within a Darwinian life-history framework, and that the conditions for honest signalling are more clearly stated and understood by evaluating their trade-offs rather than their costs per se. We discuss potential shortcomings of equilibrium models and we provide testable predictions to help advance the field and establish a better explanation for honest signals. Last but not least, our results highlight why signals are expected to be efficient rather than wasteful.

Significance statement
Honest signals pose a major theoretical problem for understanding animal communication, as it is unclear what prevents deception. The leading explanation for honest signals has long been the Handicap Principle, which predicts that signals are honest if they are costly to produce, as it is their costliness (or wastefulness) that prevents deception and the breakdown of communication. However, in the models that reportedly validated the Handicap Principle (e.g. [1, 2]), the costs of signalling are derived from a set of rather specific assumptions, restricting the possible outcomes. These results were over-generalized, and it was mistakenly concluded that honest signals must always be costly at the evolutionary equilibrium. Therefore, models are needed to investigate how signalling costs, better labelled signalling trade-offs, influence the evolution of honesty without unnecessarily restrictive assumptions.
Here, we provide a mathematical model based on general assumptions about signalling trade-offs that shows how cost-free or even beneficial honest signals can be evolutionarily stable at equilibrium. Our results show that honest signals need not be costly at all, and therefore, the Handicap Principle and other costly signalling models cannot provide a general explanation for understanding the evolution of honest signals. Our model provides a more general approach for addressing the evolution of honesty and deception in animal communication.

Explaining the evolution of honest signalling has been a long-standing problem in research on animal [3] and human communication [4]. Zahavi's Handicap Principle [5, 6] (HP) has been the leading theoretical paradigm for honest signalling since it was reportedly validated by Grafen's 'strategic handicap' model [1]. The HP predicts that signals must be costly and reduce survival at the evolutionary equilibrium (hence the label 'handicap') in order to be honest. This idea is often claimed to provide a general principle to explain why signals are honest, and it has been widely accepted, although some have questioned its generality [7]. There is no consensus on how to define, model or test the HP, which is often confused with other models known as 'costly signalling theory', because handicaps and signalling costs have never been clearly defined, and it has never been shown how signalling costs per se enforce honesty [7,8,9,10]. Mathematical signalling games have greatly improved our understanding of honest signalling [3, 11, 12], as they have clarified the logic of honesty in conspecific interactions, including aggression [13], mate choice [1], parent-offspring conflict [2, 14, 15], and interspecific interactions, such as plant-herbivore [16], plant-pollinator [17], aposematic displays [18], and predator-prey [19] relations. They originated from economic signalling games [20] and have been used to analyse the stability of honest signals in a variety of human social interactions [21]. While these models have proved to be useful, identifying the costs of signalling at the honest evolutionary equilibrium (equilibrium cost function) in such models is far from trivial when the signallers' quality varies continuously [1, 2]. As a consequence, it is difficult to compare the outcomes of these models and to draw any general conclusions. The so-called 'strategic handicap' model [1] is the most influential model of honest signalling, and it critically assumes that signallers differ in their quality and bear differential marginal costs for producing a signal. This is a plausible but widely misinterpreted model because it is very different from the HP [7]. Unlike the HP, honesty in this model does not depend upon the absolute costs of signalling, and signals are efficient rather than wasteful. Moreover, honest signals are selectively favoured in the model despite their costs, not because they are costly. This model has nevertheless provided an important step towards analysing fitness trade-offs for honest signalling, but the steps used to obtain the equilibrium cost function are difficult to replicate, hence the mathematics have been described as 'brilliant but arcane' [9].
The complexity of signalling games has been widely underestimated, as it has been generally overlooked that finding an equilibrium cost function requires solving a double optimization problem [14, 22], one for the receiver as well as one for the signaller, and that the optimal solution for the signaller depends on the receiver's optimum. This complexity is daunting and it has been circumvented by ignoring the receiver's optimization problem (Additional file 1: sections 1–4 give a detailed description of the steps needed to solve this problem, while Additional file 1: sections 5–7 provide a more detailed discussion [1, 2, 14, 22–30]). This issue cannot be resolved until the optimization problems of the signaller and the receiver are both evaluated. This double optimization problem has an infinite number of possible solutions [22, 31], and no general solution has ever been provided in analytical form. The lack of a clear methodology for deriving solutions to this double optimization problem contributes to widespread misinterpretations of the HP and Grafen's strategic choice model (see [7]), leaving the critiques of these ideas difficult to understand [8, 9] and the debates unresolved. It has been known for 20 years that such signalling games have an infinite number of honest equilibria [22, 31], and yet the nature and implications of these equilibria have remained unexplored due to the complexity of this problem. Consequently, the conditions for honest communication in signalling games are still unclear and controversial, and the field has stagnated due to being entwined in the erroneous and confusing handicap paradigm [7].

Here, we provide a novel and general approach for determining stable equilibria in continuous signalling games, and for calculating equilibrium signal cost functions, as a continuation of previous theoretical developments [23]. We examine signalling models with additive fitness functions (when signal costs and benefits are measured in the same currency, such as fitness), and also multiplicative fitness functions (such as when signals have survival costs that influence their reproductive benefits) [32]. First, we describe an asymmetric signalling model of animal communication, and we aim for a general approach that will apply to any signalling context, given that certain, broad conditions are met. Then, we provide solutions for games with additive or multiplicative fitness functions. We provide a formal proof that the conditions for stability are independent of the equilibrium signal cost. Our general formula specifies the full, infinite set of trade-off solutions of the double optimization problem. Furthermore, we show that an infinite number of cost-free and negative-cost equilibria exist in these models. The discovery of these previously unknown and evolutionarily stable equilibria shows how new approaches and interpretations can be used to investigate signalling games in general. Our approach does not require prior knowledge or assumptions about the shape of the potential solution, and hence it is applicable to any signalling model. We apply our method to calculate stable equilibria in classic signalling models, including Grafen's model of sexual signals [1], Godfray's signal-of-need model [2] for parent-offspring signalling games, and the signalling model of Bergstrom et al. [22].
We explain how our results provide testable predictions regarding cost-free and beneficial (negative-cost) honest signals at equilibrium, and how these could support (or refute) our results and their generality. Finally, we discuss the shortcomings of equilibrium models and how signalling theory fits into the larger framework of life-history theory and Darwinian evolution. Models of signalling games Signalling games are mathematical models used to analyse how individuals (signallers) attempt to influence the decisions of others (receivers) by producing signals (action at a distance, see Fig. 1). Signals are often strictly defined as traits that provide information about some aspect of a signaller not directly observable, such as size or sex; otherwise signals are unnecessary [20]. Signalling games are usually described as conflicts over a resource, because some of the first models were contests over food and territories. Indeed, from an evolutionary perspective, a receiver's body and behaviour can be viewed as resources over which signallers compete to exploit for their own benefit [33]. Signalling games can be symmetric or asymmetric concerning information, resources, and options (strategies) available to the players. In symmetric games, players have the same information sets, resource availability and strategies at the beginning of the game, whereas in asymmetric games, players do not share the same information, resources, or strategic options. Female mate choice: an asymmetric signalling game. Courtship and mating behaviour is a sexual interaction in which a male signaller (S) expresses a signal, a secondary sexual trait, that functions to persuade choosey female receivers (R) to mate and give him on opportunity to fertilize her eggs; a scarce resource (e.g. [1]). Males vary in their quality and R will mate with S depending upon his quality or condition. S invests into producing a signal, according to his quality (he need not consciously 'know' his own quality; such a mechanism only requires condition-dependence), whereas R cannot evaluate his quality directly; she must decide based solely on attributes of S's signal (i.e. there is an information asymmetry between S and R). Axes x and y in the inset images represent the amount of shared resource z and fitness w outcomes respectively. Inset a: For a given signaller quality and no signalling trade-off, the fitness curves of S (blue) and R (yellow) have their optima (blue and yellow points) at different amounts of shared resource (dashed lines), resulting in a conflict of interest. Inset b: Signal trade-offs modify the signaller's fitness curve (blue) such that its optimum is at the same amount of resource that the receiver is willing to share, reducing or eliminating the conflict in the interaction. Costly signalling theory predicts that this trade-off function must be positive at the equilibrium for signalling to be honest [1, 2] Games can be symmetrical or asymmetrical in many respects, though it was information asymmetries between signallers and receivers that have mainly attracted the interest of biologists [34] and economists [20, 35]. Asymmetrical information is common in nature and thus asymmetrical signalling games have often been used to investigate how individuals resolve a wide variety of interactions and types of conflicts (e.g. genomic, sexual, parent-offspring, and other intra- and inter-familial conflicts). 
In these models, signallers use signals to persuade a receiver to take some action, which can include mating [1], feeding [2], other forms of parental investment [36], committing suicide [37, 38], and performing other actions that may or may not be in the receiver's interest. Asymmetrical signalling games have been used to model intra-genomic conflicts and molecular signals between cells within the body [39, 40]. They have also been used to model a variety of interspecific interactions, including predator-prey [19, 41], host-parasite [42,43,44], plant-herbivore [16], plant-pollinator [17] and aposematic displays [18]. They are also used to understand and address the spread of misinformation and disinformation in human societies, which is arguably one of the most important problems facing our species [45,46,47]. Here, we focus on games with asymmetries in access to both information and resources. In asymmetric games, receivers possess a resource and can decide whether to share it with signallers or not. For example, young chicks attempt to persuade their parents to feed them by producing begging calls [2]. In discrete models, receivers can either give away the entire resource or keep it for themselves [11, 16, 19, 48,49,50], whereas in continuous models, receivers can share some portion of the resource (z) [1, 2, 14, 22, 23, 31, 32, 51]. Receivers are assumed to share the resource in a way that maximizes (inclusive) receiver fitness (wR), but the potential benefits depend upon obtaining reliable information from signallers about what they offer in exchange. The problem is that receivers often have incomplete information about signallers or what they have to offer (information asymmetry). In the case of mate choice, females assess the potential benefits of mating with males that differ in social status, health, resources, or other aspects of quality (q); however, male quality cannot be directly assessed by the receiver, otherwise there is no need for signals. The signaller can influence the receiver's decision by its signal, which may or may not reliably reveal the quality q of the signaller, to ask for the resource amount (a) that should maximize signaller fitness (wS) (see Fig. 1). A signal is 'honest' if it provides receivers with reliable information about the signaller's quality, allowing the receiver to make adaptive decisions. Alternatively, the signal can be useless or deceptive, so that signallers manipulate the receivers to share more than an amount z that is in their adaptive interest. Like previous honest signalling models, we investigate the conditions under which signals provide reliable indicators of quality q (for details, see the 'Methods' section and Additional file 1:sections 1–3). Theoretical models have previously shown that honest signals are evolutionarily stable at an honest equilibrium if the following conditions are met [14, 22]: (i) the signal reveals the signaller's actual quality q (signals are honest), so that the receiver can respond adaptively; or (ii) the signaller only asks for the amount a of a resource that receivers benefit by sharing (shared interest), so that the conflict between the receiver and signaller is removed at the honest equilibrium (a = z). The mathematical formulation of these conditions is detailed in the 'Methods' section. 
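Before introducing the trade-off function, it may help to see the underlying conflict of interest numerically. The sketch below is a minimal toy version of such an asymmetric game; the concrete benefit and fitness functions (a concave benefit of the received resource for the signaller, and a receiver fitness that balances the signaller's gain against the receiver's remaining share) are illustrative assumptions, not functions taken from any of the cited models.

```python
import numpy as np

# Illustrative (assumed) fitness functions, not taken from a specific model.
def signaller_benefit(q, z):
    # Diminishing returns from the received resource z; higher quality q gains more.
    return q * np.sqrt(z)

def receiver_fitness(q, z, total=1.0):
    # The receiver values the signaller's gain but also keeps (total - z) for itself.
    return q * np.sqrt(z) + np.sqrt(total - z)

z = np.linspace(0.0, 1.0, 10001)
for q in (0.4, 0.8):
    z_signaller = z[np.argmax(signaller_benefit(q, z))]   # what S would ask for
    z_receiver = z[np.argmax(receiver_fitness(q, z))]     # what R is willing to give
    print(f"q={q}: signaller wants z={z_signaller:.2f}, receiver offers z={z_receiver:.2f}")
```

With these assumptions the signaller always prefers the whole resource, while the receiver's preferred share grows with the signaller's quality; the gap between the two optima is the conflict of interest that the honesty conditions above must resolve (compare Fig. 1, inset a).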
The standard theoretical approach used to resolve conflicts of interest and to find stable equilibria in asymmetric signalling games is to introduce a cost function that transforms the signaller's fitness function wS, so that the optimal amount of resource a acquired by the signaller corresponds to the optimal amount of resource z that the receiver shares (see Fig. 1), and the optima of signaller and receiver, namely wS and wR, then coincide. This step is crucial but missing from many previous models. It is not enough to find the optimum of wS; rather, wS must be transformed by a function, traditionally referred to as a 'cost function', such that max(wS) = max(wR). Here, we will refer to this transformation as a trade-off function (T). The function T transforms the benefit into the actual fitness, so it is in fact a trade-off function. Accordingly, the signaller's fitness wS is determined by the relation between the benefits B and trade-offs T, and without any trade-offs, wS = B. In additive models (e.g. [2]), B and T are summed, whereas in multiplicative models (e.g. [1]), they are multiplied to yield the fitness wS. We use the term 'trade-off function' because the term 'cost function' unnecessarily restricts the function to the positive domain (counter-intuitively, the cost value is positive rather than negative), and moreover, it does not represent the full set of possible solutions, as we demonstrate below. We also avoid the term 'cost function' because it has generated much confusion, and we provide a more detailed explanation in the 'Discussion' section. Our key insight is that this transformation, regardless of its label, does not necessarily represent an absolute cost, whereas it is always defined by a trade-off sensu life-history theory. We construct the most general class of trade-off functions that obey the conditions of honest signalling for both additive and multiplicative fitness functions (see the 'Methods' section and Appendices 2–3). Lastly, we apply our method to well-known models of honest signalling (Appendix 4), demonstrating its general applicability.

Since it is the fitness wS that must meet the conditions of stability and honesty in an honest equilibrium, and not the benefit B, we first show that a signal trade-off function T can always be found for any B that ensures that wS meets these conditions. In order to decompose the signaller's fitness function into terms that are in one-to-one correspondence with the conditions of honest signalling, we expand the signaller's fitness function into its Taylor series around the (honest) signalling equilibrium. This representation allows us to derive the exact and most general implications of the conditions of honest signalling term by term. As we show below, the conditions of honest signalling constrain the first-order and second-order terms, while the rest can be chosen arbitrarily. When the terms are summed up again, the resulting wS represents all honest solutions of signalling. Fig. 2 illustrates the process for additive and multiplicative models, while Fig. 3 provides a visual guide for the method of constructing T in the additive case (see the 'Methods' section, Appendix 2 and Fig. S1).

Fig. 2 Cost/benefit trade-off functions for two traditional signalling games. Left: an additive offspring begging game. Right: a multiplicative mate choice signalling game. Dashed vertical lines indicate the receiver's and the signaller's optimal amount of resource, given a signaller's quality or condition.
Trade-off function T, when added to or multiplied by B, transforms the signaller's benefit B into its actual fitness wS such that its optimum amount of requested resource a coincides with the amount z shared by the receiver.

Fig. 3 Method for reverse-engineering the general trade-off function T. The method transforms any (at least twice) differentiable signaller benefit function B into the fitness function wS that has the same optimal amount of shared resource as the receiver's fitness function wR. For the sake of simplicity, function arguments are omitted on the right side of the equations. D and the higher-order Taylor coefficients τ3, τ4, … can be freely chosen.

Additive fitness
The general form of any (at least twice differentiable) trade-off function for additive fitness, TA, using a Taylor-series expansion around the equilibrium where \(a=z=\hat{z}\), is (see Appendix 2 and Fig. S1 for details):

$$T_A(q,z) = D(q) - B'(\hat{z})\,(z-\hat{z}) - \left(\tfrac{1}{2}B''(\hat{z}) + \varepsilon\right)(z-\hat{z})^2 + \dots$$

The zero-order Taylor coefficient D(q) is the equilibrium signal trade-off function of signaller quality q. Traditionally, this coefficient has been used to specify the cost that signallers pay at the equilibrium, independently of the conditions of honest signalling [2]. Fig. 4d shows examples of costly and cost-free equilibrium trade-off functions.

Fig. 4 The effect of different trade-off functions on the fitness of the signaller in the case of Godfray's additive model of parent-offspring conflict [15]. a The signaller's benefit function B (without trade-off; dependent on its quality q and the received amount of resource z) defines its optimum strategy for any q (dark green curve; optimum curves are also projected onto the q–z baseplane for all surfaces). b The receiver's fitness function wR defines its optimum strategy for any signaller quality and resource shared (yellow curve). c At the honest equilibrium, the trade-off function T ensures that the signaller's optimum coincides with the receiver's optimum (for the derivation of the terms of T, see Fig. 3). d An arbitrary set of equilibrium signal trade-off functions D(q) is selected (green curves), from left to right: \(\{D_1(q)=0,\ D_2(q)=B(q,\hat{z}),\ D_3(q)=-B(q,\hat{z}),\ D_4(q)=\sin(3q)/2\}\), where \(\hat{z}\) is the optimum transfer of the receiver for the given quality q. e For any Di(q), a trade-off function Ti is generated (red surfaces), describing the cost value of signals in and out of equilibrium. f The trade-off function T transforms the benefit function B of the signaller into the fitness function wS (blue surfaces) such that its optimum strategy coincides with the receiver's optimum strategy (yellow surfaces replicate the receiver's fitness wR as in panel b; note the different scaling). Projected optima of wS and wR entirely overlap on the q–z baseplane. Parameters are {ψ = 1/2, γ = 1/2, G = 0.08, U = 1, Z = 10, ε = 1}; for details, see Appendices 2 and 4.

The second coefficient \(-B'(\hat{z})\) describes the equilibrium path, where the first derivative of wS with respect to the amount of shared resource z is zero. This coefficient specifies that \(w_S(\hat{z})\) is an extremum, according to the shared-interest condition (\(w_S' = 0\), Eq. 1a). At the equilibrium max(wS) = max(wR), and thus this path represents the receiver's optimum as well.
The third coefficient \(-\left(\tfrac{1}{2}B''(\hat{z}) + \varepsilon\right)\) determines the steepness of the surface along the z dimension when deviating from the equilibrium path (stability condition). The condition ε > 0 ensures that this term is larger than the second derivative of B, so that the slope is negative (\(w_S'' < 0\), see Eq. 1b) and \(w_S(\hat{z})\) is a maximum. When this term is equal to or smaller than the second derivative of B, the signaller's strategy is not an equilibrium strategy. The conditions of honest signalling do not restrict the higher-order terms of the series; therefore, they can be chosen arbitrarily. The Taylor-series expansion allows a functional decomposition into the equilibrium trade-off, the equilibrium path, and stability. Accordingly, the equilibrium trade-off function D(q) can be negative, zero or even positive. These cases can be interpreted as costly signals, cost-free signals, and signals with only benefits, respectively. For any equilibrium path (i.e. second coefficient) reflecting the receiver's optimisation problem, there is an infinite number of equilibrium trade-off functions D(q) for the signaller (see Fig. 4d for examples). In general, the equilibrium trade-off function (zero-order term) is not constrained by the equilibrium path (first-order term) or the stability condition (second-order term). Figure 4d shows four possible equilibrium signal trade-off functions: constant, monotonically increasing, monotonically decreasing, and oscillating. While the last choice seems unrealistic, it proves our point that any arbitrary, continuously differentiable function can be chosen as the equilibrium trade-off function D(q), because the equilibrium signal cost is independent of the stability condition.

Multiplicative fitness
For multiplicative fitness, the conditions of honest signalling imply the following general form for the trade-off function TM (derivatives are all evaluated at the equilibrium \(z=\hat{z}\); see Additional file 1: section 3 and Additional file 1: Fig. S1 for details):

$$T_M(q,z) = D(q) - \frac{B'\,D(q)}{B}\,(z-\hat{z}) - \left(\frac{D(q)}{B}\left(\frac{B''}{2} - \frac{(B')^2}{B}\right) + \varepsilon\right)(z-\hat{z})^2 + \dots$$

While the same functional separation is derived as in the case of additive fitness functions, the same independence of the terms cannot be achieved, because of the multiplication of the functions. Previous models have shown that signal cost functions D(q) exist where the cost paid at the equilibrium by honest signallers is arbitrarily close to zero in multiplicative models (e.g. this result was derived in a previous signalling model [31] for a specific cost function from another signalling model [1]). Our formula for TM provides a method to derive all solutions for any asymmetrical signalling game with continuous (and at least twice differentiable) fitness functions. Moreover, as a novel result, it reveals that equilibria with cost-free or beneficial signals exist in multiplicative models too, not only in additive models. In Additional file 1: section 4, we derive the general trade-off functions for well-known biological signalling games [1, 2, 22].
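The sketch below implements the second-order truncations of TA and TM given above and checks the equilibrium conditions numerically. The benefit function B and the receiver optimum \(\hat{z}(q)\) are illustrative assumptions (they are not the functions of Grafen's or Godfray's models); the check simply verifies that, for costly, cost-free and beneficial choices of the equilibrium term D, the constructed signaller fitness has a vanishing first derivative and a negative second derivative at the receiver's optimum.

```python
import numpy as np

# Illustrative assumptions for the example (not the functions of the cited models):
# a concave signaller benefit B and a receiver optimum that grows with quality.
B = lambda q, z: q * np.sqrt(z)
z_hat = lambda q: q

def d1(f, z, h=1e-5):   # numerical first derivative w.r.t. z
    return (f(z + h) - f(z - h)) / (2 * h)

def d2(f, z, h=1e-4):   # numerical second derivative w.r.t. z
    return (f(z + h) - 2 * f(z) + f(z - h)) / h ** 2

def T_additive(q, z, D, eps=1.0):
    # Second-order truncation of T_A around the receiver optimum z_hat(q).
    zh, Bq = z_hat(q), (lambda x: B(q, x))
    return D - d1(Bq, zh) * (z - zh) - (0.5 * d2(Bq, zh) + eps) * (z - zh) ** 2

def T_multiplicative(q, z, D, eps=1.0):
    # Second-order truncation of T_M around the receiver optimum z_hat(q).
    zh, Bq = z_hat(q), (lambda x: B(q, x))
    b, b1, b2 = Bq(zh), d1(Bq, zh), d2(Bq, zh)
    return (D - b1 * D / b * (z - zh)
            - (D / b * (b2 / 2.0 - b1 ** 2 / b) + eps) * (z - zh) ** 2)

q = 0.7
zh = z_hat(q)
# Negative D: costly, zero D: cost-free, positive D: beneficial at equilibrium.
for name, D in [("costly", -0.3), ("cost-free", 0.0), ("beneficial", 0.3)]:
    wS_add = lambda z: B(q, z) + T_additive(q, z, D)
    wS_mul = lambda z: B(q, z) * T_multiplicative(q, z, D)
    # The honesty/stability conditions are local: first derivative zero and
    # second derivative negative at the receiver's optimum z_hat(q).
    print(f"{name:>10}: additive wS'({zh})={d1(wS_add, zh):+.4f}, "
          f"wS''({zh})={d2(wS_add, zh):+.2f}; "
          f"multiplicative wS'({zh})={d1(wS_mul, zh):+.4f}, "
          f"wS''({zh})={d2(wS_mul, zh):+.2f}")
```

Changing D shifts the signaller's equilibrium payoff up or down but leaves both conditions untouched in this example, which is the independence result described above (with the caveat, noted in the text, that in the multiplicative case the first- and second-order coefficients themselves depend on D).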
Additional file 1: Table S1 provides a comparison of notation across these models, Additional file 1: Table S2 compares the Taylor coefficients of the general trade-off functions of the additive and multiplicative cases, while Additional file 1: Table S3 compares the Taylor coefficients of relevant models.

Our methods and results provide several important contributions towards understanding the evolution of honest signals. First, we provide a general methodology for deriving the full set of an infinite number of trade-off functions for asymmetric, continuous pairwise signalling games, which allow for honest signalling. This general class of trade-off functions consists of three components: the first term defines the cost (or benefit) of a signal at the evolutionary equilibrium, the second one defines the path along the equilibrium, and the third term specifies the stability condition at the equilibrium. Second, we confirm the suggestion that the results of asymmetric signalling models depend upon whether fitness effects are multiplicative or not [9]. For additive fitness, these three components are independent of each other, whereas for multiplicative fitness models, they are not. However, the equilibrium cost of signals can be anything, zero or even negative, in both types of models, and yet signalling remains honest and evolutionarily stable as long as the stability condition is fulfilled. A negative cost implies that the signal is beneficial independently of the receiver's response. We show the existence of such beneficial equilibria for seminal models of the field, including Grafen's model of sexual selection [1] and Godfray's model of parent-offspring communication [2] (see Appendix 4a,b). Third, these results show that signal costs at equilibrium are not a necessary condition for the evolution of honest signalling, contrary to Zahavi's Handicap Principle [5] and handicap interpretations of Grafen's theoretical model [1, 2].

Our results reveal an important limitation and a surprising implication of simple asymmetric signalling games. Our model does not specify the magnitude of signal intensity at equilibrium, and just like the equilibrium signal cost, the magnitude can be any continuously differentiable function [17]. For example, in Godfray's signalling model [2], the equilibrium signal intensity (as a function of quality c) has a maximum (see Fig. 2 at [14]). Accordingly, the quality half-space below c = 0.5 was omitted to ignore the maximum, resulting in a monotonically decreasing signal intensity function. More generally, it was recently shown that it is possible to construct such 'dishonest', non-monotonic functions for a large class of signalling games [17]. In summary, overly simplified game-theoretical models have generated the apparent paradox that honest signalling games, which assume honesty at equilibrium, need not result in honest signalling, i.e. they will not necessarily generate a monotonically increasing or decreasing signal intensity function at the equilibrium. This paradoxical result implies that the simplest possible model of honest signalling has not been sufficiently constrained in previous models, as 'honest' solutions were allowed where the signal cannot be used to predict the actual quality of the signaller.
Existing models have other limitations, first introduced as biologically-inspired constraints [1, 2, 14, 22], and their most common assumptions are (i) the signal cost as a function of signal intensity increases monotonically (see above); and (ii) the equilibrium cost function D(q) is restricted by the assumption that the lowest quality signallers produce no signals and have no signal costs [1, 2, 14, 22]. The first assumption directly excludes any non-monotonic cost function (e.g. multimodal curves). The second assumption combined with the first excludes any potential cost function with zero or negative equilibrium signal cost. We call these the standard costly signalling model assumptions. When these assumptions are applied, the result is the traditional 'costly signalling' outcome in which the equilibrium cost function has positive values with monotonically increasing signal cost (i.e. equilibrium signals are honest and costly, e.g. see the strategic choice signalling model [1]). In other words, Grafen's claim that signals must be costly (and wasteful) at the honest equilibrium does not follow from the most general formulation of the conditions of honest signalling; it follows only from the additional constraints — the standard costly signalling model assumptions. Note that the problem here is not the application of specific assumptions, but rather the misinterpretation that signal costs directly follow from the general formulation of the model (e.g. see the claim of Grafen 'If we see a character which does signal quality, then it must be a handicap' [1] p. 521). These assumptions may or may not be realistic, but the interpretation that a costly equilibrium is necessary for honesty is incorrect and does not follow from the general formulation (as signals in Grafen's model are not honest because they are costly at the equilibrium). This misinterpretation of Grafen's strategic handicap model (i.e. category mistake, see Fig. 5) led to the widespread acceptance of the HP and the popularity of costly signalling theory (for a detailed discussion of misinterpretations, see [7]). Grafen also made the mistake of overgeneralizing his 'main handicap results' to the full set of potential solutions (i.e. overgeneralization fallacy, see Fig. 5) when he claimed that all signals of quality must be handicaps (as honest signals need not be handicaps). The overgeneralization fallacy and category mistake of Grafen's model. The figure shows the relation between the potential set of honest signalling equilibria maintained by condition-dependent trade-offs (blue set) vs. 'costly signalling' sensu economics (yellow set) vs 'costly signalling' sensu biology (orange set). (i) When additional postulates are included that unnecessarily constrain a model, a conclusion may be correct, not because of the model, but because of the additional assumptions. When one nevertheless claims that the conclusion is generally true regardless of postulated assumptions, then this is an overgeneralization fallacy. (ii) If removing the constraints switches the conclusion (strongly depending thus on the assumptions), one has also committed a category mistake, incorrectly attributing properties to the model and missing its true nature. (iii) Standard costly signalling assumptions (SCSA = A1 and A2, orange set) unnecessarily constrain the model of honest signalling (yellow set), because they exclude a potentially important class of trade-off functions. 
Nevertheless Grafen overgeneralized the conclusion of his model to all signals of quality (red arrow to blue set; see overgeneralization fallacy, point (i)). (iv) Moreover, biologically relevant assumptions may not be constrained to SCSA , contrary to what Grafen suggested [1] (see the 'Discussion' section). Removing SCSA switches the conclusion C of the model to !C: honest signals need not be costly, as we have proved in this paper. That is, the conclusion of Grafen's model (C= honest signals are costly), stems from its specific assumptions and not from more general properties, leading to tautology and a category mistake (see point (ii)) when Grafen identified his model with the Handicap Principle (red arrow to HP). That is, Grafen's model is not a model of HP as the HP conclusions do not follow from the general properties of the model (it is a model of condition-dependent signals). Thus, even if honest signals turn out to be costly, the Handicap Principle cannot account for them It is important to note that our findings also highlight the limitations of studying the evolutionary equilibrium for honesty using game theoretical models. Our formula clears up confusion over differences between conclusions that follow from the conditions of honest signalling models versus the consequences from additional constraints. We show that honest signalling models can only predict the value of marginal change – the behaviour of the system — in vicinity of the evolutionary equilibrium (by definition) without including additional constraints. They cannot provide predictions about (i) the cost of signals at or outside of the equilibrium, or (ii) the marginal change further away from the equilibrium path (see Fig. 4). One can add additional constraints to the models (see above discussion, e.g. [1, 2]) but then the results are simply determined by these constraints. Making such general conclusions from a model and ignoring its additional restrictive assumptions is an example of the overgeneralization fallacy. We have demonstrated here how removing the constraints of these previous models undermines their usual interpretations: honest signals need not be costly (see Fig. 5) even under conflict of interest, and hence the HP can be fully rejected. Equilibrium models have another limitation: by definition, they investigate honest equilibria only. Whether other equilibria exist or not is out of the scope of our analyses. The argument that if honest equilibria exist, then all signals need to be honest (see Grafen's 'main handicap results' [1], p. 521) is a non sequitur, as the existence of (partially) dishonest equilibria cannot be excluded. There is no guarantee that an honest equilibrium will eventually evolve as signalers and receivers may settle at an entirely or partially dishonest equilibrium. Equilibrium models cannot account for the dynamics leading to the equilibrium; for that, additional mechanisms are required, e.g. replicator dynamics [52] or individual-based simulations [53]. The results of our model help to explain why empirical studies do not match the predictions of the HP or costly signalling theory. Numerous studies have failed to find 'costly signals' predicted by Zahavi and Grafen (e.g. see [54,55,56,57,58,59,60,61]; for example, offspring begging calls are nowhere as costly as often assumed [54], see [55] for review). Not even the elaborate peacock's train, the flagship example of the HP, fits the predictions of costly signalling models. 
The train does not hinder movement [56]; on the contrary, males with longer trains are able to take-off faster than males with shorter ones [57]. Empirical studies have often been cautious with their conclusions and suggested that some other types of signalling costs might be discovered that would support the HP. Our results show that such signal costs are neither sufficient nor necessary to explain honesty; they are simply irrelevant for signal honesty. Cost-free or even beneficial honest equilibria are possible: high-quality offspring need not waste energy to produce honest begging calls and peacock trains need not be wasteful or even costly to signal male quality. Honesty is maintained by the potential costs of cheating through dishonest signals, but not by some general cost of signalling at the evolutionary equilibrium (see [8, 31]). It follows that efforts to measure signalling cost at the equilibrium are not informative about honesty or stability of the signalling system. Similar arguments have previously been made [8, 31], and now our equations provide the first general mathematical proof. Traditional explanations of honesty differentiate between signal costs (i.e. 'handicaps') and constraints (i.e. 'indices'). Recent theoretical models seem to undermine such a dichotomy. First, these models suggest a continuum from cost-free cues to costly signals (see [26]), and second, they claim 'that costly signalling theory provides the ultimate, adaptive rationale for honest signalling, whereas the index hypothesis describes one proximate (and potentially very general) mechanism for achieving honesty.' (see [24], Abstract) While we agree that physical constraints pose a potential proximate mechanism to explain honesty, our results do not support the first part of the claim, namely that 'costly signalling theory provides the ultimate, adaptive explanation for honest signalling...' [24]. While we also agree with the claim that there is a continuum from cost-free to the most costly signals, there is also a potential continuum from cost-free to beneficial signals, which was ignored by costly signalling theory. In summary, previous costly signalling models have used a set of unnecessarily restricted possible solutions to explain the evolution of honesty. Consequently, the costly signalling solutions provided by traditional costly signalling models are neither unexpected nor interesting, as the solutions are merely a consequence of the assumptions postulated by these models. Due to their restricted design, these models can only investigate costly signalling solutions because other solutions (e.g. cost-free or beneficial) are impossible within the boundaries of their additional costly signalling model assumptions. Given their restrictive set of assumptions, these models cannot provide general results for honest signalling or general predictions about the evolution of signals (Fig. 5). Our method provides a general calculation of signalling equilibria, and therefore, it should help research on honest signalling theory to progress beyond the domain of restricted costly signalling models. These results highlight the need for a better framework than the erroneous HP (and costly signalling) for explaining honest signalling (see Fig. 5). Signals are expected to confer fitness trade-offs -signalling trade-offs-, which are better understood as life-history trade-offs rather than as 'handicaps', i.e. signals that are honest because they are costly. 
The seminal models that attempted to test the HP relied on life-history trade-offs to create differential costs between signallers that differ in quality: a trade-off between reproduction and survival in Grafen's model [1] or a trade-off between current and future offspring in Godfray's model [2]. A recent laboratory experiment shows the importance of condition-dependent trade-offs versus equilibrium costs of signalling [62]. A recent model that investigated the differences between additive and multiplicative fitness functions also adopts an evolutionary trade-off framework [29] (see Appendix 7 for differences between this and our approach). Our results here show how honesty can be selectively maintained by condition-dependent signalling trade-offs. Such trade-offs can be difficult to measure [63, 64], but this approach allows the use of theoretical models and empirical methodology established in this field [64,65,66,67]. Finally, there are several additional reasons for adopting the term 'trade-off function' instead of 'cost function' in signalling theory, as we propose here. The term 'costly signalling' has different meanings in the relevant disciplines (see Fig. 5). In economics, 'costly signalling' refers to models that apply a utility function with a cost term. In biology, 'costly signalling' is usually associated with the HP, and these terms are often used interchangeably. This semantic difference has contributed to confusing costly signalling theory with the HP, and has given the misleading impression that the HP is supported by mathematical models [7]. This confusion is reason enough to avoid this term in biology. Moreover, the name 'cost function' in economics can be misleading when applied to biological signalling games. This function, regardless of its label, provides a transformation that does not have to realize an absolute cost, as we have seen in the case of our trade-off function T. In fact, most biological models lack an explicit cost function. Furthermore, cost-bearing utility functions in economics are usually additive, whereas fitness components in biology are generally multiplicative. In the additive case, one function can always represent a cost in the absolute sense. However, in the multiplicative case, this is not possible, as we have shown. In biology, the trade-off is between benefit functions. There is no cost function in the seminal models of mate choice [1] and parent-offspring signalling [2, 14, 15]: neither survival nor reproduction can be considered to be the cost. Labelling these models 'costly signalling' sensu economics is just as misleading as sensu biology. Thus, for these reasons, we suggest using 'trade-off' instead of 'cost' function, and indeed, this term better reflects the key insight of biological signalling games. Our results help to understand why the handicap paradigm needs to be rejected, and why signals — their costs, benefits, and trade-offs — are better understood using a Darwinian perspective (and conventional tools such as evolutionary game theory, optimality models, and life-history theory). Signals need not be costly or wasteful to be honest, and cost-free or even beneficial honest equilibria can be evolutionarily stable. Rather than being wasteful, we should expect signals to be efficient. Signals can be both honest and efficient, and cheating may prove to be less efficient (more costly and even wasteful) than honesty.
They can be better understood from a Darwinian 'Efficiency Principle' [32] rather than from the erroneous Handicap Principle. Exaggerated signals, such as the peacock's train and deer antlers, might seem wasteful, but they may impose minimal or no fitness cost on the bearer [61], as honest signallers may produce them more efficiently than cheaters. There is no reason to suspect that signals evolve under a separate, non-Darwinian process of selection, contrary to Zahavi [4]; and since they evolve through natural selection like other traits, they should be efficient rather than wasteful. The model consists of two agents, the signaller S and the receiver R. The signaller emits a signal to request an amount a of the resource from the receiver. The signaller's fitness wS depends on the signaller's quality q, on the intensity of its signal (asking for an amount a of the resource) and on the amount of resource z provided by the receiver due to the signal. The receiver's fitness wR depends on the hidden quality q of the signaller and on its own response strategy, which specifies the amount of resource z the receiver shares with the signaller; \(\hat{z}\) denotes the equilibrium amount (in the honest equilibrium, it is expected that \(\hat{z}=z=a\)). The response can be written as a function directly dependent on q. We treat the signaller fitness wS as an additive or multiplicative combination of a benefit function B and a signal trade-off function T, where both B(q, z) and T(q, z) are functions of signaller quality q and the received resource z. T defines the trade-off of asking for an amount z = a of the resource as a signaller of quality q, and depends entirely on the signaller. B, on the other hand, is controlled entirely by the receiver's response (how much z the receiver shares, based indirectly, through a signal, on the signaller's quality q; see Appendix 1 for a formal derivation). This interpretation justifies the mathematical decomposition of wS into these two functions. Derivatives are with respect to z; a hat over a symbol indicates an equilibrium value. Table 1 lists the quantities of the model (Table 1: Notation used in the model. Coloured boxes indicate the controlling party, blue for signaller, yellow for receiver; for more details, see Appendix 1, Table S2). Conditions of honest equilibrium The honest signalling equilibrium has two conditions (for details, see Appendix 1): 1. Condition of honest optimum specifies that there exists an optimum amount of resource \(\hat{z}\) that the receiver is willing to share. That is, the receiver, depending on the received signal, shares an amount that equals the amount it would share if she could directly assess the signaller's quality. This means that signals are honest as they reveal the signaller's quality, so that resource allocation is optimal for the receiver. 2. Condition of shared interest specifies that there is no conflict between receiver and signaller, as the signaller asks for the exact amount the receiver is willing to share and both wS and wR have their respective maxima at \(\hat{z}=z=a\). Since neither the receiver nor the signaller benefits by deviating from it, the condition implies stability. It has two sub-conditions: 2.a. Extremum condition specifies that wS has an extremum at \(z=\hat{z}\): $${w_{\textbf{S}}}^{\prime}\left(z=\hat{z}\right)=0.$$ 2.b.
Stability condition specifies that the extremum at \(z=\hat{z}\) is a maximum: $${w_{\textbf{S}}}^{\prime \prime}\left(z=\hat{z}\right)<0.$$ Reverse-engineering the trade-off function Finding honest signalling solutions in signalling games requires: (i) calculating the optimal resource sharing decision for the receiver, and (ii) calculating signalling trade-offs (T, traditionally called a 'cost function') that transform the signaller's optimal decision to the receiver's optimum. That is, the signaller has optimal fitness when it asks for and receives the same amount z that the receiver is willing to share in its fitness optimum (see Fig. 1). We provide a formal method to reverse engineer the trade-off function T that is general and specifies all of the infinitely many solutions. We use a Taylor series expansion of the signaller's fitness wS to specify the conditions of honest signalling identified by previous models [14, 22]. Since wS is composed of B and T (additively or multiplicatively), wS can be expressed as the appropriate combination of terms of the Taylor series of B and T (see Fig. 3 and Fig. S1). The first and second order Taylor coefficients of wS can be used to express the stability and honesty conditions (Eqs. 1a, b) as constraints on the first and second derivatives of B and T. Since B is given, we use these constraints to construct a general trade-off function T that, when combined with B, yields a signaller fitness function wS that fulfils the conditions of honest signalling (its optimum coincides with the optimum of wR). In Appendix 4, we apply our method to known models. Additive fitness model First, we derive the general trade-off function for the additive model. For a visual guide, see the left panel of Fig. S1; for details, see Appendix 2. In the case of the additive fitness model, the signaller's fitness is the sum of the benefit and trade-off functions: $${w}_{\textbf{S}}=B+T.$$ Both B and T can be written as Taylor series around the equilibrium \(z=\hat{z}\) (omitting the function arguments q and z for the sake of simplicity): $$B(z)=B\left(\hat{z}\right)+\frac{B^{\prime}\left(\hat{z}\right)}{1!}\left(z-\hat{z}\right)+\frac{B^{\prime \prime}\left(\hat{z}\right)}{2!}{\left(z-\hat{z}\right)}^2+\dots,$$ $$T(z)=T\left(\hat{z}\right)+\frac{T^{\prime}\left(\hat{z}\right)}{1!}\left(z-\hat{z}\right)+\frac{T^{\prime \prime}\left(\hat{z}\right)}{2!}{\left(z-\hat{z}\right)}^2+\dots .$$ With \({\beta}_k={B}^{(k)}\left(\hat{z}\right)/k!\) and \({\tau}_k={T}^{(k)}\left(\hat{z}\right)/k!\), the sum of B and T can be rewritten: $${w}_{\textbf{S}}\left(q,z\right)=\left({\beta}_0+{\tau}_0\right)+\left({\beta}_1+{\tau}_1\right)\left(z-\hat{z}\right)+\left({\beta}_2+{\tau}_2\right){\left(z-\hat{z}\right)}^2+\dots .$$ At the equilibrium \(z=\hat{z}\), conditions Eqs. 1a, b must be met by wS. According to Eq. 1a, the first derivative of wS must be zero: $${w}_{\textbf{S}}^{\prime }=0,$$ $${\beta}_1+{\tau}_1=0,$$ $${\tau}_1=-{\beta}_1,$$ $${\tau}_1=-{B}^{\prime }.$$ According to Eq. 1b, the second derivative of wS must be smaller than zero: $${w}_{\textbf{S}}^{\prime \prime }<0,$$ $${\beta}_2+{\tau}_2<0,$$ $${\tau}_2<-{\beta}_2,$$ $${\tau}_2<-\frac{1}{2}{B}^{\prime \prime }.$$ The inequality is always satisfied if ε > 0: $${\tau}_2=-\frac{1}{2}{B}^{\prime \prime }-\varepsilon .$$ Substituting τ1 and τ2 into Eq. 4, and D(q) for \(T\left(q,\hat{z}\right)\), the constructed trade-off function for additive fitness components is: $$T\left(q,z\right)=D(q)-{B}^{\prime}\left(z-\hat{z}\right)-\left(\frac{1}{2}{B}^{\prime \prime }+\varepsilon \right){\left(z-\hat{z}\right)}^2+\dots .$$
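As a quick consistency check (ours, not part of the original derivation), the additive construction can be verified symbolically: truncating the series at second order and keeping B as an arbitrary symbolic function, the constructed T makes wS = B + T stationary at z = ẑ with a negative second derivative whenever ε > 0. The sketch below uses Python with sympy; the multiplicative construction derived next can be checked in the same way.

```python
# Minimal sympy check (illustrative only): with T built from the second-order
# Taylor coefficients of an arbitrary benefit B, wS = B + T has wS'(zhat) = 0
# and wS''(zhat) = -2*eps < 0 whenever eps > 0.
import sympy as sp

z, zhat, eps, D = sp.symbols('z zhat eps D', real=True)
B = sp.Function('B')

# Constructed trade-off function, truncated at second order (additive case)
T = (D
     - sp.diff(B(z), z).subs(z, zhat) * (z - zhat)
     - (sp.Rational(1, 2) * sp.diff(B(z), z, 2).subs(z, zhat) + eps) * (z - zhat)**2)

wS = B(z) + T

first  = sp.simplify(sp.diff(wS, z).subs(z, zhat))     # -> 0 (extremum condition)
second = sp.simplify(sp.diff(wS, z, 2).subs(z, zhat))  # -> -2*eps (stability condition)

print(first, second)  # expected: 0   -2*eps
```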
Multiplicative fitness model In the case of the multiplicative fitness model, the signaller's fitness is the product of the benefit and trade-off functions (for a visual guide, see the right panel of Fig. S1; for details, see Appendix 3): $${w}_{\textbf{S}}=B\cdot T.$$ The Taylor series of a multiplicative wS is the product of the individual Taylor series of the composite functions B and T (Eqs. 3, 4): $${w}_{\textbf{S}}\left(q,z\right)={\beta}_0{\tau}_0+\left({\beta}_0{\tau}_1+{\beta}_1{\tau}_0\right)\left(z-\hat{z}\right)+\left({\beta}_0{\tau}_2+{\beta}_1{\tau}_1+{\beta}_2{\tau}_0\right){\left(z-\hat{z}\right)}^2+\dots$$ At the equilibrium \(z=\hat{z}\), conditions Eqs. 1a, b must be met by wS. According to Eq. 1a, the first derivative of wS must be zero (omitting function arguments): $${\beta}_0{\tau}_1+{\beta}_1{\tau}_0=0,$$ $${\tau}_1=-\frac{\beta_1{\tau}_0}{\beta_0},$$ $${\tau}_1=-\frac{B^{\prime }T}{B}.$$ The first derivative of T at the equilibrium depends on T itself, unlike in the additive case. According to Eq. 1b, the second derivative of wS must be smaller than zero (substituting τ1 from above): $${w}_{\textbf{S}}^{\prime \prime }<0,$$ $${\beta}_0{\tau}_2+{\beta}_1{\tau}_1+{\beta}_2{\tau}_0<0,$$ $${\tau}_2<-\frac{\beta_1{\tau}_1+{\beta}_2{\tau}_0}{\beta_0},$$ $${\tau}_2<-\frac{\tau_0}{\beta_0}\left({\beta}_2-\frac{{\beta_1}^2}{\beta_0}\right).$$ As in the additive case, the inequality is always satisfied if ε > 0: $${\tau}_2=-\frac{\tau_0}{\beta_0}\left({\beta}_2-\frac{{\beta_1}^2}{\beta_0}\right)-\varepsilon,$$ $${\tau}_2=-\frac{T}{B}\left(\frac{B^{\prime \prime }}{2}-\frac{{\left({B}^{\prime}\right)}^2}{B}\right)-\varepsilon .$$ Substituting τ1 and τ2 into Eq. 4, and D(q) for \(T\left(q,\hat{z}\right)\), the constructed trade-off function for multiplicative fitness components is: $$T\left(q,z\right)=D(q)-\frac{B^{\prime }D(q)}{B}\left(z-\hat{z}\right)-\left(\frac{D(q)}{B}\left(\frac{B^{\prime \prime }}{2}-\frac{{\left({B}^{\prime}\right)}^2}{B}\right)+\varepsilon \right){\left(z-\hat{z}\right)}^2+\dots .$$ All data and models are available in the main text and in the Additional files. Grafen A. Biological signals as handicaps. J Theor Biol. 1990;144:517–46. Godfray HCJ. Signalling of need by offspring to their parents. Nature. 1991;352:328–30. Maynard Smith J, Harper D. Animal Signals, vol. 166. Oxford: Oxford University Press; 2003. at https://books.google.hu/books?id=SUA51MeG1lcC Számadó S, Szathmáry E. Selective scenarios for the emergence of natural language. Trends Ecol Evol. 2006;21:555–61. Zahavi A. Mate selection—A selection for a handicap. J Theor Biol. 1975;53:205–14. Zahavi A, Zahavi A. The handicap principle: a missing piece of Darwin's puzzle. 304. New York: Oxford University Press; 1997. Penn DJ, Számadó S. The Handicap Principle: How an erroneous hypothesis became a scientific principle. Biol Rev. 2020;95:267–90. Számadó S. The cost of honesty and the fallacy of the handicap principle. Anim Behav. 2011;81:3–10. Getty T. Sexually selected signals are not similar to sports handicaps. Trends Ecol Evol. 2006;21:83–8. Higham JP. How does honest costly signaling work? Behav Ecol. 2014;25:8–11. Maynard Smith J. Honest signalling: the Philip Sidney game. Anim Behav. 1991;42:1034–5. Searcy WA, Nowicki S.
Signal interception and the use of soft song in aggressive interactions. Ethology. 2006;112:865–72. Enquist M. Communication during aggressive interactions with particular reference to variation in choice of behaviour. Anim Behav. 1985;33:1152–61. Nöldeke G, Samuelson L. How costly is the honest signaling of need? J Theor Biol. 1999;197:527–39. Godfray HCJ. Signaling of need between parents and young: Parent-offspring conflict and sibling rivalry. Am Nat. 1995;146:1–24. Archetti M. The origin of autumn colours by coevolution. J Theor Biol. 2000;205:625–30. Sun S, Johanis M, Rychtář J. Costly signalling theory and dishonest signalling. Theoret Ecol. 2019;13:85–92. Blount JD, Speed MP, Ruxton GD, Stephens PA. Warning displays may function as honest signals of toxicity. Proc R Soci B: Biol Sci. 2008;276:871–7. Yachi S. How can honest signalling evolve? The role of handicap principle. Proc R S London Series B: Biol Sci. 1995;262:283–8. Spence M. Job market signaling. Quart J Econ. 1973;87:355. Barker JL, Power EA, Heap S, Puurtinen M, Sosis R. Content, cost, and context: A framework for understanding human signaling systems. Evol Anthropol: Issues, News, and Reviews. 2019;28:86–99. Bergstrom CT, Számadó S, Lachmann M. Separating equilibria in continuous signalling games. Philos Trans R Soc Lond B Biol Sci. 2002;357:1595–606. Számadó S, Czégel D, Zachar I. One problem, too many solutions: How costly is honest signalling of need? PLoS One. 2019;14:e0208443. Biernaskie JM, Grafen A, Perry JC. The evolution of index signals to avoid the cost of dishonesty. Proc R Soci B: Biol Sci. 2014;281:20140876. Zahavi A. The cost of honesty (further remarks on the handicap principle). J Theor Biol. 1977;67:603–5. Biernaskie JM, Perry JC, Grafen A. A general model of biological signals, from cues to handicaps. Evol Lett. 2018;2:201–9. Holman L. Costs and constraints conspire to produce honest signaling: insights from an ant queen pheromone. Evolution. 2012;66:2094–105. Clifton SM, Braun RI, Abrams DM. Handicap principle implies emergence of dimorphic ornaments. Proc R Soci B: Biol Sci. 2016;283:20161970. Fromhage L, Henshaw JM. The balance model of honest sexual signaling. Evolution. 2022;76:445–54. Számadó S, Penn DJ. Does the Handicap Principle explain the evolution of dimorphic ornaments? Anim Behav. 2018;138:e7–e10. Lachmann M, Számadó S, Bergstrom CT. Cost and conflict in animal signals and human language. Proc Natl Acad Sci. 2001;98:13189–94. Getty T. Handicap signalling: when fecundity and viability do not add up. Anim Behav. 1998;56:127–30. Dawkins R. The selfish gene. 224. New York: Oxford University Press; 1976. Maynard Smith J. Evolution and the theory of games. Cambridge: Cambridge University Press; 1982. at https://books.google.hu/books?id=Nag2IhmPS3gC Akerlof GA. The market for "lemons": Quality uncertainty and the market mechanism. Quart J Econ. 1970;84:488. Parker GA, Royle NJ, Hartley IR. Intrafamilial conflict and parental investment: a synthesis. Philos Trans R Soc Lond B Biol Sci. 2002;357:295–307. Rosenthal RW. Suicide attempts and signalling games. Math Soc Sci. 1993;26:25–33. Traulsen A, Reed FA. From genes to games: Cooperation and cyclic dominance in meiotic drive. J Theor Biol. 2012;299:120–5. Massey SE, Mishra B. Origin of biomolecular games: deception and molecular evolution. J Soc Interface. 2018;15:20180429. Hummert S, et al. Evolutionary game theory: cells as players. Mol Biosyst. 2014;10:3044–65. Ramesh D, Mitchell WA. 
Evolution of signalling through pursuit deterrence in a two-prey model using game theory. Anim Behav. 2018;146:155–63. Klotz C, et al. A helminth immunomodulator exploits host signaling events to regulate cytokine production in macrophages. PLoS Pathog. 2011;7:e1001248. Renaud F, Meeüs T. de A simple model of host-parasite evolutionary relationships. Parasitism: Compromise or conflict? J Theor Biol. 1991;152:319–27. Tago D, Meyer DF. Economic game theory to model the attenuation of virulence of an obligate intracellular bacterium. Front Cell Infect Microbiol. 2016;6:86. Kupferschmidt K. On the trail of bullshit. Science. 2022;375:1334–7. West JD, Bergstrom CT. Misinformation in and about science. Proc Natl Acad Sci. 2021;118(15):e1912444117. Bergstrom CT, West JD. Calling bullshit: The art of skepticism in a data-driven world, vol. 336. UK: Random House; 2020. at https://www.ebook.de/de/product/40246804/carl_t_bergstrom_jevin_d_west_calling_bullshit_the_art_of_skepticism_in_a_data_driven_world.html Számadó S. The validity of the handicap principle in discrete action–response games. J Theor Biol. 1999;198:593–602. Hurd PL. Communication in discrete action-response games. J Theor Biol. 1995;174:217–22. Bullock S. An exploration of signalling behaviour by both analytic and simulation. In Fourth European Conference on Artificial Life. Vol. 4. MIT Press; 1997. p. 454. Getty T. Reliable signalling need not be a handicap. Anim Behav. 1998;56:253–5. Zollman KJS, Bergstrom CT, Huttegger SM. Between cheap and costly signals: the evolution of partially honest communication. Proc R Soci B: Biol Sci. 2013;280:20121878. Kane P, Zollman KJS. An evolutionary comparison of the Handicap Principle and hybrid equilibrium theories of signaling. PLoS One. 2015;10:e0137271. McCarty JP. The energetic cost of begging in nestling passerines. Auk. 1996;113:178–88. Moreno-Rueda G. Is there empirical evidence for the cost of begging? J Ethology. 2006;25:215–22. Askew GN. The elaborate plumage in peacocks is not such a drag. J Exp Biol. 2014;217:3237–41. Thavarajah NK, Tickle PG, Nudds RL, Codd JR. The peacock train does not handicap cursorial locomotor performance. Sci Rep. 2016;6(1):1–6. Borgia G. The cost of display in the non-resource-based mating system of the satin bowerbird. Am Nat. 1993;141:729–43. Borgia G. Satin bowerbird displays are not extremely costly. Anim Behav. 1996;52:648–50. Guimarães M, MunguÍa-Steyer R, Doherty PF, Sawaya RJ. No survival costs for sexually selected traits in a polygynous non-territorial lizard. Biol J Linn Soc. 2017;122:614–26. McCullough EL, Emlen DJ. Evaluating the costs of a sexually selected weapon: big horns at a small price. Anim Behav. 2013;86:977–85. Számadó S, Samu F, Takács K. Condition-dependent trade-offs maintain honest signaling: A laboratory experiment. bioRxiv. 2019. https://doi.org/10.1101/788828. Reznick D, Nunney L, Tessier A. Big houses, big cars, superfleas and the costs of reproduction. Trends Ecol Evol. 2000;15:421–5. Roff DA, Fairbairn DJ. The evolution of trade-offs: where are we? J Evol Biol. 2007;20:433–47. Roff D. Evolution of life histories: Theory and analysis. New York: Springer US; 1993. at https://books.google.hu/books?id=_pv37gw8CIoC Stearns SC. Trade-offs in life-history evolution. Funct Ecol. 1989;3:259. Stearns SC. The evolution of life histories, vol. 249. Oxford: Oxford University Press; 1992. We are grateful for the thoughtful comments from two reviewers. 
The authors acknowledge support from the Hungarian Research Fund under grant numbers NKFIH #132250 (SS), #140901 (IZ), #129848 (DC, IZ), from the János Bolyai Research Scholarship of the Hungarian Academy of Sciences (BO/00570/22/8, IZ), from the ÚNKP-22-5 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund (ÚNKP-22-5-ELTE-1138, IZ), from the Austrian Science Fund P28141-B25 (DJP), and from the Human Frontier Science Program RGP003/2020 (DJP). Szabolcs Számadó and István Zachar shared first authorship. Department of Sociology and Communication, Budapest University of Technology and Economics, Egry J. u. 1, Budapest, H-1111, Hungary Szabolcs Számadó CSS-RECENS "Lendület" Research Group, MTA Centre for Social Science, Tóth Kálmán u. 4, Budapest, H-1097, Hungary Institute of Evolution, MTA Centre for Ecological Research, Konkoly-Thege Miklós út 29-33, Budapest, H-1121, Hungary István Zachar & Dániel Czégel Department of Plant Systematics, Ecology and Theoretical Biology, Biology Institute, ELTE University, Pázmány P. sétány 1/C, Budapest, 1117, Hungary Doctoral School of Biology, Institute of Biology, Eötvös Loránd University, Pázmány Péter sétány 1/C, Budapest, H-1117, Hungary Dániel Czégel BEYOND Center for Fundamental Concepts in Science, Arizona State University, Tempe, AZ 85287-0506, USA Department of Interdisciplinary Life Sciences, Konrad Lorenz Institute of Ethology, University of Veterinary Medicine, Vienna, Savoynestrasse 1a, 1160, Vienna, Austria Dustin J. Penn István Zachar SS, DC and DJP conceived the idea; SS, DC and IZ designed and analyzed the model; IZ and SS designed and created the figures; and SS, IZ, DC and DJP wrote the paper. All authors contributed to the article and all authors read and approved the final manuscript. Correspondence to Szabolcs Számadó. Additional file 1: Appendices 1-7. Figs S1-S8, Tables S1-S3. Appendix 1. Conditions of honest signalling; Appendix 2. Derivation of the general trade-off function for the additive model; Appendix 3. Derivation of the general trade-off function for the multiplicative model; Appendix 4. Trade-off functions of well-known models; Appendix 5. Single step optimization models of signalling; Appendix 6. The model of (Biernaskie et al. 2018); Appendix 7. The model of [29]. Számadó, S., Zachar, I., Czégel, D. et al. Honesty in signalling games is maintained by trade-offs rather than costs. BMC Biol 21, 4 (2023). https://doi.org/10.1186/s12915-022-01496-9 Costly signalling theory Life-history models Trade-offs Evolutionary stability
SLIMEr: probing flexibility of lipid metabolism in yeast with an improved constraint-based modeling framework Benjamín J. Sánchez1, 2, Feiran Li1, 2, Eduard J. Kerkhoven1, 2 and Jens Nielsen1, 2, 3. BMC Systems Biology 2019, 13:4 A recurrent problem in genome-scale metabolic models (GEMs) is to correctly represent lipids as biomass requirements, due to the numerous possible combinations of individual lipid species and the corresponding lack of fully detailed data. In this study we present SLIMEr, a formalism for correctly representing lipid requirements in GEMs using commonly available experimental data. SLIMEr enhances a GEM with mathematical constructs where we Split Lipids Into Measurable Entities (SLIME reactions), in addition to constraints on both the lipid classes and the acyl chain distribution. By implementing SLIMEr on the consensus GEM of Saccharomyces cerevisiae, we can represent accurate amounts of lipid species, analyze the flexibility of the resulting distribution, and compute the energy costs of moving from one metabolic state to another. The approach shows potential for better understanding lipid metabolism in yeast under different conditions. SLIMEr is freely available at https://github.com/SysBioChalmers/SLIMEr. Genome-scale metabolic modeling Lipidomics Flux balance analysis Genome scale metabolic models (GEMs) are widely used to model and compute functional states of cellular metabolism [1] and as scaffolds for integrating various levels of high-throughput data [2]. A crucial step for achieving proper simulations with GEMs is to define a biomass pseudo-reaction [3, 4], which accounts for every single constituent comprising the cellular biomass (proteins, carbohydrates, lipids, etc.). In this step it is challenging to account for lipid requirements, as there is a copious number of individual lipid species: over 20 different classes of lipids can be produced in a cell, and each specific lipid belonging to any of those classes can contain various combinations of acyl chain groups, each of them with varying length and number of saturations [5]. This can yield over 1000 specific lipid species that the cell can potentially produce. Unsurprisingly, lipid metabolism therefore tends to be the most complicated part of any GEM. A requirement for formulating the biomass pseudo-reaction is abundance measurements of every single constituent; however, these are seldom available for individual lipid species. Instead, it is more common to measure separately (i) a profile of all different lipid classes, for example by high-performance liquid chromatography [6, 7]; and (ii) a distribution of all different acyl chains, by fatty acid methyl ester (FAME) analysis [8, 9]. Therefore, GEMs have been adapted to handle these data. The most common approach to represent lipid metabolism in GEMs is to enforce a specific distribution of each individual lipid species, either by using detailed experimental data [10, 11] or by assuming that all lipid classes have the same acyl chain distribution from a single FAME analysis [12, 13]. In both cases, however, the model will be fixed to follow a predefined lipid distribution. This is undesirable, as lipid metabolism can show a high level of reorganization [5, 14], hence rendering the model's predictions of limited use when simulating different experimental conditions, or when looking into the network's flexibility for satisfying lipid requirements.
A second common approach is to allow any specific lipid to form a corresponding generic lipid class (e.g., "phosphocholine") and to only constrain those classes with experimental abundances from lipid profiling [15, 16]. The problem with this approach is that experimental abundances from FAME analysis are neglected, and simulations always end up choosing lipid species that cost the least energy, which might not reflect reality, e.g. if there is regulation in place to ensure production of longer chain species. Hence, there is a need for an approach that can incorporate both lipid profiling and FAME analysis, but at the same time can allow flexibility in the metabolic network. In this work, we introduce SLIMEr, a method for correctly representing lipid requirements in GEMs while allowing network flexibility. The approach adds so-called SLIME reactions, which split lipids into their basic components, and lipid pseudo-reactions, which impose constraints on both the lipid classes and the acyl chain distributions. By following this approach, we achieve flux simulations that respect both the lipid class and acyl chain experimental distribution, and at the same time avoid over-constraining the model to only simulate one lipid distribution. We implemented this approach for the consensus GEM of Saccharomyces cerevisiae (budding yeast), a model that has undergone iterative improvements [15, 17–20] and is currently being hosted at https://github.com/SysBioChalmers/yeast-GEM. We show that the enhanced model: (i) enforces acyl chain requirements while preserving a high degree of network flexibility and an almost equal metabolic energy demand, (ii) better predicts specific lipid distributions, and (iii) computes lipid costs of transitioning between experimental conditions. Representing lipid constraints with the aid of SLIME reactions Flux balance analysis (FBA) [21] is based on the following assumptions on metabolism: (i) a cell has a metabolic goal, which we can represent through a mathematical objective function, (ii) under short timescales there is no accumulation of intracellular metabolites, and (iii) metabolic fluxes are bounded by physical constraints, such as thermodynamics and kinetics. Those three assumptions define a basic FBA problem as follows: $$ {\displaystyle \begin{array}{c}\operatorname{Min}\left({\mathrm{c}}^{\mathrm{T}}\mathrm{v}\right)\\ {}\mathrm{S}\cdot \mathrm{v}=0\\ {}\mathrm{LB}\le \mathrm{v}\le \mathrm{UB}\end{array}} $$ where S is the stoichiometric matrix, which contains the stoichiometric coefficients for all reactions and metabolites, v is the vector of metabolic fluxes [mmol/gDWh], c is the objective function vector, and LB and UB are the corresponding lower and upper bounds for each of the fluxes (some of them based on experimental values). As we usually wish to simulate growth, a biomass pseudo-reaction is typically added to the stoichiometric matrix as follows: $$ \mathrm{protein}+\mathrm{carbohydrate}+\mathrm{lipid}+\mathrm{RNA}+\mathrm{DNA}+\dots \to \mathrm{Biomass} $$ where protein, carbohydrate, lipid, etc., are pseudo-metabolites that are produced from a combination of metabolic components. For example, protein is produced from a protein pseudo-reaction: $$ {\mathrm{s}}_1\ \mathrm{ala}+{\mathrm{s}}_2\mathrm{cys}+{\mathrm{s}}_3\ \mathrm{asp}+\dots \to \mathrm{protein} $$ where si are the measured abundances [mmol/gDW] of the corresponding amino acids in yeast.
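To make the linear program above concrete, here is a minimal sketch (a toy three-reaction network with hypothetical bounds, not the yeast GEM or the COBRA toolbox used in the study) solved with scipy:

```python
# Toy FBA problem (illustrative only, not the yeast model):
#   maximize v_biomass  subject to  S·v = 0  and  LB <= v <= UB
import numpy as np
from scipy.optimize import linprog

# Columns: uptake, conversion, biomass; rows: metabolites A and B
S = np.array([[ 1, -1,  0],   # A: produced by uptake, consumed by conversion
              [ 0,  1, -1]])  # B: produced by conversion, consumed by biomass

LB = [0, 0, 0]
UB = [10, 1000, 1000]          # uptake limited to 10 mmol/gDWh (hypothetical)
c  = [0, 0, -1]                # linprog minimizes, so -1 maximizes the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=list(zip(LB, UB)), method="highs")
print(res.x)                   # expected optimum: [10, 10, 10]
```

The biomass pseudo-reaction described above would simply be one more column of S whose coefficients are the measured biomass constituents.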
In this study we focus on the lipid pseudo-reaction, which becomes more challenging to formulate because there are so many different individual species. One option is to define an equivalent reaction to Eq. 3 with every single lipid species [10]; however, as these measurements are not available for most organisms, the lipid pseudo-reaction is usually represented as follows instead: $$ {\mathrm{s}}_1\ \mathrm{PI}+{\mathrm{s}}_2\ \mathrm{PC}+{\mathrm{s}}_3\ \mathrm{TAG}+{\mathrm{s}}_4\ \mathrm{ERG}\dots \to \mathrm{lipid} $$ where PI (phosphoinositol), PC (phosphocholine), TAG (triglyceride), ERG (ergosterol), etc. represent each of the lipid classes that exist in the model. Most of them represent not one but a plurality of different molecules, each with different combinations of acyl chain lengths and saturations. Therefore, they are also pseudo-metabolites that need to be produced in turn by additional pseudo-reactions. These pseudo-reactions can be constructed either in a restrictive or in a permissive approach (Fig. 1). The restrictive approach is to enforce the experimental FAME distribution on every single specific lipid species. This can be achieved by creating a generic acyl chain component [12]: $$ {\mathrm{s}}_1\ \mathrm{Acyl}\ \mathrm{CoA}\left(16:0\right)+{\mathrm{s}}_2\ \mathrm{Acyl}\ \mathrm{CoA}\left(16:1\right)+\dots \to \mathrm{Acyl}\ \mathrm{CoA} $$ where the si coefficients are fractions inferred from FAME data. This generic Acyl CoA is then used to form each generic lipid species, thereby forcing every lipid class to have the same acyl chain distribution. The latter is an important limitation of this approach, considering that the acyl chain distribution can vary significantly across lipid classes [5]. Fig. 1 Overview of the process of including SLIME reactions and new lipid pseudo-reactions for a hypothetical model of three lipid classes and two types of acyl chain. The active fluxes after simulating the models are highlighted in light blue, showing that a GEM with a restrictive approach would use the same acyl chain composition for all lipid classes (left upper corner), a GEM with a permissive approach would always choose the cheapest species from each lipid class (left lower corner), and a GEM with SLIME reactions would satisfy both the lipid class and the acyl chain distribution, but choosing freely which specific lipid species to produce for this goal (right side). On the other hand, the permissive approach for building the pseudo-reactions is to allow any of the specific lipids to form the generic lipid class [15]. For instance, the following set of pseudo-reactions can be defined for PI: $$ {\displaystyle \begin{array}{c}\mathrm{PI}\ \left(16:0-16:1\right)\to {\mathrm{s}}_1\mathrm{PI}\\ {}\mathrm{PI}\ \left(16:1-16:1\right)\to {\mathrm{s}}_2\mathrm{PI}\\ {}\mathrm{PI}\ \left(16:1-18:1\right)\to {\mathrm{s}}_3\mathrm{PI}\\ {}\vdots \end{array}} $$ where si can be set to 1 or adapted to represent the cost of producing each specific lipid. The problem with this approach is that it disregards the acyl chain distribution, even if FAME data is available. Therefore, once simulations are computed, the model will always end up preferring the "cheapest" species to produce in terms of carbon and energy, usually corresponding to species with the shortest acyl chains, unless the si coefficients are arbitrarily tuned to favor longer chains.
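As an illustration of how the si fractions of the restrictive approach (Eq. 5 above) can be obtained (hypothetical numbers, not data from the study): FAME results are reported as mass fractions, so one way to derive molar coefficients for the generic acyl chain pseudo-reaction is to divide by approximate free fatty acid molecular weights and renormalize.

```python
# Hypothetical FAME mass fractions [g/g total fatty acid] and approximate free
# fatty acid molecular weights [g/mmol]; converts to molar fractions (candidate s_i).
fame_mass_fraction = {"16:0": 0.10, "16:1": 0.45, "18:0": 0.05, "18:1": 0.40}
chain_mw = {"16:0": 0.2564, "16:1": 0.2544, "18:0": 0.2845, "18:1": 0.2825}

moles = {c: fame_mass_fraction[c] / chain_mw[c] for c in fame_mass_fraction}
total = sum(moles.values())
s = {c: moles[c] / total for c in moles}  # molar fractions, sum to 1

print(s)
```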
Additionally, even though for some specific species such as ergosterol the measured abundance [mg/gDW] can be directly transformed to the stoichiometric coefficient in Eq. 4 [mmol/gDW], for most lipids the measured abundance cannot be directly converted, as the molecular weight varies between specific lipid species. Hence, average molecular weights need to be estimated in both permissive and restrictive approaches, leading to skewed predictions. In this study we solve all the problems presented above through two new types of pseudo-reactions, to account for constraints on both lipid classes and acyl chains. The first pseudo-reactions Split Lipids Into Measurable Entities and are hence referred to as SLIME reactions. As the name suggests, these pseudo-reactions take each specific lipid and split it into its basic components, i.e. its backbone and acyl chains: $$ {\mathrm{L}}_{\mathrm{i}\mathrm{j}}\to {\mathrm{s}}_{\mathrm{i}}\ {\mathrm{B}}_{\mathrm{i}}+\sum \limits_{\mathrm{k}\in \mathrm{j}}{\mathrm{s}}_{\mathrm{jk}}\ {\mathrm{C}}_{\mathrm{k}} $$ where Lij is a lipid of class i and chain configuration j, Bi the corresponding backbone, Ck the corresponding chain k, and si and sjk the associated stoichiometry coefficients. These reactions replace any pseudo-reaction of the sort of Eq. 5 or Eq. 6 that was already present in the model (Fig. 1). The second type of pseudo-reactions are new lipid pseudo-reactions, which will in turn replace Eq. 4, the old lipid pseudo-reaction that only constrained lipid classes. There are now three different lipid pseudo-reactions (Fig. 1): the first pulls all backbone species created in Eq. 7 into a generic backbone and uses the corresponding abundance data [g/gDW] as stoichiometric coefficients. The second reaction does the same but for the specific acyl chains, with data from FAME analysis [g/gDW], to create a generic acyl chain. Finally, the third reaction merges back together the generic backbone and the generic acyl chain into a generic lipid, which will be used in the biomass pseudo-reaction as in Eq. 2. For the new reactions to be consistent, we need to choose adequate stoichiometric coefficients for Eq. 7. If the abundance data were molar, si would be equal to 1 and sjk would be equal to the number of repetitions of the corresponding acyl chain k in lipid j. However, as the abundance data often come in mass units, si must be equal to the molecular weight [g/mmol] of the full lipid, and sjk must be equal to the molecular weight of the corresponding acyl chain k, multiplied by the number of repetitions of k in configuration j. By choosing these values we allow the SLIME reactions to convert the molar production of the lipid [mmol/gDWh] into a mass basis [g/gDWh], which in turn will be converted to a lipid turnover [1/h] by the lipid pseudo-reactions.
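To illustrate the coefficient choice in Eq. 7 (a sketch with hypothetical values, not code from SLIMEr itself), consider a single PI species with one 16:0 and one 18:1 chain: si equals the molecular weight of the whole lipid and sjk the chain weights, so a SLIME flux in mmol/gDWh translates into mass fluxes in g/gDWh, and dividing those by the growth rate recovers abundances in g/gDW.

```python
# SLIME-style coefficients for a hypothetical PI(16:0-18:1) species.
# Molecular weights in g/mmol are illustrative approximations, not model values.
mw_lipid  = 0.837            # full PI(16:0-18:1), roughly 837 g/mol
mw_chains = {"C16:0": 0.2564, "C18:1": 0.2825}

s_backbone = mw_lipid        # coefficient of the PI backbone pseudo-metabolite
s_chain    = dict(mw_chains) # one repetition of each chain in this configuration

v_slime = 0.02   # hypothetical SLIME reaction flux [mmol/gDWh]
mu      = 0.1    # growth rate [1/h]

backbone_mass_flux = s_backbone * v_slime                     # [g/gDWh]
chain_mass_flux    = {c: s_chain[c] * v_slime for c in s_chain}

# Abundances that the lipid pseudo-reactions effectively constrain [g/gDW]:
print(backbone_mass_flux / mu, {c: f / mu for c, f in chain_mass_flux.items()})
```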
Improved model of yeast We implemented SLIMEr in the consensus genome-scale model of yeast version 7.8.0 [22], a model which used the previously mentioned permissive approach, and had at the start 2224 metabolites and 3496 reactions. Out of those reactions, 176 corresponded to reactions of the sort of Eq. 6, which were replaced by 186 SLIME reactions that cover in total 19 lipid classes and 6 different acyl chains. An additional 27 metabolites (including both specific and generic backbones and acyl chains) and 15 reactions (including transport reactions, lipid pseudo-reactions and exchange reactions) were added to the model, and 10 metabolites and 1 reaction (connected to previously deleted reactions) were removed. The final enhanced model therefore had 2241 metabolites and 3520 reactions, and kept the number of genes and gene-reaction rules constant, as only pseudo-reactions with no gene-reaction rules were modified. For the reference model, we used both lipid profiling and FAME data at low growth rate and no stress conditions [23]. The lipid profile was rescaled to be proportional to the FAME data, as detailed in the methods section. We can see that by using SLIMEr, the enhanced model was enforced to follow the acyl chain distribution of the experimental data (Fig. 2a), whereas the previous permissive model predicted mostly acyl chains of 16-carbon length (less costly), and only a small amount of 18-carbon length to satisfy the requirement of ergosterol ester (Additional file 1: Figure S1), as ergosteryl oleate is cheaper to produce mass-wise than ergosteryl palmitoleate (Additional file 1: Table S1). Fig. 2 The enhanced GEM with improved constraints on lipid metabolism. a By using SLIMEr, a correct acyl chain composition is enforced. b Breakdown of the acyl chain distribution and variability predicted by the enhanced GEM, for each experimentally detected lipid class. Thick black lines correspond to parsimonious FBA predictions, while the FVA allowed ranges are shown with colored bars. With the enhanced model we also studied in how many ways lipid requirements can be satisfied while spending the same amount of energy, by performing flux variability analysis (FVA) (Fig. 2b). Comparing these predictions to the ones of the permissive model (Additional file 1: Figure S1), we saw some reductions in variability, coming mostly from changes in phosphatidylcholine and triglyceride content. However, despite the additional constraints imposed, lipid metabolism could still rearrange itself in a wide range of combinations, and overall flux variability did not decrease significantly (Additional file 1: Figure S2a). This agrees with experimental observations that lipid metabolism is highly flexible [5]; therefore, handling lipid metabolism with SLIME reactions is preferred over alternative approaches, such as models that constrain single individual lipid species [10, 24], as the latter limit the organism to only one feasible state of lipid metabolism and hence bias results. Model predictions of specific lipid distributions To validate model predictions, we used reported data [5] including measurements of 102 specific lipid species. These data were added up to compute the totals of each lipid class and each acyl chain, and these sums were in turn used as input for creating both a permissive and an enhanced model. In the latter case, as a total lipid abundance of 8% was assumed, the acyl chain abundances were rescaled to be proportional to the lipid class abundances (see the methods section for more details). We then performed random sampling of fluxes for the resulting models, to generate 10,000 specific lipid distributions for each model and for each of the 8 conditions of the study. Comparing these in silico lipid distributions to the original in vivo measurements (Fig.
3a), the enhanced model improved the average prediction error for all experimental conditions (Additional file 1: Table S2), and overall the simulated lipid distributions came much closer to the experimental values compared to the permissive model (Fig. 3b). Furthermore, simulations with the enhanced model are also superior to simulations with a restrictive model, as the latter approach would not capture the fact that many lipid classes show preference towards few acyl chains. Instead, it would force the model to produce all acyl chains in the same proportion for each lipid class, significantly lowering the quality of predictions (Additional file 1: Figure S3). Fig. 3 Using experimental data to validate the model and predict energy costs. a The lipid composition of 10,000 simulations of the enhanced model achieved with random sampling are presented for each specific lipid species (blue circles), compared to the actual measured experimental values (red bars). b Principal component analysis of all (log transformed) lipid abundance distributions for both the permissive (yellow) and enhanced (blue) models, compared to experimental values (red). Different tonalities of yellow and blue indicate the 8 different simulated conditions and strains. c Carbon costs (continuous lines) and ATP costs (segmented lines) of satisfying the acyl chain distribution at four increasing levels of temperature (30, 33, 36 and 38 °C), NaCl concentrations (0, 200, 400 and 600 mM) and EtOH concentrations (0, 20, 40 and 60 g/L). It should be noted that even though SLIMEr improved the model's lipid composition predictions, many other distributions are still predicted to be equally likely for all simulated conditions (Fig. 3b, Additional file 1: Figure S4), which reinforces the previously mentioned idea of a highly flexible lipid network. Furthermore, the fact that yeast picks a certain lipid distribution in vivo for each strain and condition, but has many additional options in silico, points also to a high level of regulation in place to adapt the distribution of lipid species in S. cerevisiae depending on the genetic background and environmental conditions [25]. Energy costs at increasing levels of stress As a final study, we used lipid data of yeast grown under 9 different stress levels [23] to create both a permissive and an enhanced GEM for each of those conditions. We then computed the differences in ATP turnover and carbon requirements between the permissive and enhanced model, which correspond to the extra energy and carbon costs, respectively, required to achieve the given acyl chain distribution in each condition (Fig. 3c). As increasing stress levels are associated with an increase in maintenance energy (Additional file 1: Figure S5) [26], by using SLIMEr we therefore showed an increase in lipid expenses when transitioning from a metabolic state of low energy demand to one of high energy demand. In the case of the reference condition, the permissive model could produce 145.9 μmol (ATP)/gDW more than the enhanced model. Also in this condition, the simulated growth-associated ATP maintenance (GAM) without accounting for known polymerization costs of proteins, carbohydrates, RNA and DNA [27] was 36.96 mmol (ATP)/gDW, which corresponds to the maintenance costs of unspecified functions in the model, such as protein turnover, maintenance of membrane potentials, etc. The ATP cost for achieving a correct acyl chain distribution under reference conditions corresponded then to 0.4% of the total costs of processes not included in the model.
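As a quick arithmetic check of that percentage (ours, not an additional result from the study): the extra ATP requirement of 145.9 μmol/gDW equals 0.1459 mmol/gDW, so relative to the 36.96 mmol/gDW of unspecified maintenance costs it amounts to $$\frac{0.1459\ \mathrm{mmol/gDW}}{36.96\ \mathrm{mmol/gDW}}\approx 0.0039\approx 0.4\%,$$ in line with the value reported above.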
This is a rather low percentage, which shows that the addition of SLIME reactions will not cause a significant increase in the overall metabolic energy demand, while making the simulated fluxes in lipid metabolism better match experimentally observed distributions (Fig. 3b). As previously mentioned, we did not see a significant reduction in flux variability of predictions compared to the permissive approach (Additional file 1: Figure S2). This is partly explained by the fact that in each simulation we maximize the ATP maintenance; therefore, simulations of the permissive model (which did not have constraints on the acyl chain distribution) had a slightly higher ATP maintenance, making simulations overall similarly constrained. Nonetheless, the main advantage of using SLIMEr is not to constrain simulations more, but instead to constrain lipid fluxes such that they better match biologically feasible distributions (Fig. 3b). It is also important to note that the model does not take other physiological properties into account, such as specific regulation, or curvature and fluidity of membranes as a function of lipid composition and/or temperature. It only takes FAME analysis and lipid profile data, and demonstrates that specific lipid distributions from simulations are consistent with these measurements. It would be of interest to account for additional data and processes such as the ones mentioned, but this is beyond the scope of this study. Even though developed for the consensus GEM of S. cerevisiae, this approach can be extended to any other model and/or organism. The main challenge here is to map all lipids in the model to the corresponding pseudo-metabolites (backbones and chains), as conventions for naming lipids vary a great deal between different databases and models. Introduction of standardized metabolite ids [28, 29] can significantly aid this otherwise laborious task. With SLIMEr we can now correctly represent biomass requirements from lipid metabolism in genome-scale metabolic models. The approach allows the model to satisfy at the same time requirements on the lipid class and acyl chain distributions, which is a significant improvement compared to only being able to constrain lipid classes [15, 16]. We have also shown the high degree of flexibility in lipid metabolism, which means that approaches that over-constrain the lipid requirements by enforcing specific concentrations for individual species [10, 11, 24] or forcing a given acyl chain distribution to all species [12, 13] are not suitable for handling this flexibility. Finally, we have demonstrated the use of the expanded model as a tool to compute lipid requirements in varying experimental conditions. We expect the enhanced model to be useful for metabolic engineering applications, particularly for designing strains that can rearrange the chain length distribution of specific lipid classes [30]. Data used All data used in this study were collected from the literature. For the initial model analysis and the analysis of lipid metabolism under increasing levels of stress, aerobic glucose-limited chemostat data of S. cerevisiae, strain CEN.PK113-7D, growing on minimal media at a growth rate of D = 0.1 h−1 were used [23]. The mentioned study collected lipid abundance data in mg/gDW for both lipid classes and acyl chains for 1 reference condition plus 9 different conditions of stress (temperature, ethanol and osmotic stress).
Additionally, carbohydrate, protein and RNA content [g/gDW] was measured for all stress conditions, together with flux data [mmol/gDWh] for glucose and oxygen uptake, and glycerol, acetate, ethanol, pyruvate, succinate and CO2 production. For model predictions of specific lipid distributions, we used published data of S. cerevisiae grown aerobically on SD media at maximum growth rate (shake flask cultures), under 8 different conditions: four different BY4741 strains (a wildtype plus three knockout strains), each cultivated at both 24 °C and 37 °C [5]. In that study, the authors introduced a novel quantification method for detecting the abundance of up to 250 singular species of lipids. Out of those, 102 were used in our study, as they had direct correspondence to a species in the GEM employed. Even though not all lipids were accounted for in the model, those 102 species included the ones most abundant in vivo, thus providing high mass coverage (on average 84% of the total detected lipid abundance) without having to add any additional lipid species and reactions to the model. Abundance values were converted from mol/mol to mg/gDW assuming an 8% lipid abundance in biomass [23] and considering the unmatched lipid percentage previously mentioned. Additionally, we assumed a protein composition of 0.5 g/gDW, an RNA composition of 0.06 g/gDW, a glucose uptake of 20.4 mmol/gDWh, and a biomass growth rate of 0.41 h−1, based on previous batch simulations of the yeast GEM [31]. Model enhancement details The consensus genome-scale model of yeast, version 7.8.0 [22], was used. Compared to version 7.6 from the published paper [20], the model included a manual curation detailed in previous work [31], a clustered biomass pseudo-reaction, and metabolite formulas added to every lipid. Combining this model together with the experimental data, the following five steps were followed to create a model with SLIME reactions, specific to each experimental condition:
1. Add pseudo-metabolites representing each specific backbone, each specific acyl chain, the generic backbone and the generic acyl chain.
2. Add a SLIME reaction for each specific lipid species. These reactions replace the previous ones of the sort of Eq. 6 in the model [15].
3. Add all three new lipid pseudo-reactions (Fig. 1) using the experimental data [g/gDW]. These reactions replace the original lipid pseudo-reaction (Eq. 4).
4. Scale either the lipid class or the acyl chain abundance data so that they are proportional, as the approach is based on exact mass balances. For this, an optimization problem is carried out where the coefficients of the corresponding pseudo-reaction are rescaled to minimize to zero the excretion of unused backbones and acyl chains (Additional file 1: Figure S6); a simplified sketch of this rescaling is given after this list.
5. Finally, scale any other component in the biomass pseudo-reaction for which there is data, and ensure that the biomass composition adds up to 1 g/gDW [32] by rescaling the total amount of carbohydrates, which was not measured in the datasets employed.
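Step 4 above can be pictured with a simplified proportional rescaling (hypothetical numbers and an assumed chain mass share; the actual implementation solves an optimization that drives the excretion of unused backbones and acyl chains to zero):

```python
# Simplified proportional rescaling (illustrative; SLIMEr formulates this as an
# optimization minimizing the excretion of unused backbones and acyl chains).
lipid_classes = {"PI": 2.0, "PC": 3.5, "TAG": 4.0}                         # class data [mg/gDW]
acyl_chains   = {"C16:0": 1.5, "C16:1": 2.5, "C18:0": 0.8, "C18:1": 3.2}   # FAME data [mg/gDW]

# Assumed (hypothetical) fraction of each lipid class's mass made up of acyl chains
chain_mass_share  = 0.7
target_chain_mass = chain_mass_share * sum(lipid_classes.values())

scale = target_chain_mass / sum(acyl_chains.values())
acyl_chains_scaled = {k: v * scale for k, v in acyl_chains.items()}

print(scale, acyl_chains_scaled)  # chain data scaled to be proportional to the class profile
```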
To compare the performance of the new enhanced model, an additional model for each condition was created, which did not have the acyl chain pseudo-reaction, but instead exchange reactions for each acyl chain, so that the model could freely choose the acyl chain distribution. Note that by doing this, the only remaining lipid constraint is the lipid backbone pseudo-reaction, meaning that this alternative model is equivalent to the permissive approach mentioned in the results section. Therefore, we refer to this model as the "permissive" model, and use it to benchmark our analysis. In turn, a comparison to a "restrictive" model is only briefly outlined when predicting specific lipid distributions, as the experimental data showed that the acyl chain distribution in yeast varies considerably across lipid classes (Additional file 1: Figure S7), making the restrictive approach not applicable here. Simulation details For all FBA simulations, measured exchange fluxes were used to constrain the model, allowing up to 5% deviation from the average measurements, and a parsimonious FBA approach [33] was followed, maximizing first the ATP turnover and then minimizing the total sum of absolute fluxes, in order to find the most compact solution. The obtained ATP turnover value is equal to the sum of the growth associated ATP maintenance (GAM) and the non-growth counterpart (NGAM, equal to 0.7 mmol/gDWh in the original model), and it was used to compare the ATP costs of transitioning from one state to another. The variability of each different lipid species was computed with FVA [34] on each corresponding group of SLIME reactions at a time; e.g., for assessing the variability of C18:0 in PI, FVA was applied on all SLIME reactions producing PI and any C18:0 acyl chains. Variability was also assessed with optGpSampler, an implementation of the artificial centering hit-and-run algorithm for random sampling of metabolic fluxes [35]. Abundances in mg/gDW of each lipid species were then computed from the corresponding SLIME reaction fluxes, multiplied by the molecular weight and divided by the biomass growth rate. All simulations were performed in Matlab® R2018a, using the COBRA toolbox [36], and Gurobi® 7.5 set as optimizer. Abbreviations Cer: Ceramide; CL: Cardiolipin; COBRA: Constraint-based reconstruction and analysis; DAG: Diglyceride; FAME: Fatty acid methyl esters; FBA: Flux balance analysis; FVA: Flux variability analysis; GAM: Growth associated ATP maintenance; GEM: Genome-scale metabolic model; IPC: Inositolphosphoceramide; LCB: Long-chain base; LCBP: Long-chain base phosphate; LPI: Lysophosphatidylinositol; M(IP)2C: Mannosyl-diinositolphosphoceramide; MIPC: Mannosyl-inositolphosphoceramide; NGAM: Non-growth associated ATP maintenance; PA: Phosphatidic acid; PC: Phosphatidylcholine; PE: Phosphatidylethanolamine; PG: Phosphatidylglycerol; PI: Phosphatidylinositol; SE: Ergosterol ester; SLIME: Split lipid into measurable entities. We would like to thank Dr. Hongzhong Lu for help in annotation of lipid formulas, Dr. Petri-Jaan Lahtvee and Dr. Paulo Teixeira for aid in data analysis, Sebastián Mendoza for guidance with the random sampling analysis, and the anonymous referees who helped with valuable feedback on the final manuscript. This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 686070, the Knut and Alice Wallenberg Foundation and the Novo Nordisk Foundation. BJS acknowledges financial support from CONICYT (grant #6222/2014), and EJK acknowledges financial support from the Åforsk Foundation. None of the previously mentioned funding agencies played any role in the design of the study, in collection, analysis, interpretation of data, nor in writing the manuscript. All data analyzed in this study are from the literature [5, 23]. SLIMEr is available at https://github.com/SysBioChalmers/SLIMEr. All scripts/data necessary to reproduce the results presented in this study have been archived in Zenodo [37].
All new SLIME reactions and lipid pseudo-reactions have been added to the consensus GEM of yeast and are available from version 8.1.0 [38]. JN and BJS conceived the project. BJS, FL and EJK designed the mathematical formulation. BJS implemented the algorithm and performed all computational simulations. BJS and FL processed the literature data. BJS wrote the original draft. All authors read, edited and approved the final manuscript. Additional file 1: Supplementary material, including all supplementary tables and supplementary figures. (PDF 1090 kb) Department of Biology and Biological Engineering, Chalmers University of Technology, Gothenburg, Sweden Novo Nordisk Foundation Center for Biosustainability, Chalmers University of Technology, Gothenburg, Sweden Novo Nordisk Foundation Center for Biosustainability, Technical University of Denmark, Lyngby, Denmark Nielsen J. Systems biology of metabolism. Annu Rev Biochem. 2017;86:245–75. Bordbar A, Monk JM, King ZA, Palsson BØ. Constraint-based models predict metabolic and associated cellular functions. Nat Rev Genet 2014;15 February:107–120. Feist AM, Palsson BO. The biomass objective function. Curr Opin Microbiol. 2010;13:344–9. Dikicioglu D, Kırdar B, Oliver SG. Biomass composition: the "elephant in the room" of metabolic modelling. Metabolomics. 2015;11:1690–701. Ejsing CS, Sampaio JL, Surendranath V, Duchoslav E, Ekroos K, Klemm RW, et al. Global analysis of the yeast lipidome by quantitative shotgun mass spectrometry. Proc Natl Acad Sci. 2009;106:2136–41. Moreau RA. Lipid analysis via HPLC with a charged aerosol detector. Lipid Technol. 2009;21:191–4. Khoomrung S, Chumnanpuen P, Jansa-Ard S, Ståhlman M, Nookaew I, Borén J, et al. Rapid quantification of yeast lipid using microwave-assisted Total lipid extraction and HPLC-CAD. Anal Chem. 2013;85:4912–9. Abdulkadir S, Tsuchiya M. One-step method for quantitative and qualitative analysis of fatty acids in marine animal samples. J Exp Mar Bio Ecol. 2008;354:1–8. Khoomrung S, Chumnanpuen P, Jansa-Ard S, Nookaew I, Nielsen J. Fast and accurate preparation fatty acid methyl esters by microwave-assisted derivatization in the yeast Saccharomyces cerevisiae. Appl Microbiol Biotechnol. 2012;94:1637–46. Mardinoglu A, Agren R, Kampf C, Asplund A, Uhlen M, Nielsen J. Genome-scale metabolic modelling of hepatocytes reveals serine deficiency in patients with non-alcoholic fatty liver disease. Nat Commun. 2014;5:3083. Lachance J-C, Monk JM, Lloyd CJ, Seif Y, Palsson BO, Rodrigue S, et al. BOFdat: generating biomass objective function stoichiometric coefficients from experimental data. bioRxiv. 2018:243881. Nookaew I, Jewett MC, Meechai A, Thammarongtham C, Laoteng K, Cheevadhanarak S, et al. The genome-scale metabolic model iIN800 of Saccharomyces cerevisiae and its validation: a scaffold to query lipid metabolism. BMC Syst Biol. 2008;2:71. Kerkhoven EJ, Pomraning KR, Baker SE, Nielsen J. Regulation of amino-acid metabolism controls flux to lipid accumulation in Yarrowia lipolytica. npj Syst Biol Appl. 2016;2:16005. Han X, Rozen S, Boyle SH, Hellegers C, Cheng H, Burke JR, et al. Metabolomics in early Alzheimer's disease: identification of altered plasma Sphingolipidome using shotgun Lipidomics.
PLoS One. 2011;6:e21643.View ArticleGoogle Scholar Heavner BD, Smallbone K, Barker B, Mendes P, Walker LP. Yeast 5 - an expanded reconstruction of the Saccharomyces cerevisiae metabolic network. BMC Syst Biol. 2012;6(1).Google Scholar Brunk E, Sahoo S, Zielinski DC, Altunkaya A, Dräger A, Mih N, et al. Recon3D enables a three-dimensional view of gene variation in human metabolism. Nat Biotechnol. 2018;36:272–81.View ArticleGoogle Scholar Herrgård MJ, Swainston N, Dobson PD, Dunn WB, Arga KY, Arvas M, et al. A consensus yeast metabolic network reconstruction obtained from a community approach to systems biology. Nat Biotechnol. 2008;26:1155–60.View ArticleGoogle Scholar Dobson PD, Smallbone K, Jameson D, Simeonidis E, Lanthaler K, Pir P, et al. Further developments towards a genome-scale metabolic model of yeast. BMC Syst Biol. 2010;4:145.View ArticleGoogle Scholar Heavner BD, Smallbone K, Price ND, Walker LP. Version 6 of the consensus yeast metabolic network refines biochemical coverage and improves model performance. Database 2013;2013:bat059.Google Scholar Aung HW, Henry SA, Walker LP. Revising the representation of fatty acid, Glycerolipid, and Glycerophospholipid metabolism in the consensus model of yeast metabolism. Ind Biotechnol. 2013;9:215–28.View ArticleGoogle Scholar Orth JD, Thiele I, Palsson BØ. What is flux balance analysis? Nat Biotechnol. 2010;28:245–8.View ArticleGoogle Scholar Sánchez B, Li F, Lu H, Kerkhoven E, Nielsen J. SysBioChalmers/yeast-GEM: yeast 7.8.0. Zenodo. 2018; https://doi.org/10.5281/zenodo.1494186. Lahtvee PJ, Sánchez BJ, Smialowska A, Kasvandik S, Elsemman IE, Gatto F, et al. Absolute quantification of protein and mRNA abundances demonstrate variability in gene-specific translation efficiency in yeast. Cell Syst. 2017;4:495–504.e5.View ArticleGoogle Scholar Monk JM, Lloyd CJ, Brunk E, Mih N, Sastry A, King Z, et al. iML1515, a knowledgebase that computes Escherichia coli traits. Nat Biotechnol. 2017;35:904–8.View ArticleGoogle Scholar Henderson CM, Zeno WF, Lerno LA, Longo ML, Block DE. Fermentation temperature modulates phosphatidylethanolamine and phosphatidylinositol levels in the cell membrane of Saccharomyces cerevisiae. Appl Environ Microbiol. 2013;79:5345–56.View ArticleGoogle Scholar Lahtvee P-J, Kumar R, Hallström B, Nielsen J. Adaptation to different types of stress converge on mitochondrial metabolism. Mol Biol Cell. 2016;27:2505–14.View ArticleGoogle Scholar Förster J, Famili I, Palsson BØ, Nielsen J. Genome-scale reconstruction of the Saccharomyces Cerevisie metabolic network. Genome Res. 2003;13:244–53.View ArticleGoogle Scholar Dräger A, Palsson BØ. Improving collaboration by standardization efforts in systems biology. Front Bioeng Biotechnol 2014;2 December:1–20.Google Scholar Moretti S, Martin O, Van Du Tran T, Bridge A, Morgat A, Pagni M. MetaNetX/MNXref - reconciliation of metabolites and biochemical reactions to bring together genome-scale metabolic networks. Nucleic Acids Res 2016;44:D523–D526.Google Scholar Bergenholm D, Gossing M, Wei Y, Siewers V, Nielsen J. Modulation of saturation and chain length of fatty acids in Saccharomyces cerevisiae for production of cocoa butter-like lipids. Biotechnol Bioeng. 2018;115:932–42.View ArticleGoogle Scholar Sánchez BJ, Zhang C, Nilsson A, Lahtvee P, Kerkhoven EJ, Nielsen J. Improving the phenotype predictions of a yeast genome-scale metabolic model by incorporating enzymatic constraints. Mol Syst Biol. 
2017;13:935.View ArticleGoogle Scholar Chan SHJ, Cai J, Wang L, Simons-Senftle MN, Maranas CD. Standardizing biomass reactions and ensuring complete mass balance in genome-scale metabolic models. Bioinformatics. 2017;33:3603–9.View ArticleGoogle Scholar Lewis NE, Hixson KK, Conrad TM, Lerman JA, Charusanti P, Polpitiya AD, et al. Omic data from evolved E. coli are consistent with computed optimal growth from genome-scale models. Mol Syst Biol. 2010;6:390.View ArticleGoogle Scholar Mahadevan R, Schilling CH. The effects of alternate optimal solutions in constraint-based genome-scale metabolic models. Metab Eng. 2003;5:264–76.View ArticleGoogle Scholar Megchelenbrink W, Huynen M, Marchiori E. optGpSampler: an improved tool for uniformly sampling the solution-space of genome-scale metabolic networks. PLoS One. 2014;9:e86587.View ArticleGoogle Scholar Heirendt L, Arreckx S, Pfau T, Mendoza SN, Richelle A, Heinken A, et al. Creation and analysis of biochemical constraint-based models: the COBRA Toolbox v3.0. ArXiV. 2017;1710.04038.Google Scholar Sánchez B, Li F, Kerkhoven E, Nielsen J. SysBioChalmers/SLIMEr: SLIMEr v1.0.2. Zenodo. 2018; https://doi.org/10.5281/zenodo.1494872.
Numerical Shadow: The web resource on numerical range and numerical shadow

numerical-range:examples:4x4

A generic matrix $$M=\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & \ii & 1 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & \ii \end{pmatrix}$$ has an oval-like numerical range $W(M)$.

The matrix $$M=\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & \ii & 1 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & \ii \end{pmatrix}$$ has a numerical range $W(M)$ with one flat part of the boundary $\partial W$.

The matrix $$M=\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & \ii & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$ has a numerical range $W(M)$ with two flat parts of the boundary $\partial W$.

The matrix $$M=\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & \ii & 1 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -\ii \end{pmatrix}$$ has a numerical range $W(M)$ with two parallel flat parts of the boundary $\partial W$.

The matrix $$M=\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & \ii & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -\ii \end{pmatrix}$$ has a numerical range $W(M)$ with three flat parts of $\partial W$ connected with corners and one oval-like part.

The matrix $$M=\begin{pmatrix} \ii & 0 & -1 & 0 \\ 0 & 0 & -1 & 0 \\ 1 & 1 & 1-\ii & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$ has a numerical range $W(M)$ with three flat parts of $\partial W$ with only one corner and two oval-like parts.

The matrix $$M=\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & \ii & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -\ii \end{pmatrix}$$ has a numerical range $W(M)$ with four flat parts of $\partial W$.

The matrix $$M=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \ii & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -\ii \end{pmatrix}$$ has a numerical range $W(M)$ with a pair of flat parts of $\partial W$ connected with a corner, together with two oval-like parts.

The matrix $$M=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \ii & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -\ii \end{pmatrix}$$ has a numerical range $W(M)$ equal to the convex hull of its eigenvalues.
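The flat and oval-like boundary pieces listed above can be checked numerically. A standard way to trace the boundary of $W(M)$ is to rotate the matrix, take the top eigenvector of the Hermitian part, and evaluate the quadratic form there; the sketch below does this in plain numpy for the first matrix on this page (nothing here is specific to this site's software).

```python
import numpy as np

# First example matrix from this page
M = np.array([[1, 1, 1, 1],
              [0, 1j, 1, 1],
              [0, 0, -1, 1],
              [0, 0, 0, 1j]], dtype=complex)

def numerical_range_boundary(M, n_angles=360):
    """Boundary points of W(M): for each rotation angle theta, take the
    eigenvector of the largest eigenvalue of the Hermitian part of
    exp(i*theta)*M; the Rayleigh quotient x* M x lies on the boundary."""
    pts = []
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        R = np.exp(1j * theta) * M
        H = (R + R.conj().T) / 2           # Hermitian part
        vals, vecs = np.linalg.eigh(H)     # eigenvalues in ascending order
        x = vecs[:, -1]                    # eigenvector of the largest eigenvalue
        pts.append(x.conj() @ M @ x)
    return np.array(pts)

boundary = numerical_range_boundary(M)
print(boundary[:5])   # a few boundary points in the complex plane
# Flat parts of the boundary show up as (nearly) collinear boundary points
# persisting over a whole interval of angles theta.
```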
High Energy, Nuclear, Particle Physics: Spontaneous symmetry breaking of chiral symmetry
Thread starter: CAF123

The quark sector of the QCD lagrangian can be written as (restricting to two flavours) $$\mathcal L = \sum_{i=u,d} \bar q_i ( i \gamma_{\mu} D^{\mu} - m_i) q_i .$$ Write ##q = (u d)^T## and $$M = \begin{pmatrix} m_u & 0 \\ 0 & m_d \end{pmatrix}$$ Given that the masses of the u and d quarks are much less than the hadronic confinement scale of QCD, we can consider ##m_u \approx m_d = m## so that M is proportional to the identity matrix. In this way, we have a U(2) symmetry ##q' = U q## where U is a U(2) transformation, ##U = \exp(i\alpha_i \sigma_i)##. (I think it's correct to say that this U(2) symmetry is a vector U(2) symmetry because using Noether's theorem we get vector currents as conserved quantities arising from this U(2) symmetry.) Using properties of U(2), this is then equivalent to the direct product of vector U(1) and vector SU(2). In this form, the larger symmetry group U(2) from making the masses of u and d degenerate encompasses the global phase redefinition of the quark fields (which we had prior to assuming degeneracy of quark masses), and SU(2) is denoted an isospin symmetry. Did I understand that part correctly?

If we now take the masses of the quarks to be zero, then the symmetry of QCD is even larger, and it turns out we can write the lagrangian as a sum of two terms, each involving only left- and right-handed fields. Therefore we have two independent U(2) symmetries, ##U(2)_L## and ##U(2)_R##, forming the symmetry group ##U(2)_L \otimes U(2)_R##. My first question is: many books say this symmetry is ##SU(2)_L \otimes SU(2)_R##. Why is this? We can write ##U(2)_L \otimes U(2)_R = SU(2)_L \otimes SU(2)_R \otimes U(1)_V \otimes U(1)_A##, which also incorporates the previously discussed vector U(1) symmetry and the new axial U(1) symmetry now present given M=0. My second question is if someone could explain this statement from Wikipedia (also mentioned in many books): 'The quark condensate spontaneously breaks the ##SU(2)_{L}\times SU(2)_{R}## down to the diagonal vector subgroup ##SU(2)_V,## known as isospin.'

vanhees71

Yes, you got the group-theoretical part correct. Note, however, that the ##\mathrm{U}(1)_{\mathrm{A}}## is broken by an anomaly, i.e., although the classical field theory has this symmetry, the quantized version doesn't (see the Adler-Bell-Jackiw anomaly and the decay ##\pi^0 \rightarrow \gamma \gamma##). Spontaneous symmetry breaking means that a theory has a symmetry and a corresponding conserved Noether charge, but the ground state of the theory is not symmetric under the full symmetry group, only under a subgroup. Indeed, lattice QCD calculations show that the vacuum expectation value ##\langle \bar{q} q\rangle \neq 0##, but this is obviously only invariant under ##\mathrm{SU}(2)_{\mathrm{V}}## transformations and not under the axial-vector isovector transformations. Thus ##\mathrm{SU}(2)_L \times \mathrm{SU}(2)_R## is spontaneously broken to ##\mathrm{SU}(2)_{\mathrm{V}}##. This is related to the hadronic world (the low-energy phenomenology of QCD, so to say) via the corresponding currents, and these again are related to the lowest-mass scalar and pseudoscalar bosons, the ##\sigma## meson and the three pions. Now you can build an effective theory, where the isovector symmetries are realized as an SO(4).
You can write down the theory in terms of the real fields ##\phi=(\sigma,\vec{\pi})^{\mathrm{T}}##. Since we want to have the symmetry spontaneously broken, i.e., the ##\sigma## field, which has the quantum numbers of the quark condensate, should have a non-zero vacuum expectation value. Thus the vector subgroup of SO(4) is realized as the SO(3) rotations of the pion fields. The Lagrangian is still symmetric. Writing down the renormalizable terms allowed by the SO(4) symmetry leads to $$\mathcal{L}=\frac{1}{2} (\partial_{\mu} \phi) \cdot (\partial^{\mu} \phi) + \frac{\mu^2}{2} \phi^2 -\frac{\lambda}{8} (\phi^2)^2= \frac{1}{2} (\partial_{\mu} \phi) \cdot (\partial^{\mu} \phi)-V(\phi).$$ Note that the mass term has the "wrong sign", and ##\phi=0## is not the stable stationary solution of the field equations. The minimum of the "mexican-hat potential" ##V(\phi)## is determined up to an SO(4) transformation, i.e., the ground state of the theory is degenerate. To get a formulation that is treatable in the usual perturbative way of QFT, we choose the vacuum expectation value arbitrarily in ##\sigma## direction and write ##\sigma=v+\tilde{\sigma}##. Then expanding the potential around ##v##, where it has a minimum, leads to a theory for a massive ##\sigma## meson and three massless pions. Indeed the symmetry is broken from SO(4) to the SO(3) rotations among only the pion fields, and this rotations leave the vacuum filed ##(v,0,0,0)## invariant. This implies that without cost of energy you can do the SO(3) rotations, and thus you have three massless excitations, the Nambu-Goldstone bosons of the spontaneous breaking for the chiral symmetry. For a very readable introduction to this, leading to "chiral perturbation theory" as the effective hadronic low-energy theory of QCD, see http://inspirehep.net/record/444848 Hi vanhees71, thanks for the detailed response. vanhees71 said: I am just wondering in what sense it is a spontaneously broken symmetry though - the theory in which we have a ##U(2)_L \otimes U(2)_R = SU(2)_L \otimes SU(2)_R \otimes U(1)_V \otimes U(1)_A## symmetry is one in which the masses of the quarks are set to zero. So, there is no quark operator of the form ##\bar q q## in the lagrangian to begin with. The addition of masses then couples together the left and right handed components of the fields so the symmetry is no longer the direct product of two U(2)'s - would it therefore not make sense to say that the mass term softly breaks the chiral symmetry? (ie. we have a form of an explicit symmetry breaking rather than spontaneous). My reasoning went wrong somewhere so thanks if you could clarify! Or, is it the case that the ground state of the theory corresponds to the situation in which all the masses of the quarks are identically zero? Sure, in Nature chiral symmetry is only approximate and broken by the quark masses, but this explicit breaking is much less than the spontaneous breaking. While the quark masses are of order of some MeV the mass differences of chiral partners like the vector meson ##\rho## with a mass of around 770 MeV and the axialvector meson ##a_1## which is around 1200 MeV. Note that also the mass difference between u and d quark are of the order of magnitude of some MeV, and this means that isospin breaking is of the same order of magnitude of chiral symmery. That we don't see so obviously the chiral symmetry is rather to the spontaneous breaking than the explicit breaking. I see. 
I suppose what I am not understanding is why there is any spontaneous symmetry breaking at all - in chiral QCD we have the symmetry group ##SU(2)_L \otimes SU(2)_R##. In this theory, the chiral lagrangian has no operator ##\bar q q## (because M=0) so I'm not seeing the relevance of the non vanishing VEV of this operator to imply spontaneous symmetry breaking of ##SU(2)_L \otimes SU(2)_R \rightarrow SU(2)_V##. The spontaneous symmetry breaking is due to the dynamical formation of the quark condensate, which is a non-perturbative property of QCD. It's due to the attractive nature of the strong interaction in this quark-antiquark channel. Now since ##\langle \overline{\psi} \psi \rangle \neq 0## the QCD vacuum is not invariant under the full chiral group. The "axial rotations" ##\exp(-\mathrm{i} \vec{\alpha} \cdot \vec{\tau} \gamma_5)## do not leave this VEV invariant, while the "vector rotations" ##\exp(-\mathrm{i} \vec{\beta} \cdot \vec{\tau})## do (here ##\vec{\tau}=\vec{\sigma}/2## are the generators of isospin rotations, i.e., matrices in flavor space). I am not sure if the dynamical spontaneous symmetry breaking that is occuring here is understood in terms of a description outwith the lattice QCD remit but can the SSB be thought of in the following way? - SSB of a symmetry means that the lagrangian of the theory exhibits the symmetry but the ground state of the theory does not. In this case our theory is chiral QCD described by ##\mathcal L = \sum_{i=u,d} i\bar q_i \gamma_{\mu} D^{\mu} q_i## . Are we then adding the term ##\bar q q m## onto the chiral lagrangian which can be seen as the analogous term of the Higgs potential like in SSB of free scalar field theories? The VEV of all terms are then taken and ##\langle \bar q q \rangle## is seen to be non vanishing? The ground state VEV is thus non zero and writing the ##\bar q q## in terms of left and right handed components we see that this term couples them thus the symmetry is no longer the direct product of two ##SU(2)##'s. Or perhaps in other language, ##\langle \bar \psi \psi \rangle m## is the order parameter for the phase transition such that for m identically zero it is zero but for any finite m it is non zero. Is any of those descriptions correct? No, the term ##\sum_{i} m_i \overline{q}_i q_i## is explicitly breaking the chiral symmetry. Spontaneous breaking occurs for an exact symmetry and is indeed the case that the ground state is not invariant under the full symmetry group but the Lagrangian/Hamiltonian is. Indeed ##\overline{q} q## is an order parameter and ##\propto## the ##\sigma## field, which has a non-vanishing VEV. Have a look at the above cited review by Volker Koch which is really very pedagogical and nicely readable. Here's the direct link to the freely available preprint: http://arxiv.org/abs/nucl-th/9706075 Reactions: CAF123 Thanks, that does indeed look quite tractable - do you know of a reference where I might find more details on why the non vanishing of ##\langle \bar q q \rangle## implies SSB of the chiral symmetry? I've only seen SSB in the context of a scalar field theory and gauge theories so this might be why I'm getting a little confused in tying things together with how formations of condensates can lead to SSB. I read a little in Schwartz's book too but I still feel I didn't fully understand. As I said, try Volker Koch's review on chiral symmetry. There it becomes quite clear, how you get from QCD to chiral effective models. The trick is to identify the currents in both models. 
After all a lot can be learnt from only using "current algebra" which was the way to address hadrons before QCD. Note that the idea of chiral models goes far back before QCD and even quarks were on the agenda.
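As a quick numerical footnote to the thread above: the minimum of the linear sigma-model potential $V(\phi)=-\tfrac{\mu^2}{2}\phi^2+\tfrac{\lambda}{8}(\phi^2)^2$ quoted earlier, and the resulting sigma and pion masses, can be checked symbolically. The sketch below (sympy, with $\mu$ and $\lambda$ kept as free positive symbols, restricted to one pion component for brevity) is only an illustration of that computation, not part of the original discussion.

```python
import sympy as sp

mu, lam = sp.symbols("mu lambda", positive=True)
sigma, pi = sp.symbols("sigma pi", real=True)

# Mexican-hat potential of the linear sigma model, restricted to (sigma, one pion)
phi2 = sigma**2 + pi**2
V = -mu**2 / 2 * phi2 + lam / 8 * phi2**2

# Minimum along the sigma direction (pi = 0): the vacuum expectation value v
candidates = sp.solve(sp.diff(V.subs(pi, 0), sigma), sigma)
v = [s for s in candidates if s.is_positive][0]
print("v =", v)                  # sqrt(2)*mu/sqrt(lambda)

# Masses^2 = second derivatives of V at the minimum (sigma = v, pi = 0)
m_sigma2 = sp.simplify(sp.diff(V, sigma, 2).subs({sigma: v, pi: 0}))
m_pi2 = sp.simplify(sp.diff(V, pi, 2).subs({sigma: v, pi: 0}))
print("m_sigma^2 =", m_sigma2)   # 2*mu**2  -> massive sigma
print("m_pi^2    =", m_pi2)      # 0        -> massless Goldstone pion
```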
Many set theory and model theory questions. The questions:

1. Are there any cardinals $\kappa$ such that $\mathrm{V}\setminus\{\kappa\}\prec\mathrm{V}$? If so, I think a neat name for them would be "ghost cardinals", because their existence leaves no impact on $\mathrm{V}$ and its properties.
2. Given a theory $\mathrm{T}$, what is the minimum cardinality of $\mathcal{M}$ such that $\mathcal{M}$ is nonempty and $\mathcal{M}\models\mathrm{T}$? In particular, what is it for Peano Arithmetic, Z, ZF, ZFC, KM, etc.? (SOLVED)
3. Are there any cardinals $\kappa$ larger than the smallest correct cardinal such that $\forall\lambda<\kappa(\mathrm{V}_\lambda\prec\mathrm{V}\rightarrow\mathrm{V}_{\lambda}\prec\mathrm{V}_\kappa)$? (In other words, every rank of a correct cardinal is an elementary substructure of the rank of $\kappa$, making $\mathrm{V}_\kappa$ very similar to $\mathrm{V}$; a good name for these is "hypercorrect".)
4. Given a structure $\mathcal{M}$, are there any non-singleton chains of substructures of $\mathcal{M}$ ordered by $\prec$? What is the largest supremum of these chains' order types when $\mathcal{M}=\mathrm{V}$? What about when $\mathcal{M}=L$?

Where I have gotten to so far on them:

1. If they do exist, they are not "definable" (i.e. there is no formula that is true for them and only them). Every $\aleph_\alpha$ and $\beth_\alpha$ for finite $\alpha$ is not a ghost cardinal, and GCH implies every ghost cardinal is a limit cardinal.
2. For Peano Arithmetic, the answer is clearly $\aleph_0$. For all the others, there are no finite models of Z, ZF, ZFC, or KM, and since they are all $\mathcal{L}_{\omega,\omega}$-theories, there is a countable model (if any). Thus, for all of these, $\aleph_0$.
3. Every one of these cardinals is correct. Since not much is known on correct cardinals, this one seems hard to work on.
4. Yes, there is an $\mathcal{M}$ where this property is true. In fact, that $\mathcal{M}$ is $\mathrm{V}$. Clearly, the existence of a correct cardinal implies that this supremum is at least $3$ (as $\{\mathrm{V}_\kappa,\mathrm{V}\}$ is well-ordered by $\prec$). It is also true that the existence of a hypercorrect cardinal implies it is at least $4$ (as $\{\mathrm{V}_\lambda,\mathrm{V}_\kappa,\mathrm{V}\}$ is well-ordered by $\prec$).

set-theory model-theory large-cardinals
For what it's worth, I think it would be a good idea to get a solid grasp of basic model theory before diving in to the model theory of set theory specifically; in particular, I think that comfort with the basic tools like compactness, Lowenheim-Skolem, and omitting types is a necessity here (to be clear, I don't think omitting types itself is specifically necessary, but comfort with its proof and use is a turning point in understanding model theory). I think it's also a good idea in the future to not ask several questions at once. Noah SchweberNoah Schweber $\begingroup$ @Zetapology "they can be "switched" with other cardinals (via automorphism)" No, they can't - there's no nontrivial automorphism of $V$ at all. (Precisely: a well-founded model of ZF has no nontrivial automorphisms.) And regardless, there's a huge difference between "can be moved by an automorphism" and "can be omitted, while keeping every other element, without changing the theory of the structure." I think you need to think carefully about exactly what question you're trying to ask, and what the various terms you're saying mean precisely. $\endgroup$ – Noah Schweber Sep 12 '17 at 1:44 $\begingroup$ @Zetapology No, it doesn't. Read the definition again. It gives a nontrivial elementary embedding from $V$ into itself; that's not the same thing at all. $\endgroup$ – Noah Schweber Sep 12 '17 at 1:58 $\begingroup$ @Zetapology No. An automorphism is bijective, while an elementary embedding need not be. Let me reiterate my previous point: before diving into this stuff, you should really familiarize yourself with the basic model theory involved. $\endgroup$ – Noah Schweber Sep 12 '17 at 1:59 $\begingroup$ @Zetapology I'm aware. That statement is false. I'll say it again: an automorphism is, by definition, bijective. Elementary embeddings need not be. An elementary embedding from $V$ to itself need not be an automorphism, and in fact (so long as $V$ is well-founded and the embedding is not just the identity) it never will be. $\endgroup$ – Noah Schweber Sep 12 '17 at 2:01 $\begingroup$ @Zetapology Isomorphisms are bijective. Elementary embeddings need not be. I don't know how many times I have to say this. $\endgroup$ – Noah Schweber Sep 12 '17 at 2:04 How many nonisomorphic models of ZFC (or other theories) are there? Number of models for some theory Universe cardinals and models for ZFC On the number of countable models of complete theories of models of ZFC Are there non-equivalent cardinal arithmetics? Why isn't second-order ZFC categorical? Saturated models and $\kappa=\kappa^{<\kappa}$ What is wrong with this "proof" that there is no $\omega$th inaccessible cardinal? Are there any cardinals with this property of elementary substructures? Given a cardinal $\kappa$, what is the smallest cardinal $\lambda$ for which $2^{\lambda}\geq\kappa$?
Why is the Penrose triangle "impossible"? I remember seeing this shape as a kid in school and at that time it was pretty obvious to me that it was "impossible". Now I looked at it again and I can't see why it is impossible anymore.. Why can't an object like the one represented in the following picture be a subset of $\mathbb{R}^3$? Rodrigo de Azevedo $\begingroup$ I'm not sure why this got a downvote. I think it's a great question for this site: while its impossibility may be obvious, clearly articulating why it's impossible is a bit trickier. Moreover, this question leads naturally into the broader problem of developing a theory of impossible figures, which has been pursued to a certain extent (see e.g. here). $\endgroup$ – Noah Schweber $\begingroup$ It's not hard to produce 3-dimensional objects that match this picture when seen from a particular viewpoint: e.g. this sculpture. $\endgroup$ – Robert Israel $\begingroup$ Another interesting point is that while it is definitely impossible to embed this thing in $\mathbb{R}^3$ (or even any $\mathbb{R}^n$ I think) and have it look that way, it is a perfectly sensible manifold "intrinsically". The implied surface forms a continuous loop so that traversing it indefinitely in one or the other natural direction you travel all over the entire figure. $\endgroup$ – The_Sympathizer $\begingroup$ Same here! Actually now I think it is possible to make it (you just need to paint the sides in black, white and grey and not rely on a light source) $\endgroup$ – lalala $\begingroup$ I seem to remember some reasonably famous short article that uses sheaf cohomology to talk about this exact problem... $\endgroup$ I can't resist posting an answer based on the Mathematics Stack Exchange logo. Let's add some more cubes to the logo to make it clear that it's a subset of the Penrose triangle (or would be, if it was a real 3D object) Now note that the cubes are overlapping, so some must be in front of others. But in fact, each cube is partially obscured by at least one other cube, in such a way that it appears to be some distance behind it. You can go around the hexagon in the original logo, in clockwise order, and see that each cube appears to be located further from the 'camera' than the next one in the cycle - which means that each cube is in front of itself. There's no consistent "z ordering" that you can give to the different parts of the figure, and that's one way to see that it's impossible. In reply to some of the comments, just to be explicit, the point here isn't just that the cubes all overlap each other. If that was the case it would be incorrect, since it's possible to have mutually overlapping arragements of cubes, as in this image provided by Misha Lavrov. However, if we're assuming that the Stack Exchange logo is a subset of the Penrose triangle then we know the cubes aren't arranged like that. Instead, each cube is positioned so that some of its sides are coplanar with those of the next cube, and each cube is separated from the next by some distance in the z direction, where z is perpendicular to the plane of the image. Therefore the cubes' centres of mass can't be given consistent z coordinates. As an extra bonus point, even if we don't assume that, and instead assume that each cube is as close to the next as it can be (in the z direction) without the surfaces intersecting, the Math.SE logo still can't be made into a consistent 3D shape, as the following animation shows. 
Note that it doesn't quite form the Math.SE logo, since one cube ends up in front of all the rest. Of the six neighbouring pairs of cubes, three of them can have equal z coordinates, but for the remaining three pairs, one cube unavoidably has to have a greater z coordinate than the next. As another additional bonus point, although it's not possible to embed the Penrose triangle into normal, flat, Euclidean 3D space, it is possible to embed it into curved three dimensional space. The video below, by @ZenoRogue on Twitter, shows Penrose triangles embedded into something called "nil geometry". I don't pretend to understand the details, but it's a kind of curved space such that Penrose triangles really are possible. video link: https://www.youtube.com/watch?v=YmFDd49WsrY edited Jul 6 '20 at 3:11 N. VirgoN. Virgo $\begingroup$ Thanks, reading this answer made it intuitively "obvious" again to me. $\endgroup$ $\begingroup$ But ... you can have three cubes, where $A$ is in front of $B$ is in front of $C$ is in front of A, in the sense that $A$ partially obscures cube $B$ when you're looking at it.. With the cubes in the same orientation as your picture, put cube A with its bottom front corner in the center of the top face cube B, and put cube C so that its top left corner is in the middle of the right face of cube A. $\endgroup$ – Peter Shor $\begingroup$ I feel like this is missing something since it's possible to have a set of 3d objects where each one is in front of and behind another. For example, when folding the sides of a cardboard box to close it: assets-global.website-files.com/5e136d26c5e98c478100e1c7/…. $\endgroup$ – mowwwalker $\begingroup$ Another interpretation of this image would be, if you label sides of the cube as X, Y, Z corresponding to the axes: you move 3 steps on the X axis, 3 on the Y axis, and 3 on the Z axis, which doesn't leave you back where you started. At least that's why this image looks suspicious to me. (Of course, this is then just a really good illustration of the answer by John Bentin) $\endgroup$ – Milo Brandt $\begingroup$ Here is a "possible triangle" with three cubes: i.stack.imgur.com/cSCg0.png Yet I feel like the same argument as in this answer (each cube is partially obscured by another) could be made for it, so this answer proves too much... $\endgroup$ – Misha Lavrov Start at the bottom left-hand corner, taking othonormal unit vectors $\pmb i$ horizontally, $\pmb j$ inward along the cross-member bottom left-hand edge, and $\pmb k$ upward and perpendicular to $\pmb i$ and $\pmb j$. I'll take the long edge of a member as $5$ times its (unit) width; the exact number doesn't matter. Then, working by vector addition anticlockwise round the visible outer edge to get back to the starting point, we have $$5\pmb i+\pmb k+5\pmb j-\pmb i-5\pmb k-\pmb j=4\pmb i+4\pmb j-4\pmb k=\pmb0,$$which of course is impossible. John BentinJohn Bentin $\begingroup$ Great rigorous solution! $\endgroup$ $\begingroup$ Note also the "real-world" versions of this object are designed so that the total vector displacement (such as $4 \hat{\imath} + 4 \hat{\jmath} - 4 \hat{k}$ in your example) is along the line of sight, and so is visually perceived as the zero vector. $\endgroup$ – Michael Seifert $\begingroup$ I don't see where you get the $+\mathbf{k}$, $-\mathbf{i}$, $-\mathbf{j}$ terms from to begin with. 
I would make this whole argument simpler: the three blocks are orthogonal and of equal length; call them $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$; then the closed-loop displacement is $\mathbf{i} + \mathbf{j} + \mathbf{k}$, which is clearly non-zero, i.e., impossible. $\endgroup$ – Noldorin $\begingroup$ @Noldorin : The terms you cite correspond respectively to the bottom right, top, and bottom left (short) edges in the OP's diagram. I'm not sure quite what a "block" is, or how such blocks could be added. However, if we consider the three corner cubes, each common to two members (like the cubes added in Nathaniel's answer), then the three (mutually orthogonal) vectors between the cubes' centroids perhaps could represent each block, and they would sum to zero. While I agree that three vectors are simpler than six, I preferred to use easily defined vectors along the visible edges. $\endgroup$ – John Bentin $\begingroup$ I agree with the comments that "along the cross-member bottom left-hand edge" is very unclear. Took me a full minute to decipher. You could edit to (1) first explain what you mean by a member, or alternatively just draw a picture. $\endgroup$ Assume the white part is facing upwards. This is without loss of generality, since it just represents a specific rotation of the whole thing, which can't affect whether a shape is possible or impossible. Now we know both the right and bottom columns (in the image) are on the same vertical plane / level (since they share the white horizontal surface). Based on the connection between the left and right columns, we also know the left column extends downwards from the above plane (since it's on the opposite side of a side that's facing upwards). This implies at least part of the bottom column is below the right column. But we've already established they're on the same vertical plane, so we have a contradiction. Thus this shape can't exist in 3D. This is of course based on the assumption that each part of the image filled with a single solid colour represents a flat (uncurved) continuous surface and adjacent surfaces are connected at the same points as in the image and they point in different directions. Bernhard BarkerBernhard Barker $\begingroup$ +1 for pointing out the underlying assumption. Without this assumption, I'm pretty sure you could produce a connected solid shape that was topologically equivalent to the Penrose triangle - though the geometric shape would be different. $\endgroup$ – Robin Saunders $\begingroup$ @RobinSaunders A torus is topologically equivalent to the Penrose triangle. :-) (The 'twist' in the shape comes out in the wash...) $\endgroup$ – Steven Stadnicki $\begingroup$ Yes, that's true. I guess I meant "smoothly equivalent". $\endgroup$ $\begingroup$ Yes, making the assumptions explicit is important. Otherwise this would be a solution: youtu.be/CVOU8OTcgOQ $\endgroup$ – ThomasW It's helpful - as is often the case - to boil the picture down to something simpler. In this case, let's just think about three particular polygons sitting in $3$-space: the (visible) black, white, and grey $L$-shapes. These are themselves contained in three planes, which I'll call $P_b, P_w, P_g$ respectively. Now let's think about how these planes intersect - say, $P_b$ and $P_w$. We have one visible intersection, namely the "front" edge of the bottom cylinder where the black and white shapes themselves meet. 
However, we also have another intersection: if we "continue" the top of the black $L$, it will eventually meet the white $L$ at its top. So in fact $P_b$ and $P_w$ intersect in two distinct lines, and in particular they have at least non-collinear three points of intersection. But two planes which intersect at three non-collinear points must be the same plane - and that can't be the case here, since the black and white shapes clearly meet at right angles. Noah SchweberNoah Schweber $\begingroup$ Hmm. What about perspective? This doesn't seem very rigorous. $\endgroup$ $\begingroup$ I can't quite say how, but I feel like this is relevant: commons.wikimedia.org/wiki/File:Perth_Impossible_Triangle.jpg $\endgroup$ – Burnsba $\begingroup$ @Burnsba The "right-facing" plane is no longer a plane, but actually two planes whose normals point in different directions. $\endgroup$ $\begingroup$ @user118161 I am not sure what you mean. Any analysis of the figure in question has to further interpret it, since certainly we can in reality produce objects which look like the figure in question (see Bumsba's comment). Any answer has to translate the drawing into some precise description of a mathematical configuration - either in terms of a family of vectors satisfying some algebraic relationship, or in terms of some family of planes satisfying some geometric relationship, or whatever. I have written colloquially, but this is ultimately no less rigorous than any of the other answers. $\endgroup$ $\begingroup$ Put another way, the question itself isn't fully rigorous as written since it doesn't actually say what a Penrose tile is precisely. A rigorous version of the question would be e.g. "Why can't we have in $\mathbb{R}^3$ a set of three square-based prisms such that [intersection conditions]?" The process of teasing out the "implied conditions" is necessary to answer the question; for me, I read the picture as implicitly declaring that the various "faces" are in fact flat and do in fact meet at right angles as suggested. $\endgroup$ This is only impossible because we try so hard to see three dimensionality in the figure. As I read through the answers and stared at the figure, it ceased to be 3 dimensional, and instead became Three identical asymmetrical V-shapes lying flat on a plane. Easily describable, easily drawable, and completely flat. Our experience has trained our optical neural nets to see three dimensionality, and it generally serves us well. In this case, the local fit with three dimensional corner shading bumps into our higher-level matching against known figures, and the tension is born. Clearly this is a trivially possible figure -- it appears several times in the question and answers. It is our perception and expectations that are wrong. cmmcmm Imagine keeping the corners in the same place, but reducing the width of the square cross-section of each side down to zero, until each side is a one-dimensional line segment. You would end up with a triangle with three $90^{\circ}$ angles, which is impossible in Euclidean space $\mathbb{R}^n$. Rivers McForgeRivers McForge Geometry or topology behind the "impossible staircase" Coordinate proof that the sum of a triangle's angles is $180^\circ$? Is every manifold homeomorphic to a subset of some Euclidean space? Size at a distance Measure of angle formed by chords and two circles Why, intuitively, do different shapes with the same surface area have different volumes? 
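A small numerical restatement of the edge-vector answer above (the loop of six visible outer edges, with John Bentin's labelling and a long edge of 5 units): summing the edge vectors fails to return to the origin, which is exactly the contradiction. The sketch below just transcribes those six vectors.

```python
import numpy as np

i = np.array([1, 0, 0])   # horizontal
j = np.array([0, 1, 0])   # inward
k = np.array([0, 0, 1])   # upward

# The six visible outer edges of the drawn triangle, traversed anticlockwise
# (lengths as in the answer above: long edges 5 units, short edges 1 unit).
edges = [5 * i, k, 5 * j, -i, -5 * k, -j]

loop_sum = np.sum(edges, axis=0)
print(loop_sum)   # [ 4  4 -4]  -- not the zero vector
# A closed loop of edges in R^3 must have edge vectors summing to zero,
# so no solid figure with these edges can exist.
```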
An algorithm for filling a moving truck Intersection of equilateral triangle and circle Draw a Square Without a Compass, Only a Straightedge Finding the angles and area of unusual shapes Is the center of mass of a convex shape can be calculate just by its sphere?
Would a zombie apocalypse be possible if a zombie existed? I want to do something on the first weeks of the zombie apocalypse. I am using a viral, slow zombie model with effectively infinite energy and a transformation time of 24 hours or so. So I want to know: if, let us say, a zombie is created, or even 100, can a zombie apocalypse theoretically happen? Wouldn't the military work its way through? I am planning on doing it in a receding/increasing way, so first everything seems safe, and then a zombie is missed in the search and it manages to start again. reality-check zombies MikhailTal $\begingroup$ 'Biting' is a horribly inefficient transmission vector. It'd take some people...but as soon as folk figured out what was going on, it'd get shut down pretty quickly. Ref: cracked.com/… $\endgroup$ – guildsbounty Oct 15 '14 at 17:50 $\begingroup$ Welcome to worldbuilding :) Are your zombies visibly different from uninfected humans? I can imagine it spreading, maybe, if they stay looking human and remain intelligent with a drive to spread the infection. Knowing what intelligence level and appearance they have will help narrow down the answers. $\endgroup$ – trichoplax Oct 15 '14 at 18:17 $\begingroup$ xkcd.com/734 $\endgroup$ – Epiglottal Axolotl Oct 15 '14 at 18:38 $\begingroup$ It's just really hard to explain. Your best answer is to think about "what started it" and go from there. For example if it's a chemical everyone in the town was exposed to that caused them all to turn, that might work. Again the threat would be short lived but at least you can explain a whole town being taken over. $\endgroup$ – Tim B Oct 15 '14 at 20:02 $\begingroup$ Do you know why we have hunting seasons? To stop our people from driving every species in the woods to extinction. If we put open season on everything for a year there would be nothing left to hunt. Now imagine a decaying zombie trying to eat people. We wouldn't need the military to nip this in the bud: just put open season on the infected. $\endgroup$ – JDSweetBeat Apr 22 '15 at 17:55
In most cases, this means that a small group of romero zombies would die out very quickly. In order for a zombie appocalypse to happen, you need some way to manipulate this such that $P_C>>P_H$ but not high enough such that all human die too quickly. 28 days later does this by increasing $P_C$ to high levels with vomiting blood and decreasing $P_H$ by setting it in largely gunless London. Walking dead does this by making everyone turn into zombies at any death to create a probability of spontaneous zombie / human conversion without invocing the $CHZ$ zombie human interaction parameter. This also effectively bypasses $P_H$. If you look, i'm sure you can find a good way to do this too. The best way to do this seems to be to innoculate the system. A massive number of zombies upfront will collapse society increasing the chance for victory for the zombies in each interaction (group zombie attacks and sickly underarmed humans). This is best done by adding incubation times, invisible carriers who spread it without other's knowing, or an environmental source which kills most upfront. This means that you can have $P_H>P_Z+P_C$ but have it look like a traditional zombie apocalypse. You also realistically need to include the ability for sections of either group to isolate itself/group up as it increases the ability of the weaker to survive. Some implementation of birth/natural death/ human-human killings would improve it as well. I would also find it fun to include a cyclic "night time" in which zombies have the upperhand while humans do in the daytime. kainekaine $\begingroup$ "When simple mathematics answers a complex question, the mathematics is either ludicrously oversimplified, or a work of genius." $\endgroup$ – Salmoncrusher Aug 29 '16 at 15:49 $\begingroup$ @Salmoncrusher where is the quote from? I agree with the concept. This is a very oversimplified model but helps define concepts that are hard to describe without it. A complete model is not needed to draw some informative conclusions. This math is the work of genius... in other fields haphazardly applied to a fictional scenario. $\endgroup$ – kaine Aug 29 '16 at 15:59 Unfortunately, zombie apocalypse scenarios run into a big problem with zombie propagation. Kaine already gave the simple version of the equations but note that there's a big problem here: Either the number of zombies goes up--and the humans are soon wiped out, or the number of zombies goes down--and the zombies are soon wiped out. In neither case do you end up with a zombie apocalypse scenario. Besides, the equations assume humans were stamped out by a cookie cutter. To end up with an apocalypse scenario you need to look at humans in a more complex fashion: You have your average city dweller. Few have much combat capability, Ph will be low, Pc will be high. The zombie "virus" will spread through them like wildfire unless the infection cycle is too slow. For zombies like we saw in the World War Z movie you'll get near total conversion very quickly. You have some combat-capable city dwellers. Unless they are lucky to realize what's up in time and find someplace zombie-proof to hole up they aren't going to fare better but they'll thin the herd a bit before going down. While their Ph is high they will face so many encounters the numbers will get them in the end. Finally, you have the country dwellers. 
The population density is much lower which means prepared individuals won't have nearly the threat of being swarmed and both firearms and the skill to use them are much more widespread. The lower population density also means more time for a warning. Ph is high and they won't be swarmed. This latter group is the only path I see to an apocalypse scenario. Much of the world becomes fully converted, the survivors are mostly farmers and ranchers. There is also the approach used in John Ringo's zombie novels--the zombie virus piggy-backs on a flu virus. (Some lunatic's genetic engineering.) The flu spreads like flu always does (especially when the lunatic places dispensers in places like airports), the disease is pandemic before anyone realizes it's more than just a nasty strain of flu. With so many infected at the start society collapses before the government gets it's act together. Since his zombies aren't actually undead they don't meet the parameters you set out, though. Loren PechtelLoren Pechtel $\begingroup$ There's also Andrei Kruz novels where virus actually boosts one's immunity and in small doses is generally beneficial, except for side-effects in recently dead flesh. Disease becomes pandemic before anyone actually realises that there is a disease. $\endgroup$ – Daerdemandt Sep 5 '16 at 19:04 Well with zombies you would need to shoot or hit them directly in their brain. Since they are undead creatures we can safely guess that a wound, even loss of limb wouldn't stop them, they also wouldn't burn easily. So for even a single zombie to go down, it would need to take a hit to the head. Granted they also wouldn't recover from wounds but that's it. Alex SPENCERAlex SPENCER Zombies are stupid Zombies are by nature extremely dumb creatures. Given time, folks will work out fairly reliable ways to herd zombies with noisemakers and spotlights (or, for style, disco balls). If this reminds you of the (as it turns out made up) story about lemmings, that was my point. If significant parts of the world are relatively unaffected for long enough to develop a real plan, their militaries will be able to muster the resources to clear areas by (for example) having helicopters fly over an area with loudspeakers blaring, dangling live bait (cows, probably) which will build an enormous swarm following it right where you want it to go. When you've got the horde where you want it, just drop the bait, have the chopper speed up and leave - then shell the area, have snipers mop up remnants. Lather, rinse, repeat. People in zombie fiction are also stupid They never take any realistic steps that could staunch the bleeding. Layered defenses for the win. I won't belabor the points from that answer too much, but in general - the answer is no. There would not be a zombie apocalypse. The death toll would be almost unimaginable, but compartmentalization, travel lockdowns, and cleanup operations would be enough to keep everyone from dying. A new world, compartmentalized Zombies would probably never go away. Cities would have to be designed to account for zombie attacks. The goal is to prevent throngs of zombies from being able to overrun them by compartmentalizing damage. Roads would be closable in ways that block all traffic. Where possible buildings would contain few entrances at ground level, instead relying mostly on retractable staircases leading to the second floor. 
Major arteries (such as today's expressways) would be at a lower level than the rest of the city, so they could be sealed off should it become necessary.

Damage would be catastrophic but not apocalyptic

Modern society relies on mass production machinery and the ability to move things around the world in days. Getting enough food into (say) Chicago without roads being clear would become a major challenge very quickly, because the supply chain would be depleted fairly rapidly. And making new computers and cars and power tools would likewise be very hard. Modern assembly lines have precision tools that are used to make all this stuff. Without those tools, and the knowhow to make them, the ability to make complex machinery with interchangeable parts goes away. Much of these capabilities reside in major metropolitan areas, the most likely to fall. But even if every major city fell, enough would still be left that people would be able to restart civilization. Tech might be set back quite far, but enough would remain usable for long enough to prevent people from forgetting how stuff used to work. And people will have (for instance) archival copies of Wikipedia. It is likely that tech levels would, in time, recover - significantly faster than we developed said tech originally. Ton Day
Minute Math Online Math Help Solve Systems of Equations Using Matrices 4.5 Solve Systems of Equations Using Matrices Topics covered in this section are: Write the augmented matrix for a system of equations Use row operations on a matrix 4.5.1 Write the Augmented Matrix for a System of Equations Solving a system of equations can be a tedious operation where a simple mistake can wreak havoc on finding the solution. An alternative method which uses the basic procedures of elimination but with notation that is simpler is available. The method involves using a matrix. A matrix is a rectangular array of numbers arranged in rows and columns. A matrix is a rectangular array of numbers arranged in rows and columns. A matrix with $m$ rows and $n$ columns has order $m \times n$. The matrix on the left below has $2$ rows and $3$ columns and so it has order $2 \times 3$. We say it is a $2$ by $3$ matrix. Each number in the matrix is called an element or entry in the matrix. We will use a matrix to represent a system of linear equations. We write each equation in standard form and the coefficients of the variables and the constant of each equation becomes a row in the matrix. Each column then would be the coefficients of one of the variables in the system or the constants. A vertical line replaces the equal signs. We call the resulting matrix the augmented matrix for the system of equations. Notice the first column is made up of all the coefficients of $x$, the second column is the all the coefficients of $y$, and the third column is all the constants. Write each system of linear equations as an augmented matrix: $\Bigg\{ \begin{align*} &5x-3y=1 \\ &y=2x-2 \end{align*}$ $\Bigg\{ \begin{align*} &6x-5y+2z=3 \\ &2x+y-4z=5 \\ &3x-3y+z=-1 \end{align*}$ The second equation is not in standard form. We rewrite the second equation in standard form. $\begin{align*} y&=2x-2 \\ -2x+y&=-1 \end{align*}$ We replace the second equation with its standard form. In the augmented matrix, the first equation gives us the first row and the second equation gives us the second row. The vertical line replaces the equal signs. All three equations are in standard form. In the augmented matrix the first equation gives us the first row, the second equation gives us the second row, and the third equation gives us the third row. The vertical line replaces the equal signs. It is important as we solve systems of equations using matrices to be able to go back and forth between the system and the matrix. The next example asks us to take the information in the matrix and write the system of equations. Write the system of equation that corresponds to the augmented matrix: $ \left[ \begin{array}{c c c|c} 4 & -3 & 3 & -1 \\ 1 & 2 & -1 & 2 \\ -2 & -1 & 3 & -4 \end{array} \right] $ We remember that each row corresponds to an equation and that each entry is a coefficient of a variable or the constant. The vertical line replaces the equal sign. Since this matrix is a $ 4 \times 3$, we know it will translate into a system of three equations with three variables. 4.5.2 Use Row Operations on a Matrix Once a system of equations is in its augmented matrix form, we will perform operations on the rows that will lead us to the solution. To solve by elimination, it doesn't matter which order we place the equations in the system. Similarly, in the matrix we can interchange the rows. When we solve by elimination, we often multiply one of the equations by a constant. 
Since each row represents an equation, and we can multiply each side of an equation by a constant, similarly we can multiply each entry in a row by any real number except $0$. In elimination, we often add a multiple of one row to another row. In the matrix we can replace a row with its sum with a multiple of another row. These actions are called row operations and will help us use the matrix to solve a system of equations. ROW OPERATIONS In a matrix, the following operations can be performed on any row and the resulting matrix will be equivalent to the original matrix. Interchange any two rows. Multiply a row by any real number except $0$. Add a nonzero multiple of one row to another row. Performing these operations is easy to do but all the arithmetic can result in a mistake. If we use a system to record the row operation in each step, it is much easier to go back and check our work. We use capital letters with subscripts to represent each row. We then show the operation to the left of the new matrix. To show interchanging a row: To multiply row 2 by $-3$: To multiply row 2 by $-3$ and add it to row $1$: Perform the indicated operations on the augmented matrix: Interchange rows 2 and 3. Multiply row 2 by $5$. Multiply row 3 by $-2$ and add to row 1. $ \left[ \begin{array} {c c c |c} 6 & -5 & 2 & 3 \\ 2 & 1 & -4 & 5 \\ 3 & -3 & 1 & -1 \end{array} \right]$ We interchange rows 2 and 3. We multiply row 2 by $5$. We multiply row 3 by $-2$ and add to row 1. Now that we have practiced the row operations, we will look at an augmented matrix and figure out what operation we will use to reach a goal. This is exactly what we did when we did elimination. We decided what number to multiply a row by in order that a variable would be eliminated when we added the rows together. Given this system, what would you do to eliminate $x$? This next example essentially does the same thing, but to the matrix. Perform the needed row operation that will get the first entry in row 2 to be zero in the augmented matrix: $\left[ \begin{array}{c c |c} 1 & -1 & 2 \\ 4 & -8 & 0 \end{array} \right]$ To make the $4$ a $0$, we could multiply row $1$ by $-4$ and then add it to row $2$. 4.5.3 Solve Systems of Equations Using Matrices To solve a system of equations using matrices, we transform the augmented matrix into a matrix in row-echelon form using row operations. For a consistent and independent system of equations, its augmented matrix is in row-echelon form when to the left of the vertical line, each entry on the diagonal is a $1$ and all entries below the diagonal are zeros. ROW-ECHELON FORM For a consistent and independent system of equations, its augmented matrix is in row-echelon form when to the left of the vertical line, each entry on the diagonal is a $1$ and all entries below the diagonal are zeros. Once we get the augmented matrix into row-echelon form, we can write the equivalent system of equations and read the value of at least one variable. We then substitute this value in another equation to continue to solve for the other variables. This process is illustrated in the next example. Solve the system of equation using a matrix: $\Bigg\{ \begin{align*} &3x+4y=5 \\ &x+2y=1 \end{align*}$ The steps are summarized here. HOW TO: Solve a system of equations using matrices. Write the augmented matrix for the system of equations. Using row operations get the entry in row 1, column 1 to be $1$. Using row operations, get zeros in column 1 below the $1$. Using row operations, get the entry in row 2, column 2 to be $1$. 
Continue the process until the matrix is in row-echelon form. Write the corresponding system of equations. Use substitution to find the remaining variables. Write the solution as an ordered pair or triple. Check that the solution makes the original equations true. Here is a visual to show the order for getting the $1$'s and $0$'s in the proper position for row-echelon form. We use the same procedure when the system of equations has three equations. Solve the system of equations using a matrix: $\Bigg\{ \begin{align*} &3x+8y+2z=-5 \\ &2x+5y-3z=0 \\ &x+2y-2z=-1 \end{align*}$ $\Bigg\{ \begin{align*} 3x+8y+2z&=-5\\2x+5y-3z&=0 \\ x+2y-2z&=-1 \end{align*}$ Write the augmented matrix for the equations. $\left[ \begin{array} {c c c |c} 3 & 8 & 2 & -5 \\ 2 & 5 & -3 & 0 \\ 1 & 2 & -2 & -1 \end{array} \right] $ Interchange rows 1 and 3 to get the entry in row 1, column 1 to be $1$. Using row operations, get zeros in column 1 below the $1$, then get the entry in row 2, column 2 to be $1$ and a zero below it. The entry in row 2, column 2 is now $1$. The matrix is now in row-echelon form. Write the corresponding system of equations. $\Bigg\{ \begin{align*}x+2y-2z&=-1 \\ y+z&=2 \\ z&=-1\end{align*}$ Use substitution to find the remaining variables. $\begin{align*} y+z&=2 \\ y+(\textcolor{red}{-1})&=2 \\ y&=3 \end{align*}$ $\begin{align*} x+2y-2z&=-1 \\ x+2(\textcolor{red}{3})-2(\textcolor{red}{-1})&=-1 \\ x+6+2&=-1 \\ x&=-9 \end{align*}$ Write the solution as an ordered pair or triple. $(-9, 3, -1)$ Check that the solution makes the original equations true. We leave the check to you. So far our work with matrices has only been with systems that are consistent and independent, which means they have exactly one solution. Let's now look at what happens when we use a matrix for a dependent or inconsistent system. Solve the system of equations using a matrix: $\Bigg\{ \begin{align*} &x+y+3z=0\\ &x+3y+5z=0 \\ &2x+4z=1 \end{align*}$ $\Bigg\{ \begin{align*} &x+y+3z=0\\ &x+3y+5z=0 \\ &2x+4z=1 \end{align*}$ Write the augmented matrix for the equations. $\left[ \begin{array} {c c c |c} 1 & 1 & 3 & 0 \\ 1 & 3 & 5 & 0 \\ 2 & 0 & 4 & 1 \end{array} \right]$ The entry in row 1, column 1 is $1$. Using row operations, get zeros in column 1 below the $1$ and get the entry in row 2, column 2 to be $1$. Multiply row 2 by $2$ and add it to row 3. At this point, we have all zeros on the left of row 3. Write the corresponding system of equations. $\Bigg\{ \begin{align*} x+y+3z&=0 \\ y+z&=0 \\ 0&≠1 \end{align*}$ Since $0≠1$ we have a false statement. Just as when we solved a system using other methods, this tells us we have an inconsistent system. There is no solution. The last system was inconsistent and so had no solutions. The next example is dependent and has infinitely many solutions. Solve the system of equations using a matrix: $\Bigg\{ \begin{align*} &x-2y+3z=1 \\ &x+y-3z=7 \\ &3x-4y+5z=7 \end{align*}$ $\Bigg\{ \begin{align*} &x-2y+3z=1 \\ &x+y-3z=7 \\ &3x-4y+5z=7 \end{align*}$ Write the augmented matrix for the equations. $\left[ \begin{array} {c c c |c} 1 & -2 & 3 & 1 \\ 1 & 1 & -3 & 7 \\ 3 & -4 & 5 & 7 \end{array} \right] $ Using row operations, get zeros in column 1 below the $1$ and get the entry in row 2, column 2 to be $1$. Multiply row 2 by $-2$ and add it to row 3. At this point, we have all zeros in the bottom row. Write the corresponding system of equations. $\Bigg\{ \begin{align*} x-2y+3z&=1 \\ y-2z&=2 \\ 0&=0 \end{align*}$ Since $0=0$ we have a true statement. Just as when we solved by substitution, this tells us we have a dependent system. There are infinitely many solutions. Solve for $y$ in terms of $z$ in the second equation. $\begin{align*} y-2z&=2 \\ y&=2z+2 \end{align*}$ Solve the first equation for $x$ in terms of $z$. $x-2y+3z=1$ Substitute $y=2z+2$. Then simplify.
$\begin{align*} x-2(2z+2)+3z&=1 \\ x-4z-4+3z&=1 \\ x-z-4&=1 \\ x&=z+5 \end{align*}$ The system has infinitely many solutions $(x,y,z)$, where $x=z+5$, $y=2z+2$, and $z$ is any real number. Licenses & Attributions CC Licensed Content, Original Revision and Adaption. Provided by: Minute Math. License: CC BY 4.0 CC Licensed Content, Shared Previously Marecek, L., & Mathis, A. H. (2020). Solve Systems of Equations Using Matrices. In Intermediate Algebra 2e. OpenStax. https://openstax.org/books/intermediate-algebra-2e/pages/4-5-solve-systems-of-equations-using-matrices. License: CC BY 4.0. Access for free at https://openstax.org/books/intermediate-algebra-2e/pages/1-introduction
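As a quick cross-check of the three-variable worked example earlier on this page (this sketch is an addition, not part of the original lesson), the same row operations can be carried out numerically, for instance in R:

```r
# Augmented matrix for 3x+8y+2z=-5, 2x+5y-3z=0, x+2y-2z=-1
A <- matrix(c(3, 8,  2, -5,
              2, 5, -3,  0,
              1, 2, -2, -1), nrow = 3, byrow = TRUE)

A <- A[c(3, 2, 1), ]          # interchange rows 1 and 3 to get a leading 1
A[2, ] <- A[2, ] - 2 * A[1, ] # R2 <- R2 - 2*R1 : zero below the leading 1
A[3, ] <- A[3, ] - 3 * A[1, ] # R3 <- R3 - 3*R1
A[3, ] <- A[3, ] - 2 * A[2, ] # R3 <- R3 - 2*R2 : zero below the second pivot
A[3, ] <- A[3, ] / A[3, 3]    # scale row 3 so the last pivot is 1
A                             # row-echelon form

# back substitution
z <- A[3, 4]
y <- A[2, 4] - A[2, 3] * z
x <- A[1, 4] - A[1, 2] * y - A[1, 3] * z
c(x, y, z)                    # -9  3 -1
```

The final vector reproduces the solution $(-9, 3, -1)$ obtained above by back substitution.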
CommonCrawl
Chemistry Stack Exchange is a question and answer site for scientists, academics, teachers, and students in the field of chemistry. It only takes a minute to sign up. Why don't trigonal S and P compounds undergo inversion at room temperature? Most molecules containing nitrogen atoms in trigonal pyramid configuration undergo a relatively fast process of inversion at room temperature. On the other hand, the free energy barrier for phosphines, sulfoniums and sulfoxides are high enough that they are optically stable: the rate of racemization is slow at room temperature. I wonder what effect is responsible for this difference in behaviour (the larger energy barrier). I expect that it has to do with the size of the central atom (N being one row higher than P and S), but is that all there is to it? And how does size impact the free energy barrier for the inversion: energetically or entropically (or both, of course)? stereochemistry jerepierre F'xF'x $\begingroup$ IIRC, quantum tunnelling is significant in trigonal nitrogen inversion. $\endgroup$ – Richard Terrett May 20 '12 at 14:38 $\begingroup$ related chemistry.stackexchange.com/questions/50106 chemistry.stackexchange.com/questions/38599 chemistry.stackexchange.com/questions/16190 $\endgroup$ – orthocresol♦ Sep 7 '16 at 17:15 Azane all by itself has an inversion barrier of about $6\ \mathrm{kcal/mol}$, which is low compared to phosphine's approximately $30\ \mathrm{kcal/mol}$ barrier. Pyramidal nitrogen inversion requires a planar transition state. One can use a classical theory to model the thermal rate and we could write $$k \propto \mathrm e^{-E_\mathrm a/(RT)} $$ and using models like these we would predict that ammonia at $300\ \mathrm K$ would have a rate on the order of $10^8\ \mathrm{s^{-1}}$, instead what is observed is something on the order of $10^{10}\ \mathrm{s^{-1}}$ Assuming the potential energy surface has two minima equal in energy and a higher energy TS, two routes are possible: a thermal process going over the maxima or quantum-mechanical tunneling. Determining a tunneling frequency requires writing down vibrational wave functions, finding their linear combination and overlap. If we were to do all this for a particular vibration, we'd see that the tunneling frequency decreases exponentially with a dependence on increasing $\mu$, thickness of the barrier/shape, and the barrier height. As an example, $\ce{ND3}$ has approximately an order of magnitude slower rate of inversion than $\ce{NH3}$ at a sufficiently low temperature – this has been attributed to a decrease in its tunneling frequency. But why the difference in the energy barriers and corresponding rates depending on the atomic position in the periodic table? Perhaps the easiest/quickest explanation rests on considering what the TS requires for interconversion. Ideally the classical conversion requires passing through a trigonal planar structure with bond angles on the order of 120 degrees. If you consider the series of azane, phosphane, arsane then you'll see bond angles of 108, 94, 92 degrees respectively. In the absence of other effects, the barrier goes from smaller to larger. Indeed, asane-like molecules were the first of this series resolved, followed by phosphane-like. To the best of my knowledge the fastest have yet to be resolved, because rates on the order of $10^{-5}\ \mathrm{s^{-1}}$ are required at RT for a decent shot at isolating them. Naturally, this isn't the whole story, as I've previously alluded to above. Let me know if you would like to know more. 
Neither of the two enantiomers is actually an energy eigenstate of the molecule. If the right-handed and left-handed enantiomers are states $|R\rangle$ and $|L\rangle$ respectively, then the ground and first-excited energy eigenstates are going to be something like $|g\rangle=\left( |R\rangle +|L\rangle \right)/\sqrt{2}$ and $|e\rangle=\left( |R\rangle -|L\rangle \right)/\sqrt{2}$. Because each enantiomer is in a superposition of energy eigenstates, a molecule initially in one chiral state will oscillate back and forth from one to the other, with a frequency determined by the difference in energy between $|g\rangle$ and $|e\rangle$. In general, the difference in energy between $|g\rangle$ and $|e\rangle$ is determined by the barrier between the $|R\rangle $ and $|L\rangle$ states: the higher and wider the energy barrier, the slower the oscillation. Since P and S are much larger than N, the energy barrier between the $|R\rangle $ and $|L\rangle$ states is larger, and the frequency of oscillation between $|R\rangle $ and $|L\rangle$ is much lower. $\begingroup$ Sorry, my question is: why is the barrier higher for larger atoms? $\endgroup$ – F'x May 20 '12 at 20:25 $\begingroup$ I don't know why (or even if) larger atoms would have a higher barrier, but they would have a wider barrier by virtue of just being larger. Part of the point was that a wider barrier is enough to slow down the oscillation. $\endgroup$ – Dan May 20 '12 at 20:28 $\begingroup$ Indeed, that's a good point (not the answer to my question, but something I hadn't considered). As you said, the wider the barrier, the slower the oscillation… but between N and P, the energy barrier actually is much higher for phosphine (132 kJ/mol) than ammonia (24 kJ/mol). I would expect that contribution to be more important than the barrier widening. $\endgroup$ – F'x May 20 '12 at 20:31
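To put rough numbers on the classical (non-tunnelling) picture from the first answer, here is a small back-of-the-envelope R sketch of my own, using the barrier heights quoted in the comments (about 24 kJ/mol for ammonia and 132 kJ/mol for phosphine). It only compares the Boltzmann factors $e^{-E_a/(RT)}$ and ignores tunnelling and pre-exponential factors:

```r
R_gas  <- 8.314   # gas constant, J mol^-1 K^-1
temp   <- 300     # K, roughly room temperature
Ea_NH3 <- 24e3    # J mol^-1, barrier quoted in the comments for ammonia
Ea_PH3 <- 132e3   # J mol^-1, barrier quoted in the comments for phosphine

boltzmann <- function(Ea) exp(-Ea / (R_gas * temp))
boltzmann(Ea_NH3) / boltzmann(Ea_PH3)   # ~ 6e18
```

On this crude estimate, the higher barrier alone suppresses the classical inversion rate of phosphine relative to ammonia by a factor on the order of $10^{18}$ to $10^{19}$, consistent with phosphines and sulfoxides being optically stable at room temperature.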
CommonCrawl
This is supplemental material for The experimental state of mind in elicitation: illustrations from tonal fieldwork that follows up on Section 2.4, which recast Hyman (2007)'s work on tonotactics in Thlantlang Lai in terms of experimental design. It introduces additional examples of using experimental design principles to generalize to elicitation methods beyond Pike's toneme discovery procedure. These examples are divided into the following sections: Coarsening and refining variable levels; Generalizing to different research questions. A references section follows. One way to generalize beyond Pike's toneme discovery procedure is to generalize beyond the set of variables under consideration. In addition to considering other independent variables---explanatory as well as confounding variables---another way to generalize the set of variables is if we don't add any new variables, but we change the definition of some of the variables. We could do this in such a way so that a re-defined variable is incomparable to the old one. For instance, we could change the levels for a factor SEGMENT from voiced and voiceless to singleton, geminate, and supergeminate. But there is no subset/superset relation between the original levels and the new ones, and this kind of re-defining a variable is effectively the same as adding a new variable. There are also two ways we could redefine a variable so that it is still comparable: Coarsening the partition of the possible instantiations of the variable (decreasing the number of levels of the variable) Refining the partition of the possible instantiations of the variable (increasing the number of levels of the variable) When we coarsen the partition, we reduce the number of levels for a variable, merging levels with one another. When we refine the partition, we increase the number of levels for a variable, splitting levels. We might also both coarsen and refine in re-defining a variable. One example of coarsening the partition is given in the table below. This is adapted from the study of contextual tonal variation in Mandarin in Xu (1997, p. 70). The explanatory variable of tonal class of the target syllable had 4 levels, following the 4-way tonal contrast in Mandarin (abstracting away from the neutral fifth tone). However, the explanatory variable of the pre-target tone, i.e., the tone class of the syllable preceding the target syllable, collapsed the 4-way distinction for Mandarin tonal classes into a 2-way distinction based on the tonal offset: both Tone 1, a high tone, and Tone 2, a rise, were classified as having a high offset, and both Tone 3 (low) and 4 (fall) were classified as having a low offset. In autosegmental-theoretic terms (Goldsmith 1990, Goldsmith 1976), one might say that the new variable assumes that contour tones are treated as tonal sequences (fall = HL, rise = LH) and pays attention only to the tone at the right edge of the syllable. Old level $\mapsto$ New level: Tone 1 (High) $\mapsto$ H; Tone 2 (Rise) $\mapsto$ H; Tone 3 (Low) $\mapsto$ L; Tone 4 (Fall) $\mapsto$ L. Another example of coarsening the partition defined by a variable comes from Keating (2011), a cross-linguistic study of the acoustic parameters involved in distinguishing phonation types.
Here, for the purposes of standardization for comparison with other languages, the 7-way tonal contrast in the White Hmong data from Esposito (2012) was coarsened into a 3-way contrast capturing the rough location of the tone within the pitch range—either high, mid, or low—for the same data, for the purposes of the cross-linguistic comparison in Keating (2011), see the table below: b-tone High-rising High-rising $\mapsto$ H H null-tone Mid Mid $\mapsto$ M M s-tone Low Low $\mapsto$ L L j-tone High-falling High-falling $\mapsto$ H H v-tone Mid-rising Mid-rising $\mapsto$ M M m-tone Low-falling Low-falling $\mapsto$ L L g-tone Mid-falling Mid-falling $\mapsto$ M M An example of refinement of the partition induced by a variable would be the reverse of the mapping for Mandarin tones in Xu (1997): rather than collapsing a 4-way distinction into a 2-way distinction, one would refine a 2-way distinction into a 4-way distinction. Another instance of refinement would be the addition of a level for tonal class upon the discovery of evidence for a new tonal class in the course of fieldwork. Those two examples of refinement both involve increasing the number of levels to some finite number, but refinements may also involve mapping from a set of a finite number of distinctions, e.g. 4 levels, to a set of potentially infinitely many distinctions, i.e.the set of real numbers.1 An example of this kind of refinement would be changing a length variable from counting syllables, e.g. 1 syllable, 2 syllables $\ldots$, to measuring absolute time, e.g. 343.25 milliseconds, 692.11 milliseconds. This kind of refinement might seem intuitively more drastic than refining a 2-way tonal distinction into a 4-way one, and it is: it's a change in variable type (Stevens 1937), similar to a change in type in type-theoretical semantics (Gamut 1992, Ch.4; Carpenter 1997 Chs. 2, 3). A more drastic way to generalize beyond Pike's procedure is to apply principles of experimental design to other research questions. In this section, we give examples of research questions treating tone as a dependent variable rather than an independent variable (Tone as a DV), including examining phonetic tonal sandhi, i.e. tonal coarticulation, in White Hmong (Example 1); we also given an example of using a factorial design in uncovering evidence for a tonal case marker in Samoan (Example 2). Tone as a dependent variable Thus far in this paper, we've considered tonal class as an independent variable manipulated by the fieldworker, but we've never considered tonal class as a dependent variable. This is not because tonal class cannot be treated as a dependent variable, but simply due to the nature of our research questions—we've focused on making hypotheses about possible tonal classes and their reflexes in the pitch contour and refining these hypotheses. There are two main situations in which tone might appear in the dependent variable: in explorations of how tonal contrast is produced and perceived in explorations of phonological allophony and alternation Some example research questions about exploring the dimensions of tonal contrast are: What effect does tonal class have on the pitch contour over a word? What parameters in the speech signal are available for discriminating different tonal classes? What cues in the speech signal do listeners use to identify tones? Some examples of work along these lines appear in Connell (2000) (perception), Khouw (2007) (production and perception), and DiCanio (2009) (production). 
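Before moving on, here is a small illustration of the coarsening operation described earlier in this section (this snippet is my own addition, not part of the original supplement): the Xu (1997) recoding of the four Mandarin pre-target tone classes into the two-level offset variable can be written as a simple lookup in R.

```r
# four old levels for the pre-target tone variable
pre_target_tone <- c("Tone1", "Tone2", "Tone3", "Tone4", "Tone2", "Tone4")

# coarsening: merge the four levels into a two-level offset variable (H vs L)
offset_map <- c(Tone1 = "H", Tone2 = "H", Tone3 = "L", Tone4 = "L")
pre_target_offset <- offset_map[pre_target_tone]

table(pre_target_tone, pre_target_offset)  # 4 old levels collapsed onto 2 new ones
```

Refinement runs in the opposite direction, for example splitting a 2-way tonal distinction back into a 4-way one, or replacing a syllable-count length variable with measured durations.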
In explorations of phonological allophony and alternation, tone makes an appearance in the dependent variable because the mapping between underlying tonemes and surface tones (or between surface tones) is of primary importance. Underlying form is manipulated as an explanatory variable, and the dependent variable is the surface form. Note that there must be a linking hypothesis about the mapping from observables (perhaps the pitch contour over a word) to surface tones in such an elicitation experiment. We presented an example of a factorial design in tonal fieldwork exploring allophony in the body of the paper in Section 2.4. Example 1: tonal realization in White Hmong A similar factorial design for tonal bigrams, but for studying the phonetic realization of tones in White Hmong (Hmong-Mien, China) is given below. Since the focus is the acoustic variation induced by tonal classes, there is a large and detailed set of acoustic dependent variables. Research question: How are tones in White Hmong acoustically realized? Strategy: Control some known sources of variability in tonal realization and manipulate others to study a selected range of tonal variability. Linking hypothesis: Acoustic dimensions relevant for tonal discrimination in the production of White Hmong tones include f0-based parameters and various spectral parameters. Experimental unit: elicited sentences Explanatory variables $N_1$ tone: b, n, s, j, g, m, v Confounding variables prosodic position: isolation, sentence-medial carrier phrase: fixed with two phrases, with target words randomly assigned to one of the two phrases segmental features of words: fully [+sonorant] (fixed) CV skeleton: CVV (fixed) pragmatic context: out of the blue (fixed) mean fundamental frequency syllable onset fundamental frequency syllable offset fundamental frequency mean spectral tilt mean harmonic-to-noise ratio Isolation $N_1$ tone b, n, s, j, g, m, v $N_2$ tone b, n, s, j, g, m, v Sentence-medial $N_1$ tone b, n, s, j, g, m, v Example 2: tonal case marking in Samoan Moving back upwards towards the morphosyntax-prosody interface, this section gives an example of a 2$\times$2 factorial design examining the effect of the interaction of case-marking pattern (absolutive-oblique, ergative-absolutive) and word order (VSO, VOS) on the f0 contour in Samoan (Polynesian, Samoa), with the goal of examining the hypothesis that there is a high tone at the left edge of absolutive arguments. The experimental design involves minimal sets of sentences, keeping segmental material in test sentences constant except for the segmental case markers for ergative and oblique case. Some factors are controlled for optimizing our chances of observing prosodic realization realized in the f0 contour. First, words are fully sonorant so that the f0 contours is free from segmental perturbation. Secondly, arguments of long length, i.e. many words, are used to allow plenty of segmental material for intonational tonal events to be realized (Bruce 1977). Research question: Does Samoan have a high tone at the left edge of absolutive arguments? Strategy: Control any variables suspected to induce variation in surface realization of underlying tones and vary case-marking pattern and word order. To support our hypothesis, we must find an interaction effect on the intonational realization in the sentence, such that the presence of high pitch peak at the left edge of the second argument occurs when the levels of the two factors interact such that the second argument has absolutive case. 
Research hypothesis: Samoan has a high tone at the left edge of the absolutive argument. Linking hypothesis: A high tone in Samoan is realized as a pitch peak at the edge of a prosodic word. Experimental unit: elicited sentence Explanatory variables case-marking pattern: absolutive-oblique, ergative-absolutive word order: VSO, VOS Confounding variables constituent length: long (fixed) coordination: absent (fixed) stress pattern: primary stress on penultimate mora (fixed) CV skeleton: CVCVCV (fixed) pragmatic context: out of the blue Dependent variable: presence of high pitch peak at the left edge of the second argument The 2 $\times$ 2 factorial design is shown in the table below, with word order in the rows and case-marking pattern in the columns: VSO: V-erg-abs (erg-abs), V-abs-obl (abs-obl); VOS: V-abs-erg (erg-abs), V-obl-abs (abs-obl). References Hyman, Larry M. 2007. Elicitation as experimental phonology: Thlantlang Lai tonology. In Experimental approaches to phonology, ed. Maria-Josep Solé, Patrice Speeter Beddor, and Manjari Ohala, 7–24. Oxford; New York: Oxford University Press. Xu, Yi. 1997. Contextual tonal variations in Mandarin. Journal of Phonetics 25:61–83. Goldsmith, John A. 1990. Autosegmental and metrical phonology. Basil Blackwell. Goldsmith, John Anton. 1976. Autosegmental phonology. Doctoral Dissertation, Massachusetts Institute of Technology. Keating, Patricia, Christina Esposito, Marc Garellek, Sameer Khan, and Jianjing Kuang. 2011. Phonation contrasts across languages. In Proceedings of the 17th International Congress of Phonetic Sciences, 1046–1049. Hong Kong, China. Esposito, Christina M. 2012. An acoustic and electroglottographic study of White Hmong tone and phonation. Journal of Phonetics 40:466–476. Stevens, S. S., J. Volkmann, and E. B. Newman. 1937. A scale for the measurement of the psychological magnitude pitch. The Journal of the Acoustical Society of America 8:185–190. Gamut, L.T.F. 1992. Logic, language and meaning. Chicago: Chicago University Press. Carpenter, Bob. 1997. Type-logical semantics. Cambridge, Massachusetts: MIT Press. Connell, Bruce. 2000. The perception of lexical tone in Mambila. Language and Speech 43:163–182. Khouw, Edward, and Valter Ciocca. 2007. Perceptual correlates of Cantonese tones. Journal of Phonetics 35:104–117. DiCanio, Christian T. 2009. The phonetics of register in Takhian Thong Chong. Journal of the International Phonetic Association 39:162–188. Bruce, Gösta. 1977. Swedish word accents in sentence perspective. Lund: CWK Gleerup. The set of real numbers contains numbers like 3.0, 1.542, $\pi$, 2.9, 2.99, 2.999, 2.9999999999999999$\ldots$. ↩
CommonCrawl
Set Language Notation

A set is a collection of objects that have a common property. So, if you think about all the things in your pencil case, this could be considered a set. You can probably come up with heaps of other sets of objects. The clothes you have in your wardrobe, the names of streets you pass by on your way to school, animals that have four legs or people in your family, for example. To describe a set using mathematical notation, we use large curly brackets and list all the items of the set between them. Each object in the set is called an element. $\left\{\text{pencils, pens, sharpener, protractor, scissors, eraser, compass, glue, highlighter, calculator}\right\}$ The mathematical convention is to use capital letters when referring to the set, and lower case letters for elements in the set. So the set $A$ of things I eat for breakfast can be written $A=\left\{\text{cereal},\text{eggs},\text{toast},\text{muffins}\right\}$ and I would refer to the element $a$ that is in $A$. We also use the symbol $\in$ to make statements about whether elements are part of the set or not. For example, $\text{toast}\in A$ and $\text{eggs}\in A$ read as toast is an element in the set $A$ and eggs is in the set $A$. But if the element is NOT in the set then we use the symbol $\notin$ instead. So $\text{peaches}\notin A$ and $\text{yoghurt}\notin A$ read as peaches are not in the set $A$ and yoghurt is not an element in the set $A$. Of course we can have sets in mathematics as well, and these sets tend to have numbers or algebraic symbols.

Finite sets

Let's have a look at some numerical sets. The set of odd numbers less than $10$ would look like this: $\left\{1,3,5,7,9\right\}$ The set of multiples of $5$ up to $50$: $\left\{5,10,15,20,25,30,35,40,45,50\right\}$ The set of factors of $24$: $\left\{1,2,3,4,6,8,12,24\right\}$ Some sets can just be groups of numbers that appear to have nothing else in common except that they are in the same set together. For example, $\left\{3,7.4,1004,33^4\right\}$ These sets are called finite sets as they all have a finite number of elements. The number of elements in a set is also called the set's cardinality, or the set's order.

Infinite sets

Here are some larger sets. The set of even positive integers: $\left\{2,4,6,8,...\right\}$ The set of multiples of $7$: $\left\{7,14,21,28,...\right\}$ Or this set: $\left\{\frac{1}{2},\frac{1}{3},\frac{1}{4},\frac{1}{5},...\right\}$ These sets are all infinite sets as the number of elements in them is infinite. The $3$ dots at the end of each, called an ellipsis, indicate that the elements of the set continue on. We can also use an ellipsis to save us from having to write all the elements in the middle of a set. For example, the set of integers up to $100$ could be written like this: $\left\{1,2,3,...,99,100\right\}$.

Universal set

The set of everything relevant to the question is called the universal set. For your work in school mathematics involving sets, the universal sets you will use most often are the set of integers or the set of reals.

EMPTY SET

Also called the null set, an empty set is (as you might guess) empty! It is a set that has no elements in it. It's important to note here that an empty set does not have a zero in it; it is completely empty! We can write $\left\{\ \right\}$ to represent the empty set, but there is also a special symbol we use to denote the empty set: $\varnothing$.

Equal Sets

Two sets are equal if and only if they contain exactly the same elements. They can be equal even if the notation used to describe them is different. If $A=\left\{2,3,6,11\right\}$ and $B=\left\{11,2,6,3\right\}$, then $A=B$. If $A=\text{set of first 5 primes}$ and $B=\left\{2,3,5,7,11\right\}$, then $A=B$.

We haven't yet finished with all the new terminology to work with sets. Imagine we have the set of all real numbers. Now imagine we take just the integers from that set. Now we take just the positive integers and then we take just the set of integers between $5$ and $10$. What we have described here are subsets. We define a subset as: $A$ is a subset of $B$ if and only if every element of $A$ is in $B$. We use the symbol $\subseteq$ to describe subsets. So $A\subseteq B$ is read as $A$ is a subset of $B$. (We also have the symbol $\not\subseteq$ for the "not a subset of" statement.) If there is at least one element in $B$ that is not included in the subset $A$, then we call this a proper subset, and use the symbol $\subset$. So $A\subset B$ is read as $A$ is a proper subset of $B$. ($\not\subset$ is used to say the opposite.) If $A$ is the set of even integers and $B$ is the set of integers, then $A\subset B$. If $A=\left\{2,11,67,344\right\}$ and $B=\left\{1,2,8,11,67,120,180,344\right\}$, then $A\subset B$. If $M=\left\{3,6,7,8\right\}$ and $N=\left\{3,6,7,8\right\}$, then $M\subseteq N$.

A little more notation

Not quite finished yet! Let's consider the following. From the universal set of numbers $1$ through $10$, we define a subset $K=\left\{\text{even numbers}\right\}$. We could consider the image here as a representation of this situation. Note how the universal set is depicted using the external rectangle, and our set inside uses a circle. This is the most common representation and leads us to Venn Diagrams, which we will look at later. What if we wanted to describe the elements from the universal set that are not included in the subset $K$? Well, we could define a new subset and nominate the elements not in $K$ to be part of it, or we could use the notation $K'$ or $\overline{K}$, as this refers to all elements NOT IN $K$.

The sets $U=\left\{20,8,26,3,15\right\}$ and $V=\left\{20,8,26,3,15,2,24,10,27\right\}$ are such that there are no other elements outside of these two sets. Is $U$ a subset or proper subset of $V$? A subset. A proper subset. State the cardinality of $U$. List the elements of $U'$. List the elements of the universal set. State the elements on the same line, separated by a comma. Which set is $V'$? The set $\left\{20,8,26\right\}$. The empty set $\varnothing$. The set $\left\{20,26,3,15\right\}$. Consider the interval $\left(8,9\right]$ and answer the following questions. Express the interval in set-builder notation: {$x$ $\mid$ $\editable{}$ Graph the interval on the number line.
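The worked example above with $U$ and $V$ can also be checked with a few one-liners in R (a small illustrative addition, using base R's set functions on plain vectors):

```r
U <- c(20, 8, 26, 3, 15)
V <- c(20, 8, 26, 3, 15, 2, 24, 10, 27)   # U and V together form the universal set

all(U %in% V)              # TRUE: every element of U is in V, so U is a subset of V
length(setdiff(V, U)) > 0  # TRUE: V has elements not in U, so U is a *proper* subset
length(U)                  # 5, the cardinality (order) of U
setdiff(V, U)              # 2 24 10 27, the complement U'
setdiff(V, V)              # numeric(0): V' is the empty set
setequal(c(2, 3, 6, 11), c(11, 2, 6, 3))  # TRUE: equal sets, order of listing irrelevant
```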
CommonCrawl
Walther MA271 Fall2020 topic20 - Rhea 1 Fourier Transforms 1.1 Author: Luke Oxley 1.2 Table of Contents: 1.3 1. Introduction 1.4 2. Euler's Formula 1.5 3. Formula Visualization 1.6 4. Example 1.7 5. Inverse 1.8 6. Applications 1.9 7. References and Further Readings Author: Luke Oxley Euler's Formula Formula Visualization Inverse Transform The Fourier transform is a method used to break a function down into a representation of its frequencies. This method can be thought of as transforming from the time domain (a function with time as the input variable) to the frequency domain (a function with frequency as the input). This function in the frequency domain is a way of representing how much of each frequency is prevalent in the function. The Fourier transform allows for signals to be broken into their individual frequency components. The advantage of extracting the individual frequencies is that this allows for the individual manipulation of each frequency and for a unique representation of the function. Once you have extracted the component frequencies, many times it is possible to express the original function with sine and cosine waves at these frequencies. I like to picture this representation as a Taylor series, but instead of being based on derivatives, it is based on frequencies. This transform is useful in many fields, especially for analyzing electrical and sound signals. The purpose of this page is for you to gain an understanding of the basics of this transform, including a way to mentally visualize what this formula is doing and its applications. One area of confusion is the difference between the Fourier series and the Fourier transform. The Fourier transform can be thought of as a limited case of the Fourier series. The series is mainly concerned with periodic functions while the transform is concerned with nonperiodic functions. 2. Euler's Formula To understand the formula for the Fourier transform, it is helpful to have knowledge of Euler's formula: $ e^{it} = \cos t + i\sin t $ One way of representing a unit circle is parametrically with sines and cosines: $ x = \cos t $ $ y = \sin t $ Euler's formula allows for an elegant representation of a unit circle in the complex plane. Instead of using sines and cosines like in our parameterization, these can be substituted out with a simpler expression of $ e^{it} $. Another nice thing about Euler's formula is that since it represents a unit circle where t is the angle in radians, t also represents the arclength traveled around the circle. Furthermore, the period of a full revolution around the circle in this formula is 2π radians. To convert this period into a desired frequency, we add a coefficient of 2πf to the exponent, where f is the desired number of rotations per second around the circle: $ e^{2\pi ift} = \cos 2\pi ft + i\sin 2\pi ft $ 3. Formula Visualization As previously stated, the Fourier transform converts a function from the time domain to the frequency domain. To do this, we need to have a way to extract the amount of each frequency contained in the function. Personally, when learning a new concept, I find it helpful, when possible, to visualize it. For example, I will break the Fourier transform formula into components and explain where they come from. First, we will start with Euler's formula containing the frequency variable: $ e^{-2\pi ift} $ Currently, we have a circle with a radius of one, determined by the coefficient of e, and frequency f. 
The negative in the exponent makes an increasing time t correlate to clockwise motion around the circle. Now, picture yourself taking a function and wrapping it around this circle at a certain frequency. For example, let $ g(t) = \sin(3*2\pi t) + 1 $ (graph pictured below). To wrap g(t) around the circle, we must make the radius determined by g(t). Since the coefficient of e is the radius of the circle, we make g(t) the coefficient: $ g(t)e^{-2\pi ift} $ If we wrap g around the circle at a frequency f = 1, we are basically snipping a section of the graph from t = 0 to t = 1 (between the red lines) and making one complete revolution: Notice how there are three peaks on g(t), from t = 0 to t = 1, correlating to three petals on our wrapped version. Now, we will wrap the function around the circle with f = 2. We are taking the same snippet as before, but forming two complete revolutions with it around the circle: Again, we do this with f = 3, the same frequency of g. We wrap the snippet around the circle three times. As g makes three "hills" in one second, we will form three revolutions from t = 0 to t = 1. This results in each "hill" overlapping, forming one distinct shape: Notice how using a frequency that is highly prevalent in the function produces a unique result. One way to distinguish this result from the others is by looking at the shape's center of mass. You will notice that the centers of mass for the first two shapes are very close to or at the origin. However, when the wrapping frequency matches that of the function, the center of mass is noticeably far from the origin. To model this mathematically, we take the center of mass of our current expression: $ \frac{1}{t_{2}-t_{1}}\int_{t_{1}}^{t_{2}}g(t)e^{-2\pi ift}dt $ Currently, this function uses frequency f as the input variable and outputs a value that is proportional to the amount of that frequency in the function. For example, if we set f = 3, the output value will be quite high because g literally has a frequency of 3. Our current model is dividing out the length of the time interval. However, the actual Fourier transform eliminates this portion. Eliminating the division means that the longer a specific frequency is prevalent in the function, the larger the value returned. In addition, the Fourier transform usually eliminates the bounds and instead utilizes an improper integral: $ \int_{-\infty}^{\infty}g(t)e^{-2\pi ift}dt $ Although equivalent, the common notation of the Fourier transform is denoted as transforming the function f(x) into f ̂(ξ) where ξ is the input frequency: $ \hat{f}(\xi) = \int_{-\infty}^{\infty}f(x)e^{-2\pi i\xi x}dx $ Pictured above is the formula for the Fourier transform. Note that in this specific example I used a periodic function for g(t). Keep in mind that this transform can be applied to a non-periodic function as well. To reiterate, the Fourier transform converts from the time domain (a function g(t)) to the frequency domain (a function of ξ). In many cases, it is easier to work with and modify a function once it is broken down into its component frequencies. 4. Example A classic example used to demonstrate the Fourier transform is with the simple rectangle function: $ g(t)=\left\{\begin{matrix} 1 & \left | t \right |\leq 0.5\\ 0 & \left | t \right |> 0.5 \end{matrix}\right. $ Since the function is 0 outside of -0.5 and 0.5, the value of the Fourier transform is also 0 outside the interval as well. 
This means we can simply integrate using bounds of -0.5 and 0.5: $ \hat{f}(\xi) = \int_{-0.5}^{0.5}g(t)e^{-2\pi i\xi t}dt $ Now, we integrate this function, only considering the real component (the imaginary component integrates to zero because sine is odd). Notice that we can replace g(t) with a one since its value is one over our entire bounds of integration, greatly simplifying this problem: $ \int_{-0.5}^{0.5}g(t)e^{-2\pi i\xi t}dt = \int_{-0.5}^{0.5}1\cdot\cos (-2\pi\xi t) dt = \frac{\sin (\pi \xi)}{\pi\xi} = \operatorname{sinc}(\xi) $ This resultant expression is also referred to as the "sinc" function, as shown above, due to its popularity and recurrence in the subject of Fourier transforms. Below is a graph of the Fourier transform of g, the sinc function: 5. Inverse Currently, we have a way of transforming a function from the time domain to the frequency domain. How do we return from the frequency domain to the time domain? To do this we use the inverse transform: $ f(x) = \int_{-\infty}^{\infty}\hat{f}(\xi)e^{2\pi i\xi x}d\xi $ Notice that this is simply the same equation as the Fourier transform, but without the negative in the exponent (and with the integration taken over frequency rather than time). I like to think of it as if the Fourier transform is moving clockwise due to the negative. Then, to revert to the original function, simply go counterclockwise by removing the negative. 6. Applications OK, so we have a way to transform a function between the time and frequency domains. Why would we ever want to do this? Sometimes a function is more simply expressed in the frequency domain. For example, when solving differential equations with boundary constraints it is sometimes quite difficult to solve in the time domain. Using the Fourier transform, one can convert to the frequency domain, solve the problem, then revert to the time domain. Other important applications are analyzing and modifying signals. If someone wants to adjust the signal based on its frequencies, they can use the Fourier transform. If someone desires to remove an annoying frequency from a soundwave, they can simply use the transform, remove the unwanted portion, then revert back. The Fourier transform makes dealing with specific frequencies quite simple. The use of the transform for signal analysis is highly prevalent in the fields of electrical and sound engineering. The transform is also used in the field of image processing. Many optical character recognition algorithms rely on the Fourier transform. It turns out that the Fourier transforms of different characters, no matter the font, are quite distinct. In addition, the transform can detect unwanted periodic patterns in images and filter them out. Other applications of the Fourier transform are in quantum mechanics, especially with the Uncertainty Principle. 7. References and Further Readings In learning what the Fourier transform is, I thought that this intuitive explanation done by 3Blue1Brown helped a lot: https://www.youtube.com/watch?v=spUNpyF58BY&ab_channel=3Blue1Brown (Visual Model) Once I had an idea of what the transform is doing, these lecture notes provided a more formal analysis of the topic: http://www.math.ncku.edu.tw/~rchen/2016%20Teaching/Chapter%202_Fourier%20Transform.pdf (Transform Lecture Notes) And, of course, the Wikipedia article helps quite a bit: https://en.wikipedia.org/wiki/Fourier_transform (Wikipedia Page) If you still are having trouble understanding it, or would like to play around with a visual representation, I recommend the following two links. The first offers a depiction of how the Fourier transform can be used to write an equation out of sines.
The next displays something similar but takes it to the next level with 2d drawings. http://www.jezzamon.com/fourier/ (Normal Model) https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/ (2D Drawing Model) https://en.wikipedia.org/wiki/Euler%27s_formula (Euler's Formula Depiction)
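As a quick numerical sanity check of the rectangle-function example above (my own addition, not part of the original page), base R's integrate() can be used to confirm that the real part of the transform matches $\sin(\pi\xi)/(\pi\xi)$:

```r
# Fourier transform of the unit rectangle, real part only (the imaginary part vanishes)
rect_transform <- function(xi) {
  integrate(function(t) cos(-2 * pi * xi * t), lower = -0.5, upper = 0.5)$value
}
sinc <- function(xi) ifelse(xi == 0, 1, sin(pi * xi) / (pi * xi))

xis <- c(0.25, 0.5, 1.5, 3)
sapply(xis, rect_transform)  # numerical values of the transform
sinc(xis)                    # closed-form sinc values; the two vectors agree
```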
CommonCrawl
BioData Mining Software article eQTpLot: a user-friendly R package for the visualization of colocalization between eQTL and GWAS signals Theodore G. Drivas ORCID: orcid.org/0000-0002-8717-01111,2, Anastasia Lucas2 & Marylyn D. Ritchie2,3 BioData Mining volume 14, Article number: 32 (2021) Cite this article Genomic studies increasingly integrate expression quantitative trait loci (eQTL) information into their analysis pipelines, but few tools exist for the visualization of colocalization between eQTL and GWAS results. Those tools that do exist are limited in their analysis options, and do not integrate eQTL and GWAS information into a single figure panel, making the visualization of colocalization difficult. To address this issue, we developed the intuitive and user-friendly R package eQTpLot. eQTpLot takes as input standard GWAS and cis-eQTL summary statistics, and optional pairwise LD information, to generate a series of plots visualizing colocalization, correlation, and enrichment between eQTL and GWAS signals for a given gene-trait pair. With eQTpLot, investigators can easily generate a series of customizable plots clearly illustrating, for a given gene-trait pair: 1) colocalization between GWAS and eQTL signals, 2) correlation between GWAS and eQTL p-values, 3) enrichment of eQTLs among trait-significant variants, 4) the LD landscape of the locus in question, and 5) the relationship between the direction of effect of eQTL signals and the direction of effect of colocalizing GWAS peaks. These clear and comprehensive plots provide a unique view of eQTL-GWAS colocalization, allowing for a more complete understanding of the interaction between gene expression and trait associations. eQTpLot provides a unique, user-friendly, and intuitive means of visualizing eQTL and GWAS signal colocalization, incorporating novel features not found in other eQTL visualization software. We believe eQTpLot will prove a useful tool for investigators seeking a convenient and customizable visualization of eQTL and GWAS data colocalization. Availability and implementation the eQTpLot R package and tutorial are available at https://github.com/RitchieLab/eQTpLot Non-protein-coding genetic variants make up the majority of statistically significant associations identified by genome wide association studies (GWAS). As these variants typically do not have obvious consequences for gene function, it can be difficult to map their effects to specific genes. To address this issue, genomic studies have increasingly begun to integrate expression quantitative trait loci (eQTL) information into their analysis pipelines, with the thought that non-coding variants might be exerting their effects on patient phenotypes through the modulation of expression levels of nearby genes. Through this approach, indirect evidence for causality can be obtained if a genetic locus significantly associated with candidate gene expression levels is found to colocalize with a genetic locus significantly associated with the phenotype of interest. A number of excellent tools have been developed to discover and analyze colocalization between eQTL and GWAS association signals [1,2,3,4,5,6,7,8], but few packages provide the necessary tools to visualize these colocalizations in an intuitive and informative way. LocusCompare [8] allows for the side-by-side visualization of eQTL and GWAS signal colocalization, but does not visually integrate this data. 
LocusZoom [9] produces a single plot integrating linkage disequilibrium (LD) information and GWAS data, but does not consider eQTL data. Furthermore, no colocalization visualization tool exists that takes into account the direction of effect of an eQTL with relation to the direction of effect of colocalizing GWAS signals. For these reasons, we developed eQTpLot, an R package for the intuitive visualization of colocalization between eQTL and GWAS signals. In its most basic implementation, eQTpLot takes standard GWAS summary data, formatted as one might obtain from a GWAS analysis in PLINK [10], and cis-eQTL data, formatted as one might download directly from the GTEx portal [11], to generate a series of customizable plots clearly illustrating, for a given gene-trait pair: 1) colocalization between GWAS and eQTL signals, 2) correlation between GWAS and eQTL p-values, 3) enrichment of eQTLs among trait-significant variants, 4) the LD landscape of the locus in question, and 5) the relationship between the directions of effect of eQTL signals and colocalizing GWAS peaks. These clear and comprehensive plots provide a unique view of eQTL-GWAS colocalization, allowing for a more complete understanding of the interaction between gene expression and trait associations. We believe eQTpLot will prove a useful tool for investigators seeking a convenient and robust visualization of genomic data colocalization. eQTpLot was developed in R version 4.0.0 and depends on a number of packages for various aspects of its implementation (biomaRt, dplyr, GenomicRanges, ggnewscale, ggplot2, ggplotfy, ggpubr, gridExtra, Gviz, LDheatmap, patchwork) [12,13,14,15,16,17,18,19,20,21]. The software is freely available on GitHub (https://github.com/RitchieLab/eQTpLot) and can be downloaded for use at the command line, or in any R-based integrated development environment, such as RStudio. Example data and a complete tutorial on the use of eQTpLot and its various features have also been made available on GitHub. At a minimum, eQTpLot requires two input files, imported into R as data frames: one of GWAS summary statistics (as might be obtained from a standard associations study as completed in PLINK [10]) and one of cis-eQTL summary statistics (as might be downloaded directly from the GTEx portal at gtexportal.org [11]). Table 1 summarizes the formatting parameters of the two required input files and of the two optional input files. Additionally, there are many options that can be specified to generate variations of the main eQTpLot, as discussed below. Table 2 shows the complete list of command line arguments that can be passed to eQTpLot, with descriptions of their use. Table 1 Description of required and optional input data frames for eQTpLot Table 2 Description of required and optional arguments for eQTpLot In its simplest implementation, eQTplot takes as input two data frames, one of GWAS summary data and the other of eQTL summary data, with the user specifying the name of the gene to be analyzed, the GWAS trait to be analyzed (useful if the GWAS data contains information on multiple associations, as one might obtain from a Phenome-wide Association Study (PheWAS)), and the tissue type to use for the eQTL analysis. Using these inputs, eQTpLot generates a series of plots intuitively illustrating the colocalization of GWAS and eQTL signals in chromosomal space, and the enrichment of and correlation between the candidate gene eQTLs and trait-significant variants. 
Additional parameters and data can be supplied, such as pairwise variant LD information, allowing for an even more comprehensive visualization of the interaction between eQTL and GWAS data within a given genomic locus. One major implementation feature that sets eQTpLot apart from other eQTL visualization software is the option to divide eQTL/GWAS variants into groups based on their directions of effect. If the argument congruence is set to TRUE, all variants are divided into two groups: congruous, or those with the same direction of effect on gene expression and the GWAS trait (e.g., a variant that is associated with increased expression of the candidate gene and an increase in the GWAS trait), and incongruous, or those with opposite directions of effect on gene expression and the GWAS trait (e.g., a variant that is associated with increased expression of the candidate gene but a decrease in the GWAS trait). The division between congruous and incongruous variants provides a more nuanced view of the relationship between gene expression level and GWAS associations – a variant associated with increased expression of a candidate gene and an increase in a given GWAS trait would seem to be operating through different mechanisms than a variant that is similarly associated with increased expression of the same candidate gene, but a decrease in the same GWAS trait. eQTpLot intuitively visualizes these differences as described below. This distinction also serves to illuminate important underlying biologic difference between different gene-trait pairs, discriminating between genes that appear to suppress a particular phenotype and those that appear to promote it. Another important feature of eQTpLot that is not found in other eQTL visualization software is the ability to specify a PanTissue or MultiTissue eQTL visualization. In some instances, it may be of interest to visualize a variant's effect on candidate gene expression across multiple tissue types, or even across all tissues. Such analyses can be accomplished by setting the argument tissue to a list of tissues contained within eQTL.df (e.g. c("Adipose_Subcutaneous", "Adipose_Visceral")) for a MultiTissue analysis, or by setting the argument tissue to "all" for a PanTissue analysis. In a PanTissue analysis, eQTL data across all tissues contained in eQTL.df will be collapsed, by variant, into a single pan-tissue eQTL; a similar approach is used in a MultiTissue analysis, but in this case eQTL data will be collapsed, by variant, across only the specified tissues. The method by which eQTpLot collapses eQTL data can be specified with the argument CollapseMethod, which accepts as input one of four options – "min," "median," "mean," or "meta." By setting CollapseMethod to "min" (the default), for each variant the tissue with the smallest eQTL p-value will be selected, such that each variant's most significant eQTL effect, agnostic of tissue, can be visualized. Setting the parameter to "median" or "mean" will visualize the median or mean p-value and NES value for each SNP across all specified tissues. Lastly, setting CollapseMethod to "meta" will perform a simple sample-size-weighted meta-analysis (i.e. a weighted Z-test) [22, 23] for each variant across all specified tissues, visualizing the resultant p-value for each variant. It should be noted that this meta-analysis method requires a sample size for each eQTL entry in eQTL.df, which should be supplied in an optional column "N." 
If sample size numbers are not readily available (as may be the case if directly downloading cis-eQTL data from the GTEx portal), eQTpLot gives the user the option to presume that all eQTL data is derived from identical sample sizes across all tissues – this approach may of course yield inaccurate estimates of a variant's effect in meta-analysis, but may be useful to the user. What follows is a description of the process used to generate each of the plots produced by eQTpLot, along with a series of use examples to both demonstrate the utility of eQTpLot, and to highlight some of the many options that can be combined to generate different outputs. For these examples we have analyzed a subset of data from our recently-published analysis of quantitative laboratory traits in the UK Biobank [24] – these summary statistics are available in full at https://ritchielab.org/publications/supplementary-data/ajhg-cilium, and the subset of summary data used for our example analyses can be downloaded from the eQTpLot GitHub page such that the reader may experiment with eQTpLot with the pre-supplied data. Generation of the main eQTL-GWAS Colocalization plot To generate the main eQTL-GWAS Colocalization Plot (Figs. 1A, 2A, 3A, 4A), a locus of interest (LOI) is defined to include the target gene's chromosomal coordinates (as listed in Genes.df, for the indicated gbuild, for the user-specified gene), along with a range of flanking genome (specified with the argument range, with a default value of 200 kilobases on either side of the gene). GWAS summary statistics from GWAS.df are filtered to include only variants that fall within the LOI. The variants are then plotted in chromosomal space along the horizontal axis, with the inverse log of the p-value of association with the specified GWAS trait (PGWAS) plotted along the vertical axis, as one would plot a standard GWAS Manhattan plot. The GWAS significance threshold, sigpvalue_GWAS (default value 5e-8), is depicted with a red horizontal line. Example eQTpLot for LDL cholesterol and the gene BBS1. eQTpLot was used to generate a series of plots illustrating the colocalization between eQTLs for the gene BBS1 and a GWAS signal for the LDL cholesterol trait on chromosome 11 using a PanTissue approach as described in example 1. Panel A shows the locus of interest, containing the BBS1 gene, with chromosomal space indicated along the horizontal axis. The position of each point on the vertical axis corresponding to the p-value of association for that variant with the LDL trait, while the color scale for each point corresponds to the magnitude of that variant's p-value for association with BBS1 expression. The directionality of each triangle corresponds to the GWAS direction of effect, while the size of each triangle corresponds to the NES for the eQTL data. The default genome-wide p-value significance threshold for the GWAS analysis, 5e-8, is depicted with a horizontal red line. Panel B displays the genomic positions of all genes within the LOI. Panel C depicts the enrichment of BBS1 eQTLs among GWAS-significant variants, while panel D depicts the correlation between PGWAS and PeQTL for BBS1 and the LDL trait, with the computed Pearson correlation coefficient (r) and p-value (p) displayed on the plot Example eQTpLot for LDL cholesterol and the gene ACTN3. 
eQTpLot was used to generate a series of plots illustrating the colocalization between eQTLs for the gene ACTN3 and a GWAS signal for the LDL cholesterol trait on chromosome 11 using a PanTissue approach as described in example 1. Panel A shows the locus of interest, containing the ACTN3 gene, with chromosomal space indicated along the horizontal axis. The position of each point on the vertical axis corresponding to the p-value of association for that variant with the LDL trait, while the color scale for each point corresponds to the magnitude of that variant's p-value for association with ACTN3 expression. The directionality of each triangle corresponds to the GWAS direction of effect, while the size of each triangle corresponds to the NES for the eQTL data. The default genome-wide p-value significance threshold for the GWAS analysis, 5e-8, is depicted with a horizontal red line. Panel B displays the genomic positions of all genes within the LOI. Panel C depicts the enrichment of ACTN3 eQTLs among GWAS-significant variants, while panel D depicts the correlation between PGWAS and PeQTL for ACTN3 and the LDL trait, with the computed Pearson correlation coefficient (r) and p-value (p) displayed on the plot Example eQTpLot for LDL cholesterol and the gene BBS1, incorporating LD data. eQTpLot was used to generate a series of plots illustrating the colocalization between eQTLs for the gene BBS1 and a GWAS signal for the LDL cholesterol trait on chromosome 11 as described in example 2, specifically within the tissue "Whole_Blood" and with the inclusion of LD data. Panels A, B, and D are generated identically to Figure panels 1A, 1B, and 1C respectively. Panel C depicts a heatmap of LD information of all BBS1 eQTL variants, displayed in the same chromosomal space as panels A and B for ease of reference. Panel E depicts the correlation between PGWAS and PeQTL for BBS1 and the LDL trait, similar to panel 1D, only here a lead variant, rs3741360, is identified (by default the upper-right-most variant on the P-P plot), with all other variants plotted using a color scale corresponding to their squared coefficient of linkage correlation with this lead variant. For reference, the same lead variant is also labelled in panel A Example eQTpLot for LDL cholesterol and the gene BBS1, discriminating between congruous and incongruous variants. eQTpLot was used to generate a series of plots illustrating the colocalization between eQTLs for the gene BBS1 and a GWAS signal for the LDL cholesterol trait on chromosome 11 as described in example 3, with an analysis identical to that described for Fig. 3, but with the additional discrimination between variants with congruous and incongruous directions of effect. Panel A is generated identically to panel 1A and 3A, only instead of using a single color scale, variants with congruous effects are plotted using a blue color scale, while variants with incongruous effects are plotted using a red color scale. Panels B-D are identical to panels 3B-D. Panel E and F both represent P-P plots, generated similarly to the P-P plot in panel 3E. For panel E, however, the analysis is confined only to variants with congruous directions of effect, while for panel F the analysis includes only variants with incongruous directions of effect. 
A lead variant is indicated in both panels E and F, and both are also labeled in panel A.

Within this plot, variants that lack eQTL data for the target gene in eQTL.df (or for which the eQTL p-value (PeQTL) does not meet the specified significance threshold, sigpvalue_eQTL (default value 0.05)) are plotted as grey squares. On the other hand, variants that act as eQTLs for the target gene (with PeQTL < sigpvalue_eQTL) are plotted as colored triangles, with a color gradient corresponding to the inverse magnitude of PeQTL. As noted above, an analysis can be specified to differentiate between variants with congruous versus incongruous effects on the GWAS trait and candidate gene expression levels. If this is the case, variants with congruous effects will be plotted using a blue color scale, while variants with incongruous effects will be plotted using a red color scale (as seen in Fig. 4A). The size of each triangle corresponds to the eQTL normalized effect size (NES) for each variant, while the directionality of each triangle is set to correspond to the direction of effect for the variant on the GWAS trait. A depiction of the genomic positions of all genes within the LOI is generated below the plot using the package Gviz (Figs. 1B, 2B, 3B, 4B) [12].

If LD data is supplied, in the form of LD.df, a third panel illustrating the LD landscape of eQTL variants within the LOI is generated using the package LDheatmap (Fig. 3C, 4C) [20]. To generate this panel, LD.df is filtered to contain only eQTL variants that appear in the plotted LOI, and to include only variant pairs that are in LD with each other with R2 > R2min (default value of 0.1). This dataset is further filtered to include only variants that are in LD (with R2 > R2min) with at least a certain number of other variants (user-defined with the argument LDmin, default value of 10). These filtering steps are useful in paring down the number of variants to be plotted in the LDheatmap, keeping the most informative variants and reducing the time needed to generate the eQTpLot. A heatmap illustrating the pairwise linkage disequilibrium of the final filtered variant set is subsequently generated below the main eQTL-GWAS Colocalization Plot, with a fill scale corresponding to R2 for each variant pair. The location of each variant in chromosomal space is indicated at the top of the heatmap, using the same chromosomal coordinates as displayed in panels A and B.

Generation of the eQTL enrichment plot

For variants within the LOI with PGWAS less than the specified GWAS significance threshold, sigpvalue_GWAS, the proportion that are also eQTLs for the gene of interest (with PeQTL < sigpvalue_eQTL) is calculated and plotted, and the same is done for variants with PGWAS > sigpvalue_GWAS (Fig. 1C, 2C, 3D, 4D). Enrichment of candidate gene eQTLs among GWAS-significant variants is determined by Fisher's exact test. If an analysis differentiating between congruous and incongruous variants is specified, these are considered separately in the analysis (as seen in Fig. 4D).
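To make the enrichment test concrete, the following minimal Python sketch builds the 2x2 contingency table described above and applies Fisher's exact test. The column names and data layout are illustrative assumptions, not eQTpLot's actual internals (eQTpLot itself is implemented in R).

import pandas as pd
from scipy.stats import fisher_exact

def eqtl_enrichment(variants: pd.DataFrame, sig_gwas: float = 5e-8, sig_eqtl: float = 0.05):
    """Test whether candidate-gene eQTLs are enriched among GWAS-significant variants.

    `variants` is assumed to hold one row per variant in the locus of interest,
    with hypothetical columns 'p_gwas' and 'p_eqtl'.
    """
    gwas_sig = variants["p_gwas"] < sig_gwas
    is_eqtl = variants["p_eqtl"] < sig_eqtl
    # Rows: GWAS-significant yes/no; columns: eQTL yes/no.
    table = [
        [int((gwas_sig & is_eqtl).sum()), int((gwas_sig & ~is_eqtl).sum())],
        [int((~gwas_sig & is_eqtl).sum()), int((~gwas_sig & ~is_eqtl).sum())],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return table, odds_ratio, p_value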
Generation of P-P correlation plots

To visualize correlation between PGWAS and PeQTL, each variant within the LOI is plotted with PeQTL along the horizontal axis and PGWAS along the vertical axis. Correlation between the two probabilities is visualized by plotting a best-fit linear regression over the points. The Pearson correlation coefficient (r) and p-value of correlation (p) are computed and displayed on the plot as well (Fig. 1D, 2D). If an analysis differentiating between congruous and incongruous variants is specified, separate plots are made for each set of variants and superimposed over each other as a single plot, with linear regression lines/Pearson coefficients displayed for both sets. If LD data is supplied in the form of LD.df, a similar plot is generated, but the fill color of each point is set to correspond to the LD R2 value for each variant with a specified lead variant, plotted as a green diamond (Fig. 3E). This lead variant can be user-specified with the argument leadSNP or is otherwise automatically defined as the upper-right-most variant in the P-P plot. This same lead variant is also labelled in the main eQTpLot panel A (Fig. 3A). In the case where LD data is provided and an analysis differentiating between congruous and incongruous variants is specified, two separate plots are generated: one for congruous and one for incongruous variants (Fig. 4E-F). In each plot, the fill color of each point is set to correspond to the LD R2 value for each variant with the lead variant for that specific plot (again defined as the upper-right-most variant of the P-P plot), with both the congruous and incongruous lead variants labelled in the main eQTpLot panel A (Fig. 4A).

Use examples

To more clearly illustrate the use and utility of the eQTpLot software, the following three examples are provided. In example 1, the basic implementation of eQTpLot illustrates a plausible candidate gene, BBS1, for a GWAS association peak for LDL cholesterol on chromosome 11, while also illustrating the colocalization between the GWAS signal and eQTL data for a different, less plausible candidate gene at the same locus, ACTN3. In example 2 the BBS1 gene is further investigated through the use of the TissueList function, and through the inclusion of LD data in the eQTpLot analysis. Lastly, in example 3, the visualization is further refined by differentiating between variants with congruous and incongruous directions of effect on BBS1 expression levels and the LDL cholesterol trait.

Example 1 – comparing eQTpLots for two genes within a linkage peak

A GWAS study of LDL cholesterol levels has identified a significant association with a genomic locus at chr11:66,196,265-66,338,300 (build hg19), which contains a number of plausible candidate genes, including BBS1 and ACTN3. eQTpLot is employed in R to illustrate eQTL colocalization for the BBS1 and ACTN3 genes and the LDL cholesterol signal as follows. Using the GeneList function of eQTpLot, the user supplies both the BBS1 and ACTN3 genes to eQTpLot, along with all required input data, to obtain a crude estimation of which gene's eQTL data most closely correlates with the GWAS signal observed at this locus. Calling eQTpLot as follows:

eQTpLot(GWAS.df = gwas.df.example, eQTL.df = eqtl.df.example, gene = c("BBS1", "ACTN3"), gbuild = "hg19", trait = "LDL", tissue = "all", CollapseMethod = "min", GeneList = T)

eQTpLot generates Pearson correlation statistics between PGWAS and PeQTL for both genes and the LDL trait, using a PanTissue approach (collapsing by method "min" as described above).
The output generated is:

eQTL analysis for gene BBS1: Pearson correlation: 0.823, p-value: 1.62e-127
eQTL analysis for gene ACTN3: Pearson correlation: 0.245, p-value: 1.52e-07

This demonstrates that there is significantly stronger correlation between the GWAS signal at this locus and eQTLs for the gene BBS1, compared to the gene ACTN3. To visualize these differences using eQTpLot, starting with the gene BBS1, eQTpLot can be called as follows:

eQTpLot(GWAS.df = gwas.df.example, eQTL.df = eqtl.df.example, gene = "BBS1", gbuild = "hg19", trait = "LDL", tissue = "all", CollapseMethod = "min")

As written, this command will analyze the GWAS data, as contained within GWAS.df.example, within a default 200 kb range surrounding the BBS1 gene, using the preloaded Genes.df to define the genomic boundaries of BBS1 based on genome build hg19. eQTL data from eQTL.df.example will be filtered to contain only data pertaining to BBS1. Since tissue is set to "all," eQTpLot will perform a PanTissue analysis, as described above. The resulting plot (Fig. 1) illustrates clear evidence of colocalization between the LDL-significant locus and BBS1 eQTLs. In Fig. 1A, it is easy to see that all variants significantly associated with LDL cholesterol (those plotted above the horizontal red line) are also very significantly associated with BBS1 expression levels, as indicated by their coloration in bright orange. Figure 1C shows that there is a significant enrichment (p = 9.5e-46 by Fisher's exact test) for BBS1 eQTLs among GWAS-significant variants. Lastly, Fig. 1D illustrates strong correlation between PGWAS and PeQTL for the analyzed variants, with a Pearson correlation coefficient of 0.823 and a p-value of correlation of 1.62e-127 (as displayed on the plot). Taken together, these analyses provide strong evidence for colocalization between variants associated with LDL cholesterol levels and variants associated with BBS1 expression levels at this genomic locus. To visualize the possibility that the LDL association signal might also be acting through modulation of the expression of ACTN3 at this locus, the same analysis can be performed, substituting the gene ACTN3 for the gene BBS1, as in the following command:

eQTpLot(GWAS.df = GWAS.df.example, eQTL.df = eQTL.df.example, gene = "ACTN3", gbuild = "hg19", trait = "LDL", tissue = "all", CollapseMethod = "min")

Unlike the previous example, the resultant plot (Fig. 2) illustrates poor evidence for colocalization between ACTN3 eQTLs and LDL cholesterol-significant variants. Although there is significant enrichment for ACTN3 eQTLs among GWAS-significant variants (Fig. 2C), there is poor evidence for correlation between PGWAS and PeQTL (Fig. 2D), and it is intuitively clear in Fig. 2A that the eQTL and GWAS signals do not colocalize (the brightest colored points with the strongest association with ACTN3 expression are not among the variants most significantly associated with LDL cholesterol levels).
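The GeneList comparison above amounts to computing, for each candidate gene, a Pearson correlation between the GWAS and eQTL association strengths of the variants they share. A minimal Python sketch of that idea follows; the column names are hypothetical, and it assumes the correlation is computed on negative-log10-transformed p-values, which may differ from eQTpLot's internal R implementation.

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def gene_gwas_correlation(gwas: pd.DataFrame, eqtl: pd.DataFrame, gene: str):
    """Correlate GWAS and eQTL signal strength for one candidate gene.

    Assumed (hypothetical) columns: gwas['snp', 'p_gwas'] and eqtl['snp', 'gene', 'p_eqtl'].
    """
    merged = gwas.merge(eqtl.loc[eqtl["gene"] == gene, ["snp", "p_eqtl"]], on="snp")
    r, p = pearsonr(-np.log10(merged["p_gwas"]), -np.log10(merged["p_eqtl"]))
    return r, p

# Ranking the two candidate genes at the locus, mirroring the output above:
# for g in ("BBS1", "ACTN3"):
#     print(g, gene_gwas_correlation(gwas_df, eqtl_df, g))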
Example 2 – the TissueList function and adding LD information to eQTpLot

The plots generated in Example 1 illustrated colocalization between BBS1 eQTLs and the GWAS peak for LDL cholesterol on chromosome 11, using a PanTissue analysis approach. The user may next wish to investigate if there are specific tissues in which BBS1 expression is most clearly correlated with the LDL GWAS peak. Using the TissueList function of eQTpLot as follows:

eQTpLot(GWAS.df = gwas.df.example, eQTL.df = eqtl.df.example, gene = "BBS1", gbuild = "hg19", trait = "LDL", tissue = "all", TissueList = T)

eQTpLot generates Pearson correlation statistics between PGWAS and PeQTL for BBS1 and the LDL trait across each tissue contained within eQTL.df. The resultant output, ranked by degree of correlation, is as follows:

eQTL analysis for tissue Cells_Cultured_fibroblasts: Pearson correlation: 0.902, p-value: 1.12e-65
eQTL analysis for tissue Whole_Blood: Pearson correlation: 0.85, p-value: 1.64e-55
eQTL analysis for tissue Brain_Frontal_Cortex_BA9: Pearson correlation: 0.84, p-value: 1.02e-51
eQTL analysis for tissue Brain_Nucleus_accumbens_basal_ganglia: Pearson correlation: 0.841, p-value: 1.74e-48
eQTL analysis for tissue Brain_Cortex: Pearson correlation: 0.818, p-value: 2.44e-43
eQTL analysis for tissue Esophagus_Gastroesophageal_Junction: Pearson correlation: 0.852, p-value: 2.15e-23
eQTL analysis for tissue Skin_Sun_Exposed_Lower_leg: Pearson correlation: 0.562, p-value: 1.52e-21
This output demonstrates a strong correlation between LDL cholesterol levels and BBS1 expression levels in a number of tissues. To further explore these associations, the user can specifically run eQTpLot on data from a single tissue, for example Whole_Blood, while also supplying LD data to eQTpLot using the argument LD.df:

eQTpLot(GWAS.df = GWAS.df.example, eQTL.df = eQTL.df.example, gene = "BBS1", gbuild = "hg19", trait = "LDL", tissue = "Whole_Blood", LD.df = LD.df.example, R2min = 0.25, LDmin = 100)

Here the argument LD.df refers to the LD.df.example data frame containing a list of pairwise LD correlation measurements between all the variants within the LOI, as one might obtain from a PLINK linkage disequilibrium analysis using the --r2 option [10]. Additionally, the parameter R2min is set to 0.25, indicating that LD.df should be filtered to drop variant pairs in LD with R2 less than 0.25. LDmin is set to 100, indicating that only variants in LD with at least 100 other variants should be plotted in the LD heatmap. The resultant plot, Fig. 3, differs from Fig. 1 in two important ways. First, a heat map of the LD landscape for all BBS1 cis-eQTL variants in Whole_Blood within the LOI is shown in Fig. 3C; this heatmap makes it clear that a number of BBS1 eQTL variants are in strong LD with each other at this locus. Second, the P-P plot, Fig. 3E, now includes LD information for all plotted variants; a lead variant, rs3741360, has been defined (by default the upper-right-most variant on the P-P plot), and all other variants are plotted with a color scale corresponding to their squared coefficient of linkage correlation with this lead variant. eQTpLot also labels the lead variant in Fig. 3A for reference. With the incorporation of this new data, we can now see that most, but not all, of the GWAS-significant variants are in strong LD with each other. This implies that there are at least two distinct LD blocks at the BBS1 locus with strong evidence of colocalization between the BBS1 eQTL and LDL GWAS signals.

Example 3 – separating congruous from incongruous variants

In addition to including LD data in our eQTpLot analysis, we can also include information on the directions of effect of each variant, with respect to the GWAS trait and BBS1 expression levels. This is accomplished by setting the argument congruence to TRUE:

eQTpLot(GWAS.df = GWAS.df.example, eQTL.df = eQTL.df.example, gene = "BBS1", gbuild = "hg19", trait = "LDL", tissue = "Whole_Blood", LD.df = LD.df.example, R2min = 0.25, LDmin = 100, congruence = TRUE)

The resulting plot, Fig. 4, divides all BBS1 eQTL variants in Whole_Blood into two groups: congruent, those variants associated with either an increase in both, or a decrease in both, BBS1 expression levels and LDL levels; and incongruent, those variants with opposite directions of effect on BBS1 expression levels and LDL levels.
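Conceptually, the congruous/incongruous split only requires comparing the signs of each variant's GWAS effect estimate and its eQTL normalized effect size. Here is a minimal Python sketch of that classification, using hypothetical column names; eQTpLot's own R implementation may organize this differently.

import pandas as pd

def classify_congruence(variants: pd.DataFrame) -> pd.DataFrame:
    """Label each variant as congruous or incongruous.

    Assumed (hypothetical) columns: 'beta_gwas' (signed GWAS effect estimate)
    and 'nes' (signed eQTL normalized effect size).
    """
    same_sign = (variants["beta_gwas"] * variants["nes"]) > 0
    # Congruous variants are drawn on the blue color scale, incongruous on the red one.
    return variants.assign(congruence=same_sign.map({True: "congruous", False: "incongruous"}))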
In carrying out such an analysis, it becomes clear that it is specifically variants with congruent directions of effect that are driving the signal colocalization; that is, variants associated with decreases in BBS1 expression strongly colocalize with variants associated with decreases in LDL cholesterol. eQTpLot provides a unique, user-friendly, and intuitive means of visualizing cis-eQTL and GWAS signal colocalization in a single figure. As plotted by eQTpLot, colocalization between GWAS and eQTL data for a given gene-trait pair is immediately visually obvious, and can be compared across candidate genes to quickly generate hypotheses about the underlying causal mechanisms driving GWAS association peaks. Additionally, eQTpLot allows for Pan- and MultiTissue eQTL analysis, and for the differentiation between eQTL variants with congruous and incongruous directions of effect on GWAS traits – two features not found in any other visualization software. We believe eQTpLot will prove a useful tool for investigators seeking a convenient and customizable visualization of eQTL and GWAS data colocalization. Availability and requirements Project name: eQTpLot Project home page: https://github.com/RitchieLab/eQTpLot Operating system(s): Platform independent Programming language: R Other requirements: None License: GNU GPL Any restrictions to use by non-academics: None. The eQTpLot R package and tutorial, along with the necessary datasets to generate the four example plots discussed in this manuscript, are available at. https://github.com/RitchieLab/eQTpLot. The eQTL data used to generate the eQTL.df file were generated previously, and are freely available through the GTEx Portal [11]. The GWAS summary statistics used to generate the GWAS.df file used in this manuscript are available at https://ritchielab.org/publications/supplementary-data/ajhg-cilium and are based on a study utilizing data available through the UK Biobank (UKBB) [24, 25]. As a part of our agreement to use the data contained within UKBB, we are not allowed to share the raw data ourselves, but individuals who are interested can request access. eQTL: Expression Quantitative Trait Loci GWAS: Genome-wide Association Study LD: Linkage disequilibrium LOI: Locus of Interest NES: Normalized effect size PGWAS : p-value of a given variant's association with a GWAS trait PeQTL : p-value of a given variant's association with a gene's expression levels R2 : the squared coefficient of linkage correlation between two variants Giambartolomei C, Vukcevic D, Schadt EE, Franke L, Hingorani AD, Wallace C, et al. Bayesian test for Colocalisation between pairs of genetic association studies using summary statistics. PLoS Genet. 2014;10(5):e1004383. https://doi.org/10.1371/journal.pgen.1004383. Hormozdiari F, van de Bunt M, Segrè AV, Li X, Joo JWJ, Bilow M, et al. Colocalization of GWAS and eQTL signals detects target genes. Am J Hum Genet. 2016;99(6):1245–60. https://doi.org/10.1016/j.ajhg.2016.10.003. He X, Fuller CK, Song Y, Meng Q, Zhang B, Yang X, et al. Sherlock: detecting gene-disease associations by matching patterns of expression QTL and GWAS. Am J Hum Genet. 2013;92(5):667–80. https://doi.org/10.1016/j.ajhg.2013.03.022. Liu B, Gloudemans MJ, Rao AS, Ingelsson E, Montgomery SB. Abundant associations with gene expression complicate GWAS follow-up. Nat Genet. 2019;51(5):768–9. https://doi.org/10.1038/s41588-019-0404-0. Yao DW, O'Connor LJ, Price AL, Gusev A. Quantifying genetic effects on disease mediated by assayed gene expression levels. Nat Genet. 
2020;52(6):626–33. https://doi.org/10.1038/s41588-020-0625-2. Nica AC, Montgomery SB, Dimas AS, Stranger BE, Beazley C, Barroso I, et al. Candidate causal regulatory effects by integration of expression QTLs with complex trait genetic associations. PLoS Genet. 2010; 1 [cited 2020 Jul 27];6(4). Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2848550/. Zhu Z, Zhang F, Hu H, Bakshi A, Robinson MR, Powell JE, et al. Integration of summary data from GWAS and eQTL studies predicts complex trait gene targets. Nat Genet. 2016;48(5):481–7. https://doi.org/10.1038/ng.3538. Liu B. boxiangliu/locuscompare [Internet]. 2020 [cited 2021 Jan 12]. Available from: https://github.com/boxiangliu/locuscompare Pruim RJ, Welch RP, Sanna S, Teslovich TM, Chines PS, Gliedt TP, et al. LocusZoom: regional visualization of genome-wide association scan results. Bioinformatics. 2010;26(18):2336–7. https://doi.org/10.1093/bioinformatics/btq419. Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MAR, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81(3):559–75. https://doi.org/10.1086/519795. GTEx Consortium. The genotype-tissue expression (GTEx) project. Nat Genet. 2013;45(6):580–5. Hahne F, Ivanek R. Visualizing genomic data using Gviz and Bioconductor. In: Mathé E, Davis S, editors. Statistical genomics: methods and protocols. New York: Springer; 2016 [cited 2020 Jun 17]. p. 335–51. (methods in molecular biology). Available from. https://doi.org/10.1007/978-1-4939-3578-9_16. Durinck S, Moreau Y, Kasprzyk A, Davis S, De Moor B, Brazma A, et al. BioMart and Bioconductor: a powerful link between biological databases and microarray data analysis. Bioinformatics. 2005;21(16):3439–40. https://doi.org/10.1093/bioinformatics/bti525. Lawrence M, Huber W, Pagès H, Aboyoun P, Carlson M, Gentleman R, et al. Software for computing and annotating genomic ranges. PLoS Comput Biol. 2013;9(8):e1003118. https://doi.org/10.1371/journal.pcbi.1003118. tidyverse/dplyr [Internet]. tidyverse; 2021 [cited 2021 Jan 13]. Available from: https://github.com/tidyverse/dplyr Campitelli E. eliocamp/ggnewscale [Internet]. 2021 [cited 2021 Jan 13]. Available from: https://github.com/eliocamp/ggnewscale Wickham H. ggplot2: Elegant Graphics for Data Analysis [Internet]. 2nd ed. Springer International Publishing; 2016 [cited 2020 Jun 16]. (Use R!). Available from: https://www.springer.com/gp/book/9783319242750 KASSAMBARA A. kassambara/ggpubr [Internet]. 2021 [cited 2021 Jan 13]. Available from: https://github.com/kassambara/ggpubr minami_SC. sourcechord/GridExtra [Internet]. 2021 [cited 2021 Jan 13]. Available from: https://github.com/sourcechord/GridExtra Shin J-H, Blay S, McNeney B, Graham J. LDheatmap: an R function for graphical display of pairwise linkage disequilibria between single nucleotide polymorphisms. J Stat Softw. 2006;16(1):1–9. Pedersen TL. thomasp85/patchwork [Internet]. 2021 [cited 2021 Jan 13]. Available from: https://github.com/thomasp85/patchwork Stouffer SA, Suchman EA, Devinney LC, Star SA, Williams RM Jr. The American soldier: Adjustment during army life. (Studies in social psychology in World War II), Vol. 1. Oxford: Princeton Univ. Press; 1949. p. 599. (The American soldier: Adjustment during army life. (Studies in social psychology in World War II), Vol. 1) Zaykin DV. Optimally weighted Z-test is a powerful method for combining probabilities in meta-analysis. J Evol Biol. 2011;24(8):1836–41. 
https://doi.org/10.1111/j.1420-9101.2011.02297.x. Drivas TG, Lucas A, Zhang X, Ritchie MD. Mendelian pathway analysis of laboratory traits reveals distinct roles for ciliary subcompartments in common disease pathogenesis. Am J Hum Genet. 2021;108(3):482–501. Bycroft C, Freeman C, Petkova D, Band G, Elliott LT, Sharp K, et al. The UK biobank resource with deep phenotyping and genomic data. Nature. 2018;562(7726):203–9. https://doi.org/10.1038/s41586-018-0579-z. We would like to thank members of the Ritchie Lab who provided feedback and tested implementation of eQTpLot. We would like to thank Dr. Michael P. Hart, PhD for his careful reading of the manuscript and helpful comments. The UK Biobank data was accessed via proposal #32133. TGD is supported in part by the NIH T32 training grant 5T32GM008638–23. MDR is supported in part by NIH R01 AI077505, GM115318, AI116794. Division of Human Genetics, Children's Hospital of Philadelphia, Philadelphia, PA, USA Theodore G. Drivas Department of Genetics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA Theodore G. Drivas, Anastasia Lucas & Marylyn D. Ritchie Institute for Biomedical Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA Marylyn D. Ritchie Anastasia Lucas T.G.D. developed the concept and the majority of the code for eQTpLot and wrote the manuscript with input and feedback from all authors. A.L. provided additional coding and assisted in preparing the eQTpLot package for publication. M.D.R. supervised the development of the project. All authors read and approved the final manuscript. Correspondence to Theodore G. Drivas. No new data involving human participants was generated or analyzed in this manuscript. Data used to generate the four example figures was obtained from previously-published summary statistics [24]. MDR is on the Scientific Advisory Board for Cipherome and for Goldfinch Bio. The authors declare no additional competing interests. Drivas, T.G., Lucas, A. & Ritchie, M.D. eQTpLot: a user-friendly R package for the visualization of colocalization between eQTL and GWAS signals. BioData Mining 14, 32 (2021). https://doi.org/10.1186/s13040-021-00267-6 eQTL
Physics / Relativity

A concise relativity tutorial

Introduction | Special Relativity | Spacetime Geometry | Spacetime Causality | General Relativity | Conclusion | References

Most recent revision: 03.17.2013

This article's goal is to give nonspecialist readers an intuitive grasp of some easily understood aspects of relativity theory, in particular the relationship between space and time. No specialized knowledge is assumed apart from some high-school mathematics. To help convey relativity's key ideas, to the extent possible the theory's ideas are modeled with graphics and animations. It's my hope that, rather than replacing mathematical ideas, the visual aids will make the mathematics easier to understand. It is often said that, when writing for a popular audience, each included equation cuts one's readership in half. It's my hope that the promise of some insight into relativity theory will suspend this rule.

Special Relativity

Let's start with a bit of history. In 1905 Albert Einstein published his first relativity paper (On the electrodynamics of moving bodies, A. Einstein, 1905), in which he described a theory later known as Special Relativity. In Part I of his paper Einstein included a number of equations meant to quantify the relationship between space and time "velocities" and dimensions. Here's a key equation from Einstein's paper:

(1) $\beta = \frac{1}{\sqrt{1-v^2/c^2}}$

v = space velocity
c = speed of light
β = a dimensionless term that quantifies changes in space and time at velocity v

At the time Einstein published his paper, he was able to use such equations to predict changes in time "velocity" and spatial dimensions that would result from provided space velocities. But when Einstein's math teacher Hermann Minkowski read his former student's paper, he noticed something Einstein did not — Minkowski realized the equations united space and time in a four-dimensional entity Minkowski called spacetime. About this realization, Minkowski later said, "Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality." To see what Minkowski saw, let's write a simpler version of equation (1) that includes terms for space (v) and time (t) velocities:

(2) $c^2 = t^2 + v^2$

t = time "velocity"

Those familiar with geometry will recognize that equation (2) can be used to describe the relationship between a right triangle's hypotenuse (c) and its other two sides (t and v), based on the Pythagorean Theorem.
Using this equation, the relationship between space (v) and time (t) velocities is easily derived: Figure 1: Interactive spacetime diagram (3) $v^2 = c^2 - t^2$ (4) $t^2 = c^2 - v^2$ Based on equations (3) and (4), if c is held constant, we see that any increase in v must cause a decrease in t — they're bound together in such a way that their combined values must produce this constant result: (5) $c = \sqrt{t^2 + v^2}$ Examine Figure 1 and notice about variables v (space) and t (time) that they're at right angles to each other — they lie in different dimensions. If your browser supports the "canvas" feature, you may drag your mouse cursor horizontally across Figure 1 and notice that: The c value, the speed of light (and the hypotenuse length), remains constant (a requirement of relativity theory). When space velocity v increases, time velocity t decreases. Figure 1 shows the relationship between space and time velocities in special relativity — when v equals 0, when there is no space velocity, the "velocity" of time equals c, the speed of light. Conversely, when space velocity equals c, there is no time velocity — time has stopped. This leads to these points: The special relativity equations imply that space and time represent related dimensions in spacetime (three dimensions for space, one for time). In relativity theory, the speed of light defines the relationship between the space and time dimensions. Relativity theory requires that c (the speed of light) remain constant in all frames of reference, therefore (as shown in Figure 1) any increase in space velocity must produce a decrease in time velocity. Figure 1 shows that, when space velocity equals zero, time's "velocity" is the speed of light. At all nonzero space velocities, time's velocity declines proportionally. If we somehow could move at the speed of light, time would stop. For reasons provided in the next section, objects with mass can't approach the speed of light. Photons, the carrier particles of the electromagnetic field, have no mass and do travel at the speed of light, as a result of which they don't experience time. Spacetime Geometry Let's look at some consequences of the space and time changes described above. Time changes Imagine that astronauts on board a spacecraft (call it spacecraft b) are traveling at a speed of $\frac{1}{2} c$. According to the equations above, the rate at which time passes should be: (6) $t' = t \sqrt{1 - v^2/c^2} = \sqrt{1 - (1/2)^2} = 0.866 t$ t = time rate at velocity 0 t' = time rate at velocity v Figure 2: Spacecraft in relative motion Equation (6) means that, for astronauts traveling at $\frac{1}{2} c$, because of relativistic time dilation the passage of an hour should require 69 minutes and 17 seconds. Can the astronauts in spacecraft b detect this time change? Well, no, they can't — their clocks will slow down, their heartbeats will slow down, even supremely accurate atomic clocks will slow down, all in step. This means any tests the astronauts conduct within spacecraft b will show perfect agreement between various sources of information about the passage of time. However, if astronauts on a relatively stationary spacecraft (call it spacecraft a) were to observe the clocks on board spacecraft b, by comparing their own clocks with those on board the moving spacecraft, they would notice spacecraft b's time dilation — indeed, they could use the time difference to determine spacecraft b's velocity relative to their own. 
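For readers who want to check these numbers, here is a small Python sketch (an addition for illustration, not part of the original article) that evaluates the time-dilation factor of equation (6): at v = 0.5c it returns the 0.866 factor quoted above, which works out to roughly 69 minutes and 17 seconds of "stationary" time per shipboard hour.

import math

def time_factor(v_over_c):
    """Return sqrt(1 - v^2/c^2), the shipboard time rate from equation (6)."""
    return math.sqrt(1.0 - v_over_c ** 2)

factor = time_factor(0.5)            # ~0.866 at half the speed of light
stationary_minutes = 60.0 / factor   # ~69.28 minutes pass in the "stationary" frame
minutes = int(stationary_minutes)
seconds = round((stationary_minutes - minutes) * 60)
print(f"time factor {factor:.3f}: one shipboard hour = {minutes} min {seconds} s at rest")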
Mass changes

The above equations rule out travel at the speed of light, using this reasoning: In space, in the absence of gravity, an object's mass is measured by its resistance to changes in speed (acceleration) — more massive objects require more force (or more time) to change their speed. Imagine an experiment aboard spacecraft b that pushes a mass with a spring that exerts a known force, as in Figure 2. The experiment should be able to measure the object's mass, based on this expression of Newton's second law of motion:

(7) $m = \frac{f}{a}$

m = mass, kilograms
f = force, Newtons
a = acceleration, meters per second squared

According to equation (7), if a mass is pushed by a one-Newton force for one second, and accelerates to a speed of one meter per second, the object has a mass of one kilogram. Let's say the astronauts aboard spacecraft b conduct this experiment and determine that an object has a mass of one kilogram. Let's also say the astronauts aboard spacecraft a observe the experiment aboard spacecraft b, but a's crew comes to a different conclusion. In the frame of reference of spacecraft a and because of spacecraft b's time dilation, the mass aboard spacecraft b only acquired a speed of 0.866 meters per second, therefore it weighs:

(8) $m' = \frac{m}{\sqrt{1 - v^2/c^2}} = \frac{m}{\sqrt{1 - (1/2)^2}} = 1.154 m$

m = mass at velocity 0
m' = mass at velocity v

This result means that, at a velocity of $\frac{1}{2}c$, masses on board spacecraft b weigh 15% more. Does this relativistic mass increase have any practical consequences? Yes, it does — the experimental mass weighs more, but so does the entire spacecraft. In order to get to its destination, the spacecraft's engines have to deliver more power to accelerate the heavier spacecraft. Let's imagine a more extreme example. Let's say we want to move at 99% of the speed of light. The time dilation for this speed is:

(9) $t' = t \sqrt{1 - v^2/c^2} = t \sqrt{1 - 0.99^2} = 0.141 t$

This means an hour aboard spacecraft b requires seven hours and five minutes aboard spacecraft a (but the astronauts on board spacecraft b subjectively experience one hour of time). At 0.99c, the mass change is:

(10) $m' = \frac{m}{\sqrt{1 - v^2/c^2}} = \frac{m}{\sqrt{1 - 0.99^2}} = 7.08 m$

This means, for a given acceleration, the spacecraft's engines must deliver seven times more power. And as the spacecraft moves closer to the speed of light, its mass increases without bound and the power required to increase speed also increases without bound. At the speed of light, the above equations break down — they indicate a time rate of zero and an infinite mass. This is why massive objects cannot travel at the speed of light. Readers may wonder whether the mass increase is real — isn't it a coincidental side effect of time dilation, with no independent reality? The answer is that both the time and mass effects are real and interchangeable — one can derive time dilation from mass increase or vice versa. Both interpretations are equally valid.
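The same square-root term, written as the multiplier of equation (8), is easy to tabulate. The following Python sketch (again an added illustration, not from the original article) reproduces the mass factors of equations (8) and (10) and shows the factor growing without bound as v approaches c, which is the article's argument for why massive objects cannot reach light speed.

import math

def mass_factor(v_over_c):
    """Return 1 / sqrt(1 - v^2/c^2), the mass multiplier m'/m of equation (8)."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for v in (0.5, 0.99, 0.999, 0.99999):
    print(f"v = {v:>7}c  ->  m'/m = {mass_factor(v):10.2f}")
# v = 0.5c gives ~1.15 and v = 0.99c gives ~7.1, matching equations (8) and (10);
# the factor diverges as v approaches c.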
Spacetime Causality

Spacetime interval

Hermann Minkowski's contribution to relativity theory — his spacetime interpretation — changed how we picture the relationship between causes and effects. We now know there is a clear demarcation between effects and their possible causes, and the speed of light is the gatekeeper. Expressed simply, if a cause at spacetime location a can propagate to spacetime location b at less than or equal to c (the speed of light), then cause a may produce effect b, otherwise not. To understand this, we need to introduce the idea of a spacetime interval:

(11) $s^2 = \Delta r^2 - c^2 \Delta t^2$
(12) $\Delta r = r_a - r_b$ (space difference)
(13) $\Delta t = t_a - t_b$ (time difference)

s = four-dimensional spacetime interval
$r_a$ = spatial location of event a
$r_b$ = spatial location of event b
$t_a$ = temporal location of event a
$t_b$ = temporal location of event b

Because a spacetime interval takes both space and time dimensions into account and with respect to causes and effects, its meaning is unambiguous:

$s^2 \lt 0$ | $\Delta r^2 \lt c^2 \Delta t^2$ | Time-like interval: cause a can produce effect b
$s^2 = 0$ | $\Delta r^2 = c^2 \Delta t^2$ | Light-like interval: cause a can produce effect b
$s^2 \gt 0$ | $\Delta r^2 \gt c^2 \Delta t^2$ | Space-like interval: cause a cannot produce effect b

Light cone diagram

Because visualizing four-dimensional spacetime intervals may be difficult at first, readers may want to play with Figure 3, an interactive light cone diagram — change the viewing angle and size with your mouse.

Figure 3: Interactive light cone diagram

Some notes on Figure 3:
This resource is much better viewed in 3D, using anaglyphic glasses.
The green plane at the center represents the space dimensions and the present time. The axis perpendicular to the green plane represents the time dimension.
The central point o, where the cones intersect, is the "origin", the present location and time — the x,y,z (space dimensions) and t (time dimension) all equal zero.
It's helpful to picture the past and future cones as expanding into the space dimensions at the speed of light as they extend through time.
Spacetime coordinate points a - e in Figure 3 are located at the same space position, but different time positions.
Spacetime coordinate c is located in the present (it's on the surface of the green plane), but is separated from the origin o by a spatial distance, therefore it cannot influence events at the origin o.
Coordinate b has the same space position as c, but a different time position. Because coordinate b is outside the past light cone, like coordinate c it cannot influence events in the present.
Coordinate a has the same space position as c, but it's located farther in the past than b. Because coordinate a is inside the past light cone, it can influence events in the present.
The same relationship applies to coordinates d and e — coordinate d, located outside the future light cone, cannot be influenced by events in the present, but coordinate e can.

Causality, the relationship between causes and effects, is a central issue in physics, and the plausibility of a new theory can be measured in part by whether it allows effects to precede their causes. For example, some readings of General Relativity allow violations of causality, and those issues are a matter of much debate. There is no experimental confirmation of these effects, and such confirmation is unlikely. There are quantum effects such as entanglement that, at first glance, allow for instantaneous communication at superluminal speeds, but as it turns out, this isn't so — entanglement really does cause two particles to interact at any distance, but this fact cannot be used to circumvent limitations posed by the speed of light.

Special vs. general relativity

Special relativity deals with a subset of physical effects — those that don't involve accelerations.
General relativity is a much broader theory, and because of its scope and experimental confirmations, it is regarded by some as the crowning achievement of twentieth century physics. In order for special relativity to explain effects that prior theories could not, it had to contradict certain assumptions about everyday reality like the idea that time is the same for everyone. General relativity follows this pattern, but because it explains more, it contradicts more common-sense assumptions. In the prior special relativity section we saw that quantities like velocity, time and mass depend on one's reference frame, but in the midst of these perceptions, space retained its overall shape. In general relativity, the shape of space itself is a term in the equations and depends on one's frame of reference. One might say that in general relativity, there's more relativity. In special relativity, a force called gravity pulls the moon in a curved orbit around the earth. In general relativity, gravity isn't a force, and the moon moves in a straight line through curved spacetime. Figure 4: Gravitational starlight deflection In special relativity, only objects possessing mass feel the force of gravity, and massless particles like photons travel in straight lines through a space that has no role in gravitation. In general relativity, as physicist John Wheeler famously said, "Matter tells space how to curve. Space tells matter how to move." Spacetime curvature: starlight deflection When Einstein published general relativity in 1916, because it had no experimental confirmation it was met with healthy skepticism. In an effort to make his theory less theoretical and more empirical, Einstein suggested that a field of stars be photographed during a solar eclipse and compared to the same field without the effect of the sun. The idea is that spacetime curvature near the massive sun should change the apparent positions of those stars nearest the sun (Figure 4). Figure 5: Optical lens geometry After a number of failed attempts, in 1919 the effort succeeded and (in spite of observational difficulties and marginal results) the outcome supported Einstein's theory and confirmed the idea that space curved around masses. Spacetime curvature: Einstein ring Since that early experiment and because of further theoretical and observational work, another prediction has been confirmed — under special circumstances in which a light source, a large mass and the earth are in precise alignment, spacetime curvature can produce something called an Einstein ring. Figure 6: Gravitational lens geometry Einstein rings and similar optical phenomena arise from what is called gravitational lensing, consisting of light following lines of spacetime curvature, but Figure 6 shows that the resulting ray traces have little in common with an optical lens (Figure 5) — indeed, the geometry of a gravitational lens is in some ways the opposite of an optical lens. It's important to understand about Figure 6 that it doesn't show photon paths curving through a flat background spacetime — it shows spacetime being curved by mass. The photon paths are in essence straight lines through curved spacetime. The reason for the word "ring" in Einstein ring is that, unlike the optical lens shown in Figure 5, there is only one pathway that allows light to travel from the source to the viewer, shown in Figure 6 as blue dotted lines. 
Remember that Figure 6 is a flat representation of a three-dimensional system, and the blue lines represent a cylinder in three dimensions — a cylinder that the viewer sees as a ring of light. Because of how gravitational lensing works, the requirements for a visible, complete Einstein ring are rather severe — three bodies must be in precise alignment, and the mass and location of the deflecting body must also meet strict requirements. In most cases, an astronomer will see one or more short arcs representing a light source behind a massive body, or sometimes an elongated arc resembling a horseshoe. Spacetime curvature: simulator Black hole mass: Figure 7: Interactive gravitational lensing diagram For readers who want to explore the effects predicted by general relativity, Figure 7 is a gravitational curvature simulator that models a supermassive black hole deflecting the light from a background galaxy. To use the model, change the location of the black hole (red dot) by dragging your mouse across the figure, and change the black hole's mass with your mouse wheel. The physical model in Figure 7 is correct, but it represents an unlikely set of circumstances. To get this effect in reality, a very large black hole, such as is thought to live at the center of most galaxies, would have to be moving freely between galaxies, and would have to align itself between the viewed galaxy and Earth — not very likely. Most real Einstein rings and arcs result from one galaxy being aligned by chance with another, farther away, so that we see some of the distant galaxy's light deflected around the foreground galaxy. But so far, we haven't observed any perfect Einstein rings. To produce an Einstein ring with Figure 7, drag the black hole (the red dot) to the position of a bright spot, two of which appear near the top center of the galaxy image. With some care, you can align the black hole over a bright spot and produce a perfect ring. Figure 8: Black hole spacetime curvature Close examination of the geometry modeled by Figure 7 shows that, as one approaches a sufficiently dense, massive object, spacetime curvature increases without bound. The area nearest the black hole in Figure 7 (red dot) is empty because the spacetime curvature is greater than 90° (shown as red lines in Figure 8), so those paths don't intersect any visible objects. Even closer to the black hole, near the event horizon's radius, spacetime curvature is such that a photon will perpetually orbit the black hole. Remember about relativity that all perceptions are local. This means that, to an observer distant from the curvature shown in Figure 8, it might be possible to observe it more or less as shown, but to an observer approaching a black hole's event horizon, instead of seeing spacetime curvature increase, the observer would instead see the event horizon's curvature decrease (assuming the horizon were visible at all). At a radius of $\frac{3}{2}r$ (r = event horizon radius), the black hole's surface would become a flat plane, infinite in extent, with eyelines extending around the horizon — indeed, in all directions at a distance of $3 \pi r$, the observer would see the back of his own head. Microlensing Gravitational lensing plays a part in the search for planets around other stars as well as free-roaming masses. There are a number of ways to detect a planet orbiting a star that's too distant to image directly: Detect a small reduction in starlight caused by a planet passing in front of the star (a method used by the Kepler spacecraft). 
Detect a shift in the spectrum of a star caused by its motion toward, and away from, the earth, as it orbits the common center of mass of itself and an orbiting planet. Detect a small brightening of starlight caused by a planet or other mass moving between a star and earth. The third method in the above list is called gravitational microlensing. It's only rarely used to detect planets near a star — it's more suitable for detecting dark masses distant from stars or even galaxies. As it turns out, when a massive object passes in front of a distant star, even when the range is too great to resolve an Einstein ring, because of microlensing the amount of light received can be momentarily greater than without the intervening object. This method can be used to detect objects that emit no light of their own. At this point it should be clear that relativity's key ideas are completely accessible and easy to understand, and it's a shame they're not better understood by nonspecialists. I say this in particular because of how often one hears the claim that superluminal space travel is just around the corner — all we need to do is find a wormhole, or a "tear in the fabric of spacetime," as any number of movie script writers have put it. These fantasies underestimate the importance of causality in physical theory. If superluminal speeds were possible, time would lose its conventional meaning, effects could precede their causes and energy would not be conserved — apart from the fact that it opens the door to any number of temporal paradoxes. Rather than read about how the task of jumping directly and instantaneously across the universe is only a matter of building the right spacecraft from parts available at your local hardware store, I would prefer it if people understand how this is supremely unlikely, and better, that people understand why. It's my hope that this article will serve as a brief guide to relativity's unfamiliar territory, and perhaps reduce the number of science fiction ideas that masquerade as science. On the electrodynamics of moving bodies, A. Einstein, 1905 — Einstein's first relativity paper. Special Relativity — an encyclopedia summary of the restricted version of relativity theory. Hermann Minkowski — Einstein's math teacher. Spacetime — the unification of three space dimensions, and one time dimension, into a four-dimensional entity with common properties. Pythagorean Theorem — a defining principle of geometry Photon — the carrier particle of the electromagnetic field. Newton's second law of motion — the net force on an object is proportional to the rate of change of its linear momentum. Spacetime interval — a four-dimensional spacetime distance used in causality computations. light cone — a visualization aid for picturing spacetime intervals. Causality — the relationship between causes and effects. Quantum entanglement — a bizarre effect of quantum theory in which two or more particles become interdependent, regardless of their spatial separation. General relativity — special relativity's successor theory that includes accelerations and gravitation. Tests of General Relativity: Deflection of light by the Sun — an early confirmation of general relativity. Black hole — an object so massive and dense that surface escape velocity exceeds the speed of light. Einstein ring — a somewhat dramatic optical manifestation of spacetime curvature. Gravitational lens — an effect of general relativity in which spacetime curvature produces distinct images. 
Kepler (spacecraft) — an ambitious and successful program to detect planets orbiting distant stars. Gravitational microlensing — an observational method that relies on spacetime curvature, moving masses, and starlight. Event horizon — the radius of a black hole at which escape velocity is equal to the speed of light. Temporal paradox — a class of logical problems associated with time travel.
Mamuth to Elephant (3)

Until now, we've looked at actions of groups (such as the $T/I$ or $PLR$-group) or (transformation) monoids (such as Noll's monoid) on special sets of musical elements, in particular the twelve pitch classes $\mathbb{Z}_{12}$, or the set of all $24$ major and minor chords. Elephant-lovers recognise such settings as objects in the presheaf topos on the one-object category $\mathbf{M}$ corresponding to the group or monoid. That is, we look at contravariant functors $\mathbf{M} \rightarrow \mathbf{Sets}$. Last time we've encountered the 'Cube Dance Graph' which depicts a particular relation among the major, minor, and augmented chords. Recall that the twelve major chords (numbered from $1$ to $12$) are the ordered triples of tones in $\mathbb{Z}_{12}$ of the form $(n,n+4,n+7)$ (such as the triangle on the left). The twelve minor chords (numbered from $13$ to $24$) are the ordered triples $(n,n+3,n+7)$ (such as the middle triangle). The four augmented chords (numbered from $25$ to $28$) are the triples of the form $(n,n+4,n+8)$ (such as the rightmost triangle). The Cube Dance Graph relates two of these chords when they share two tones (pitch classes) whereas the remaining tones differ by a halftone. Picture modified from this post. We can separate this symmetric binary relation into three sub-relations: the extension of the $P$ and $L$-operations on major and minor chords to the augmented ones (these are transformations), and the remaining relation $U$ which connects the major and minor chords to the augmented chords (and which is not a transformation). Binary relations on the same set can be composed, so we get a monoid $\mathbf{M}$ generated by the three relations $P$, $L$ and $U$. The action of $\mathbf{M}$ on the $28$ chords no longer gives us an ordinary presheaf (because $U$ is not a transformation), but a relational presheaf as in the paper On the use of relational presheaves in transformational music theory by Alexandre Popoff. That is, the action defines a contravariant functor $\mathbf{M} \rightarrow \mathbf{Rel}$ where $\mathbf{Rel}$ is the category (actually a $2$-category) of sets, but with binary relations as morphisms (that is, $Hom(X,Y)$ is the set of all subsets of $X \times Y$), and the natural notion of composition of such relations. The $2$-morphisms between relations are given by inclusion. To compute with monoids generated by binary relations in GAP one needs to download, compile and load the package semigroups, and to represent the binary relations as partitioned binary relations as in the paper by Martin and Mazorchuk.
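Before turning to the GAP encoding of these relations below, the Cube Dance relation itself is easy to enumerate directly. The following short Python sketch (an illustration added here, not part of the original post) builds the 28 chords as pitch-class sets and collects the pairs that share two tones while the remaining tones differ by a halftone.

from itertools import combinations

chords = {}
for n in range(12):
    chords[f"M{n}"] = frozenset({n, (n + 4) % 12, (n + 7) % 12})   # major triads
    chords[f"m{n}"] = frozenset({n, (n + 3) % 12, (n + 7) % 12})   # minor triads
for n in range(4):
    chords[f"A{n}"] = frozenset({n, (n + 4) % 12, (n + 8) % 12})   # augmented triads

def related(a, b):
    # share two tones, with the remaining tones a halftone apart
    if len(a & b) != 2:
        return False
    x, = a - b
    y, = b - a
    return (x - y) % 12 in (1, 11)

edges = [(p, q) for p, q in combinations(chords, 2) if related(chords[p], chords[q])]
print(len(edges))   # 48: each major/minor triad meets its P and L partner and one augmented triad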
This is a bit more complicated than working with ordinary transformations: P:=PBR([[-13],[-14],[-15],[-16],[-17],[-18],[-19],[-20],[-21],[-22],[-23],[-24],[-1],[-2],[-3],[-4],[-5],[-6],[-7],[-8],[-9],[-10],[-11],[-12],[-25],[-26],[-27],[-28]],[[13],[14],[15],[16],[17],[18],[19],[20],[21],[22],[23],[24],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[25],[26],[27],[28]]); L:=PBR([[-17],[-18],[-19],[-20],[-21],[-22],[-23],[-24],[-13],[-14],[-15],[-16],[-9],[-10],[-11],[-12],[-1],[-2],[-3],[-4],[-5],[-6],[-7],[-8],[-25],[-26],[-27],[-28]],[[17],[18],[19],[20],[21],[22],[23],[24],[13],[14],[15],[16],[9],[10],[11],[12],[1],[2],[3],[4],[5],[6],[7],[8],[25],[26],[27],[28]]); U:=PBR([[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-17,-21,-13,-4,-8,-12],[-5,-1,-9,-18,-14,-22],[-2,-6,-10,-15,-23,-19],[-24,-16,-20,-11,-3,-7]],[[26],[27],[28],[25],[26],[27],[28],[25],[26],[27],[28],[25],[25],[26],[27],[28],[25],[26],[27],[28],[25],[26],[27],[28],[17,21,13,4,8,12],[5,1,9,18,14,22],[2,6,10,15,23,19],[24,16,20,11,3,7]]); But then, GAP quickly tells us that $\mathbf{M}$ is a monoid consisting of $40$ elements. gap> M:=Semigroup([P,L,U]); gap> Size(M); The Semigroups-package can also compute Green's relations and tells us that there are seven such $R$-classes, four consisting of $6$ elements, two of four, and one of eight elements. These are also visible in the Cayley graph, exactly as last time. Or, if you prefer the cleaner picture of the Cayley graph from the paper Relational poly-Klumpenhouwer networks for transformational and voice-leading analysis by Popoff, Andreatta and Ehresmann. This then allows us to compute the Heyting algebra of the subobject classifier, and all the Grothendieck topologies, at least for the ordinary presheaf topos of $\mathbf{M}$-sets, not for the relational presheaves we need here. We can consider the same binary relation on the larger set of triads when we add the suspended triads. These are the ordered triples in $\mathbb{Z}_{12}$ of the form $(n,n+5,n+7)$, as in the rightmost triangle below. There are twelve suspended chords (numbered from $29$ to $40$), so we now have a binary relation $T$ on a set of $40$ triads. The relation $T$ is too coarse, and the art is to subdivide $T$ is disjoint sub-relations which are musically significant, between major and minor triads, between major/minor and augmented triads, and so on. For each such partition we can then consider the monoids generated by these sub-relations. 
In his paper, Popoff suggest relevant sub-relations $P,L,T_U,T_V$ and $T_U \cup T_V$ of $T$ which in our numbering of the $40$ chords can be represented by these PBR's (assuming I made no mistakes…ADDED march 24th: I did make a mistake in the definition of L, see comment by Alexandre Popoff, below the corect L): P:=PBR([[-13],[-14],[-15],[-16],[-17],[-18],[-19],[-20],[-21],[-22],[-23],[-24],[-1],[-2],[-3],[-4],[-5],[-6],[-7],[-8],[-9],[-10],[-11],[-12],[-25],[-26],[-27],[-28],[-36],[-37],[-38],[-39],[-40],[-29],[-30],[-31],[-32],[-33],[-34],[-35]],[[13],[14],[15],[16],[17],[18],[19],[20],[21],[22],[23],[24],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[25],[26],[27],[28],[34],[35],[36],[37],[38],[39],[40],[29],[30],[31],[32],[33]]); L:=PBR([[-17],[-18],[-19],[-20],[-21],[-22],[-23],[-24],[-13],[-14],[-15],[-16],[-9],[ -10],[-11],[-12],[-1],[-2],[-3],[-4],[-5],[-6],[-7],[-8],[-25],[-26],[-27],[-28],[-29], [-30],[-31],[-32],[-33],[-34],[-35],[-36],[-37],[-38],[-39],[-40]],[[17], [18], [19], [ 20],[21],[22],[23],[24],[13],[14],[15],[16],[9],[10],[11],[12],[1],[2],[3],[4],[5], [6], [7],[8],[25],[26],[27],[28],[29],[30],[31],[32],[33],[34],[35],[36],[37],[38],[39],[40] ]); TU:=PBR([[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-4,-8,-12,-13,-17,-21],[-1,-5,-9,-14,-18,-22],[-2,-6,-10,-15,-19,-23],[-3,-7,-11,-16,-20,-24],[],[],[],[],[],[],[],[],[],[],[],[]],[[26],[27],[28],[25],[26],[27],[28],[25],[26],[27],[28],[25],[25],[26],[27],[28],[25],[26],[27],[28],[25],[26],[27],[28],[4,8,12,13,17,21],[1,5,9,14,18,22],[2,6,10,15,19,23],[3,7,11,16,20,24],[],[],[],[],[],[],[],[],[],[],[],[]]); TV:=PBR([[-29],[-30],[-31],[-32],[-33],[-34],[-35],[-36],[-37],[-38],[-39],[-40],[-36],[-37],[-38],[-39],[-40],[-29],[-30],[-31],[-32],[-33],[-34],[-35],[],[],[],[],[-1,-18],[-2,-19],[-3,-20],[-4,-21],[-5,-22],[-6,-23],[-7,-24],[-8,-13],[-9,-14],[-10,-15],[-11,-16],[-12,-17]],[[29],[30],[31],[32],[33],[34],[35],[36],[37],[38],[39],[40],[36],[37],[38],[39],[40],[29],[30],[31],[32],[33],[34],[35],[],[],[],[],[1,18],[2,19],[3,20],[4,21],[5,22],[6,23],[7,24],[8,13],[9,14],[10,15],[11,16],[12,17]]); TUV:=PBR([[-26,-29],[-27,-30],[-28,-31],[-25,-32],[-26,-33],[-27,-34],[-28,-35],[-25,-36],[-26,-37],[-27,-38],[-28,-39],[-25,-40],[-25,-36],[-26,-37],[-27,-38],[-28,-39],[-25,-40],[-26,-29],[-27,-30],[-28,-31],[-25,-32],[-26,-33],[-27,-34],[-28,-35],[-4,-8,-12,-13,-17,-21],[-1,-5,-9,-14,-18,-22],[-2,-6,-10,-15,-19,-23],[-3,-7,-11,-16,-20,-24],[-1,-18],[-2,-19],[-3,-20],[-4,-21],[-5,-22],[-6,-23],[-7,-24],[-8,-13],[-9,-14],[-10,-15],[-11,-16],[-12,-17]],[[26,29],[27,30],[28,31],[25,32],[26,33],[27,34],[28,35],[25,36],[26,37],[27,38],[28,39],[25,40],[25,36],[26,37],[27,38],[28,39],[25,40],[26,29],[27,30],[28,31],[25,32],[26,33],[27,34],[28,35],[4,8,12,13,17,21],[1,5,9,14,18,22],[2,6,10,15,19,23],[3,7,11,16,20,24],[1,18],[2,19],[3,20],[4,21],[5,22],[6,23],[7,24],[8,13],[9,14],[10,15],[11,16],[12,17]]); The resulting monoids are huge: gap> G:=Semigroup([P,L,TU,TV]); gap> Size(G); gap> H:=Semigroup([P,L,TUV]); gap> Size(H); In Popoff's paper these monoids have sizes respectively $473,293$ and $994,624$. Strangely, the offset is in both cases $144=12^2$. (Added march 24: with the correct L I get the same sizes as in Popoff's paper). Perhaps we should try to transform such relational presheaves to ordinary presheaves. 
One approach is to use the Grothendieck construction and associate to a set with such a relational monoid action a directed graph, coloured by the elements of the monoid. That is, an object in the presheaf topos of the category \[ \xymatrix{C & E \ar[l]^c \ar@/^2ex/[r]^s \ar@/_2ex/[r]_t & V} \] and then we should consider the slice topos over the one-vertex bouquet graph with one loop for each element in the monoid.

If you want to have more details on the musical side of things, for example if you want to know what the opening twelve chords of "Take a Bow" by Muse have to do with the Cube Dance graph, here are some more papers:
A categorical generalization of Klumpenhouwer networks, A. Popoff, M. Andreatta and A. Ehresmann.
From K-nets to PK-nets: a categorical approach, A. Popoff, M. Andreatta and A. Ehresmann.
From a Categorical Point of View: K-Nets as Limit Denotators, G. Mazzola and M. Andreatta.

Published March 8, 2022 by lievenlb

Last time, we've viewed major and minor triads (chords) as inscribed triangles in a regular $12$-gon. If we move clockwise along the $12$-gon, starting from the endpoint of the longest edge (the root of the chord, here the $0$-vertex) the edges skip $3,2$ and $4$ vertices (for a major chord, here on the left the major $0$-chord) or $2,3$ and $4$ vertices (for a minor chord, here on the right the minor $0$-chord). The symmetries of the $12$-gon, the dihedral group $D_{12}$, act on the $24$ major- and minor-chords transitively, preserving the type for rotations, and interchanging majors with minors for reflections. Mathematical Music Theoreticians (MaMuTh-ers for short) call this the $T/I$-group, and view the rotations of the $12$-gon as transpositions $T_k : x \mapsto x+k~\text{mod}~12$, and the reflections as involutions $I_k : x \mapsto -x+k~\text{mod}~12$. Note that the elements of the $T/I$-group act on the vertices of the $12$-gon, from which the action on the chord-triangles follows.

There is another action on the $24$ major and minor chords, mapping a chord-triangle to its image under a reflection in one of its three sides. Note that in this case the reflection $I_k$ used will depend on the root of the chord, so this action on the chords does not come from an action on the vertices of the $12$-gon. There are three such operations: (pictures are taken from Alexandre Popoff's blog, with the 'funny names' removed)
The $P$-operation is reflection in the longest side of the chord-triangle. As the longest side is preserved, $P$ interchanges the major and minor chord with the same root.
The $L$-operation is reflection in the shortest side. This operation interchanges a major $k$-chord with a minor $k+4~\text{mod}~12$-chord.
Finally, the $R$-operation is reflection in the middle side. This operation interchanges a major $k$-chord with a minor $k+9~\text{mod}~12$-chord.
From this it is already clear that the group generated by $P$, $L$ and $R$ acts transitively on the $24$ major and minor chords, but what is this $PLR$-group?
If we label the major chords by their root-vertex $1,2,\dots,12$ (GAP doesn't like zeroes), and the corresponding minor chords $13,14,\dots,24$, then these operations give these permutations on the $24$ chords:
P:=(1,13)(2,14)(3,15)(4,16)(5,17)(6,18)(7,19)(8,20)(9,21)(10,22)(11,23)(12,24)
L:=(1,17)(2,18)(3,19)(4,20)(5,21)(6,22)(7,23)(8,24)(9,13)(10,14)(11,15)(12,16)
R:=(1,22)(2,23)(3,24)(4,13)(5,14)(6,15)(7,16)(8,17)(9,18)(10,19)(11,20)(12,21)
Then GAP gives us that the $PLR$-group is again isomorphic to $D_{12}$:
gap> G:=Group(P,L,R);;
gap> IsDihedralGroup(G);
In fact, if we view both the $T/I$-group and the $PLR$-group as subgroups of the symmetric group $Sym(24)$ via their actions on the $24$ major and minor chords, these groups are each other's centralizers! That is, the $T/I$-group and $PLR$-group are dual to each other. For more on this, there's a beautiful paper by Alissa Crans, Thomas Fiore and Ramon Satyendra: Musical Actions of Dihedral Groups.

What does this new MaMuTh info teach us about our Elephant, the Topos of Triads, studied by Thomas Noll? Last time we've seen the eight element triadic monoid $T$ of all affine maps preserving the three tones $\{ 0,4,7 \}$ of the major $0$-chord, computed the subobject classifier $\Omega$ of the corresponding topos of presheaves, and determined all its six Grothendieck topologies, among which were these three: Why did we label these Grothendieck topologies (and corresponding elements of $\Omega$) by $P$, $L$ and $R$?

We've seen that the sheafification of the presheaf $\{ 0,4,7 \}$ in the triadic topos under the Grothendieck topology $j_P$ gave us the sheaf $\{ 0,3,4,7 \}$, and these are the tones of the major $0$-chord together with those of the minor $0$-chord, that is the two chords in the $\langle P \rangle$-orbit of the major $0$-chord. The group $\langle P \rangle$ is the cyclic group $C_2$. For the sheafification with respect to $j_L$ we found the $T$-set $\{ 0,3,4,7,8,11 \}$ which are the tones of the major and minor $0$-, $4$-, and $8$-chords. Again, these are exactly the six chords in the $\langle P,L \rangle$-orbit of the major $0$-chord. The group $\langle P,L \rangle$ is isomorphic to $Sym(3)$. The $j_R$-topology gave us the $T$-set $\{ 0,1,3,4,6,7,9,10 \}$ which are the tones of the major and minor $0$-, $3$-, $6$-, and $9$-chords, and lo and behold, these are the eight chords in the $\langle P,R \rangle$-orbit of the major $0$-chord. The group $\langle P,R \rangle$ is the dihedral group $D_4$. More on this can be found in the paper Commuting Groups and the Topos of Triads by Thomas Fiore and Thomas Noll.

The operations $P$, $L$ and $R$ on major and minor chords are reflexions in one side of the chord-triangle, so they preserve two of the three tones. There's a distinction between the $P$ and $L$ operations and $R$ when it comes to how the third tone changes. Under $P$ and $L$ the third tone changes by one halftone (because the corresponding sides skip an even number of vertices), whereas under $R$ the third tone changes by two halftones (a full tone), see the pictures above. The $\langle P,L \rangle = Sym(3)$ subgroup divides the $24$ chords into four orbits of six chords each, three major chords and their corresponding minor chords. These orbits consist of
the $0$-, $4$-, and $8$-chords (see before)
the $1$-, $5$-, and $9$-chords
the $2$-, $6$-, and $10$-chords
the $3$-, $7$-, and $11$-chords
and we can view each of these orbits as a cycle tracing six of the eight vertices of a cube with one pair of antipodal points removed.
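Both the orbit decomposition and the earlier duality claim are easy to confirm in GAP (quick checks of my own, reusing the permutations $P$, $L$, $R$ above):
gap> orbs := Orbits(Group(P,L), [1..24]);;
gap> List(orbs, Length);
gap> C := Centralizer(SymmetricGroup(24), Group(P,L,R));;
gap> Size(C); IsDihedralGroup(C);
The first list should consist of four sixes, and the centralizer should come out as a dihedral group of order $24$: this is the $T/I$-group in its action on the chords, as in the Crans-Fiore-Satyendra paper.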
These four 'almost' cubes are the NE-, SE-, SW-, and NW-regions of the Cube Dance Graph, from the paper Parsimonious Graphs by Jack Douthett and Peter Steinbach. To translate the funny names to our numbers, use this dictionary (major chords are given by a capital letter): The four extra chords (at the N, E, S, and W places) are augmented triads. They correspond to the triads $(0,4,8),~(1,5,9),~(2,6,10)$ and $(3,7,11)$. That is, two triads are connected by an edge in the Cube Dance graph if they share two tones and differ by a halftone in the third tone. This graph screams for a group or monoid acting on it. Some of the edges we've already identified as the action of $P$ and $L$ on the $24$ major and minor triads. Because the triangle of an augmented triad is equilateral, we see that they are preserved under $P$ and $L$. But what about the edges connecting the regular triads to the augmented ones? If we view each edge as two directed arrows assigned to the same operation, we cannot do this with a transformation because the operation sends each augmented triad to six regular triads. Alexandre Popoff, Moreno Andreatta and Andrée Ehresmann suggest in their paper Relational poly-Klumpenhouwer networks for transformational and voice-leading analysis that one might use a monoid generated by relations, and they show that there is such a monoid with $40$ elements acting on the Cube Dance graph. Popoff claims that usual presheaf toposes, that is, contravariant functors to $\mathbf{Sets}$, are not enough to study transformational music theory. He suggests to use instead functors to $\mathbf{Rel}$, that is, sets with binary relations as the morphisms, and their compositions. Another Elephant enters the room…

From Mamuth to Elephant

Here, MaMuTh stands for Mathematical Music Theory which analyses the pitch, timing, and structure of works of music. The Elephant is the nickname for the 'bible' of topos theory, Sketches of an Elephant: A Topos Theory Compendium, a two (three?) volume book, written by Peter Johnstone. How can musical illiterates such as myself get as quickly as possible from the MaMuTh to the Elephant?

What Mamuth-ers call a pitch class (sounds that are a whole number of octaves apart), is for us a residue modulo $12$, as an octave is usually divided into twelve (half)tones. We'll just denote them by numbers from $0$ to $11$, or view them as the vertices of a regular $12$-gon, and forget the funny names given to them, as there are several such encodings, and we don't know a $G$ from a $D\#$. Our regular $12$-gon has exactly $24$ symmetries. Twelve rotations, which they call transpositions, given by the affine transformations \[ T_k~:~x \mapsto x+k~\text{mod}~12 \] and twelve reflexions, which they call involutions, given by \[ I_k~:~x \mapsto -x+k~\text{mod}~12 \] What for us is the dihedral group $D_{12}$ (all symmetries of the $12$-gon), is for them the $T/I$-group (for transpositions/involutions).

Let's move from individual notes (or pitch classes) to chords (or triads), that is, three notes played together. Not all triples of notes sound nice when played together, that's why the most commonly played chords are among the major and minor triads. A major triad is an ordered triple of elements from $\mathbb{Z}_{12}$ of the form \[ (n,n+4~\text{mod}~12,n+7~\text{mod}~12) \] and a minor triad is an ordered triple of the form \[ (n,n+3~\text{mod}~12,n+7~\text{mod}~12) \] where the first entry $n$ is called the root of the triad (or chord) and its funny name is then also the name of that chord.
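Concretely, a two-line GAP illustration of my own (not from the post) lists all these triples at once and confirms that there are $24$ distinct major and minor triads:
gap> majors := List([0..11], n -> [ n, (n+4) mod 12, (n+7) mod 12 ]);;
gap> minors := List([0..11], n -> [ n, (n+3) mod 12, (n+7) mod 12 ]);;
gap> Length(Set(Concatenation(majors, minors)));
These are exactly the triangles we are about to inscribe in the $12$-gon.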
For us, it is best to view a triad as an inscribed triangle in our regular $12$-gon. The triangles of major and minor triads have edges of different lengths, a small one, a middle, and a large one. Starting from the root, and moving clockwise, we encounter in a major chord-triangle first the middle edge, then the small edge, and finally the large edge. For a minor chord-triangle, we have first the small edge, then the middle one, and finally the large edge. On the left, two major triads, one with root $0$, the other with root $6$. On the right, two minor triads, also with roots $0$ and $6$. (Btw. if you are interested in the full musical story, I strongly recommend the alpof blog by Alexandre Popoff, from which the above picture is taken.) Clearly, there are $12$ major triads (one for each root), and $12$ minor triads. From the shape of the triad-triangles it is also clear that rotations (transpositions) send major triads to major triads (and minors to minors), and that reflexions (involutions) interchange major with minor triads. That is, the dihedral group $D_{12}$ (or if you prefer the $T/I$-group) acts on the set of $24$ major and minor triads, and this action is transitive (an element stabilising a triad-triangle must preserve its type (so is a rotation) and its root (so must be the identity)).

Can we hear the action of the very special group element $T_6$ (the unique non-trivial central element of $D_{12}$) on the chords? This action is not only the transposition by three full tones, but also a point-reflexion with respect to the center of the $12$-gon (see two examples in the picture above). This point reflexion can be compositionally meaningful to refer to two very different upside-down worlds. In It's $T_6$-day, Alexandre Popoff gives several examples. Here's one of them, the Ark theme in Indiana Jones – Raiders of the Lost Ark. "The $T_6$ transformation is heard throughout the map room scene (in particular at 2:47 in the video): that the ark is a dreadful object from a very different world is well rendered by the $T_6$ transposition, with its inherent tritone and point reflection."

Let's move on in the direction of the Elephant. We saw that the only affine map of the form $x \mapsto \pm x + k$ fixing say the major $0$-triad $(0,4,7)$ is the identity map. But, we can ask for the collection of all affine maps $x \mapsto a x + b$ fixing this major $0$-triad set-wise, that is, such that \[ \{ b, 4a+b~\text{mod}~12, 7a+b~\text{mod}~12 \} \subseteq \{ 0,4,7 \} \] A quick case-by-case analysis shows that there are just eight such maps: the identity and the constant maps \[ x \mapsto x,~x \mapsto 0,~x \mapsto 4, ~x \mapsto 7 \] and the four maps \[ \underbrace{x \mapsto 3x+7}_a,~\underbrace{x \mapsto 8x+4}_b,~x \mapsto 9x+4,~x \mapsto 4x \] Compositions of such maps again preserve the set $\{ 0,4,7 \}$ so they form a monoid, and a quick inspection with GAP shows that $a$ and $b$ generate this monoid.
gap> a:=Transformation([10,1,4,7,10,1,4,7,10,1,4,7]);;
gap> b:=Transformation([12,8,4,12,8,4,12,8,4,12,8,4]);;
gap> gens:=[a,b];;
gap> T:=Monoid(gens);
gap> Size(T);
The monoid $T$ is the triadic monoid of Thomas Noll's paper The topos of triads. The monoid $T$ can be seen as a one-object category (with endomorphisms the elements of $T$). The corresponding presheaf topos is then the category of all sets equipped with a right $T$-action. Actually, Noll considers just one such presheaf (and its sub-presheaves) namely $\mathcal{F}=\mathbb{Z}_{12}$ with the action of $T$ by affine maps described before.
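By the way, the case-by-case analysis above can also be left to GAP. A one-line check of my own, running over all $144$ affine maps $x \mapsto ax+b$ on $\mathbb{Z}_{12}$ and keeping those that map $\{0,4,7\}$ into itself:
gap> Filtered(Cartesian([0..11], [0..11]),
>      p -> IsSubset([0,4,7], Set([0,4,7], x -> (p[1]*x + p[2]) mod 12)));
It returns eight pairs $[a,b]$, among them $[3,7]$ and $[8,4]$, the generators $a$ and $b$.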
He is interested in the sheafifications of these presheaves with respect to Grothendieck topologies, so we have to describe those. For any monoid category, the subobject classifier $\Omega$ is the set of all right ideals in the monoid. Using the GAP sgpviz package we can draw its Cayley graph (red coloured vertices are idempotents in the monoid, the blue vertex is the identity map).
gap> DrawCayleyGraph(T);
The elements of $T$ (vertices) which can be connected by oriented paths (in both ways) in the Cayley graph, such as here $\{ 2,4 \}$, $\{ 3,7 \}$ and $\{ 5,6,8 \}$, will generate the same right ideal in $T$, so distinct right ideals are determined by unidirectional arrows, such as from $1$ to $2$ and $3$ or from $\{ 2,4 \}$ to $5$, or from $\{ 3,7 \}$ to $6$. This gives us that $\Omega$ consists of the following six elements:
$0 = \emptyset$
$C = \{ 5,6,8 \} = a.T \wedge b.T$
$L = \{ 2,4,5,6,8 \}=a.T$
$R = \{ 3,7,5,6,8 \}=b.T$
$P = \{ 2,3,4,5,6,7,8 \}=a.T \vee b.T$
$1 = T$
As a subobject classifier $\Omega$ is itself a presheaf, so what is the action of the triad monoid $T$ on it? For all $A \in \Omega$, and $s \in T$ the action is given by $A.s = \{ t \in T | s.t \in A \}$ and it can be read off from the Cayley-graph. $\Omega$ is a Heyting algebra whose inclusions and logical operations can be summarised in the picture below, using the Hexboards and Heytings-post.

In this case, Grothendieck topologies coincide with Lawvere-Tierney topologies, which come from closure operators $j~:~\Omega \rightarrow \Omega$ which are order-increasing, idempotent, and compatible with the $T$-action and with the $\wedge$, that is,
if $A \leq B$, then $j(A) \leq j(B)$
$j(j(A)) = j(A)$
$j(A).t=j(A.t)$
$j(A \wedge B) = j(A) \wedge j(B)$
Colouring all cells with the same $j$-value alike, and remaining cells $A$ with $j(A)=A$ coloured yellow, we have six such closure operations $j$, that is, Grothendieck topologies.

The triadic monoid $T$ acts via affine transformations on the set of pitch classes $\mathbb{Z}_{12}$ and we've defined it such that it preserves the notes $\{ 0,4,7 \}$ of the major $(0,4,7)$-chord, that is, $\{ 0,4,7 \}$ is a subobject of $\mathbb{Z}_{12}$ in the topos of $T$-sets. The point of the subobject classifier $\Omega$ is that morphisms to it classify subobjects, so there must be a $T$-equivariant map $\chi$ making the diagram commute (vertical arrows are the natural inclusions) \[ \xymatrix{\{ 0,4,7 \} \ar[r] \ar[d] & 1 \ar[d] \\ \mathbb{Z}_{12} \ar[r]^{\chi} & \Omega} \] What does the morphism $\chi$ do on the other pitch classes? Well, it sends an element $k \in \mathbb{Z}_{12} = \{ 1,2,\dots,12=0 \}$ to
$1$ iff $k \in \{ 0,4,7 \}$
$P$ iff $a(k)$ and $b(k)$ are in $\{ 0,4,7 \}$
$L$ iff $a(k) \in \{ 0,4,7 \}$ but $b(k)$ is not
$R$ iff $b(k) \in \{ 0,4,7 \}$ but $a(k)$ is not
$C$ iff neither $a(k)$ nor $b(k)$ is in $\{ 0,4,7 \}$
Remember that $a$ and $b$ are the transformations (images of $(1,2,\dots,12)$)
a:=Transformation([10,1,4,7,10,1,4,7,10,1,4,7]);;
b:=Transformation([12,8,4,12,8,4,12,8,4,12,8,4]);;
so we see that
$0,4,7$ are mapped to $1$
$3$ is mapped to $P$
$8,11$ are mapped to $L$
$1,6,9,10$ are mapped to $R$
$2,5$ are mapped to $C$
Finally, we can compute the sheafification of the sub-presheaf $\{ 0,4,7 \}$ of $\mathbb{Z}_{12}$ with respect to a Grothendieck topology $j$: it consists of the set of those $k \in \mathbb{Z}_{12}$ such that $j(\chi(k)) = 1$.
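As a quick sanity check of this list (my own verification, not in the original post), one can apply $a$ and $b$ to every point and test membership in $\{4,7,12\}$, the GAP incarnation of the tones $\{0,4,7\}$:
gap> chord := [4, 7, 12];;    # the tones 4, 7 and 0 (GAP calls 0 '12')
gap> List([1..12], k -> [ k, k^a in chord, k^b in chord ]);
Both booleans are true exactly for $k$ in $\{3,4,7,12\}$ (the points $0,4,7$ go to $1$, and $3$ to $P$), only the $a$-test succeeds for $8,11$, only the $b$-test for $1,6,9,10$, and neither for $2,5$, reproducing the classification above.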
The musically interesting Grothendieck topologies are $j_P, j_L$ and $j_R$ with corresponding sheaves:
For $j_P$ we get the sheaf $\{ 0,3,4,7 \}$ which Mamuth-ers call a Major-Minor Mixture as these are the notes of both the major and minor $0$-triads
For $j_L$ we get $\{ 0,3,4,7,8,11 \}$ which is an example of a Hexatonic scale (six notes), here they are the notes of the major and minor $0,~4$ and $8$-triads
For $j_R$ we get $\{ 0,1,3,4,6,7,9,10 \}$ which is an example of an Octatonic scale (eight notes), here they are the notes of the major and minor $0,~3,~6$ and $9$-triads
We could have played the same game starting with the three notes of any other major triad. Those in the know will have noticed that so far I've avoided another incarnation of the dihedral $D_{12}$ group in music, namely the $PLR$-group, which explains the notation for the elements of the subobject classifier $\Omega$, but this post is already way too long.

Hexboards and Heytings

Published February 26, 2022 by lievenlb

A couple of days ago, Peter Rowlett posted on The Aperiodical: Introducing hexboard – a LaTeX package for drawing games of Hex. Hex is a strategic game with two players (Red and Blue) taking turns placing a stone of their color onto any empty space. A player wins when they successfully connect their sides together through a chain of adjacent stones. Here's a short game on a $5 \times 5$ board (normal play uses $11\times 11$ boards), won by Blue, drawn with the LaTeX-package hexboard. As much as I like mathematical games, I want to use the versatility of the hexboard-package for something entirely different: drawing finite Heyting algebras in which it is easy to visualise the logical operations.

Every full hexboard is a poset with minimal cell $0$ and maximal cell $1$, where cell-values increase as we move horizontally to the right or diagonally to the upper-right. With respect to this order, $p \vee q$ is the smallest cell bigger than both $p$ and $q$, and $p \wedge q$ is the largest cell smaller than $p$ and $q$. The implication $p \Rightarrow q$ is the largest cell $r$ such that $r \wedge p \leq q$, and the negation $\neg p$ stands for $p \Rightarrow 0$. With these operations, the full hexboard becomes a Heyting algebra.

Now the fun part. Every filled area of the hexboard, bordered above and below by a string of strictly increasing cells from $0$ to $1$ is also a Heyting algebra, with the induced ordering, and with the logical operations defined similarly. Note that this need not be a sub-Heyting algebra as the operations may differ. Here, we have a different value for $p \Rightarrow q$, and $\neg p$ is now $0$. If you're in for an innocent "Where is Wally?"-type puzzle: $W = (\neg \neg p \Rightarrow p)$. Click on the image to get the solution.

The downsets in these posets can be viewed as the open sets of a finite topology, so these Heyting algebra structures come from the subobject classifier of a topos. There are more interesting toposes with subobject classifier determined by such hex-Heyting algebras. For example, the Topos of Triads of Thomas Noll in music theory has as its subobject classifier the hex-Heyting algebra (with cell-values as in the paper): Note to self: why not write a couple of posts on this topos? Another example: the category of all directed graphs is the presheaf topos of the two object category ($V$ for vertices, and $E$ for edges) with (apart from the identities) just two morphisms $s,t : V \rightarrow E$ (for start- and end-vertex of a directed edge).
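Since the accompanying picture is not reproduced here, let me spell the two relevant Heyting algebras out (a standard computation, in my own wording rather than the post's). A sieve on $V$ is a set of morphisms with codomain $V$ closed under precomposition, and the only such morphism is $id_V$, so $\Omega(V)$ has just two elements, $\emptyset$ and $\{ id_V \}$ ('vertex outside the subgraph' versus 'vertex inside'). The morphisms with codomain $E$ are $id_E$, $s$ and $t$, and a sieve containing $id_E$ must contain $s$ and $t$ as well, so the sieves on $E$ are \[ \emptyset,\ \{ s \},\ \{ t \},\ \{ s,t \},\ \{ id_E, s, t \}, \] a five-element Heyting algebra recording whether an edge lies in a subgraph, or fails to while both, only one, or none of its two vertices do.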
The subobject classifier $\Omega$ of this topos is determined by the two Heyting algebras $\Omega(E)$ and $\Omega(V)$ below. These 'hex-Heyting algebras' are exactly what Eduardo Ochs calls 'planar Heyting algebras'. Eduardo has a very informative research page, containing slides and handouts of talks in which he tries to explain topos theory to "children" (using these planar Heyting algebras) including: Sheaves for children Planar Heyting algebras for children Logic for children Grothendieck topologies for children Perhaps now is a good time to revive my old sga4hipsters-project. Learners' logic In the Learners and Poly-post we've seen that learners from $A$ to $B$ correspond to set-valued representations of a directed graph $G$ and therefore form a presheaf topos. Any topos comes with its Mitchell-Benabou language, allowing us to speak of formulas, propositions and their truth values. Two objects play a special role in this: the terminal object $\mathbf{1}$, and the subobject classifier $\mathbf{\Omega}$. It is a fun exercise to determine these special learners. $T$ is the free rooted tree with branches sprouting from every node $n \in T_0$ for each element in $A \times B$. $C$ will be our set of colours, one for each element of $Maps(A,B) \times Maps(A \times B,A)$. For every map $\lambda : T_0 \rightarrow C$ we get a coloured rooted tree $T_{\lambda}$, and for each branch $(a,b)$ from the root we get another rooted sub-tree $T_{\lambda}(a,b)$ which is again of the form $T_{\mu}$ for a certain map $\mu : T_0 \rightarrow C$. The directed graph $G$ has a vertex $v_{\lambda} \in V$ for each coloured rooted tree $T_{\lambda}$ and a directed edge $v_{\lambda} \rightarrow v_{\mu}$ if $T_{\mu}$ is the isomorphism class of coloured rooted trees of the subtree $T_{\lambda}(a,b)$ for some $(a,b) \in A \times B$. There are exactly $\# A \times B$ directed edges leaving every vertex in $G$, but there may be (many) more incoming edges. We can colour each vertex $v_{\lambda}$ with the colour of the root of $T_{\lambda}$. The coloured directed graph $G$ depicts the learning process in a neural network, being trained to find a suitable map $A \rightarrow B$. The colour of a vertex $v_{\lambda}$ gives a map $f \in Maps(A,B)$ (and a request function). If the network now gives as output $b \in B$ for a given input $a \in A$, we can move on to the end-vertex $v_{\mu}$ of the directed edge labeled $(a,b)$ out of $v_{\lambda}$. The colour of $v_{\mu}$ gives us a new (hopefully improved) map $f_{new} \in Maps(A,B)$ (and a new request function). A new training data $(a',b')$ brings us to a new vertex and map, and so on. Clearly, some parts of $G$ are more efficient to find the desired map than others, and the aim of the game is to distinguish efficient from inefficient learners. A first hint that Grothendieck topologies and their corresponding sheafifications will turn out to be important. We've seen that a learner, that is a morphism $Py^P \rightarrow C y^{A \times B}$ in $\mathbf{Poly}$, assigns a set $P_{\lambda}$ to every vertex $v_{\lambda}$ (this set may be empty) and a map $P_{\lambda} \rightarrow P_{\mu}$ to every directed edge $v_{\lambda} \rightarrow v_{\mu}$ in $G$. The terminal object $\mathbf{1}$ in this setting assigns to each vertex a singleton $\{ \ast \}$, and the obvious maps for each directed edge. 
In $\mathbf{Poly}$-speak, the terminal object is the morphism \[ \mathbf{1}~:~V y^V \rightarrow C y^{A \times B} \] which sends each vertex $v_{\lambda} \in V$ to its colour $c \in C$, and where the backtrack map $\varphi^{\#}_{v_{\lambda}}[c]$ maps $(a,b)$ to $v_{\mu}$ if this is the end-vertex of the edge labelled $(a,b)$ out of $v_{\lambda}$. That is, $\mathbf{1}$ contains all information about the coloured directed graph $G$.

The subobject classifier $\mathbf{\Omega}$ assigns to each vertex $v_{\lambda}$ the set $\mathbf{\Omega}(v_{\lambda})$ of all subsets $S$ of directed paths in $G$, starting at $v_{\lambda}$, such that if $p \in S$ then also all prolongated paths belong to $S$. Note that the empty set $\emptyset$ satisfies this requirement, so is an element of this vertex set. Another special element in $\mathbf{\Omega}(v_{\lambda})$ is the set $\mathbf{1}_{\lambda}$ of all oriented paths starting at $v_{\lambda}$. $\mathbf{\Omega}(v_{\lambda})$ is a Heyting algebra with $1=\mathbf{1}_{\lambda}$, $0 = \emptyset$, partially ordered via inclusion, and logical operations $\wedge$ (intersection), $\vee$ (union), $\neg$ (with $\neg S$ the largest $S' \in \mathbf{\Omega}(v_{\lambda})$ disjoint from $S$) and $\Rightarrow$ defined by $S \Rightarrow S'$ is the union of all $S" \in \mathbf{\Omega}(v_{\lambda})$ such that $S" \cap S \subseteq S'$. $S \vee \neg S$ is not always equal to $1$. Here, the union misses the left edge from the root. So, we will not be able to prove things by contradiction.

If $v_{\lambda} \rightarrow v_{\mu}$ is the directed edge labeled $(a,b)$, then the corresponding map $\mathbf{\Omega}(v_{\lambda}) \rightarrow \mathbf{\Omega}(v_{\mu})$ takes an $S \in \mathbf{\Omega}(v_{\lambda})$, drops all paths which do not pass through $v_{\mu}$ and removes the initial edge $(a,b)$ from those that do. If no paths in $S$ pass through $v_{\mu}$ then $S$ is mapped to $\emptyset \in \mathbf{\Omega}(v_{\mu})$. If $\Omega = \bigsqcup_{\lambda} \mathbf{\Omega}(v_{\lambda})$ then the subobject classifier is the morphism in $\mathbf{Poly}$ \[ \mathbf{\Omega}~:~\Omega y^{\Omega} \rightarrow C y^{A \times B} \] sending a path starting in $v_{\lambda}$ to the colour of $v_{\lambda}$, with the backtrack map of $(a,b)$ sending the path to its image under the map $\mathbf{\Omega}(v_{\lambda}) \rightarrow \mathbf{\Omega}(v_{\mu})$.

Ok, let's define the Learner's Mitchell-Benabou language. We'll view a learner $Py^P \rightarrow C y^{A \times B}$ as a set-valued representation $P$ of the directed graph $G$ with vertex set $P_{\lambda}$ placed at vertex $v_{\lambda}$. A formula $\phi(p)$ of the language with a free variable $p$ is a morphism (of representations of $G$) from a learner $P$ to the subobject classifier \[ \phi~:~P \rightarrow \mathbf{\Omega} \] Such a morphism determines a sub-representation of $P$ which we can denote $\{ p | \phi(p) \}$ with vertex sets \[ \{ p | \phi(p) \}_{\lambda} = \{ p \in P_{\lambda}~|~\phi(v_{\lambda})(p) = \mathbf{1}_{\lambda} \} \] On formulas we can apply logical connectives to get more formulas. For example, the formula $\phi(p) \Rightarrow \psi(q)$ is the composition \[ P \times Q \rightarrow^{\phi \times \psi} \mathbf{\Omega} \times \mathbf{\Omega} \rightarrow^{\Rightarrow} \mathbf{\Omega} \] By quantifying all free variables we get a formula without free variables, and those correspond to morphisms $\mathbf{1} \rightarrow \mathbf{\Omega}$, that is, to sub-representations of the terminal object $\mathbf{1}$.
For example, if $\phi(p)$ is the formula with free variable $p$ corresponding to the morphism $\phi : P \rightarrow \mathbf{\Omega}$, then we have \[ \forall p : \phi(p) = \{ v_{\lambda} \in V~|~\{ p | \phi(p) \}_{\lambda} = P_{\lambda} \} \] \[ \exists p : \phi (p) = \{ v_{\lambda} \in V~|~\{ p | \phi(p) \}_{\lambda} \not= \emptyset \} \] Sub-representations of $\mathbf{1}$ again form a Heyting-algebra in the obvious way, so we can assign a "truth-value" to a formula without free variables as that sub-object of $\mathbf{1}$. There's a lot more to say, so perhaps this will be continued.

Every topos has its own internal language, the so-called Mitchell-Bénabou language, allowing us to speak about formulas and their truth values. Sadly, Jean Bénabou died last week. Here's a nice interview with Bénabou (in French) on category theory, Grothendieck, logic, and a rant on plagiarism among topos theorists (starting at 1:00:16).

Yesterday, France Culture's 'La methode scientifique' hosted Alain Connes, Laurent Lafforgue and Olivia Caramello in a special programme Grothendieck: la moisson (Grothendieck, the harvest), dedicated to the recent publication of 'Récoltes et Semailles'. An interesting item is 'le reportage du jour' by Céline Loozen in which she manages to have a look at the 60.000 pages of Grothendieck's Lasserre notes, stocked in the cellars of the Librairie Alain Brieux, and talks to Jean-Bernard Gillot who is commissioned by Grothendieck's son to appraise the work (starts at 36:40). Perhaps the publication of 'Récoltes et Semailles' is part of a deal with the family to make these notes available, at last. Towards the end of the programme Connes, Caramello and Lafforgue lament that topos theory is still not taken seriously by the mathematical community at large, whereas it is welcomed warmly by the engineers of Huawei. In more topos news, I learn from the blog of Olivia Caramello, that Laurent Lafforgue is going to give an online course on toposes as 'bridges' at the University of Warwick, the first talk starts today at 14hrs London time.

The hype cycle of an idea

These three ideas (re)surfaced over the last two decades, claiming to have potential applications to major open problems:
(2000) $\mathbb{F}_1$-geometry tries to view $\mathbf{Spec}(\mathbb{Z})$ as a curve over the field with one element, and mimic Weil's proof of RH for curves over finite fields to prove the Riemann hypothesis.
(2012) IUTT, for Inter Universal Teichmuller Theory, the machinery behind Mochizuki's claimed proof of the ABC-conjecture.
(2014) topos theory: Connes and Consani redirected their RH-attack using arithmetic sites, while Lafforgue advocated the use of Caramello's bridges for unification, in particular the Langlands programme.
It is difficult to voice an opinion about the (presumed) current state of such projects without being accused of being either a believer or a skeptic, resorting to group-think or being overly critical. We lack the vocabulary to talk about the different phases a mathematical idea might be in. Such a vocabulary exists in (information) technology, the five phases of the Gartner hype cycle to represent the maturity, adoption, and social application of a certain technology:
Technology Trigger
Peak of Inflated Expectations
Trough of Disillusionment
Slope of Enlightenment
Plateau of Productivity
This model can then be used to gauge in which phase several emerging technologies are, and to estimate the time it will take them to reach the stable plateau of productivity.
Here's Gartner's recent Hype Cycle for emerging Artificial Intelligence technologies. Picture from Gartner Hype Cycle for AI 2021 What might these phases be in the hype cycle of a mathematical idea? Technology Trigger: a new idea or analogy is dreamed up, marketed to be the new approach to that problem. A small group of enthusiasts embraces the idea, and tries to supply proper definitions and the very first results. Peak of Inflated Expectations: the idea spreads via talks, blogposts, mathoverflow and twitter, and now has enough visibility to justify the first conferences devoted to it. However, all this activity does not result in major breakthroughs and doubt creeps in. Trough of Disillusionment: the project ran out of steam. It becomes clear that existing theories will not lead to a solution of the motivating problem. Attempts by key people to keep the idea alive (by lengthy papers, regular meetings or seminars) no longer attract new people to the field. Slope of Enlightenment: the optimistic scenario. One abandons the original aim, ditches the myriad of theories leading nowhere, regroups and focusses on the better ideas the project delivered. A negative scenario is equally possible. Apart for a few die-hards the idea is abandoned, and on its way to the graveyard of forgotten ideas. Plateau of Productivity: the polished surviving theory has applications in other branches and becomes a solid tool in mathematics. It would be fun so see more knowledgable people draw such a hype cycle graph for recent trends in mathematics. Here's my own (feeble) attempt to gauge where the three ideas mentioned at the start are in their cycles, and here's why: IUTT: recent work of Kirti Joshi, for example this, and this, and that, draws from IUTT while using conventional language and not making exaggerated claims. $\mathbb{F}_1$: the preliminary programme of their seminar shows little evidence the $\mathbb{F}_1$-community learned from the past 20 years. Topos: Developing more general theory is not the way ahead, but concrete examples may carry surprises, even though Gabriel's topos will remain elusive. Clearly, you don't agree, and that's fine. We now have a common terminology, and you can point me to results or events I must have missed, forcing me to redraw my graph. Chevalley's circle of friends Published February 8, 2022 by lievenlb Last week, Danielle Couty ArXiVed her paper Friendly views on Claude Chevalley (in French). From the abstract: "We propose to follow the itinerary of Claude Chevalley during the last twenty years of his life, through the words of Jacques Roubaud, Denis Guedj and Alexander Grothendieck. Our perspective is that of their testimonies filled with friendship." Claude Chevalley was one of the founding fathers of Bourbaki. Two of the four pre-WW2 Bourbaki-congresses were held in "La Massoterie", the Chevalley family domain in Chancay (see this post, update: later I learned from Liliane Beaulieu that the original house was destroyed by fire). In 1938 he left for Princeton and stayed there during the war, making it impossible to return to a position in France for a very long time. Only in 1957 he could return to Paris where he led a seminar which proved to be essential for the development of algebraic groups and algebraic geometry. Picture from N. Bourbaki, an interview with C. 
Chevalley The Couty paper focusses on the post-1968 period in which Chevalley distanced himself from Bourbaki (some of its members, he thought, had become 'mandarins' and 'reactionaires'), became involved with the ecological movement 'Survivre et vivre' and started up the maths department of a new university at Vincennes. The paper is based on the recollections of three of his friends. 1. Jacques Roubaud is a French poet, writer and mathematician. On this blog you may have run into Roubaud as the inventor of Bourbaki's death announcement, and the writer of the book with title $\in$. He's also a member of Oulipo, a loose gathering of (mainly) French-speaking writers and mathematicians. Famous writers such as Georges Perec and Italo Calvino were also Oulipo-members (see also Ouilpo's use of the Tohoku paper). Chevalley introduced Roubaud (and others) to the game of Go. From Couty's paper this quote from Roubaud (G-translated): ". . . it turns out that he had learned to play go in Japan and then, in Paris, he could not find a player […] I played go with him […] and then at a certain moment , we thought, Pierre Lusson and myself, it would still be good to create circumstances such that Chevalley could have players. And so, we had a lot of ambition, we said to ourselves: "We're going to write a treatise on go, and then lots of people will start playing go". » The resulting Go-book is A short treatise inviting the reader to discover the subtle art of Go. Here's Georges Perec (left) and Jacques Roubaud playing a game. Picture from Petit traite invitant a la decouverte de l'art subtil du Go 2. Denis Guedj was a French novelist, mathematician and historian of science professor, perhaps best known for his book The Parrot's Theorem. In May 1968, Guedj was a PhD-student of Jean-Paul Benzecri (the one defining God as the Alexandroff compactification of the univers), working in the building where 'Le Comité de Grève' installed itself. Here he met Chevalley. A Guedj-quote from Couty's paper (G-translated): "Claude Chevalley was one of the three professors of the Faculty of Science to commit himself totally to the adventure until the end, occupying the premises with the students on the Quai Saint-Bernard […] and sleeping there frequently . That's where I met him. We had taken possession of this universe which until then had only been a place of study and knowledge, and which, in the mildness of this month of May, had become a place of life, of a life wonderfully exhilarating. The college was ours. At night we walked down the aisles yet? lined with tall trees, entered the empty lecture halls, slept under the stars. Needless to say that at the beginning of the school year, in the fall of 1968, it was impossible for us to find our place in these undressed spaces from which the magic had withdrawn. » Picture from Décès de l'écrivain et universitaire Denis Guedj In June 2008, Guedj was one of the guests at the special edition of France Culture on the occasion of Grothendieck's 80th birthday, Autour d'Alexandre Grothendieck. 3. Alexander Grothendieck, mathematician and misogynist, deified by some of today's 'mandarins'. The paper by Danielle Couty may shed additional light on Grothendieck's withdrawal from Bourbaki and mathematics as a whole. 
A G-translated Grothendieck quote from the paper: "It was Chevalley who was one of the first, with Denis Guedj whom I also met through Survivre, to draw my attention to this ideology (they called it "meritocracy" or a name like that), and what there was in her of violence, of contempt. It was because of that, Chevalley told me […] that he could no longer bear the atmosphere in Bourbaki and had stopped setting foot there. » Claude Chevalley stayed at Vincennes until his retirement in 1978, he died on June 28th 1984. Christine Bessenrodt (1958-2022) We were pretty close once. It is a shock to read about her passing on Twitter. EWM mourns Christine Bessenrodt, who passed away on January 24 after a serious illness. Christine held the Chair of Algebra and Number Theory at The University of Hannover and she worked in many ways to promote equality for women mathematicians. pic.twitter.com/7WiJGEJ6dj — EWM (European Women in Mathematics) (@EuroWoMaths) January 30, 2022 I met Christine in the late 80ties at some representation meeting in Oberwolfach. Christine was a regular at such meetings, being in the Michler-clique from Essen. I don't recall why I was invited. We had a fun time, and had a sneaky plan to be invited more regularly to the same conferences. All we had to do was to prove a good result, together… Easier said than done. Christine's field was modular representation theory (over $\overline{\mathbb{F}_p}$), and I was interested in the geometry of quiver moduli-spaces (over $\mathbb{C}$). The next year I ran a post-graduate course on rationality problems and emailed the notes weekly to Christine. After all, results of Lenstra, Colliot-Thelene and Sansuc reduced the problem of (stable) rationality of algebraic tori to integral representation theory, a half-way meeting ground for both of us. Around that time, our youngest daughter was born, and Christine graciously accepted to be her godmother. Over the next years, she and Klaus visited us in Antwerp and we week-ended in their brand new house in the outskirts of Duisburg, close to a lake. Christine in Oberwolfach Christine and I were working on the rationality problem for matrix invariants. A sufficiently general $n \times n$ matrix is diagonalisable and is therefore determined, up to conjugacy, by $n$ free parameters (the eigenvalues), so the corresponding quotient-variety is rational. Now consider couples of $n \times n$ matrices under simultaneous conjugation. In the mid 80ties, Formanek proved rationality for $n=3$ and $n=4$, by using the theory of algebraic tori, and that was about all that was known. We were able to reduce the question of stable rationality for $n=5$ and $n=7$ to modular representation theory, after which Christine performed her magic to crack the problem. The paper appeared a year later in Inventiones. Thirty years later, it is still the best result on rationality of matrix invariants. So, we had our joint result, but its intended use never happened, and our contacts gradually watered down as our mathematical interests again diverged. My thoughts go out to Klaus and all her loved ones. Grothendieck stuff Published January 30, 2022 by lievenlb January 13th, Gallimard published Grothendieck's text Recoltes et Semailles in a fancy box containing two books. Here's a G-translation of Gallimard's blurb: "Considered the mathematical genius of the second half of the 20th century, Alexandre Grothendieck is the author of Récoltes et semailles, a kind of "monster" of more than a thousand pages, according to his own words. 
The mythical typescript, which opens with a sharp criticism of the ethics of mathematicians, will take the reader into the intimate territories of a spiritual experience after having initiated him into radical ecology. In this literary braid, several stories intertwine, "a journey to discover a past; a meditation on existence; a picture of the mores of a milieu and an era (or the picture of the insidious and implacable shift from one era to another…); an investigation (almost police at times, and at others bordering on the swashbuckling novel in the depths of the mathematical megapolis…); a vast mathematical digression (which will sow more than one…); […] a diary ; a psychology of discovery and creation; an indictment (ruthless, as it should be…), even a settling of accounts in "the beautiful mathematical world" (and without giving gifts…)"." All literary events, great or small, are cause for the French to fill a radio show. January 21st, 'Le grand entretien' on France Inter invited Cedric Villani and Jean-Pierre Bourguignon to talk about Grothendieck's influence on mathematics (h/t Isar Stubbe). The embedded YouTube above starts at 12:06, when Bourguignon describes Grothendieck's main achievements. Clearly, he starts off with the notion of schemes which, he says, proved to be decisive in the further development of algebraic geometry. Five years ago, I guess he would have continued mentioning FLT and other striking results, impossible to prove without scheme theory. Now, he goes on saying that Grothendieck laid the basis of topos theory ("to define it, I would need not one minute and a half but a year and a half"), which is only now showing its first applications. Grothendieck, Bourguignon goes on, was the first to envision the true potential of this theory, which we should take very seriously according to people like Lafforgue and Connes, and which will have applications in fields far from algebraic geometry. Topos20 is spreading rapidly among French mathematicians. We'll have to await further results before Topos20 will become a pandemic. Another interesting fragment starts at 16:19 and concerns Grothendieck's gribouillis, the 50.000 pages of scribblings found in Lasserre after his death. Bourguignon had the opportunity to see them some time ago, and when asked to describe them he tells they are in 'caisses' stacked in a 'libraire'. Here's a picture of these crates taken by Leila Schneps in Lasserre around the time of Grothendieck's funeral. If you want to know what's in these notes, and how they ended up at that place in Paris, you might want to read this and that post. If Bourguignon had to consult these notes at the Librairie Alain Brieux, it seems that there is no progress in the negotiations with Grothendieck's children to make them public, or at least accessible.
Uspekhi Matematicheskikh Nauk (Uspekhi Mat. Nauk), 2020, Volume 75, Issue 5(455), Pages 59–100 (Mi umn9953)

Dynamics and spectral stability of soliton-like structures in fluid-filled membrane tubes
A. T. Il'ichev, Steklov Mathematical Institute of Russian Academy of Sciences, Moscow

Abstract: This survey presents results on the stability of elevation solitary waves in axisymmetric elastic membrane tubes filled with a fluid. The elastic tube material is characterized by an elastic potential (elastic energy) that depends non-linearly on the principal deformations and describes the compliant elastic media. Our survey uses a simple model of an inviscid incompressible fluid, which nevertheless makes it possible to trace the main regularities of the dynamics of solitary waves. One of these regularities is the spectral stability (linear stability in form) of these waves. The basic equations of the 'axisymmetric tube – ideal fluid' system are formulated, and the equations for the fluid are averaged over the cross-section of the tube, that is, a quasi-one-dimensional flow with waves whose length significantly exceeds the radius of the tube is considered. The spectral stability with respect to axisymmetric perturbations is studied by constructing the Evans function for the system of basic equations linearized around a solitary wave type solution. The Evans function depends only on the spectral parameter $\eta$, is analytic in the right-hand complex half-plane $\Omega^+$, and its zeros in $\Omega^+$ coincide with unstable eigenvalues. The problems treated include stability of steady solitary waves in the absence of a fluid inside the tube (the case of constant internal pressure), together with the case of local inhomogeneity (thinning) of the tube wall, the presence of a steady fluid filling the tube (the case of zero mean flow) or a moving fluid (the case of non-zero mean flow), and also the problem of stability of travelling solitary waves propagating along the tube with non-zero speed. Bibliography: 83 titles.

Keywords: axisymmetric elastic tube, membrane, elastic energy, ideal fluid, quasi-one-dimensional motion, internal pressure, bifurcation, spectral parameter, spectral stability, Evans function.

Funding: This work was performed at the Steklov International Mathematical Center and supported by the Ministry of Science and Higher Education of the Russian Federation (agreement no. 075-15-2019-1614).

DOI: https://doi.org/10.4213/rm9953
English translation: Russian Mathematical Surveys, 2020, 75:5, 843–882
UDC: 532.59; PACS: 74J35; MSC: Primary 74B20; Secondary 76B15

Citation: A. T. Il'ichev, "Dynamics and spectral stability of soliton-like structures in fluid-filled membrane tubes", Uspekhi Mat. Nauk, 75:5(455) (2020), 59–100; Russian Math. Surveys, 75:5 (2020), 843–882
How does gravity escape a black hole?

My understanding is that light cannot escape from within a black hole (within the event horizon). I've also heard that information cannot propagate faster than the speed of light. It would seem to me that the gravitational attraction caused by a black hole carries information about the amount of mass within the black hole. So, how does this information escape? Looking at it from a particle point of view: do the gravitons (should they exist) travel faster than the photons?

general-relativity gravity black-holes speed-of-light
Nogwater

The no-hair theorem says black holes are not completely hairless. They have five hairs: mass-energy $M$, linear momentum $P$ (three components), angular momentum $J$ (three components), position $X$ (three components), electric charge $Q$. – Ali

If you think of gravity as a repelling force then it makes sense that gravity doesn't need to "escape" from a black hole. What fails to escape is the force that repels matter which makes the gravity seem stronger. – Derek Tomes

I think this video explains part of it: ligo.caltech.edu/video/ligo20160211v4 ; it's a numerical simulation of a black hole merger event. You can see that the waves are not generated from inside the black holes. – OTH

There are several good answers here, but another good way to look at it is to draw the Penrose diagram for a black hole that has formed by gravitational collapse (see, e.g., the diagrams here physics.stackexchange.com/a/146852/4552 ). Fix an event outside the horizon to represent some time experienced by an observer. There are surfaces of simultaneity through that point according to which the black hole hasn't even formed yet, so if you wish, the static field can simply be considered to be the field of the preexisting matter from which the hole formed.

I think this question shows a misconception related to the misconception of the "snowball model" of photons, where some people think electrons repel because they are throwing photons at each other as two ice skaters would repel by throwing snowballs at each other. That's not how forces work. How would electrons and positrons attract each other? Particles interact with fields, and photons are the quantization of the EM field. Similarly for gravity. You can't fault a layperson for thinking that because this is often how it's presented in popular talks and shows. The truth is more complicated.

There are some good answers here already but I hope this is a nice short summary: Electromagnetic radiation cannot escape a black hole, because it travels at the speed of light. Similarly, gravitational radiation cannot escape a black hole either, because it too travels at the speed of light. If gravitational radiation could escape, you could theoretically use it to send a signal from the inside of the black hole to the outside, which is forbidden. A black hole, however, can have an electric charge, which means there is an electric field around it. This is not a paradox because a static electric field is different from electromagnetic radiation. Similarly, a black hole has a mass, so it has a gravitational field around it. This is not a paradox either because a gravitational field is different from gravitational radiation.
You say the gravitational field carries information about the amount of mass (actually energy) inside, but that does not give a way for someone inside to send a signal to the outside, because to do so they would have to create or destroy energy, which is impossible. Thus there is no paradox.

Keenan Pepper

Note that there is NO NEED to introduce any quantum mechanics AT ALL into this discussion. That's why I specifically said "electromagnetic radiation" and "gravitational radiation", not "photons" or "gravitons". – Keenan Pepper

While agreeing about light/electromagnetism in general being trapped inside, I have to disagree with any statement of the form "... ______ cannot escape a black hole ... because it ... travels at the speed of light." The propagation speed of something is non sequitur (insufficient reason/irrelevant). However "gravitational radiation" is defined, I do believe gravity (eg. of another origin) can pass right through even a black hole.

Forbidden? Is this really the right word? It makes it sound like we shouldn't question science, which is by definition what science is.

@Michael Physicists use "forbidden" more or less interchangeably with "impossible". Both always have an implicit qualifier of "...according to our current best theories and axioms". – zwol

Thank you for this answer. Unfortunately, it mostly reminds me that I don't know what a field really is. :) – Jiminion

Well, the information doesn't have to escape from inside the horizon, because it is not inside. The information is on the horizon. One way to see that, is from the fact that nothing ever crosses the horizon from the perspective of an observer outside the horizon of a black hole. It asymptotically gets to the horizon in infinite time (as it is measured from the perspective of an observer at infinity). Another way to see that, is the fact that you can get all the information you need from the boundary conditions on the horizon to describe the space-time outside, but that is something more technical. Finally, since classical GR is a geometrical theory and not a quantum field theory*, gravitons are not the appropriate way to describe it. *To clarify this point, GR can admit a description in the framework of gauge theories like the theory of electromagnetism. But even though electromagnetism can admit a second quantization (and be described as a QFT), GR can't.

Vagelford

If from the perspective of an observer outside the horizon nothing ever crosses the horizon, how does the black hole grow and expand (from that observer's perspective)? Clearly you are right about the boundary conditions etc. But then you should ask who decided on the boundary conditions? Clearly the inside mass determined them, and then again the question is how? – itamarhason

This is misleading. The information that escapes about the mass never actually reaches the horizon. It's encoded in the curvature of space around the black hole (this includes the part very close to the horizon, but a lot of the information isn't close to the horizon).

@itamarhason presumably all the stuff sticks to the horizon making the horizon bigger

Mostly agreed, but there's no need to single out the horizon I think.
The information about what any bit of spacetime should do is provided by the neighbouring bits of spacetime (plus the field equation), no matter where you happen to look. And the neighbouring bits got their configuration from their neighbours in turn, and so on, back into the past. – Andrew Steane

Let's get something out of the way: let's agree not to bring gravitons into this answer. The rationale is simple: when you talk about gravitons you imply a whole lot of things about quantum phenomena, none of which is really necessary to answer your main question. In any case, gravitons propagate with the very same speed as photons: the speed of light, $c$. This way we can focus simply on Classical GR, ie, the Differential Geometry of Spacetime: this is more than enough to address your question. In this setting, GR is a theory that says how much curvature a space "suffers" given a certain amount of mass (or energy, cf Stress-Energy Tensor). A Black Hole is a region of spacetime that has such an intense curvature that it "pinches out" a certain region of spacetime. In this sense, it's not too bad to understand what's going on: if you can measure the curvature of spacetime, you can definitely tell whether or not you're moving towards a region of increasing curvature (ie, towards a black hole). This is exactly what's done: one measures the curvature of spacetime and that's enough: at some point, the curvature is so intense that the light-cones are "flipped". At that exact point, you define the Event Horizon, ie, that region of spacetime where causality is affected by the curvature of spacetime. This is how you make a map of spacetime and can chart black holes. Given that curvature is proportional to gravitational attraction, this sequence of ideas completely addresses your doubt: you don't have anything coming out of the black hole, nor anything like that. All you need is to chart the curvature of spacetime, measuring what happens to your light-cone structure. Then, you find your Event Horizon and, thus, your black hole. This way you got all the information you need, without having anything coming out of the black hole.

Daniel

Suppose, very hypothetically of course, that some extra mass were suddenly created inside the black hole. Would the spacetime curvature outside the black hole change? I realize this is an unphysical process, but if the hand of God reached down and created a large lump of stuff just inside the event horizon, what do the equations of GR tell us about whether we would be able to tell about the event from outside the event horizon? – Mark Eichenlaub

The thing to note is that curvature is not something that lives only inside the Black Hole: this is a property of spacetime as a whole, and that's what counts. Global, topological, properties are very non-intuitive things. ;-)

@MarkE: only if you were God. Look, the bottom-line is that we're dealing with classical GR, and not Quantum Gravity nor its effects. And, within the framework of classical GR, it's simply not possible for you to change any of the properties (charge, mass, angular momentum) of a black hole from the inside of it. A black hole is simply a "sink" of gravitational fields.

@MarkE: I could open my magic toolbox and talk about holonomies and their relation with orbits in GR (ie, with closed geodesics). And, by mapping the holonomies of a space you can get information about its curvature.
So, if you're orbiting a black hole, you can gather all the information about its curvature. (That's why i made that comment about global properties of spaces: they are very non-intuitive.) $\endgroup$ $\begingroup$ @Nogwater: As far as we know, the mass itself is at the singularity, for some definition of "is" (namely that the proper time between crossing the event horizon and reaching the singularity is finite). But, speaking in vague terms, the information of how much mass there is gets "imprinted" on the horizon, it does not "fall" down to the singularity along with the mass. In more precise form, this is called the holographic principle. $\endgroup$ – David Z The problem here is a misunderstanding of what a particle is in QFT. A particle is an excitation of a field, not the field itself. In QED, if you set up a static central charge, and leave it there a very long time, it sets up a field $E=k{q \over r^2}$. No photons. When another charge enters that region, it feels that force. Now, that second charge will scatter and accelerate, and there, you will have a $e^{-}->e^{-}+\gamma$ reaction due to that acceleration, (classically, the waves created by having a disturbance in the EM field) but you will not have a photon exchange with the central charge, at least not until it feels the field set up by our first charge, which will happen at some later time. Now, consider the black hole. It is a static solution of Einstein's equations, sitting there happily. When it is intruded upon by a test mass, it already has set up its field. So, when something scatters off of it, it moves along the field set up by the black hole. Now, it will accelerate, and perhaps, "radiate a graviton", but the black hole will only feel that after the test particle's radiation field enters the black hole horizon, which it may do freely. But nowhere in this process, does a particle leave the black hole horizon. Another example of why the naïve notion of all forces coming from a Feynman diagram with two pairs of legs is the Higgs boson—the entire universe is immersed in a nonzero Higgs field. But we only talk about the 'creation' of Higgs 'particles' when we disturb the Higgs field enough to create ripples in the Higgs field—Higgs waves. Those are the Higgs particles we're looking for in the LHC. You don't need ripples in the gravitational field to explain why a planet orbits a black hole. You just need the field to have a certain distribution. Jerry SchirmerJerry Schirmer $\begingroup$ Thanks, this was a big clarification for me! $\endgroup$ $\begingroup$ If a static E field doesn't set up photons then how an other charge will feel it's presence because in QED photons propagate the E force... That's like a cyclic argument. But how will the other charge feel a static field if that field produced no photons to mediate the attraction $\endgroup$ – Shashaank $\begingroup$ @shashaank, in this case, its the initial conditions of the problem, but remember that only accelerating charges radiate, so any photons that "created the field" were radiated when the charge was firsr put in its place $\endgroup$ – Jerry Schirmer $\begingroup$ I was surprised to notice, in my reading of the Cambridge Press's 2017 book titled "The Philosophy of Cosmology", that planets and stars are sometimes referred to AS particles. 
$\endgroup$ – Edouard $\begingroup$ So you're kinda saying that the gravitons that mediate the gravitational force between an object and a black hole are definitely off-shell and consequently no detector outside the horizon can count any gravitons? True? $\endgroup$ – Bastam I think it's helpful to think about the related question of how the electric field gets out of a charged black hole. That question came up in the (now-defunct) Q&A section of the American Journal of Physics back in the 1990s. Matt McIrvin and I wrote up an answer that was published in the journal. You can see it at https://facultystaff.richmond.edu/~ebunn/ajpans/ajpans.html . As others have pointed out, it's easier to think about the question in purely classical terms (avoiding any mention of photons or gravitons), although in the case of the electric field of a charged black hole the question is perfectly well-posed even in quantum terms: we don't have a theory of quantum gravity at the moment, but we do think we understand quantum electrodynamics in curved spacetime. Ted BunnTed Bunn While in many ways the question was already answered, I think it should be emphasized that on the classical level, the question is in some sense backwards. The prior discussion of static and dynamic properties especially comes very close. Let's first examine a toy model of a spherically-symmetric thin shell of dust particles collapsing into a Schwarzschild black hole. The spacetime outside of the shell will then also be Schwarzschild, but with a larger mass parameter than the original black hole (if the shell starts at rest at infinity, then just the sum of the two). Intuitively, the situation is analogous to Newton's shell theorem, which a more limited analogue in GTR. At some point, it crosses the horizon and eventually gets crushed out of existence at the singularity, the black hole now gaining mass. So we have the following picture: as the shell collapses, the external gravitational field takes on some value, and as it crossed the horizon, the information about what it's doing can't get out the horizon. Therefore, the gravitational field can't change in response to the shell's further behavior, for this would send a signal across the horizon, e.g., a person riding along with the shell would be able to communicate across it by manipulating the shell. Therefore, rather than gravity having a special property that enables it to cross the horizon, in a certain sense gravity can't cross the horizon, and it is that very property that forces gravity outside of it to remain the same. Although the above answer assumed a black hole already, that doesn't matter at all, as for a spherically collapsing star the event horizon begins at the center and stretches out during the collapse (for the prior situation, it also expands to meet the shell). It also assumes that the situation has spherical symmetry, but this also turns out to not be conceptually important, although for far more complicated and unobvious reasons. Most notably, the theorems of Penrose and Hawking, as it was initially thought by some (or perhaps I should say hoped) that any perturbation from spherical symmetry would prevent black hole formation. You may also be wondering about a related question: if the Schwarzschild solution of GTR is a vacuum, does it make sense for a vacuum to bend spacetime? The situation is somewhat analogous to a simpler one from classical electromagnetism. 
Maxwell's equations dictate how the electric and magnetic fields change in response to the presence and motion of electric charges, but the charges alone do not determine the field, as you can always have a wave come in from infinity without any contradictions (or something more exotic, like an everywhere-constant magnetic field), and in practice these things are dictated by boundary conditions. The situation is similar in GTR, where the Einstein field equation that dictates how geometry are connected only fixes half of the twenty degrees of freedom of spacetime curvature. edited Sep 9 '16 at 9:26 Chappo Hasn't Forgotten Monica Stan LiouStan Liou In my opinion this is an excellent question, which manages to puzzle also some accomplished physicists. So I do not hesitate to provide another, a bit more detailed, answer, even though several good answers exist already. I think that at least part of this question is based upon an incomplete understanding on what it means to mediate a static force from a particle physics point of view. As others have mentioned in their answers already, you encounter a similar issue in the Coulomb problem in electrodynamics. Let me answer your question from a field theory point of view, since I believe this concurs best with your intuition about particles being exchanged (as apparent from the way you phrased the question). First, no gravitational waves can escape from inside the black hole, as you hinted already in your question. Second, no gravitational waves have to escape from inside the black hole (or from the horizon) in order to mediate a static gravitational force. Gravity waves do not mediate the static gravitational force, but only quadrupole or higher moments. If you want to think about forces in terms of particles being exchanged you can view the static gravitational force (the monopole moment, if you wish) as being mediated by "Coulomb-gravitons" (see below for the analogy with electrodynamics). Coulomb-gravitons are gauge degrees of freedom (so one may hesitate to call them "particles"), and thus no information is mediated by their "escape" from the black hole. This is quite analog to what happens in electrodynamics: photon exchange is responsible for the electromagnetic force, but photon waves are not responsible for the Coulomb force. Photon waves do not mediate the static electromagnetic force, but only dipole or higher moments. You can view the static electromagnetic force (the monopole moment, if you wish) as being mediated by Coulomb-photons. Coulomb-photons are gauge degrees of freedom (so one may hesitate to call them "particles"), and thus no information is mediated by their "instantaneous" transmission. Actually, this is precisely how you deal with the Coulomb force in the QFT context. In so-called Bethe-Salpeter perturbation theory you sum all ladder graphs with Coulomb-photon exchanges and obtain in this way the 1/r potential to leading order and various quantum corrections (Lamb shift etc.) to sub-leading order in the electromagnetic fine structure constant. In summary, it is possible to think about the Schwarzschild and Coulomb force in terms of some (virtual) particles (Coulomb-gravitons or -photons) being exchanged, but as these "particles" are actually gauge degrees of freedom no conflict arises with their "escape" from the black hole or their instantaneous transmission in electrodynamics. 
An elegant (but perhaps less intuitive) way to arrive at the same answer is to observe that (given some conditions) the ADM mass - for stationary black hole space-times this is what you would call the "black hole mass" - is conserved. Thus, this information is provided by boundary conditions "from the very beginning", i.e., even before a black hole is formed. Therefore, this information never has to "escape" from the black hole. On a side-note, in one of his lectures Roberto Emparan posed your question (phrased a bit differently) as an exercise for his students, and we discussed it for at least an hour before everyone was satisfied with the answer - or gave up ;-) Daniel GrumillerDaniel Grumiller $\begingroup$ Interesting answer, I am learning a lot. Can you please elaborate on what exactly you mean by "gauge degrees of freedom"? Are they merely mathematical abstractions, or is there a physical significance? $\endgroup$ – electronpusher I think the best explanation that can be given is this: you have to discern between statical and dynamical properties of the space-time. What do I mean by that? Well, there are certain space-times that are static. This is for example the case of the prototypical black-hole solution of GTR. Now, this space-time exists a priori (by definition of static: it always was there and always will be), so the gravity doesn't really need to propagate. As GTR tells us gravity is only an illusion left on us by the curved space-time. So there is no paradox here: black holes appear to be gravitating (as in producing some force and being dynamical) but in fact they are completely static and no propagation of information is needed. In reality we know that black-holes are not completely static but this is a correct first approximation to that picture. Now, to address the dynamical part, two different things can be meant by this: Actual global change of space-time as can be seen e.g. in the expansion of the universe. This expansion need not obey the speed of light but this is in no contradiction with any known law. In particular you cannot send any superluminal signals. In fact, opposite is true: by too quick an expansion parts of universe might go too far away for even their light to ever reach us. They will get causally disconnected from our sector of space-time and to us it will appear as if it never existed. So it shouldn't be surprising that no information can be communicated. Gravitational waves, which is a just fancy name for the disturbances in the underlying space-time. They obey the speed of light and the corresponding quantum particles are called gravitons. Now these waves/particles indeed wouldn't be able escape from underneath the horizon (in the precisely the same way as any other particle, except for Hawking radiation, but this is a special quantum effect). MarekMarek Gravitation doesn't work the way light does (which is why quantum gravity is hard). A massive body "dents" space and time, so that, figuratively speaking, light has a hard time running uphill. But the hill itself (i.e. the curved spacetime) has to be there in the first place. matthiasrmatthiasr $\begingroup$ But, you could also ask "How does electric force escape a charged black hole", which would be an equally valid question. $\endgroup$ The holographic principle gives a clue, as pointed out by David Zaslavsky. 
The Schwarzschild metric element $g_{tt}~=~1 - r_0/r$, for $r_0~=~2GM/c^2$, gives a proper distance called the delay coordinate $$ r^*~=~r~+~r_0 \ln\left[\frac{r-r_0}{r_0}\right], $$ which diverges, $r^*~\rightarrow~-\infty$, as you approach the horizon. What this means is that all the stuff which makes up the black hole is never seen to cross the horizon from the perspective of a distant outside observer. The clock on anything falling into a black hole is observed to slow to a near stop and never cross the horizon. This means nothing goes in or out of the black hole, at least classically. So there really is no problem of gravity escaping from a black hole, for as observed from the exterior nothing actually ever went in. – Lawrence B. Crowell $\begingroup$ "Proper distance" means taking the length of a curve in a spatial hyperslice, but $r^*$ is not produced in the surface of constant Schwarzschild time, so it's unclear what you're referring to. For radial light rays, $\Delta t = \pm\Delta r^*$, which is relevant to "not seeing", but this comes from $g_{rr}$, not $g_{tt}$. "Nothing goes in or out of the black hole" is just wrong, although it was the view before the mid-1960s, with falling objects slowing and stopping at the infinite redshift surface. (E.g., acceleration in Minkowski space: things obviously cross the horizon without being seen to do so.) $\endgroup$ – Stan Liou $\begingroup$ I should have said proper interval. The tortoise coordinate indicates that to see something from the horizon it is seen from the "infinite past." Nothing can be directly observed to actually reach the event horizon. $\endgroup$ – Lawrence B. Crowell $\begingroup$ There is observational evidence of matter crossing the horizon of black holes and simply increasing the BH mass in binary systems, because the typical spectral fingerprint of shock-wave heating, seen in similar systems where the accreting object is a white dwarf instead of a BH, is absent. Whatever happens to the proper time of the accreting matter, it crosses the horizon. $\endgroup$ – Eduardo Guerras Valera $\begingroup$ "The clock on anything falling into a black hole is observed to slow to a near stop and never cross the horizon. This means nothing goes in or out of the black hole, at least classically." No, this is wrong. See physics.stackexchange.com/a/146852/4552 $\endgroup$ The various theories - QED, GTR, classical electromagnetism, loop quantum gravity, etc. - are all different ways to describe nature. Nature is what it is; theories all have defects. Saying whether gravity resembles electromagnetism in some way or not is just blowing warm air about how humans think, not saying anything substantial about physical reality. So what if we don't have a full grasp of quantum gravity? Gravitons are a sensible concept, and a key part of some unified (or semi-unified) field theories. It might get tricky because, unlike other quantum particles, gravitons are a part of the curvature of spacetime and the relations of nearby lightcones as they fly through said spacetime. We can sort of ignore that for now. The question is good, and can be answered in terms of quantum theory and gravitons. We just don't know, given the existing state of physics knowledge, how far we can push the idea. When charged particles attract or repel, the force is due to virtual photons. Photons like to travel at the universal speed c, but they don't have to. Heisenberg says so!
You can break the laws of conservation of energy and momentum as much as you like, but the more you deviate, the shorter the time span and smaller the bit of space in which you violate these laws. For the virtual photons connecting two charged particles, they've got the room between the two particles, and a time span matching that at lightspeed. These not running waves with a well-defined wavelength, period or phase velocity. This ill-defined velocity can be faster than c or less equally well. In QED, the photon propagator - the wavefunction giving the probability amplitude of a virtual photon connecting (x1, t1) to (x2, t2) is nonzero everywhere - inside and outside the past and future light cones, though becoming unlimited in magnitude on the light cones. So gravitons, if they are that much like photons, can exist just fine outside the horizon and inside. They are, in a rough sense, as big as the space between the black hole and whatever is orbiting or falling into it. Don't picture them as little energy pellets flying from the black hole center (singularity or whatever) - even with Heisenberg's indulgence, it's just not a matter of small particles trying to get through the horizon the wrong way. A graviton is probably already on both sides! For a more satisfying answer, I suspect it takes knowing the math,Fourier transforms, Riemann tensors and all that. DarenWDarenW $\begingroup$ EM and gravitational fields depend on the configuration of the charges/masses, not just the total charge/mass. If the exterior EM/gravitational field of a black hole were coming from the charges/masses inside by some FTL mechanism, you could send signals from the inside to the outside by moving charges/masses on the inside to change the exterior field. But that doesn't actually work. $\endgroup$ – benrg No escape is necessary (a slightly diifferent perspective). A lot of nice answers so far but a couple of things need mentioning. It's not clear where, exactly, the mass of the black hole is supposed to be. Where does the mass reside? That's one thing. The other thing is, how does the mass/energy in the gravitational field, itself, fit into this picture? I think (and I'll no doubt get hammered mercilessly for this) that the mass of a black hole resides spread out through its external gravitational field and nowhere else. The mass of a black hole resides, wholly and solely, in the gravitational field outside the hole. Fortunately for me, I'm not completely alone here. The calculation of the total gravitational field energy of a black hole (or any spherical object) was made in 1985 by the Cambridge astrophysicist Donald Lynden-Bell and Professor Emeritus J. Katz of the Racah Institute Of Physics. http://adsabs.harvard.edu/full/1985MNRAS.213P..21L, Their conclusion was that the total energy in the field is ... (drum-roll here) ... mc^2 !!! The total mass of the BH must reside, completely, and only, in the self-energy of the curvature of spacetime around the hole! Here are a couple of quotes from the paper: "... the field energy outside a Schwarzschild black hole totals Mc^2." and, " ... all these formulae lead to all the black hole's mass being accounted for by field energy outside the hole." The answer to your question, then, is this: information about the mass of a black hole doesn't have to escape from within the black hole because there is no mass inside the black hole. All the mass is distributed in the field outside the hole. Therefore, no information needs to escape from inside. 
dcgeorgedcgeorge $\begingroup$ Since there are no clear answers presented by others i pick this as the closest answer because it hints that the presence of the black hole mass has compressed the fabric of space, and the compressed fabric of space causes the effect of gravity to act on any other nearby mass within its sphere of influence. $\endgroup$ – George Jones $\begingroup$ @GeorgeJones Thanks for the up-vote, George, but the presence of a black hole doesn't compress the fabric of space, it thins it out. The closer to the hole, the thinner space becomes. At the horizon, the energy density of the manifold itself goes to zero. This strongly implies that black holes are, literally, holes, or cavities in the spacetime manifold (recent quantum Firewall theory also supports this notion). Here's a nice, one page illustration of the physics: dcgeorge.com/images/TheMeaningOfMatter/… $\endgroup$ – dcgeorge $\begingroup$ maybe you are correct when close to the black hole, but i believe bodies in space, like our earth or our sun do compress the fabric of space, its this compression that bends light and causes gravitational lensing. And then as you say the fabric of space breaks down when the enormous gravity from a black hole affects this fabric. The big question is what does this fabric consist of. I will look up your images, thanks $\endgroup$ $\begingroup$ look at the previous statement, your Address did not show up? $\endgroup$ $\begingroup$ @Wookie - If the spacetime manifold itself has an intrinsic energy content and all the mass/energy associated with the black hole (mc^2) is outside the hole dispersed in the gravitational field, I don't see any other conclusion. It tells me that a black hole is just that, a hole in the manifold. All there is to the black hole is its gravitational field. So, to answer your question, yes, it looks to me like there is no spacetime within the Schild radius. (Firewall Theory agrees, for what that's worth). $\endgroup$ The black hole does "leak" information, but it is not due to "gravitions, but in the form of the Hawking radiation. It has its basis in quantum mechanics, and is a thermal sort of radiation with extremely low rate. This also means that the black hole is slowly evaporates, but on a time scale that is comparable to the age of the universe. The origin of this radiation can be described in a little bit hand-waving way as such: due to quantum fluctuations, there's particle-antiparticle pair creation going on in the vacuum. If such a pair-creating happens on the horizon, one of the pair can fall into the black hole while the other can escape. To preserve the total energy (since the vacuum fluctuations are around 0) with a particle now flying away, its fallen pair has to have a negative energy from the black hole's point of view, thus it is effectively losing mass. The outside observer perceives this whole process as "evaporation". This radiation has a distribution as described by a "temperature", which is inversely proportional to the black hole's mass. Might want to check out http://en.wikipedia.org/wiki/Hawking_radiation and other sources for more details... GergelyGergely I think everyone is overcomplicating their answers. First of all, as many people have pointed out, gravitational radiation (mediated by gravitons in the quantum-mechanical context) cannot escape from the interior of a black hole. Regarding how information about the black hole's mass "escapes," the answer is different for collapsed vs. eternal black holes. 
For collapsed black holes, an external observer's past light cone intersects all the mass that will end up in the black hole before it crosses the horizon, so the observer can "see" all the mass. For eternal black holes, an external observer can "see" the singularity of the white hole that you get from maximally extending the Schwarzchild metric, which "tells" the observer the black hole's mass. tparkertparker it's not gravity that carries the information - we simply learn about the black hole by observing the effects of gravity on objects close to it (as you rightly pointed out, nothing escapes a black hole after crossing an event horizon, so we don't anything about what happens to objects beyond that point, except that they are never observed again). Gravity is a force and we need it to act somewhere before drawing conclusions about it's dynamic characteristics. leaveswater02leaveswater02 I think that the correct explanation for why a black hole has gravity is a quantum mechanical explanation but I think that in a lot of situations including this one, quantum mechanics simulates classical mechanics so I will explain how it's possible that classical mechanics predicts that a black hole has gravity. From reading a Quora answer, I think that according to general relativity, the gravitational field outside a black hole is self-sustaining and is not caused by the matter inside the black hole and it's the gravitational field outside the black hole that continuously makes the gravitational field inside work the way it does. According to the YouTube video https://www.youtube.com/watch?v=vNaEBbFbvcY, we don't even know that matter doesn't disappear when it reaches the singularity. I don't fully know how general relativity works but having learned about the conservation laws, I suspect when a small solid object falls into a supermassive black hole, it undergoes extremely little gravitational heating and releases way less energy than its mass multiplied by $c^2$ and as a result of the gravitational field of the object, the increase in the mass of the black hole defined by the strength of its gravitational field increases by almost exactly the mass of the object that fell in. Although that explains classically how how it's possible for a black hole to exist, the universe really follows quantum mechanics so you might be wondering how gravitons escape the black hole. Actually, an isolated black hole of any mass, charge, and angular momentum has an unchanging gravitational field so it doesn't emit gravitons excpect for maybe really super low energy ones including ones caused by the slow changing gravitational field caused by Hawking radiation. I think that two orbiting black holes emit a gravitational wave so they release higher energy gravitons. According to quantum mechanics, particles can function like waves so I think the gravitons get created outside both of the black holes with an extreme uncertainty in position and if the wave function could collapse almost exactly to an eigenfunction of the position operator, we would observe interference of each graviton with itself but I don't know if there is a way to collapse the wave function of a graviton to almost exactly an eigenfunction of the position operator like there is for a photon. Unlike before, I now have high doubts that photons actually exist, so maybe the same goes for gravitons. 
I first speculated that they might not exist when I thought about how a microwave oven heating food can be better explained classically by heating through electrical resistance. So I started asking a question, and the review process pointed me to the question "Can the photoelectric effect be explained without photons?", whose answer says that the photoelectric effect can be explained without photons. – Timothy $\begingroup$ Hi, I saw your comment in the thread "Did the big bang happen at a point" and I'd be glad if you'd try to answer this question here concerning the accumulation of matter in the universe: physics.stackexchange.com/questions/583241/… $\endgroup$ – Marcus The black hole communicates gravity via virtual gravitons. However, this is not to be confused with virtual gravitons leaving the black hole. Gravitons cannot leave a black hole any more than light can leave a black hole. It just does not happen. Instead, it should be thought of as the black hole warping the gravitational field (or the electromagnetic field, to communicate electromagnetic charge) to show the universe that it has a gravitational field. Any gravitons created inside the black hole never leave the black hole. Instead, the black hole warps space and any other quantum field so that it creates virtual gravitons and virtual photons to communicate its presence to the universe. The no-hair theorem says electric charge, mass, and spin are conserved in a black hole. As far as physics goes, no other property is conserved. However, there is a chance that there might be other properties (besides the three hairs: charge, mass, spin) stored on the surface area of the black hole (not the volume). https://en.wikipedia.org/wiki/No-hair_theorem – Roghan Arun $\begingroup$ It should be thought of as fields rather than particles, as all particles are excitations of quantum fields. It is also the same reason Hawking radiation happens. The black hole disturbs the electromagnetic field (it could be other fields) and creates a negative-energy photon and a positive-energy photon; the negative-energy photon falls into the black hole, and this results in the black hole losing mass. Even here nothing leaves the black hole, and again fields communicate the force. All particles are just vibrations of their particular field. $\endgroup$ – Roghan Arun First of all, the gravitational effects of the black hole are felt outside the black hole. Gravity is spacetime curvature; it is already present outside the black hole, and would always be there in some amount for matter existing in the universe. It doesn't need to escape, and nothing does. Refer to "Why can't you escape a black hole?" by The Science Asylum. – Aveer
View source for Hermitian form ← Hermitian form {{MSC|15}} {{TEX|done}} An ''hermitian form on a left $R$-module $X$'' is a mapping $\def\phi{\varphi}\phi:X\times X \to R$ that is linear in the first argument and satisfies the condition $$\phi(y,x) = \phi(x,y)^J,\quad x,y\in X.$$ Here $R$ is a [[unital ring|ring with a unit element]] and equipped with an involutory [[anti-automorphism]] $J$. In particular, $\phi$ is a [[sesquilinear form]] on $X$. The module $X$ itself is then called a Hermitian space. By analogy with what is done for bilinear forms, equivalence is defined for Hermitian forms (in another terminology, isometry) and, correspondingly, isomorphism (isometry) of Hermitian spaces (in particular, automorphism). All automorphisms of a Hermitian form $\phi$ form a group $U(\phi)$, which is called the unitary group associated with the Hermitian form $\phi$; its structure has been well studied when $R$ is a skew-field (see [[Unitary group|Unitary group]]). A Hermitian form is a special case of an $\def\e{\epsilon}\e$-Hermitian form (where $\e$ is an element in the centre of $R$), that is, a sesquilinear form $\psi$ on $X$ for which $$\psi(y,x) = \e\psi(x,y)^J,\quad x,y\in X.$$ When $\e = 1$, an $\e$-Hermitian form is Hermitian, and when $\e=-1$ the form is called skew-Hermitian or anti-Hermitian. If $J=1$, a Hermitian form is a symmetric bilinear form, and a skew-Hermitian form is a skew-symmetric or anti-symmetric bilinear form. If the mapping $$X\to\def\Hom{ {\rm Hom}}\Hom_R(X,R),\quad y\mapsto f_y,$$ where $f_y(x) = \phi(x,y)$ for any $x\in X$, is bijective, then $\phi$ is called a non-degenerate Hermitian form or a Hermitian scalar product on $X$. If $X$ is a free $R$-module with a basis $e_1,\dots,e_n$, then the matrix $(a_{ij})$, where $a_{ij} = \phi(e_i,e_j)$, is called the matrix of $\phi$ in the given basis; it is a [[Hermitian matrix|Hermitian matrix]] (that is, $a_{ji}=a_{ij}^J$). A Hermitian form $\phi$ is non-degenerate if and only if $(a_{ij})$ is invertible. If $R$ is a skew-field, if ${\rm char}\; R \ne 2$, and if $X$ is finite-dimensional over $R$, then $X$ has an orthogonal basis relative to $\phi$ (in which the matrix is diagonal). If $R$ is a commutative ring with identity, if $R_0 = \{r\in R : r^J = r\}$, and if the matrix of $\phi$ is definite, then its determinant lies in $R_0$. Under a change of basis in $X$ this determinant is multiplied by a non-zero element of $R$ of the form $\def\a{\alpha}\a\a^J$, where $\a$ is an invertible element of $R$. The determinant regarded up to multiplication by such elements is called the determinant of the Hermitian form or of the Hermitian space $X$; it is an important invariant and is used in the classification of Hermitian forms. Let $R$ be commutative. Then a Hermitian form $\phi$ on $X$ gives rise to a quadratic form $Q(x)=\phi(x,x)$ on $X$ over $R_0$. The analysis of such forms lies at the basis of the construction of the Witt group of $R$ with an involution (see [[Witt ring|Witt ring]]; [[Witt decomposition|Witt decomposition]]; [[Witt theorem|Witt theorem]]). When $R$ is a maximal ordered field, then the [[Law of inertia|law of inertia]] extends to Hermitian forms (and there arise the corresponding concepts of the signature, the index of inertia, and positive and negative definiteness). 
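A standard illustrative example (added here for orientation; it is not part of the original article): take $R=\C$ with $J$ the involution of complex conjugation, so that $R_0=\R$, and let $X=\C^n$ with
$$\phi(x,y)=\sum_{i=1}^n x_i\,y_i^J=\sum_{i=1}^n x_i\,\bar y_i .$$
This $\phi$ is linear in its first argument and satisfies $\phi(y,x)=\phi(x,y)^J$, so it is a Hermitian form; its matrix in the standard basis is the identity matrix, hence it is non-degenerate with determinant $1$, and the associated quadratic form $Q(x)=\phi(x,x)=\sum_i|x_i|^2$ over $\R$ is positive definite, with signature $(n,0)$.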
If $R$ is a field and $J\ne 1$, then $R$ is a quadratic Galois extension of $R_0$, and isometry of two non-degenerate Hermitian forms over $R$ is equivalent to isometry of the quadratic forms over $R_0$ generated by them; this reduces the classification of non-degenerate Hermitian forms over $R$ to that of non-degenerate quadratic forms over $R_0$. If $R=\C$ and $J$ is the involution of [[complex conjugation]], then a complete system of invariants of Hermitian forms over a finite-dimensional space is given by the rank and the [[signature]] of the corresponding quadratic forms. If $R$ is a local field or the field of functions of a single variable over a finite field, then a complete system of invariants for non-degenerate Hermitian forms is given by the rank and the determinant. If $R$ is a finite field, then there is only one invariant, the rank. For the case when $R$ is an algebraic extension of $\Q$, see {{Cite|MiHu}}. Ch. Hermite was the first, in 1853, to consider the forms that bear his name, in connection with certain problems of number theory. ====References==== * {{Ref|Bo}} N. Bourbaki, "Elements of mathematics. Algebra: Algebraic structures. Linear algebra", '''1''', Addison-Wesley (1974), Chapts. 1–2 (translated from French) {{MR|0354207}} * {{Ref|Di}} J.A. Dieudonné, "La géométrie des groupes classiques", Springer (1955) {{ZBL|0221.20056}} * {{Ref|MiHu}} J. Milnor, D. Husemoller, "Symmetric bilinear forms", Springer (1973) {{MR|0506372}} {{ZBL|0292.10016}} This article was adapted from an original article by V.L. Popov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
Is there a combinatorial interpretation of the identity $\sum_{k=0}^m 2^{-2k} \binom{2k}{k} \binom{2m-k}{m} =4^{-m} \binom{4m+1}{2m}$? I came across the following combinatorial identity in a paper by Victor H. Moll and Dante V. Manna 'a remarkable sequence of integers'. $$\sum_{k=0}^m 2^{-2k} \binom{2k}{k} \binom{2m-k}{m} =4^{-m} \binom{4m+1}{2m}. $$ I gave an elementary proof as follows, yet a combinatorial interpretation seems difficult to a layman like me. So I post it here for discussion. My elementary proof is through the method of coefficients. Let $[t^n]f(t)$ be the coefficient of $t^n$ in $f(t)$. Lemma: $[t^k]\frac{1}{\sqrt{1-t}}=4^{-k} \binom{2k}{k}$. $$ \begin{aligned} [t^k]\frac{1}{\sqrt{1-t}} &=\binom{-1/2}{k} (-1)^k \\\\ &=\binom{1/2+k-1}{k} \\\\ &=\frac{(k-1/2)(k-3/2)\cdots (1/2)}{k!}\\\\ &=\frac{(2k-1)(2k-3)\cdots 1}{k!}2^{-k} \\\\ &=\frac{(2k)(2k-1)(2k-2)(2k-3)\cdots 2\cdot1}{k!\cdot k!} 4^{-k} \\\\ &= 4^{-k} \binom{2k}{k} \end{aligned} $$ Moreover, it is easy to see $$ \begin{aligned} \binom{2m-k}{m}&=\binom{2m-k}{m-k} \\\\ &=\binom{-(2m-k)+m-k-1}{m-k}(-1)^{m-k}\\\\ &= \binom{-m-1}{m-k} (-1)^{m-k} \\\\ &=[t^{m-k}]\frac{1}{(1-t)^{m+1}} \end{aligned} $$ Proposition: $$\sum_{k=0}^m 2^{-2k} \binom{2k}{k} \binom{2m-k}{m}= 4^{-m} \binom{4m+1}{2m}.$$ $$ \begin{aligned} \sum_{k=0}^m 2^{-2k} \binom{2k}{k} \binom{2m-k}{m} &= [t^m]\left(\frac{1}{\sqrt{1-t}} \frac{1}{(1-t)^{m+1}}\right) \\\\ &= [t^m]\frac{1}{(1-t)^{m+(3/2)}} \\\\ &=\binom{-m-(3/2)}{m} (-1)^m \\\\ &=\binom{2m+(1/2)}{m} \\\\ &= 2^{-m} \frac{(4m+1)(2m-1)\cdots(2m+3)}{m!} \\\\ &=2^{-m} \frac{(4m+1)(2m-1)\cdots(2m+3)}{m!} \frac{4m(4m-2)\cdots(2m+2)}{4m(4m-2)\cdots(2m+2)} \\\\ &=2^{-2m}\frac{(4m+1)!}{(2m+1)!(2m)!} \\\\ &=2^{-2m} \binom{4m+1}{2m} \end{aligned} $$ co.combinatorics combinatorial-identities SunniSunni $\begingroup$ Could you give the full reference to the paper where you found it? $\endgroup$ – Nate Eldredge Apr 13 '10 at 21:16 $\begingroup$ @Nate: I bet it's this one: dx.doi.org/10.1016/j.exmath.2009.02.005 "A remarkable sequence of integers" by Victor H. Moll and Dante V. Manna. Expositiones Mathematicae Volume 27, Issue 4, 2009, Pages 289-312 $\endgroup$ – j.c. Apr 13 '10 at 22:24 $\begingroup$ @Nate: jc have told you that. $\endgroup$ – Sunni Apr 14 '10 at 0:59 $\begingroup$ You might want to check out "Proofs that Really Count" by Art Benjamin. It has a bunch of combinatorial proofs of identities and one whole chapter is on binomial coefficient identities. I don't know if this one is in there, but it might be a good place to look for general insight. $\endgroup$ – Aeryk Apr 15 '10 at 2:37 I don't have an answer, but I have spend a couple of hours on it, so here is some of my thoughts on the problem. In the following I will identify expressions with sets, so $2^n$ corresponds to the set of 01-sequences of length n, $\binom{n}{k}$ is the set of 01-sequence of length n with exactly k 1s, and products and sums corresponds to taking product sets and unions. We want to find a bijective function from $\sum_{k=0}^m 2^{2(m-k)} \binom{2k}{k} \binom{2m-k}{m}$ to $\binom{4m+1}{2m}$ (I have multiplied with $4^m$ on both sides). Let me give an example a similar looking equality (you can skip the rest of this paragraph if you want): $\sum_{k=0}^m2^{2(m-k)}\binom{2k}{k}\binom{2m}{2k}=\binom{4m}{2m}$. Take an element in $\binom{4m}{2m}$, and pair the terms, so we have a sequence over {(00),(01),(10),(11)} of length $2m$. 
We have an even number of 1s in the sequence, so there must be an even number of pairs that contain exactly one 1, and thus and even number of pairs with (00) or (11). Let the number of (00) and (11) in the sequence of pairs be 2k. Now k of these must be (00) and k of them is (11) is the number of 0s and 1s are the same. The 2k terms in the 2m length sequence can be chosen in $\binom{2m}{2k}$ ways, the k (11)s of these 2k terms can chosen in $\binom{2k}{k}$ ways and in the rest of the $2m-2k$ terms we must choose between (10) and (01). This gives a factor $2^{2(m-k)}$, so we have a 1-1 correspondence between $\sum_{k=0}^m2^{2(m-k)}\binom{2k}{k}\binom{2m}{2k}$ and $\binom{4m}{2m}$. An important part of such a proof is to find out what k represents. In your equality, it turns out that the k=0 part of the sum is about $\sqrt{\frac{1}{2}}$ of the whole sum. Do anyone know where the $\sqrt{\frac{1}{2}}$ could come from? One way to find out what k is, would be to find an injective function from the k=0 term, $2^{2m}\binom{2m}{m}$, to $\binom{4m+1}{2m}$ and see what the image set looks like. But I haven't been able to find such a function, that is, I cannot find a combinatorial proof that $2^{2m}\binom{2m}{m}\leq \binom{4m+1}{2m}$ (nor that $\binom{4m}{2m}<2^{2m}\binom{2m}{m}$). Perhaps you should try to ask this in you question? Sune JakobsenSune Jakobsen $\begingroup$ Maybe it is easier to find something multiplying both sides with 1/(m+1), so you get odd Catalan numbers in total. $\endgroup$ – Martin Rubey Apr 18 '10 at 12:26 $\begingroup$ The identity you prove is the same as the one appeared in <A remarkable sequence of integers> pp298. You may check that. $\endgroup$ – Sunni Apr 18 '10 at 16:46 Since you describe yourself as a "layman" I'm guessing you don't want to hear about the Haar measure on Grassmannian space G(n,1), so here's my best intuitive explanation of the left hand side of the equation combinatorically: Imagine you have a set of n families. Each family has either 1 or 2 children. The two-child families have an older and a younger child, so the children are distinguishable. Exactly 1/2 of them must have exactly one boy. The remaining single-child families must have girls. Of the two-child families, when counting up all the boys and girls together there must be an equal number of boys and girls. (It's possible for 0 families up to (1/2)n families to have two children.) Without the $2^{-2k}$ term, the LHS enumerates the possible sets of families that follow the conditions above given $2m=n$. With the $2^{-2k}$ term, there's an additional condition in enumerating: take the configurations that involve r two-child families, say there's c of them. Form the ratio between c and the total ways the genders of the two-child families could come out if it wasn't an equal number of boys and girls. The sum of all ratios for all values of r from 0 to (1/2)n achieves the LHS described in the equation. Jason DyerJason Dyer $\begingroup$ As a non-layman, I'm curious: what do you have in mind in terms of the Haar measure on G(n,1)? $\endgroup$ – Michael Lugo Apr 14 '10 at 18:34 $\begingroup$ I think that OP wanted a "counting in two ways"-proof of the equality, and not just a interpretation of LHS. (Did you?) $\endgroup$ – Sune Jakobsen Apr 14 '10 at 18:54 $\begingroup$ @ Jakobsen: That is true. Dyer's explanation is insufficient. Also, I don'y know whether we shall introduce Haar measure to this problem. 
$\endgroup$ – Sunni Apr 14 '10 at 19:00 $\begingroup$ Michael, regarding the Haar measure on G(n,1): take the reciprocal of the rational part of every 4th term (there's an offset but I don't know offhand, I think 2) and you get the sequence. $\endgroup$ – Jason Dyer Apr 14 '10 at 19:13 $\begingroup$ @miwalin: I'll think about it more then, and edit if a proof comes to me. $\endgroup$ – Jason Dyer Apr 14 '10 at 19:16
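For readers who want to sanity-check the identity numerically before hunting for a bijection, here is a short exact-arithmetic check (a sketch in Python; the function names are ours, not from the thread):

```python
from fractions import Fraction
from math import comb

def lhs(m):
    # sum_{k=0}^m 4^{-k} * C(2k, k) * C(2m - k, m), kept exact with Fraction
    return sum(Fraction(comb(2 * k, k) * comb(2 * m - k, m), 4 ** k)
               for k in range(m + 1))

def rhs(m):
    # 4^{-m} * C(4m + 1, 2m)
    return Fraction(comb(4 * m + 1, 2 * m), 4 ** m)

# The two sides agree exactly for every m tested.
assert all(lhs(m) == rhs(m) for m in range(60))
```

For instance, at $m=1$ both sides equal $5/2$, consistent with $4^{-1}\binom{5}{2}$.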
Sympatric speciation in structureless environments Wayne M. Getz ORCID: orcid.org/0000-0001-8784-93541,2, Richard Salter3, Dana Paige Seidel1 & Pim van Hooft4 Darwin and the architects of the Modern Synthesis found sympatric speciation difficult to explain and suggested it is unlikely to occur. Increasingly, evidence over the past few decades suggest that sympatric speciation can occur under ecological conditions that require at most intraspecific competition for a structured resource. Here we used an individual-based population model with variable foraging strategies to study the evolution of mating behavior among foraging strategy types. Initially, individuals were placed at random on a structureless resource landscape, with subsequent spatial variation induced through foraging activity itself. The fitness of individuals was determined by their biomass at the end of each generational cycle. The model incorporates three diallelic, codominant foraging strategy genes, and one mate-choice or m-trait (i.e. incipient magic trait) gene, where the latter is inactive when random mating is assumed. Under non-random mating, the m-trait gene promotes increasing levels of either disassortative or assortative mating when the frequency of m respectively increases or decreases from 0.5. Our evolutionary simulations demonstrate that, under initial random mating conditions, an activated m-trait gene evolves to promote assortative mating because the system, in trying to fit a multipeak adaptive landscape, causes heterozygous individuals to be less fit than homozygous individuals. Our results extend our theoretical understanding that sympatric speciation can evolve under nicheless or gradientless resource conditions: i.e. the underlying resource is monomorphic and initially spatially homogeneous. Further the simplicity and generality of our model suggests that sympatric speciation may be more likely than previously thought to occur in mobile, sexually-reproducing organisms. Sympatric speciation is thought to be uncommon [1], although several systems—including races of Rhagoletis pomonella (apple maggots) and the parasitic Braconid wasps (Diachasma alloeum) they host [2], sibling species of Monostroma (i.e. M. latissimum and M. nitidummarine, green algae off the coast of Japan [3]), cichlid species ((Amphilophus sp.) complexes in isolated lakes [4] (but see [5]), and the iconic Darwin finches [6]—are considered to be examples of such speciation. The current prevalent view of sympatric speciation is that it is driven by disruptive selection through ecological competition on traits linked to assortative mating mechanisms that, when of genetic origin and not associated with sexual selection, are referred to as "magic traits" [7–11]. The genetic mechanisms underlying sympatric speciation can be quite varied, but they are often thought to involve either some type of recognition system (see [12] and the references therein) or ecological-mediated mate-sorting, such as heteropatry (individuals mate within preferred patch types on a mosaic landscapes [13]). Other, more subtle types of mechanisms also exist [14], suggesting that more mechanisms are likely to be discovered. Ecological models underlying disruptive selection have invoked either multiple distinct habitat types (which have been referred to as "Levene models" [1]) or a resource gradient, such as seed size in the case of gramnivorous vertebrates. These cases, of course, encompass a considerable variety of situations. 
An influential, logistic-equation-based, adaptive dynamics analysis by Dieckmann and Doebeli [15] showed that sympatric speciation is a likely outcome of competition for resources. Disruptive selection in their model arose from the competitive exclusion process associated with resource competition processes [16]. In concert with disruptive selection, Kirkpatrick and Ravigne [17] identified a concatenation of mechanisms needed for speciation to occur: an isolating mechanism (e.g. associative mating), a mechanism to link disruptive selection and isolation, a genetic basis for increased isolation, and an appropriate initial situation. Dieckmann and Doebeli's [15] simulations involved pitting individuals against one another that have different density-dependent growth responses to a given implicit ecological background (the implicitness was reflected in terms of the value of the carrying capacity parameter in the logistic model for the individual in question) and, as such, fell within the growing genre of using individual-based models (IBM) to address ecological and evolutionary questions [18]. Recently, we used an IBM to demonstrated that if a uniform monomorphic resource landscape is peppered at random with consumers that are identical in terms of their ability to compete, extract and convert resources for growth—but employ individually variable movement behavior strategies (in terms of when and where to move, based on evaluations of resource levels and number of competitors in different directions of the compass)—then a polymorphic movement strategy guild emerges with structure dependent on historical quirks [19] rather than on an intrinsic system's attractor. This result considerably weakens the ecological precursors that are necessary for sympatric speciation to occur. Further, if we assume some type of mate labeling cues are linked to behavioral strategies, such as visual coloration correlated with both behavioral type and mating strategies in side-blotched lizard (Uta stansburiana) [20] and Midas cichlids (Amphilophus citrinellus) [21]—then we have a magic trait system [7–11] that can be used both to promote the reproductive separation of individuals belonging to different behavioral syndromic groups. Further, mating strategies that promote fitness often arise even though we currently may have no verified explanation for their origin, as with observations that olfactory cues are used to avoid sibmating in house mice (Mus musculus domesticus) [22]. In our model assortative mating, which is widespread through the animal kingdom [23], promotes fitness, as evidenced by the greater efficiency of clonal versus sexual foraging guilds in exploiting the model resource space [19]. The specific question we address here, in the context of individuals foraging on a structureless resource landscape [19], is the following: if the genetic precursors are in place for a magic-trait system to emerge within a randomly mating population, will assortative mating, as a precursor to sympatric speciation, emerge and become firmly established? The question is answered here in the affirmative, both through our simulation studies and with a well-supported explanation of why we should expect consumer-resource systems to behave in this way. In addition, the results we obtain serve to refute the following hypothesis, articulated in terms of a nicheless resource environment; by which we mean the resource is structurally homogeneous (i.e. 
it has no spectral qualities such as size or even color variation and exhibits no density gradients, though it can consist of randomly distributed patches or packages of resource): Hypothesis: Two closely competing morphs (or strains), coevolving in a mixed population, cannot coexist if they both exploit a nicheless resource environment. Given that it only takes one counterexample to disprove an hypothesis, the example we provide rejects this hypothesis and lays to rest the issue that some kind of resource niche structure is needed to ensure that sympatric speciation can occur when the supporting mechanisms of disruptive selection and genetically driven reproductive isolation, as identified by Kirkpatrick and Ravigne [17], emerge and become fixed. In short, the results we present below demonstrate that different foraging types may not only coexist (as demonstrated in [19]), but that they may be corralled by a magic-trait system [7–11] into mating assortatively, which is a precursor to reproductive isolation and, ultimately, speciation [17]. Evolutionary simulations Simulations were carried out using the model described in the Methodology Section, essentially running an evolutionary algorithm on top of our individual-based, single-generation foraging (i.e. ecological) model over evolutionary epochs that were either 250 or 500 generations long. The ranges of final population sizes at generation 250 and 500, together with the means and standard deviations of the mating-phenotype (magic) parameter m across runs, as well as mean values of the standard deviation across runs, are given in Table 1. Statistical analyses of these results reveal that the mean values of m for the random (m = 0.49 ± 0.13, n = 124) and m-trait (m = 0.21 ± 0.09, n = 80) mating are significantly different at generation 250, as well as at generation 500 (n = 15 for both the random m = 0.53 ± 0.20 and m-trait m = 0.10 ± 0.09 mating cases), suggesting that the m-trait gene evolves to promote assortative mating. The distributions of the values of m for the 250-generation cases are plotted in Fig. 1, as a histogram binned into 0.1 unit intervals. Table 1 Range of population sizes and basic statistics of the parameter values of m for random (value of m has no affect) and m-trait mating (i.e. assortative mating when m < 0.5, dissasortative mating when m > 0.5) simulations at the end of the 250 and 500 generation epochs Frequency histograms of the mean value of m under random (blue) and m-trait (red) mating (purple represents areas of overlap) for the 500 generation cases. See Table 1 for more details Emergent genetic structure To provide a sense of the emergent genetic structure at the end of the 500 generation runs (where this structure is more evolved, and hence sharper than after 250 generations), in Table 2 we list the mean values of m across all agents in each of the 15 random and 15 m-trait mating runs along with the heterozygote deviance (i.e. an index of inbreeding and clustering; see Methods for details) associated with α, the most diverse of the three behavioral parameters in the model (See Methods for further explanation). The differences in each of these three measures when compared across random versus m-trait mating are all highly significant (p < 0.001 in all three cases, Mann–Whitney U tests). Table 2 Information on the genetic structure of the population at generation 500 for 15 evolutionary runs under random versus m-trait mating (run numbers sorted on m separately for cases A. 
and B.; genetic structure of runs 1 and 15 for both cases illustrated in Fig. 2). The heterozygote deviance value (see Method section) is with respect to the parameter α, while the % variance explained by our principal components analysis (PCA) is with respect to the first two components For the four simulations corresponding to the highest and lowest average m values for the random and m-trait mating cases in Table 2, the genetic structures that emerged at generation 500 are plotted in Fig. 2 (with results from all 30 simulations illustrated in the Additional file 1). Specifically, the plots are: the values of all four parameters (see the Methods Section for an explanation of these parameters) for each individual (Fig. 2, top panels in each of the four cases), the location of each individual in the first two principal components space (Fig. 2, middle panels in each case), and of the dendrograms associated with the principal components analysis (PCA; Fig. 2, bottom panels in each case). The m-trait simulations show defined genetic structure has evolved across all four parameters, with the lowest m value run of the m-trait mating case showing a particularly clear structure (Fig. 2) of two alleles for δ (red—the two dominant bands are the homozygote phenotypes), two alleles for ρ (green—the two dominant bands are the two homozygote phenotypes), one allele for m (black) parameters, and three alleles for α (blue—three dominant homogyzote bands plus smaller heterozygote bands). The PCA analysis shows very clear genetic groupings for the case of the smallest m value for the m-trait mating (Fig. 2) while PCA space grouping are much less distinct for the remaining three cases. Similarly the dendrograms indicate that the populations in the m-trait mating cases consist of a couple or several (depending at what vertical axis distance the groups are parsed) groups of more highly related individuals within groups and more distantly related individuals among groups, consistent with the fact that assortative mating groups are forming and increasing their reproductive isolation among groups. Genetic structure of runs 1 and 15 in Table 2 for the random and m-trait mating cases are illustrated here in three different ways (see Additional file 1: Figures S1 and S2 in the supplementary online file for figures depicting all 30 cases for panel types i. and iii.). These are i.) bottom left panels for each case: the α (blue), δ (red), ρ (green) and m (black) parameter values (see Methods Section for an explanation of these parameters) ordered along the horizontal axis according to the final biomass achieved by individuals during generation 500 of the ecological simulation; ii.) bottom right panels for each case: a plot of each individual in the space of the first two principal components of a principal components analysis (PCA) for individuals located in the four-dimensional (α, δ, ρ, m)-parameter space, with the colored vectors (color coded as in 1. above) indicating the relative weightings of these four parameters (in the m-trait cases the weights of two parameters are almost identical resulting in one vector obscuring another); iii) top panels for each case: a plot of the dendrogram generated by the PCA Interpretation of the results At the start of each generation the resource landscape consists of a uniformly distributed landscape of a monomorphic set of resource patches (top left panel, Fig. 3; top panel Fig. 
4), where the resource biomass in each patch changes over time due to resource growth and forager extraction processes [19]. Foragers, in the form of 100 agents are peppered at random over this landscape at the start of each generational cycle. These agents at the start of the evolutionary epoch (250 or 500 generations, as the case may be) have variable foraging strategies assigned to them (second from top panel, Fig. 4). Some time into each intra-generational ecological simulation, the resources on the landscape have increased, except in those cells where agents have been exploiting resources (top middle panel, Fig. 3). Agents with poor foraging strategies appear as small white rectangles, while more successful agents appear as round purple dots in this same panel (bigger dots correspond to agents that have grown the most; cf. top middle panel, Fig. 3). Various stages of the within-generation simulation (top three panels) show agent location and biomass state (blue-to-purple circles, size an indication of relative biomass) and within cell resource levels (light to dark green indicating low to high resource levels). Solid yellow rectangle contains the eight nearest-neighbors around agent 1's current location); broken yellow rectangle contains the eight nearest-neighbors and 16 next-to-nearest neighbors around agent 2's location. Individuals choose mates from across the whole landscape and not just local neighborhoods. Red and blue graphs in middle and right bottom panels show the result from two repeated runs of the number of agents each generation and the total final biomass of these agents and the end of each generation over a 500-generation evolutionary simulation A cartoon of the processes involved in priming the system for sympatric speciation. See text for further discussion As time progresses, the number of agents reaches a carrying capacity with stochastic perturbations (bottom middle and right panels in Fig. 3), but the relatively intense level of competition ensures that individuals are smaller on average than we see in the early stages of the evolutionary epoch (low values of the generation index g, as g increases from 1 to 250 or 500; compare top middle and right panels in Fig. 3), when population levels are around 100 (the first 10 generations) rather than above 300 (>150 generations—see lower middle panel of Fig. 3). In this latter period of intense competition, the homogeneous environment takes on a mosaic structure of resource patches at various stages of regeneration (upper right panel Fig. 3; third panel from the top Fig. 4). At this stage, the optimal foraging strategy type depends on the mix of existing foraging strategy types and is affected by both absolute and relative numbers. Thus, at the start of each generation, the fittest foraging strategy type depends both on the number of within generation foragers and on each of their strategy types. As in the iconic, but deterministic, hawk-dove game the evolutionarily stable strategy (ESS) is a polymorphism represented by a ratio of hawks to doves that satisfies Nash equilibrium conditions (a discussion of the concepts in this sentence can all be found in a review by Nowak and Sigmund [24]). Unlike the hawk-dove game, our system is dynamic with regard to the exploitation of resources by foragers (i.e., during each iteration of the ecological component of the model) and also stochastic. The latter implies that an ESS does not exist in the deterministic sense. 
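To make the hawk–dove comparison concrete, the following minimal sketch (Python, not part of the authors' model) computes the classic mixed evolutionarily stable strategy by replicator dynamics; the resource value V and fight cost C are illustrative placeholders, and the analytic ESS hawk frequency is V/C.

```python
import numpy as np

# Classic hawk-dove payoffs; V (resource value) and C (fight cost, C > V) are illustrative.
V, C = 2.0, 4.0
payoff = np.array([[(V - C) / 2.0, V],        # hawk vs (hawk, dove)
                   [0.0,           V / 2.0]])  # dove vs (hawk, dove)

p, dt = 0.1, 0.01                              # initial hawk frequency, step size
for _ in range(5000):
    x = np.array([p, 1.0 - p])
    fitness = payoff @ x                       # expected payoffs of hawk and dove
    p += dt * p * (fitness[0] - x @ fitness)   # continuous-time replicator update
print(f"hawk frequency after replicator dynamics: {p:.3f}; analytic ESS V/C = {V / C:.3f}")
```

The stochastic, resource-depleting system described above differs precisely in that no such fixed equilibrium mixture is guaranteed, which motivates the quasi-stationary strategy concept discussed next.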
To address the question of the existence of a long-run average ESS entails the formulation of the concept of a quasi-stationary strategy (QSS), as defined by Zhou et al. [25]. They demonstrated that a QSS can only be regarded as the long-run average ESS if the stochastic dynamical system's approach to its QSS is not too rapid. If it is too rapid, as happens in our system, the QSS that emerges is different in each simulation (of 250 or 500 generations in our case). Put in other terms, the dynamic adaptive landscape that is associated with our system's evolutionary dynamics is the basis for support of a polymorphic guild of foraging strategy types, where the evolving configuration is dependent on the early evolutionary history (i.e. on early stochastic events) of the simulation. The m-trait mating gene that we included in our model, as an extension to our earlier work [19], has the potential to take the population in two directions—assortative or disassortative mating. In our simulation of this extended model we observed that: i) when the gene is inactive (i.e. it does not actually influence mating), the value of m wanders in either direction, but across runs realizes an average value very close to m = 0.5 (Table 1); ii). when the gene is active (i.e. it influences mate choice in individuals with m phenotypes sufficiently different from 0.5: viz. it engages disassortative or assortative mating 72 % of the time when m is respectively 0.6 or 0.4: see Methods section for details), the value of m evolves downwards over time, as illustrated in the panel second from the bottom in Fig. 4. This evolution of m is associated with the emergence of an organized genetic structure (bottom panel in Fig. 4), defined by homozygous individuals grouped into "strain" types. These strains are different behavioral foraging types, with regard to tradeoffs involving avoidance of competition, propensity to move from current to richer resources patches, and one-step-tactical versus two-steps-strategic planning (as described more precisely in the Methods section). Guilds of strains (or foraging types) are better at exploiting adaptive landscapes with multiple peaks (which may dynamically adjust with changes in strain frequencies and environmental factors: e.g., see [26]) than non-guild populations, much as clonal populations better exploit the environment than a randomly reproducing sexual population, as demonstrated in our previous study [19]. Since the integrity of these guilds is eroded by the continual production of heterozygote individuals, evolution drives mate selection to become associative when an m-trait system (e.g. recognition based on physical matching of mates to self) that permits mate selection is in place. Generality of the model The model we use is rather generically formulated. Resource consumption is modeled by a Holling type II response function that incorporates interference competition (as originally formulated by Beddington [27] in the context of both predators and parasite search efficiency, by DeAngelis and colleagues [28] in the context of trophic interactions at several levels, and by Getz in the context of biomass transformation webs [29]), but also includes an additional 'abruptness' parameter that interpolates between more contest-like and more scramble-like competition [30]. We set the abruptness parameter to be intermediate between these extremes, though interference competition in general for food resources is known to exert a strong selective force on all animals [31]. 
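As a sketch of the consumption term just described, the function below implements a standard Beddington–DeAngelis per-capita intake rate. The additional abruptness parameter of [30] is omitted because its exact functional form is not quoted in this excerpt, and the parameter values are placeholders rather than those used in the model.

```python
def bd_intake(R, n_competitors, a=1.0, h=0.1, c=0.05):
    """Beddington-DeAngelis per-capita intake: saturating in resource density R,
    depressed by interference from n_competitors sharing the patch.
    a = search/encounter rate, h = handling time, c = interference strength."""
    return a * R / (1.0 + a * h * R + c * n_competitors)

# Interference lowers intake even when resources are plentiful:
for n in (0, 5, 20):
    print(f"{n:2d} competitors -> per-capita intake {bd_intake(R=1.0, n_competitors=n):.3f}")
```

In the authors' formulation the abruptness parameter interpolates between the contest-like and scramble-like extremes; the fixed form above simply illustrates how interference depresses intake.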
The movement rules in the model are as applicable to unicellular organisms able to detect resources and conspecifics using chemical gradient signals, as they are invertebrate or vertebrate herbivores, or even omnivores or carnivores if resources patches are suitably scaled to appropriate movement ranges. The primary requirements of the model are that individuals should: i) locally reduce resource density before moving on to areas of greater resource density; ii) have the means to perceive environmental conditions beyond their immediate surroundings (if necessary through the implementation of scouting activities before finally deciding where next to feed); and iii) be able to choose mates based on characteristics correlated with foraging behavior. The model, however, requires no assumptions about the structure of the resource other than the extraordinarily weak assumption that it is locally depletable but renewable. Resources in the model are monomorphic, grow everywhere at the same rate, and are initially homogenously distributed across the landscape. Spatial gradients or distribution across any kind of spectrum (such as seeds occurring in different sizes) are not needed, but any additional resource structure only enhances opportunities for individual specialization to occur, provided this structure is not overly elaborate [32, 33]. Applicability of the model Biologists are increasingly identifying species complexes that are best understood in terms of sympatric speciation taking place due to disruptive selection—that is, heterozygotes (hybrids) are less fit than homozygotes (true-species)—and magic trait type mechanisms promoting assortative mating. Merrill et al. [34], for example, have shown in Heliconius butterflies that hybrid color-pattern phenotypes are attacked more frequently than parental forms, thereby demonstrating disruptive ecological selection on a trait that also acts as a mating cue. In the side-blotched lizard, Uta stansburiana, for example, Corl et al. [20] found geographically widespread throat color polymorphisms where, in some areas, these polymorphisms are reduced in numbers of different morphs. Their phylogenetic reconstructions show that ancestral polymorphisms, though often lost, give rise to morphologically distinct subspecies/species. They further showed that this polymorphism loss was associated with accelerated evolution of sexual dimorphisms, thereby suggesting that polymorphism loss is implicated in species formation. Podos et al. [11], for example, studied the contribution of a magic trait scenario in the divergence of song elements among the Galapagos Santa Cruz Island's medium ground finches (Geospiza fortis). They used the results they obtained to argue that song divergence and discrimination, which are fundamental elements of assortative mating in these finches, is likely fostered in early stages of subspecies divergence under a magic trait mating scenario. Red crossbills provide another ornithological example [35, 36], as likely do some species of ducks [37]; and, in insects, fruitfly provide a possible example [38]. Foraging behavior and correlated cues To some extent we should expect foraging behavior to be an expression of a plastic response to environmental cues, though the threshold values of cues for producing behavior could be under genetic control. This is known, for example, to be the case in honeybees [39] (e.g. switching from within hive tasks to foraging). 
How likely is it, however, that variation in foraging behavior is correlated with detectable visual, auditory or olfactory cues that may be used in an m-trait mate selection system? Searle et al. [40] provide a detailed discussion regarding the ubiquity of variation of foraging behavior due to environmental and genetic causes in large mammalian herbivores, with a summary of the mechanisms driving this variation (cf. Table 1 in [40]). Variation can be morphological with associated visual cues: viz. stockier individuals may prevail under contest competition, lither individuals may be more efficient movers, taller individuals more able to assess the state of surrounding resources and so on. Additionally, physiological variation may be correlated with odor cues [41], while morphological variation with auditory cues [42]. In species that have evolved mechanisms to discriminate among individuals—which both invertebrates and vertebrates are able to do using purely genetic cues (i.e. no environmental influences are needed) in contexts as fine as discriminating among individuals based on degree of relatedness to self [43]—these mechanisms are in place to function as an m-system, provided mate-selection systems are also in place (which they often are [44, 45]). Additionally, it is conceivable that sympatric species, which vary with respect to foraging behavior, may have separated under a magic trait system that subsequently atrophied (e.g. the cue system disappeared) once other prezygote barriers emerged to entrench speciation. Assortative mating systems In our model, mate selection is not influenced in any way by constraints on movement, because we assume that individuals choose mates from among the population as a whole. In many species, individuals may avoid choosing siblings or even close cousins as mates, with recognition systems evolving to facilitate outbreeding [43]. Beyond this constraint, however, individuals may still assortatively mate, or at least it may be advantageous for them to assortatively mate. For example, it has been shown that individuals in a species of mouse (Mus spicilegus) reproduce more rapidly when mates are of similar personality types [46]. Similarly, it has been shown in the great tit (Parus major) that parental pairs with similar environmental exploratory rate scores interacted more at the nest than pairs with dissimilar scores [47], while it has been shown in Stellar's jays that parental compatibility with regard to behavioral type (referred to as behavioral syndromes) increase fitness [48]. While assortative mating purely in the context of genetically expressed phenotype frequencies leads to increased homozygosity, it likely also leads to inbreeding, particularly in small populations, unless a counter prevailing system exists to avoid mating with close relatives. Though we did not include an assessment of inbreeding levels in our m-trait mating system, both our random and magic-mating simulations are subject to the same level of 'small population' inbreeding because they involve populations of similar sizes (see "Agent Ranges" column in Table 1). Thus any differences in levels of homozygosity that occur between our two simulation treatments (random vs. magic-mating) cannot be explained in terms of 'small population' inbreeding effects, but are due to assortative mating alone. Existence of magic trait systems How likely is it that magic trait systems exist? 
Thibert-Plante and Gavrilets [10] used a stochastic, individual-based model to study six different mechanisms of non-random mating, including magic trait mating, evolving in the context of dispersal, niche invasion, and adaptive radiation. As a result of their study, Thibert-Plante and Gavrilets [10] conjecture that mate choice is likely based on a few 'major traits' that have direct impact on fitness, which is the case in our model. Further, Thibert-Plante and Gavrilets [10] suggest that magic traits may emerge by co-opting locally adaptive traits for mating decisions, as is also the case in our model. Additionally, Servedio et al. [9] review a variety of mechanisms by which magic traits can be produced and conclude that magic traits occur more frequently than previously thought. Our study builds on that of Dieckmann and Doebeli [15] who used an adaptive dynamics approach, in the framework of logistic growth models for a population of phenotypes characterized by a variable x, to show that the hypothesis articulated in the opening section of this paper does not hold, thereby rendering speciation a much more likely outcome of competition for resources than previously thought. Implicit in their model is a resource spectrum that implies individuals of phenotype x have an environmental carrying capacity K(x). Additionally we note that this function is an input rather than an emergent property of their model. This model was recently generalized by Haller et al. [33] and used to show that on complex landscapes—which may include environmental gradients, metapopulation structure, and patchiness at different spatial levels of resolution—that intermediate levels of heterogeneity are most likely to lead to the emergence of evolutionary branching, a pattern showed earlier to hold for species as well [32]. Haller et al. [33] also showed that the effects of different types of heterogeneity appear to some extent to be additive in causing evolutionary branching. In another recent study, Debarre [49] asked the question "Can speciation occur in a single population when different types of resources are available, in the absence of any geographical isolation, or any spatial or temporal variation in selection?" and answered the question by stating that "… sympatric ecological speciation is favored when (i) selection is disruptive (i.e. individuals with an intermediate trait are at a local fitness minimum), (ii) resources are differentiated enough and (iii) mating is assortative. In our model, unlike Debarre's, no resource structure is specified. Further, an implicit resource spectrum is not implied, as in Doebeli and Dieckmann [15] in terms of an input function K(x), or explicitly specified, as in a recent analysis by Thibert-Plante and Hendry [50]. In fact, we model resource extraction and the ensuing growth effects identically for all individuals: all individuals have precisely the same growth rates and competitive interaction parameter values when exploiting resources at a particular level within any resource cell (patch) on the landscape. In our model, differences in the strategies of individuals to gather resources over time arises from the particular behavioral strategy employed by individuals to efficiently search out resources over an initially homogeneous resource landscape, while individuals may also move to reduce competitive interactions to differing extents. 
This initially homogeneous landscape, however, takes on a stochastically generated mosaic structure as a result of the foraging patterns of competitors and of resource regrowth (or replacement) within patches. This induced spatial heterogeneity, without additional gradient structures, is both as simple as can be expected in nature with regard to overall resource structure (the resource is monomorphic with no specified spatial gradients) and more realistic than assuming a constant homogeneous background. The latter follows since all organisms locally deplete resources unless they are located in a constant resource flux (e.g. a spatially homogeneous photon flux and individuals are located so they do not shade the flux from competitors, which is a severely restrictive requirement). Dieckmann and Doebeli [15] also consider the evolution of assortative mating using the type of m-gene approach that we took here, and they demonstrated in their model that assortative mating often arises and takes the population in the direction of reproductive isolation among ecologically diverging subpopulations. In our case, though, the divergence emerges from a behavioral polymorphism rather than requiring a structural ecological input through a specified carrying capacity function K(x). Thus, to the extent that Dieckmann and Doebeli [15] conclude that their "… theory conforms well with mounting empirical evidence for the sympatric origin of many species," we can conclude the same with less restrictive conditions in that we do not specify any growth rate or other ecologically-related variation with regard to individual phenotypes. Agent-based consumer-resource model The model was developed on the Nova Platform [51] following methods more fully described elsewhere [19]. The model is a discrete time, agent-based, stochastic, consumer-resource formulation that simulates the movement behavior and growth of a population of consumers on a cellular array. In each of 100 time steps, representing the passage of a single generation, and a single pass through of the ecological component of the model, individuals can either stay within their current cells (i.e., resource patches) and consume amounts of resources determined by a Beddington-DeAngelis type response function (i.e., compensatory with regard to resource density, decreasing with regard to the level of intraspecific competition and has the same parameter values for all individuals—for more details see [19, 29]), and hence gain in biomass, or individuals can move at some cost to their current biomass to one of 8 neighboring cells (lower left panel, Fig. 3). The resources in cells grow logistically, including a reservoir component (root mass below ground), but lose biomass due to extraction by consumers. Foraging strategy within generation simulation The only variation among individuals is their current location, their accumulated biomass state, and the value of three foraging-strategy parameters: α, δ and ρ. The specifics of how these parameters affect foraging behavior are described below and a description of the mathematical equations used are provided elsewhere [19]. 
For the sake of clarity, however, we summarize this foraging behavior in terms of the parameter values that represent the (α,δ,ρ)-strategy phenotypes of individuals: i) individuals compare the resources and competitors in their current cells, as well as eight neighboring cells, where each cell represents a movement direction, and competitors are counted in terms of the number of individuals that could potentially be inside cells in the next time step; ii) individuals also compute average resource levels and competitors in each of the eight neighboring directions, across those cells that can be reached in two moves (see dotted yellow rectangle in left bottom panel of Fig. 3); iii) individuals weigh the relative value of a unit of resource to the cost of competing with a conspecific, using the parameter value δ ≥0 (i.e. the competition tradeoff parameter) to obtain a weighted value for each cell; iv) individuals only move to the best neighboring cell if their current cells weighted value is less than ρ ≥ 0 times that of the best neighboring cell (i.e. the movement threshold parameter); v.) each neighboring cell also includes an average value of that cell's nearest neighbors discounted by a factor α ≥ 0 (i.e. the next neighbor-discount parameter). Note that α ≥ 1 implies an inflation of the relative importance of the values of the cells two steps removed from an individual's current location compared with its immediate neighbors. A simulation run of the model begins with an initial number of individuals, which we set to 100 in the first generation (lower middle panel, Fig. 3). For the first few time steps each individual exploits its local patch in an otherwise homogeneous landscape (top left panel, Fig. 3) until individuals with high movement threshold phenotypes values ρ move (the closer ρ ≥ 0 is to 1 the more readily individuals move). The individuals then accumulate biomass by grazing pathways through a regenerating resource landscape (top middle panel, Fig. 3), with the individual's biomass state at the end of the 100 times steps (i.e. 1 generation) representing the individual's relative reproductive fitness. Reproduction model At the end of each 100 step, intergenerational cycle or single pass through of the ecological component of the model, individuals are ranked according to their biomass and then allowed to reproduce sexually, assuming a diploid genetic structure and hermaphroditic mating (i.e. parents pair up, using rules described below, without regard to sexual designation). In a previous study [19], we employed hard selection by allowing the top half of individuals to pair up at random and produce four young each, thereby restoring a preselected number of individuals to compete each generation. Here we employed soft selection as follows. After pairing individuals, we took the average biomass B pair of each pair, and produced a pair fecundity index P pair, based on the maximum biomass B max over all individuals: viz., P pair = B pair /B max. We then used the binomial distribution with maximum possible number of progeny n max to stochastically calculate the actual number of progeny n pair ~ BINOMIAL[n max,P pair] (for "~" read "is drawn from"). This produced at the end of generation g (g = 1,…,250 or 500, as the case may be) a total of N g+1 progeny to start off the next generation, each with the same initial biomass condition. 
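Returning to foraging rules i)–v) above, the sketch below gives one plausible reading of the stay-or-move decision for a single forager. The linear "weighted value" (resource minus δ times competitors) and the way the two-step look-ahead is averaged are assumptions, since the exact arithmetic is given in [19] rather than here.

```python
import numpy as np

def cell_value(R, N, delta):
    # Assumed form of the "weighted value": resource benefit minus delta-weighted competition cost.
    return R - delta * N

def neighbour_mean_value(grid_R, grid_N, cell, delta):
    """Mean weighted value of the 8 cells surrounding `cell` (toroidal wrap)."""
    rows, cols = grid_R.shape
    r0, c0 = cell
    vals = [cell_value(grid_R[(r0 + dr) % rows, (c0 + dc) % cols],
                       grid_N[(r0 + dr) % rows, (c0 + dc) % cols], delta)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    return float(np.mean(vals))

def choose_cell(grid_R, grid_N, pos, alpha, delta, rho):
    """One forager's stay-or-move decision following rules i)-v) of the text."""
    rows, cols = grid_R.shape
    r0, c0 = pos
    current = cell_value(grid_R[r0, c0], grid_N[r0, c0], delta)
    best_val, best_cell = -np.inf, pos
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            cand = ((r0 + dr) % rows, (c0 + dc) % cols)
            # Rule v): a candidate's value includes the alpha-discounted average of the
            # cells lying one further step away (the two-step "strategic" look-ahead).
            val = (cell_value(grid_R[cand], grid_N[cand], delta)
                   + alpha * neighbour_mean_value(grid_R, grid_N, cand, delta))
            if val > best_val:
                best_val, best_cell = val, cand
    # Rule iv): move only if the current cell is worth less than rho times the best neighbour.
    return best_cell if current < rho * best_val else pos
```

Larger ρ (closer to 1) makes the move condition easier to satisfy, so individuals move more readily, matching the description of the movement threshold above.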
Note that each iteration of g represents one run through of the ecological component of the model, followed by one run through of the reproduction component of the model. As we see in Fig. 3 (bottom middle panel), for the ecological parameter values used in our model (i.e., those used in [19]), the system evolves from an initial 100 progeny in the first generation to stabilize at an across-generation average of around 300-plus individuals from generation 100 onwards. Note that we used the same parameter values for the ecological component of the model as we did in [19], because the purpose of this study was not to explore the evolutionary aspects of the ecological behavior of the system in more depth, but to extend our previous work to study how easily assortative mating may arise in a system that has the potential for non-random mating (either assortative or disassortative) to evolve. Genetic and magic-trait model Beyond the three foraging-strategy phenotype parameters (α,δ,ρ) included in our previous study [19], we included a fourth magic-trait parameter m. Because our individuals are diploid, under the assumption that all traits are governed by co-dominant phenotypic determination, individual k (k = 1,…, N g in generation g) has the following genotype and phenotype: Genotype of individual k: (α κ1, α κ2; δ κ1, δ κ2; ρ κ1, ρ κ2; m κ1, m κ2) Phenotype of individual k: ([α κ1 + α κ2]/2; [δ κ1 + δ κ2]/2; [ρ κ1 + ρ κ2]/2; [m κ1 + m κ2]/2) In all our simulations, we created from 0 to n max (the number given by the binomial drawing described above) progeny genotypes under Mendelian random segregation: i.e. each parent contributed one of its two alleles at random for each of the parameters in the progeny genotype. We then allowed for mutations, using a procedure described in [19] (i.e. each allele in each progeny could be perturbed by a small amount that declined from around 10 % to 0.1 % over time using a simulated annealing approach within our genetic algorithm). Mate choice process At the start of the evolutionary simulation, the allelic values for m of all individuals were all assigned the value 0.5, so non-random mate selection played no role in the mating process. Under random mating, the phenotypic value of m, though it would drift, and even self-organize because of linkages that arise to other genes through inbreeding in small population sizes, played no role in mate choice. Under m-trait mating, however, the value of m played a role in the model as follows. The measures D k = 2|m k − 0.5| were calculated, thereby ensuring 0 ≤ D k ≤ 1 for all individuals k = 1,…,N g. Starting with the largest individual (ranked by biomass at the end of each generation), individual k selected a mate at random with probability 1 − (D k )^γ, 0 < γ ≤ 1 (thus as γ approaches 0 the probability of random mating approaches 0 for all D k > 0). In our study we used the value γ = 0.2, which results in non-random mating 72 % of the time when m is 0.6 or 0.4.
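The soft-selection and mate-choice steps just described can be sketched as follows. The maximum brood size n_max = 4 is a placeholder (its value is not quoted here; the earlier hard-selection scheme produced four young per pair), whereas γ = 0.2 and the measure D k follow the text.

```python
import numpy as np
rng = np.random.default_rng(0)

N_MAX, GAMMA = 4, 0.2          # N_MAX is a placeholder; GAMMA = 0.2 as in the text

def n_progeny(b_pair_mean, b_max):
    """Soft selection: P_pair = B_pair / B_max, progeny ~ Binomial(n_max, P_pair)."""
    return rng.binomial(N_MAX, b_pair_mean / b_max)

def progeny_genotype(parent1, parent2):
    """Mendelian segregation: one randomly chosen allele per locus from each parent.
    Parents are dicts mapping locus name ('alpha', 'delta', 'rho', 'm') to an allele pair."""
    return {locus: (parent1[locus][rng.integers(2)], parent2[locus][rng.integers(2)])
            for locus in parent1}

def mates_nonrandomly(m_phenotype):
    """m-trait mating: engage assortative/disassortative choice with probability D_k**gamma."""
    d_k = 2.0 * abs(m_phenotype - 0.5)
    return rng.random() < d_k ** GAMMA

# Consistency check with the text: m = 0.6 or 0.4 gives D_k = 0.2 and
# D_k**0.2 ≈ 0.72, i.e. non-random mating about 72 % of the time.
print((2.0 * abs(0.6 - 0.5)) ** GAMMA)
```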
We model the assumption that individual k could have a sense of its degree of similarity to individual j by defining the measures

$$ d_{kj}=\sqrt{(\alpha_k-\alpha_j)^2+(\delta_k-\delta_j)^2+(\rho_k-\rho_j)^2},\qquad j\ne k,\quad j=1,\dots,N_g $$

and then applying the following deterministic rules when individuals have been selected to mate non-randomly: if m k < 0.5 then individual k chooses individual j, where \( j=\arg\min_{i}\{d_{ki}\} \); if m k > 0.5 then individual k chooses individual j, where \( j=\arg\max_{i}\{d_{ki}\} \); if m k = 0.5 then individual k chooses individual j at random. Under this algorithm, individuals are increasingly likely to mate non-randomly as their phenotypic value of m drifts away from 0.5, mating disassortatively when m k > 0.5, but mating according to a magic trait assumption (i.e. assortatively) when m k < 0.5. Analysis of evolutionary data Initially, we undertook a series of random mating and m-trait mating runs of the model over an evolutionary epoch of 250 generations. For the sake of efficiency, we ran these on several different computers simultaneously, experiencing some failures due to web issues and computer crashes. At the point where we had accumulated 124 random mating and 80 m-trait mating runs, we decided to compare the results using a Mann–Whitney U test of the average m-trait values across the final population of progeny produced at the end of the 250-generation epoch. We selected the Mann–Whitney U test, rather than the more general Kolmogorov–Smirnov test for differences between two distributions, because we were interested in evaluating shift rather than general shape differences in the two distributions of m-trait values. To sharpen the outcomes of the evolutionary process, we conducted an additional 15 runs each of random mating and m-trait mate selection over an evolutionary epoch of 500 generations. At the end of each of the runs, we generated a csv (comma separated values) file that organized the output data with rows being individuals and with the following information by columns (phenotype and then genotype information follows the biomass column): Individual#, biomass, α, δ, ρ, m, α 1, α 2, δ 1, δ 2, ρ 1, ρ 2, m 1, m 2. We computed the means and standard deviation for each of the columns, as well as the means of these column summary statistics across groups of runs. We also produced graphs of the phenotypes of individuals (vertical axis) organized by biomass of individuals (horizontal axis). We applied cluster analyses (Ward's method) to the α-phenotype data in each run and plotted the resulting phylogenetic trees and the data in the plane spanned by the first two principal components (PC analysis or PCA) of these data. We calculated the heterozygote deviance of the population from Hardy–Weinberg equilibrium [52] using the following formula involving the observed proportion, H obs, of heterozygotes and the expected proportion, H exp, at Hardy–Weinberg equilibrium

$$ \text{Heterozygote deviance} = \left(H_{obs} - H_{exp}\right)/H_{exp} $$

Our reason for focusing on the α parameter rather than δ or ρ (note: an individual may be homozygous in one and heterozygous in another of these parameters) is that the greatest allelic variation in our model is observed at the α gene (i.e. the gene that weighs the relative importance of being tactical versus strategic—i.e.
looking at the state of the 'immediate' versus 'next-to-immediate' neighborhoods). This study was not based on empirical data, but rather simulations obtained by running a model built using the Nova Software Platform. This Nova can be downloaded from the Nova Software Website https://www.novamodeler.com/. The software platform is free, but users need to register and obtain a license to run the model under Windows, Mac OS X, and Linux operating systems. The model itself can downloaded from https://nature.berkeley.edu/getzlab/nova.html by clicking on the link: "Sympatric Speciation Foraging System". PCA: principle components analysis supporting information (file online) Bolnick DI, Fitzpatrick BM. Sympatric speciation: models and empirical evidence. Annu Rev Ecol Evol Syst. 2007;38:459–87. doi:10.1146/Annurev.Ecolsys.38.091206.095804. Forbes AA, Powell THQ, Stelinski LL, Smith JJ, Feder JL. Sequential sympatric speciation across trophic levels. Science. 2009;323(5915):776–9. doi:10.1126/Science.1166981. Article CAS PubMed Google Scholar Bast F, Kubota S, Okuda K. Phylogeographic assessment of panmictic Monostroma species from Kuroshio Coast, Japan, reveals sympatric speciation. J Appl Phycol. 2014:1-11. doi:10.1007/s10811-014-0452-x. Barluenga M, Stolting KN, Salzburger W, Muschick M, Meyer A. Sympatric speciation in Nicaraguan crater lake cichlid fish. Nature. 2006;439(7077):719–23. doi:10.1038/Nature04325. Martin CH, Cutler JS, Friel JP, Touokong CD, Coop G, Wainwright PC. Complex histories of repeated gene flow in Cameroon crater lake cichlids cast doubt on one of the clearest examples of sympatric speciation. Evolution. 2015;69(6):1406–22. doi:10.1111/Evo.12674. Grant BR, Grant PR. Darwin finches–population variation and sympatric speciation. Proc Natl Acad Sci U S A. 1979;76(5):2359–63. doi:10.1073/Pnas.76.5.2359. Gavrilets S. Fitness landscapes and the origin of species. Princeton, N.J.; Oxford: Princeton University Press; 2004. Servedio MR, Kopp M. Sexual selection and magic traits in speciation with gene flow. Curr Zool. 2012;58(3):510–6. Servedio MR, Van Doorn GS, Kopp M, Frame AM, Nosil P. Magic traits in speciation: 'magic' but not rare? Trends Ecol Evol. 2011;26(8):389–97. doi:10.1016/J.Tree.2011.04.005. Thibert-Plante X, Gavrilets S. Evolution of mate choice and the so-called magic traits in ecological speciation. Ecol Lett. 2013;16(8):1004–13. doi:10.1111/Ele.12131. Podos J, Dybboe R, Ole Jensen M. Ecological speciation in Darwin's finches: parsing the effects of magic traits. Curr Zool. 2013;59(1):8–19. Hepper PG. Kin recognition. Cambridge: CUP; 1991. Getz WM, Kaitala V. Ecogenetic models, competition, and heteropatry. Theor Popul Biol. 1989;36(1):34–58. Norrström N, Getz WM, Holmgren NMA. Selection against accumulating mutations in niche-preference genes can drive speciation. Plos One. 2011;6(12), e29487. doi:10.1371/journal.pone.0029487. Dieckmann U, Doebeli M. On the origin of species by sympatric speciation. Nature. 1999;400(6742):354–7. doi:10.1038/22521. Levin SA. Community equilibria and stability, and an extension of competitive exclusion principle. Am Nat. 1970;104(939):413. doi:10.1086/282676. Kirkpatrick M, Ravigne V. Speciation by natural and sexual selection: models and experiments. Am Nat. 2002;159:S22–35. doi:10.1086/338370. DeAngelis DL, Mooij WM. Individual-based modeling of ecological and evolutionary processes. Annu Rev Ecol Evol Syst. 2005;36:147–68. doi:10.1146/annurev.ecolsys.36.102003.152644. Getz WM, Salter RM, Lyons AJ, Sippl-Swezey N. 
Panmictic and clonal evolution on a single patchy resource produces polymorphic foraging guilds. Plos One. 2015;10(10), e0133732. doi:10.1371/journal.pone.0133732. Corl A, Davis AR, Kuchta SR, Sinervo B. Selective loss of polymorphic mating types is associated with rapid phenotypic evolution during morphic speciation. Proc Natl Acad Sci U S A. 2010;107(9):4254–9. doi:10.1073/Pnas.0909480107. Kusche H, Elmer KR, Meyer A. Sympatric ecological divergence associated with a color polymorphism. BMC Biol. 2015;13. doi:10.1186/S12915-015-0192-7 Sherborne AL, Thom MD, Paterson S, Jury F, Ollier WER, Stockley P, et al. The genetic basis of inbreeding avoidance in house mice. Curr Biol. 2007;17(23):2061–6. doi:10.1016/J.Cub.2007.10.041. Jiang YX, Bolnick DI, Kirkpatrick M. Assortative mating in animals. Am Nat. 2013;181(6):E125–38. doi:10.1086/670160. Nowak MA, Sigmund K. Evolutionary dynamics of biological games. Science. 2004;303(5659):793–9. doi:10.1126/Science.1093411. Zhou D, Wu B, Ge H. Evolutionary stability and quasi-stationary strategy in stochastic evolutionary game dynamics. J Theor Biol. 2010;264(3):874–81. doi:10.1016/j.jtbi.2010.03.018. Sasaki A, Dieckmann U. Oligomorphic dynamics for analyzing the quantitative genetics of adaptive speciation. J Math Biol. 2011;63(4):601–35. doi:10.1007/S00285-010-0380-6. Beddington JR. Mutual interference between parasites or predators and its effect on searching efficiency. J Anim Ecol. 1975;44:331–40. DeAngelis DL, Goldstein RA, Oneill RV. Model for trophic interaction. Ecology. 1975;56:881–92. Getz WM. Biomass transformation webs provide a unified approach to consumer-resource modelling. Ecol Lett. 2011;14(2):113–24. doi:10.1111/j.1461-0248.2010.01566.x. Getz W. A hypothesis regarding the abruptness of density dependence and the growth rate of populations. Ecology. 1996;77:2014–26. Ward AJW, Webster MM, Hart PJB. Intraspecific food competition in fishes. Fish Fish. 2006;7(4):231–61. doi:10.1111/J.1467-2979.2006.00224.X. Chow SS, Wilke CO, Ofria C, Lenski RE, Adami C. Adaptive radiation from resource competition in digital organisms. Science. 2004;305(5680):84–6. doi:10.1126/Science.1096307. Haller BC, Mazzucco R, Dieckmann U. Evolutionary branching in complex landscapes. Am Nat. 2013;182(4):E127–41. doi:10.1086/671907. Merrill RM, Wallbank RWR, Bull V, Salazar PCA, Mallet J, Stevens M, et al. Disruptive ecological selection on a mating cue. P Roy Soc B-Biol Sci. 2012;279(1749):4907–13. doi:10.1098/Rspb.2012.1968. Benkman CW. Divergent selection drives the adaptive radiation of crossbills. Evolution. 2003;57(5):1176–81. Hynes DP, Miller EH. Vocal distinctiveness of the Red Crossbill (Loxia curvirostra) on the island of Newfoundland, Canada. Auk. 2014;131(3):421–33. doi:10.1642/auk-13-224.1. Kraus RHS, Kerstens HHD, van Hooft P, Megens H-J, Elmberg J, Tsvey A et al. Widespread horizontal genomic exchange does not erode species barriers among sympatric ducks. BMC Evol Biol. 2012;12. doi:10.1186/1471-2148-12-45. Korol A, Rashkovetsky E, Iliadi K, Nevo E. Drosophila flies in "Evolution Canyon" as a model for incipient sympatric speciation. Proc Natl Acad Sci U S A. 2006;103(48):18184–9. Page RE, Robinson GE, Britton DS, Fondrk MK. Genotypic Variability for rates of behavioral-development in worker honeybees (Apis-Mellifera L). Behav Ecol. 1992;3(2):173–80. doi:10.1093/Beheco/3.2.173. Searle KR, Hunt LP, Gordon IJ. Individualistic herds: individual variation in herbivore foraging behavior and application to rangeland management. Appl Anim Behav Sci. 
2010;122(1):1–12. doi:10.1016/J.Applanim.2009.10.005. Torr SJ, Mangwiro TNC, Hall DR. The effects of host physiology on the attraction of tsetse (Diptera: Glossinidae) and Stomoxys (Diptera: Muscidae) to cattle. B Entomol Res. 2006;96(1):71–84. doi:10.1079/Ber2005404. Fischer J, Hammerschmidt K, Cheney DL, Seyfarth RM. Acoustic features of male baboon loud calls: Influences of context, age, and individuality. J Acoust Soc Am. 2002;111(3):1465–74. doi:10.1121/1.1433807. Hauber ME, Sherman PW. Self-referent phenotype matching: theoretical considerations and empirical evidence. Trends Neurosci. 2001;24(10):609–16. doi:10.1016/S0166-2236(00)01916-0. Jennions MD, Petrie M. Variation in mate choice and mating preferences: a review of causes and consequences. Biol Rev. 1997;72(2):283–327. doi:10.1017/S0006323196005014. Kokko H, Brooks R, Jennions MD, Morley J. The evolution of mate choice and mating biases. P Roy Soc B-Biol Sci. 2003;270(1515):653–64. doi:10.1098/Rspb.2002.2235. Rangassamy M, Dalmas M, Feron C, Gouat P, Rodel HG. Similarity of personalities speeds up reproduction in pairs of a monogamous rodent. Anim Behav. 2015;103:7–15. doi:10.1016/J.Anbehav.2015.02.007. David M, Pinxten R, Martens T, Eens M. Exploration behavior and parental effort in wild great tits: partners matter. Behav Ecol Sociobiol. 2015;69(7):1085–95. doi:10.1007/S00265-015-1921-1. Gabriel PO, Black JM. Behavioural syndromes, partner compatibility and reproductive performance in Steller's Jays. Ethology. 2012;118(1):76–86. doi:10.1111/J.1439-0310.2011.01990.X. Debarre F. Refining the conditions for sympatric ecological speciation. J Evolution Biol. 2012;25(12):2651–60. doi:10.1111/j.1420-9101.2012.02621.x. Thibert-Plante X, Hendry AP. Factors influencing progress toward sympatric speciation. J Evolution Biol. 2011;24(10):2186–96. doi:10.1111/j.1420-9101.2011.02348.x. Salter RM. Nova: a modern platform for system dynamics, spatial, and agent-based modeling. Procedia Computer Science. 2013;18:1784–93. Zhou JJ, Lange K, Papp JC, Sinsheimer JS. A heterozygote-homozygote test of Hardy-Weinberg equilibrium. Eur J Hum Genet. 2009;17(11):1495–500. doi:10.1038/ejhg.2009.57. We thank Pauline Kamath and Neil Tsutsui for comments that have helped improve this paper. This work was supported by NSF/CPATH-2 CNS0939153 to RS. Department ESPM, University of California, Berkeley, CA, 94720-3114, USA Wayne M. Getz & Dana Paige Seidel School of Mathematical Sciences, University of KwaZulu-Natal, PB X54001, Durban, 4000, South Africa Wayne M. Getz Computer Science Department, Oberlin College, Oberlin, OH, 44074, USA Resource Ecology Group, Wageningen University, Droevendaalsesteeg 3a, 6708 PB, Wageningen, The Netherlands Pim van Hooft Dana Paige Seidel Correspondence to Wayne M. Getz. WMG conceived the study, designed the model, carried out simulations and help generate figures, RS coded the model, DPS carried out analysis on the simulation data and helped generate figures, PvH helped place the results in context and develop the discussion. All authors helped write and edit the manuscript. All authors have read and approved the final version of the manuscript. The supplementary online file contains a link to a related publication, information on running our simulation model using the Nova Software Platform, and Additional file 1 : Figures S1 and S2 referred to in the caption to Fig. 2 . (PDF 3328 kb) Getz, W.M., Salter, R., Seidel, D.P. et al. Sympatric speciation in structureless environments. BMC Evol Biol 16, 50 (2016). 
https://doi.org/10.1186/s12862-016-0617-0 Magic traits Foraging guilds Agent-based models
Scaling-laws of Radio Spike Bursts and Their Constraints on New Solar Radio Telescopes by Baolin Tan et al. 2019-06-18 Solar Radio Science Highlights Radio observation is one of the most important methods in solar physics and space science. Sometimes, it is almost the sole approach to observing physical processes such as the acceleration, emission, and propagation of non-thermal energetic particles, etc. Long-term observation and study have revealed that a strong solar radio burst is always composed of many small bursts with different time-scales. Among them, a radio spike burst is the smallest one, with the shortest lifetime, narrowest bandwidth, and smallest source region. Solar radio spikes are considered to be related to a single magnetic energy release process, and can be regarded as an elementary burst in solar flares. It is a basic requirement for new solar radio telescopes to observe and discriminate these solar radio spike bursts, even though their temporal and spatial scales actually vary with the observing frequency. Here, we presents the scaling laws of the lifetime and bandwidth of solar radio spike bursts with respect to the observing frequency, which provide some constraints for the next generation of solar radio telescopes, and help us to select the rational telescope observing parameters. As well as this, we propose a spectrum-image combination mode as the best observation mode for new solar radio telescopes with high temporal, spectral, and spatial resolutions, which may have an important significance for revealing the physical essence of the various non-thermal processes in violent solar eruptions. Solar radio observation is the most important approach for obtaining information about solar energetic particles, violent energy release, and mass ejections in solar eruptions. Solar radio telescopes include solar radiometers, radiospectrometers, and radioheliographs with various frequency bandwidth, cadence, spectral and spatial resolutions. Based on new scientific assumptions and technical development, new plans of solar radio telescopes are continuously proposed. In the development of new generation solar radio telescopes, it is very important to select a set of suitable observing parameters, such as frequency range, bandwidth, cadence, spectral resolution, and spatial resolution, etc. So, how does one select a reasonable group of observing parameters of a proposed solar radio telescope? Previous statistical studies indicate that a solar eruption lasting several tens of minutes always contains several big pulses with timescales of minutes, and each big pulse is frequently composed of a group of pulses with timescales of seconds, and each pulse is still composed of many sub-pulses with timescales of sub-seconds. Actually, a violent solar eruption always contains a great number of sub-second radio bursts, which are called fast fine spectral structures (FFS). FFS includes spike bursts, dot bursts, and narrow-band type III bursts. In the microwave range, they are called small-scale microwave bursts (SMB) (Tan 2013). They have a very short lifetime, very narrow frequency band, and very high brightness temperature. They always occur in large groups and form various kinds of complex structures, such as QPPs, Zebra patterns, and other long-lasting pulses. Each SMB may represent an elementary energy release process, which can be regarded as the elementary burst (EB) in solar eruptions. 
Therefore, clearly identifying SMBs becomes a basic requirement for the new generation of solar radio telescopes. This work investigated previous observational results on solar radio spike bursts, dot bursts, and narrowband type III bursts, including the earlier publications (Gudel & Benz 1990, Rozhansky et al. 2008, etc.), and aimed to obtain a modified scaling law for solar SMBs. Such a modified scaling law provides a theoretical basis for selecting reasonable parameters when designing the new generation of solar radio telescopes, and helps us understand the nature of solar eruptions. Figure 1 presents the statistical relationship between the averaged lifetime and frequency among solar radio spike bursts. The frequency range covers from 210 MHz to 7.0 GHz, and the lifetime ranges from 5 ms to 91 ms. It shows that the averaged lifetime of radio spike bursts is anticorrelated with the emission frequency, with a correlation coefficient of −0.58. The fitted function is close to a power law: \[ \tau \approx 8.2 \times 10^3 f^{-0.84\pm0.15} \] where $\tau$ is the averaged lifetime of the SMB in units of ms and $f$ is the frequency in units of MHz. Figure 1 – The relationship between the averaged lifetime and frequency among the solar radio spike bursts. Here, the crosses represent the results published in Gudel & Benz 1990 and Rozhansky et al. 2008 and the dashed line is obtained by the least-squares fitting method. The diamonds represent the results observed by the Chinese Solar Broadband Radio Spectrometers at Huairou (SBRS/Huairou) since 2006 (Wang et al. 2008, Tan 2013) and the solid line is obtained by the least-squares fitting method over the total sample. Figure 2 presents the relationship between the averaged bandwidth and frequency among the solar radio spike bursts. The observing frequencies of the whole sample range from 305 MHz to 7.0 GHz. The narrowest bandwidth is 1.4 MHz at a central frequency of 710 MHz, while the widest bandwidth is 115 MHz at 1250 MHz. We found that the higher the observing frequency, the wider the bandwidth of the SMB. The statistical correlation coefficient between the bandwidth and the central frequency is 0.47 among the 166 samples, which is a clearly positive correlation. A fitted function is also obtained: \[ f_{bw}\approx 0.011 \times f^{0.99\pm0.018}\sim 1.1\% f \] Here, $f_{bw}$ is the averaged bandwidth of the SMB in units of MHz. Figure 2 – The relationship between the averaged bandwidth and frequency among the solar radio spike bursts. Here, the crosses represent the results published in Gudel & Benz 1990 and Rozhansky et al. 2008. The diamonds represent the results observed by the Chinese Solar Broadband Radio Spectrometers at Huairou (SBRS/Huairou) since 2006 (Wang et al. 2008, Tan 2013) and the dot-dashed line is obtained by the least-squares fitting method over the total sample. Because SMBs, including spike, dot, and narrow-band type III bursts, are the smallest eruptive units in solar eruptions, their scaling laws provide an important and fundamental basis for understanding the nature of solar eruptions and for designing the next generation of solar radio telescopes. For studies in solar radio astronomy, we would ideally like telescopes with the highest possible parameter configuration, such as high sensitivity, high resolutions, and broad frequency coverage. However, improving one parameter always comes at the cost of degrading others.
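As a worked illustration of how these scaling laws constrain instrument design, the short sketch below evaluates them at several observing frequencies; the factor-of-five oversampling used to suggest a cadence and a channel width is our assumption, not part of the fits.

```python
def spike_lifetime_ms(f_mhz):
    """Fitted scaling law from the text: tau ≈ 8.2e3 * f**-0.84 (ms, f in MHz)."""
    return 8.2e3 * f_mhz ** -0.84

def spike_bandwidth_mhz(f_mhz):
    """Fitted scaling law from the text: relative bandwidth ≈ 1.1 % of the observing frequency."""
    return 0.011 * f_mhz

for f in (300, 1000, 3000, 7000):                    # observing frequencies in MHz
    tau, bw = spike_lifetime_ms(f), spike_bandwidth_mhz(f)
    # To resolve an individual spike, sample several times within tau and use channels
    # narrower than bw; the factor of ~5 below is an assumption for illustration only.
    print(f"{f:5d} MHz: lifetime ~{tau:6.1f} ms, bandwidth ~{bw:5.1f} MHz "
          f"-> cadence < {tau / 5:.1f} ms, channel < {bw / 5:.2f} MHz")
```

These numbers make concrete the trade-off noted above between pushing resolution and retaining sensitivity.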
For example, the high frequency resolution inevitably means the frequency bandwidth of individual channel becomes narrow, and this will decrease the sensitivity. When the time resolution increases, the integration time will become short which will cause the sensitivity decrease and make the relatively weak burst to be vague and submerged in noise. The scaling laws of SMB show that the time scale of the detailed variation in solar radio bursts decreases with the increase of frequency, and the bandwidth increases with the increase of frequency. The scaling laws may help us to determine the optimal parameters configuration of the new generation solar radio telescopes, so as to ensure the scientific output from the observed data to the greatest extent. For the imaging observation, if we select too high time and frequency resolutions, it will not only face to a great challenge in techniques, but also inevitably reduce the observational sensitivity, and sacrifice the scientific objective of the relevant telescope. Therefore, we propose the spectrum-image combination mode to observe the solar radio eruptions on the basis of the scaling laws of radio spike emission, it can realize the observation simultaneously with high temporal, spatial, and spectral resolutions, as well as a high sensitivity, and can be taken as the principal mode for the future new generation solar radio observations, it will have broad prospects for the relevant studies. However, the high parameters here are relative, they will be gradually upgraded with the new development of radio and computer techniques. Based on a recently published paper: Tan, Bao-lin, Cheng, Jun, Tan Cheng-ming, Kou, Hong-xiang, ChA&A, 2019, 43, 59-74, doi: 10.1016/j.chinastron.2019.02.005 Guedel, M., Benz, A. O.: 1990, A&A, 231, 202 Rozhansky, I. V., Fleishman, G. D., Huang, G.-L.: 2008, ApJ, 681, 1688 Tan, B.L.: 2013, ApJ, 773, 165 Tan, B. L., Cheng, J., Tan, C. M., Kou, H. X.: 2019, ChA&A, 43, 59 Wang, S. J., Yan, Y. H., Liu, Y. Y., Fu, Q. J., Tan, B. L., Zhang, Y.: 2008, SoPh, 253, 133 fine spectral structures radio spikes type III bursts
Near infrared spectroscopy for body fat sensing in neonates: quantitative analysis by GAMOS simulations Fatin Hamimi Mustafa ORCID: orcid.org/0000-0002-7996-41141, Peter W. Jones1 & Alistair L. McEwan1 Under-nutrition in neonates is closely linked to low body fat percentage. Undernourished neonates are exposed to immediate mortality as well as unwanted health impacts in their later life including obesity and hypertension. One potential low cost approach for obtaining direct measurements of body fat is near-infrared (NIR) interactance. The aims of this study were to model the effect of varying volume fractions of melanin and water in skin over NIR spectra, and to define sensitivity of NIR reflection on changes of thickness of subcutaneous fat. GAMOS simulations were used to develop two single fat layer models and four complete skin models over a range of skin colour (only for four skin models) and hydration within a spectrum of 800–1100 nm. The thickness of the subcutaneous fat was set from 1 to 15 mm in 1 mm intervals in each model. Varying volume fractions of water in skin resulted minimal changes of NIR intensity at ranges of wavelengths from 890 to 940 nm and from 1010 to 1100 nm. Variation of the melanin volume in skin meanwhile was found to strongly influence the NIR intensity and sensitivity. The NIR sensitivities and NIR intensity over thickness of fat decreased from the Caucasian skin to African skin throughout the range of wavelengths. For the relationship between the NIR reflection and the thickness of subcutaneous fat, logarithmic relationship was obtained. The minimal changes of NIR intensity values at wavelengths within the ranges from 890 to 940 nm and from 1010 to 1100 nm to variation of volume fractions of water suggests that wavelengths within those two ranges are considered for use in measurement of body fat to solve the variation of hydration in neonates. The stronger influence of skin colour on NIR shows that the melanin effect needs to be corrected by an independent measurement or by a modeling approach. The logarithmic response obtained with higher sensitivity at the lower range of thickness of fat suggests that implementation of NIRS may be suited for detecting under-nutrition and monitoring nutritional interventions for malnutrition in neonates in resource-constrained communities. The World Health Organization (WHO) recorded about 104 million children were undernourished in 2010 with most being neonates in developing countries [1]. Undernourished neonates were generally small and had low body fat. They need sufficient amount of fat in their body because the fat provides energy to fight infections, resistance to high and low temperature, and protection against hypoglycemia and hypothermia [2]. Several technologies for measuring body fat include computer tomography, ultrasound imaging, dual-energy X-ray absorptiometry and air displacement plethysmography. These are expensive and need trained operators [3]. Skinfold thickness is low-cost but suffers from observer variability [4], while deuterium dilution measurements incur a delay for sample processing [5]. Additional file 1: Table S1 shows the comparison of body composition methods using a Figure of Merit (FOM) equation according to estimated cost (including equipment set-up), estimated measurement time, requirement for skilled operators, noninvasiveness, mobility, and safety [3, 6]. Also included the primary measurements of each method (see Additional file 1). 
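The FOM equation itself is given in Additional file 1 rather than here; purely to illustrate how such a comparison can be organized, the sketch below scores methods with a simple weighted sum over the criteria listed above. All weights and scores are invented placeholders, not values from the study.

```python
# Hypothetical illustration only: the actual FOM equation and scores are in Additional file 1.
CRITERIA = ("low_cost", "short_time", "no_skill_needed", "noninvasive", "mobile", "safe")

def figure_of_merit(scores, weights=None):
    """Weighted sum of per-criterion scores (each assumed to lie between 0 and 1)."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    return sum(weights[c] * scores[c] for c in CRITERIA)

nirs = {"low_cost": 0.9, "short_time": 0.9, "no_skill_needed": 0.8,
        "noninvasive": 1.0, "mobile": 1.0, "safe": 1.0}        # placeholder scores
dxa  = {"low_cost": 0.2, "short_time": 0.6, "no_skill_needed": 0.3,
        "noninvasive": 1.0, "mobile": 0.1, "safe": 0.8}        # placeholder scores
print(f"FOM(NIRS) = {figure_of_merit(nirs):.1f}, FOM(DXA) = {figure_of_merit(dxa):.1f}")
```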
Near infrared spectroscopy (NIRS) method has a great potential for undernutrition monitoring because it is safe, fast, non-invasive, mobile, relatively low-cost, and can be directly connected to portable computing devices, which makes this method is feasible to be applied easily on neonates in low-middle income settings. The term 'body fat measurements using NIRS' is related to measurement of the amount of adipose tissue underneath skin using NIR spectroscopy. The NIRS for measuring body fat has also been applied for other purposes such as in dietary monitoring [7–12], and also in liposuction surgery, where the condition of adipose tissue is checked pre-surgery and post-surgery [11, 13]. In muscle oxygenation measurements using NIRS, the effect of fat thickness from NIRS measurements is corrected using developed models [14, 15]. While NIRS provides several advantages, the limitations of using the NIR spectrum however include its sensitivity to hydration and skin colour [16]. This motivated the first aim of this study, which was to study quantitatively the effect of different skin colours and different hydration on NIRS measurements of the skin. We develop two single fat layer models and four skin models using simulations having varied volume fractions of melanin (skin colour), V melanin and volume fractions of water (hydration), V water . Note that the variation of V melanin is only for the skin models because melanin only presents in the epidermal layer of the skin. The single fat layer models at varied V water are developed in order to get picture of basic response of the interest layer. The second aim of our study was to define the sensitivity and the relationship between the reflected NIR intensity and the thickness of fat. We set a range of fat thickness in the two developed single fat layer models and in the four developed skin models from 1 to 15 mm in 1 mm intervals in the simulation. Past studies by simulation and/or phantom experiment showed inconsistency between the relationships proposed by the various studies, which exhibited either a logarithmic, an exponential or a peak response of NIR reflection with the increase in thickness of subcutaneous fat [7–14]. Additional file 2: Table S2 summarises a literature review of fat measurement using NIRS by simulation and phantom experiment (see Additional file 2). We improve our earlier study [12] in several aspects including assignment of optical properties. We exploit Meglinski's equation model by changing values of V water and V melanin in the equation to define new absorption values suiting the two developed single fat layer models and the four developed skin models. Note that we previously used absorption coefficient, μ a data directly from Simpson et al. [17] and then implemented it into our previous developed equation model [12]. In this paper, we also consider changes values of refractive indices, n, and reduced scattering coefficient, \(\mu_{s}^{{{\prime}}}\) due to changes of melanin and water in the skin tissue. Our study is also different from [12] in term of source-detector configuration. In the simulation, we follow the real source-detector device specifications and parameters. We introduce an arrangement of the source and the detector having +45° and −45° angles to the skin surface respectively (i.e. a 90° included angle between the axes of the source and detector) with 10 mm separation. 
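The simulation design just described can be written out as a parameter sweep, sketched below. The wavelength step and the "low"/"normal"/"high" labels are placeholders; the actual volume-fraction values belong to the study's Table 1 and the Meglinski formulation introduced later.

```python
import numpy as np

wavelengths_nm   = np.arange(800, 1101, 10)   # 800-1100 nm spectrum; 10 nm step is assumed
fat_thickness_mm = np.arange(1, 16)           # 1-15 mm subcutaneous fat in 1 mm steps

SKIN_LAYERS = ["epidermis_dermis", "subcutaneous_fat", "muscle"]

# Two single-fat-layer models and four complete skin models; the volume-fraction levels
# are qualitative labels only, not the numeric values used in the paper.
models = {
    "single_fat_hydrated":       {"layers": ["subcutaneous_fat"], "V_water": "normal"},
    "single_fat_dehydrated":     {"layers": ["subcutaneous_fat"], "V_water": "low"},
    "skin_caucasian_hydrated":   {"layers": SKIN_LAYERS, "V_melanin": "low",  "V_water": "normal"},
    "skin_caucasian_dehydrated": {"layers": SKIN_LAYERS, "V_melanin": "low",  "V_water": "low"},
    "skin_african_hydrated":     {"layers": SKIN_LAYERS, "V_melanin": "high", "V_water": "normal"},
    "skin_african_dehydrated":   {"layers": SKIN_LAYERS, "V_melanin": "high", "V_water": "low"},
}

source_detector = {"source_angle_deg": +45.0, "detector_angle_deg": -45.0,
                   "separation_mm": 10.0}      # 90° included angle, as described above
```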
The position angle of the source-detector is different from that commonly presented in the literature, where both the source and detector were aligned at +90° to the skin surface (i.e. the axes of the source and detector were parallel) (see Additional file 2: Table S2) [7–14]. We selected 45° angles because we have shown in a previous study that the sensitivity obtained with 45° angles was higher than with 90° angles when measuring body fat using NIRS [18]. We perform the simulation using the Geant4-based Architecture for Medicine Oriented Simulation (GAMOS), an open-source software package that applies the Monte Carlo simulation method. GAMOS was first developed in 2006 for medical physics applications and allows simulations to be designed easily without requiring C++ coding. A tissue optics plug-in interfaced with GAMOS was then introduced in 2013 [19]. In a validation study against accepted standards within the biomedical optics community, comparing GAMOS with other simulation methods such as Monte Carlo Multilayer (MCML), GAMOS showed the lowest error in total diffuse reflectance, R d [19]. Figure 1 illustrates a flow chart of the methodology and steps involved in setting up the GAMOS simulation for body fat sensing using NIRS. Two single fat layer models (with dimensions infinite × variable fat thickness × infinite) at different V water were developed: single fat layer model 1 had normal hydration, while single fat layer model 2 was dehydrated. Four skin models were also developed at different V water and V melanin, each consisting of an upper epidermis-dermis, a middle subcutaneous fat layer and a lower muscle layer. Skin models 1 and 2 were Caucasian skin models (lower V melanin) with normal hydration (higher V water) and dehydration (lower V water) respectively. Skin model 3 was African skin (higher V melanin) with normal hydration (higher V water), while skin model 4 was dehydrated African skin (higher V melanin and lower V water). In each model, the thickness of the fat layer was varied from 1 mm to 15 mm in 1 mm intervals. Epidermis and dermis were combined into one single layer following models validated in the literature [17]. To mimic real tissue, the models were implemented with the Geant4 Material Database (GMD) function, invoking suitable materials. Figure 2 shows the implementation of GMD on a developed skin model as well as the assignment of thickness to each layer. The thicknesses follow reported values from real neonatal skin [20]. Flow-chart of methodologies and steps involved in GAMOS simulation for body fat sensing using NIRS Implementation of GMD and assignment of thickness on epidermis–dermis, subcutaneous fat and muscle layer With GMD, each material or mixture is composed of a number of elements, with properties such as mass per mole, density, state and pressure. For instance, the material G4_ADIPOSE_TISSUE_ICRP was made from a combination of hydrogen, carbon, nitrogen, oxygen, sodium, sulfur and chlorine, giving a total density of 0.95 g/cm3. The properties and compositions of the G4_AIR and G4_MS20_TISSUE materials follow the National Institute of Standards and Technology (NIST) standard, while G4_ADIPOSE_TISSUE_ICRP and G4_MUSCLE_SKELETAL_ICRP follow the International Commission on Radiological Protection (ICRP) standard [21].
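To keep track of the simulation grid just described, it can help to write the parameter combinations out explicitly. The sketch below (in Python) is illustrative only: GAMOS itself is configured through its own geometry and macro files, and the dictionary names and qualitative labels used here ('normal'/'dehydrated' hydration, 'low'/'high' melanin) are mine rather than identifiers from the paper or from GAMOS.

```python
# Parameter grid for the six models described in the text (names are illustrative).
FAT_THICKNESSES_MM = list(range(1, 16))            # 1-15 mm in 1 mm steps

SINGLE_FAT_MODELS = {
    1: {"hydration": "normal"},
    2: {"hydration": "dehydrated"},
}

SKIN_MODELS = {
    # Each skin model is an epidermis-dermis / subcutaneous fat / muscle stack.
    1: {"skin": "Caucasian", "melanin": "low",  "hydration": "normal"},
    2: {"skin": "Caucasian", "melanin": "low",  "hydration": "dehydrated"},
    3: {"skin": "African",   "melanin": "high", "hydration": "normal"},
    4: {"skin": "African",   "melanin": "high", "hydration": "dehydrated"},
}

# One simulation run per (model, fat thickness) combination.
skin_runs = [(m, t) for m in SKIN_MODELS for t in FAT_THICKNESSES_MM]
fat_runs = [(m, t) for m in SINGLE_FAT_MODELS for t in FAT_THICKNESSES_MM]
print(len(skin_runs), len(fat_runs))   # 60 skin-model runs and 30 single-fat-layer runs
```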
Optical properties as a function of wavelength, namely the absorption coefficient, μ a, the refractive index, n, the reduced scattering coefficient, \(\mu_{s}^{\prime}\), and the anisotropy, g, were then assigned to each layer of the developed models. The values of the optical properties were defined based on changes of V water and V melanin in adult tissue. Data from adults were used due to an absence of published data for neonates at NIR wavelengths. A simplified form of Meglinski's equation model for determining μ a of a tissue layer is [22–24]: $$\mu_{a}^{layer}(\lambda) = V_{fat}\,\mu_{a}^{fat}(\lambda) + V_{water}\,\mu_{a}^{water}(\lambda) + V_{blood}\,\mu_{a}^{blood}(\lambda) + V_{melanin}\,\mu_{a}^{melanin}(\lambda) + \mu_{a}^{intrinsic}(\lambda)\left(1 - V_{fat} - V_{water} - V_{blood} - V_{melanin}\right)$$ where λ is the wavelength in nm and μ a layer (λ) is the absorption coefficient of the tissue layer (epidermis-dermis, subcutaneous fat or muscle). V blood is the volume fraction of blood while V fat is the volume fraction of fat. μ a blood (λ), μ a water (λ), μ a melanin (λ) and μ a fat (λ) denote the absorption coefficient spectra of the blood, water, melanin and fat constituents respectively, while μ a intrinsic (λ) is the absorption coefficient of skin free of any absorber, expressed by [25]: $$\mu_{a}^{intrinsic}(\lambda) = 7.84 \times 10^{7}\, \lambda^{-3.255}$$ The absorption coefficient spectrum of the blood, μ a blood (λ), in Eq. (1) was defined as [22]: $$\mu_{a}^{blood}(\lambda) = (1 - V_{oxy})\,V_{hemoglobin}\,\mu_{a}^{Hb}(\lambda) + V_{oxy}\,V_{hemoglobin}\,\mu_{a}^{HbO2}(\lambda)$$ where V hemoglobin and V oxy are the volume fractions of hemoglobin and oxygen saturation, which were assigned the values of 0.6 and 0.1 respectively [23]. μ a Hb (λ) is the absorption coefficient spectrum of deoxy-hemoglobin while μ a HbO2 (λ) is the absorption coefficient spectrum of oxy-hemoglobin. The melanin, hemoglobin and small-scale tissues were assumed to be evenly distributed in the epidermis-dermis. Figure 3 illustrates the spectra for μ a Hb (λ), μ a HbO2 (λ), μ a water (λ), μ a melanin (λ), μ a intrinsic (λ) and μ a fat (λ), which are also known as the primary absorbers in skin tissues. The value of μ a water (λ) was obtained from Hale et al. [26], the value of μ a fat (λ) was defined from van Veen et al. [27], while the μ a Hb (λ), μ a HbO2 (λ) and μ a melanin (λ) values were taken from Jacques [16]. These values and information on μ a (λ) for water, fat, hemoglobin and melanin can be obtained from the Oregon Medical Laser Center (omlc) website [28]. Absorption coefficient spectra of primary absorbers in the skin tissue. Absorption coefficient spectra of water, μ a water (λ), melanin, μ a melanin (λ), fat, μ a fat (λ), deoxy-hemoglobin, μ a Hb (λ), oxy-hemoglobin, μ a HbO2 (λ), and intrinsic, μ a intrinsic (λ) The values of the volume fractions in Eq. (1) were taken from the proportions of constituents in the tissue layers given in the literature [29–34]. Table 1 shows the values of V blood, V water, V melanin, and V fat in the epidermis–dermis, subcutaneous fat layer and muscle layer. V intrinsic covers constituents with a lower absorption effect in skin, such as potassium and sodium [30–32].
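As a concrete illustration of Eqs. (1)–(3), the helper functions below assemble a layer's absorption coefficient from constituent spectra and volume fractions. This is a minimal sketch, not the authors' code: the function and argument names are mine, the constituent spectra are assumed to be supplied by the caller (for example interpolated from the tabulated omlc data cited above), and the default values for the blood parameters simply follow the assignments quoted in the text.

```python
import numpy as np

def mu_a_intrinsic(lam_nm):
    """Absorption of skin free of the main absorbers, Eq. (2): 7.84e7 * lambda^-3.255."""
    return 7.84e7 * np.asarray(lam_nm, dtype=float) ** (-3.255)

def mu_a_blood(mu_a_Hb, mu_a_HbO2, V_hemoglobin=0.6, V_oxy=0.1):
    """Blood absorption, Eq. (3): mixture of deoxy- and oxy-hemoglobin spectra."""
    return (1.0 - V_oxy) * V_hemoglobin * mu_a_Hb + V_oxy * V_hemoglobin * mu_a_HbO2

def mu_a_layer(lam_nm, V_fat, V_water, V_blood, V_melanin,
               mu_a_fat, mu_a_water, mu_a_blood_spec, mu_a_melanin):
    """Layer absorption coefficient, Eq. (1), from volume fractions and constituent spectra.

    All mu_a_* arguments are spectra already evaluated at lam_nm, in consistent units.
    """
    V_intrinsic = 1.0 - V_fat - V_water - V_blood - V_melanin
    return (V_fat * mu_a_fat
            + V_water * mu_a_water
            + V_blood * mu_a_blood_spec
            + V_melanin * mu_a_melanin
            + V_intrinsic * mu_a_intrinsic(lam_nm))
```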
Substituting the μ a values of the constituents (Fig. 3) and their corresponding volume fractions (Table 1) into Eq. (1), the absorption coefficient spectra of the epidermis-dermis, μ a epidermis (λ), the subcutaneous fat layer, μ a subcutaneous (λ), and the muscle layer, μ a muscle (λ), were obtained at varying V water and V melanin (Fig. 4). The μ a muscle (λ) is not shown in Fig. 4 because it was not involved in studying the effect of water and melanin. The graphs in Fig. 4 are plotted from the values obtained using Eq. (1). Table 1 Volume fractions of V blood , V melanin , V fat , V water , and V intrinsic in skin layers Absorption coefficient spectra from the values obtained using Eq. (1). Absorption coefficient spectra of epidermis–dermis, \(\mu_{a}^{epidermis}\) (\(\lambda\)) at varying combinations of V water (normal or dehydrated (Dehyd)) and V melanin (low or high), as well as absorption coefficient spectra of subcutaneous fat, \(\mu_{a}^{subcutaneous}\) (\(\lambda\)) at varying combinations of V water (normal or dehydrated) The refractive index, n, has a direct relationship with the reduced scattering coefficient \(\mu_{s}^{\prime}\) [33]; therefore, if one of them changes due to varying melanin or water in the skin tissue, the other parameter was assumed to change equally. The anisotropy, g, was set to 0 in all conditions (varied V water and V melanin), presuming a constant factor (1 − g) in the relation \(\mu_{s}^{\prime}\) = (1 − g) μ s, where \(\mu_{s}^{\prime}\) is proportional to the scattering coefficient, μ s. The value g = 0 was chosen because GAMOS has not yet offered GPU-based acceleration or mesh-based grid generation, particularly for quantitative analysis [19]. Nevertheless, other past quantitative NIR simulation studies also used the same value of g (g = 0) [7, 14]. A past study has shown that similar values of \(\mu_{s}^{\prime}\) were obtained from measurements on Caucasian skin and African skin; thus the values of n and \(\mu_{s}^{\prime}\) in this study were kept the same while V melanin varied [17]. Decreasing water in skin tissue is estimated to increase n by 5% [34]; hence the dehydrated epidermis and dehydrated subcutaneous fat were assigned a 5% higher \(\mu_{s}^{\prime}\). Findings by Roggan et al. from 456 to 1064 nm were used for n with normal hydration: n epidermis(λ) = 1.4, n subcutaneous(λ) = 1.44 and n muscle(λ) = 1.37 [35, 36]. For dehydrated tissue, n epidermis(λ) and n subcutaneous(λ) became 1.47 and 1.512 respectively (values increased by 5%) due to the effect of water loss on scattering. An equation from [16] was used to define the normal-hydration values of \(\mu_{s}^{\prime}\): $$\mu_{s}^{\prime\,layer}(\lambda) = a \left( \frac{\lambda}{500\,\text{nm}} \right)^{-b}$$ where \(\mu_{s}^{\prime}\)(λ) is the reduced scattering coefficient spectrum of the given layer. Values for a and b were assigned based on [16]: 45.3 and 1.29 respectively for μ s ′ epidermis(λ), 15.4 and 0.68 respectively for μ s ′ subcutaneous(λ), while for μ s ′ muscle(λ), a was set to 9.8 and b to 2.82. Figure 5 shows the normal and dehydrated (increased by 5%) values of μ s ′ epidermis(λ) and μ s ′ subcutaneous(λ), as well as the normally hydrated values of μ s ′ muscle(λ). The graphs in Fig. 5 are plotted from the values obtained using Eq. (4).
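Eq. (4) with the (a, b) pairs listed above is easy to evaluate directly. The short sketch below does so for the three layers, applying the 5% dehydration increase stated in the text to the epidermis and subcutaneous fat. The function names are mine, and the units of the returned values are whatever units the a coefficients carry in [16] (typically cm^-1, which should be checked against that source).

```python
import numpy as np

# (a, b) pairs for Eq. (4), as quoted in the text from Jacques [16]
SCATTER_PARAMS = {
    "epidermis":    (45.3, 1.29),
    "subcutaneous": (15.4, 0.68),
    "muscle":       (9.8,  2.82),
}

def mu_s_prime(lam_nm, layer, dehydrated=False):
    """Reduced scattering coefficient of a layer, Eq. (4): a * (lambda / 500 nm)^-b.

    Dehydration is modelled, as in the text, as a 5% increase over the
    normally hydrated value (used here for epidermis and subcutaneous fat).
    """
    a, b = SCATTER_PARAMS[layer]
    value = a * (np.asarray(lam_nm, dtype=float) / 500.0) ** (-b)
    return 1.05 * value if dehydrated else value

wavelengths = np.arange(800, 1101, 50)                 # 800-1100 nm in 50 nm steps
print(mu_s_prime(wavelengths, "subcutaneous"))
print(mu_s_prime(wavelengths, "subcutaneous", dehydrated=True))
```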
Reduced scattering coefficient spectra from the values obtained using Eq. (4). Reduced scattering coefficient spectra of normal (Norm) and dehydrated (Dehyd) epidermis–dermis (Epi), \(\mu_{s}^{{\prime }epidermis}\) (λ), and subcutaneous fat layer (Subcut), \(\mu_{s}^{{\prime }subcutaneous}\) (λ), together with the reduced scattering coefficient spectrum of the normally hydrated muscle layer, \(\mu_{s}^{{\prime }muscle}\) (λ) A source and a detector were created in GAMOS following the specifications and parameters shown in Table 2. The source and the detector were positioned at +45° and −45° angles to the skin surface respectively, with 10 mm separation. From the detector numerical aperture (NA) of 1.0 (Table 2), its acceptance angle was calculated as θ = sin−1(NA). Fundamentally, a detector can collect light if the light falls within an angle that is twice its acceptance angle; hence the cosine-corrector detector can record photons over angles from 0° up to 180°. Table 2 Specifications and parameters of the source and the detector utilised in the GAMOS simulation Following the development of the skin models and the source-detector arrangement, \(10^{8}\) optical photons from 800 to 1100 nm (in 50 nm intervals) were launched by the light source. The photons were launched into the two single fat layer models and the four skin models. At the receptor side, the detector recorded photons quantitatively as a function of wavelength. NIR reflection was defined as the number of recorded photons divided by the number of launched photons, expressed as a percentage. Figure 6 shows the source-detector arrangement used in the simulation and Fig. 7 demonstrates an example of the traces of photon paths during the simulation in GAMOS (2D view). The sensitivity of the response to the changing thickness of fat, i.e. the average slope of the reflected NIR intensity over the 14 intervals of 1 mm thickness from 1 to 15 mm (expressed as percent reflection change per mm), was calculated using the following equation, where t is the step index of the fat thickness: Simulation diagram of the source-detector arrangement used in the simulation An example of the traces of photon paths during the simulation in GAMOS (2D view). The photon traces were obtained with the arrangement of Fig. 6, where the photons were launched into the developed skin model (upper epidermis–dermis, middle subcutaneous fat, lower muscle) by the source at a +45° angle to the skin surface and the escaped photons were recorded by the detector at a −45° angle to the skin surface. The source-detector separation was 10 mm $$Sensitivity = \sum_{t = 1}^{14} \left( \frac{Reflection_{t + 1} - Reflection_{t}}{14} \right)$$ The effect produced by varying V water can clearly be seen in the NIR spectra from the single fat layer models in Fig. 8. The variation is minimal over the wavelength ranges from 890 to 940 nm and from 1010 to 1100 nm (note that V melanin was not involved because melanin is present only in the epidermal layer of the skin). Similar minimal variation over the two wavelength ranges (890–940 nm and 1010–1100 nm) is also seen in the spectra of the complete skin models with different hydration presented in Fig. 9, in which V water was lower for models 2 and 4. Different skin colours have a significant effect on the NIR spectra in Fig. 9, as photons were absorbed more in African skin (skin models 3 and 4) than in Caucasian skin (skin models 1 and 2) due to the greater melanin content (V melanin) in the African skin.
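The reflection and sensitivity definitions above translate directly into a few lines of code. The sketch below assumes the photon counts and the reflection values for the 1–15 mm thickness sweep are already available from the simulation output; the function names and example numbers are mine, not the authors'.

```python
import numpy as np

def nir_reflection_percent(n_detected, n_launched=1e8):
    """NIR reflection: recorded photons over launched photons, as a percentage."""
    return 100.0 * n_detected / n_launched

def sensitivity(reflection_percent):
    """Eq. (5): mean slope of reflection over the 14 one-millimetre thickness steps.

    `reflection_percent` holds reflection values at fat thicknesses of
    1, 2, ..., 15 mm (15 values, hence 14 intervals); the result is in %/mm.
    """
    r = np.asarray(reflection_percent, dtype=float)
    return np.sum(np.diff(r) / 14.0)   # equals np.mean(np.diff(r)) for 1 mm steps

# Acceptance angle implied by the numerical aperture quoted in Table 2
theta_acceptance_deg = np.degrees(np.arcsin(1.0))   # 90 degrees for NA = 1.0
```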
NIR reflection spectra from the two developed single fat layer models at varied V water. Single fat layer model 1 had normal hydration, while single fat layer model 2 was dehydrated. The fat thickness was 5 mm NIR reflection spectra from the four developed skin models at varied V melanin and V water. Skin models 1 and 2 were Caucasian skin (lower V melanin) with normal hydration (higher V water) and dehydration (lower V water) respectively. Skin model 3 was African skin (higher V melanin) with normal hydration (higher V water), while skin model 4 was dehydrated African skin (higher V melanin and lower V water). The thickness of the subcutaneous fat was 5 mm Figure 10a–c show the NIR reflection from the single fat layer, the Caucasian skin and the African skin respectively at 930 nm. The NIR reflection increases logarithmically with increases in the thickness of fat. Note that 930 nm was one of the wavelengths that showed minimal variation due to changes in V water over the NIR spectra (Figs. 8, 9). The logarithmic response of NIR intensity with fat thickness is due to the effect of scattering being dominant over the effect of absorption in the adipose tissue. More photons were scattered back as the thickness increased, until a critical thickness was reached at which the response saturates and becomes less sensitive, with a negligible increase in backscattered light for any further increase in fat thickness. Logarithmic response of NIR reflection (%) over the thickness of fat at 930 nm. Logarithmic response from a the single fat layer, b the complete Caucasian skin and c the complete African skin The sensitivities [using Eq. (5)] were reduced by the presence of the other skin layers (complete skin): the sensitivities were 13.1 × 10^-4 %/mm for the single fat layer, 2.81 × 10^-4 %/mm for the complete Caucasian skin and 0.81 × 10^-4 %/mm for the complete African skin. The reflected NIR intensity obtained from African skin was lower than that from Caucasian skin. The different sensitivities obtained for these skin colours show the influence of skin colour on NIRS measurements when there is no adjustment for melanin. To our knowledge, this is the first study that has applied a source-detector arrangement at 45° angles to the skin surface in GAMOS simulations to study the effect of varying skin colour and hydration on NIR intensity, particularly for body fat sensing in neonates. We also studied and analysed the relationship between NIR reflectance and fat thickness, motivated by the differing relationships found in past studies. Our findings showed that varying hydration in the skin resulted in minimal changes to the NIR intensity in two wavelength ranges, 890–940 nm and 1010–1100 nm. Wavelengths within these two ranges may be considered for implementation in the development of a NIRS device for the measurement of body fat, in order to address the wide variation of water in neonates' bodies. Total body water in newborns normally fluctuates in their first week of life due to the critical transition from foetus in the womb to newborn [37]. Our study showed that increasing the melanin volume in the skin decreased the sensitivity of the measurement to the intended goal of determining subcutaneous fat thickness.
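The logarithmic reflection-thickness relationship and the notion of a critical (saturation) thickness described above can be explored numerically. The sketch below fits a logarithmic curve to a reflection-versus-thickness series and reports the first thickness at which the local slope drops below a chosen threshold; the synthetic data, the threshold value and the function names are all illustrative assumptions of mine, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_model(t_mm, a, b):
    # Reflection assumed to grow roughly as a * ln(thickness) + b
    return a * np.log(t_mm) + b

# Illustrative reflection values (%) for fat thicknesses of 1..15 mm
thickness = np.arange(1, 16, dtype=float)
rng = np.random.default_rng(0)
reflection = 0.8 * np.log(thickness) + 2.0 + rng.normal(0.0, 0.02, thickness.size)

(a_fit, b_fit), _ = curve_fit(log_model, thickness, reflection)

# Critical thickness / maximum thickness detection: first interval where the
# local slope falls below a chosen threshold (a design choice, not a paper value).
slopes = np.diff(reflection) / np.diff(thickness)
threshold = 0.1                                     # %/mm
saturated = np.where(slopes < threshold)[0]
mtd_mm = thickness[saturated[0] + 1] if saturated.size else None
print(a_fit, b_fit, mtd_mm)
```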
This limitation, the reduced sensitivity with increasing melanin, is expected and is a considerable challenge because skin colour varies from one neonate to another. The strong influence of melanin on NIRS measurements was also found in a past study, which examined the reliability of NIRS in people with dark skin pigmentation for tissue oxygen delivery applications and found that the presence of melanin clearly interfered with the quality of the reflected NIR signal [38]. Their NIR device failed to register tissue saturation more often in individuals with darker skin. One possible way to solve this problem is to quantify the skin colour using a device such as the Antera 3D and to include the measured colour space values (L*a*b*) in the developed NIR equation model of body fat [39]. An alternative method is to use ratios of reflection at different wavelengths in the developed NIR equation model of body fat [40]. Our recent studies have implemented the ratios technique in a body fat percentage (BF%) model developed from clinical NIRS measurements on neonates, with reference BF% from air displacement plethysmography (ADP) [41]. The developed equation model consisted of three ratios using five different wavelengths, with a parameter for sex. The results demonstrated a significant correlation and agreement with the ADP BF% (R-squared of 0.82 (p < 0.001) and RMSE of 2.1%). However, the subjects recruited from Sydney, Australia were predominantly of white skin colour (n = 26), with only four subjects of dark skin colour. This has recently been expanded with our study of 98 infants (dark skin) from Soweto, South Africa [42]. The ratios-based model combined with weight and sex yielded a correlation R-squared of 0.773 (p < 0.001) and an RMSE of 4.6%. The high correlation R-squared obtained from dark skin indicates that the use of ratios can reduce the sensitivity due to melanin; however, other clinical studies with larger numbers of subjects are required to confirm this result. The use of reflection ratios at different wavelengths may also reduce the effect of the dissimilar reflected NIR intensity values obtained from Africans and Indians even though they possess similar melanin content (V melanin), which may be due to different sizes of melanosomes [43]. The larger melanosomes in African skin possibly increase forward scattering, so less backscattered light is captured [44]. The NIR reflection recorded by the detector exhibited small values and may be considered an unamplified signal at the photo-detector on the receptor side. Thus, in the final implementation, the NIR reflection values might be increased with an amplifier. The relationship between the reflected NIR intensity and the fat thickness in past studies took a range of different forms: logarithmic, peaked or exponential. We obtained a logarithmic response, which is in agreement with the curves obtained in the majority of past studies ([8, 12] and some results of [7]). The study in [14] and one result from [7] obtained a polynomial or exponential relationship, while [11] showed a peaked curve, where the light intensity first increased and then decreased at a fat thickness of 10 mm. A potential explanation for these differences is the different source-detector separations used in the studies. For example, in [7] the logarithmic relationship that applied at small source-detector separations (less than 30 mm) changed to an exponential relationship at a 40 mm source-detector separation, at which point it agreed with [14].
In [11], which used an 8.5 mm source-detector separation, two different equation models were developed, with the second model applied beyond a fat thickness of 10 mm. A logarithmic relationship similar to the other studies can be obtained if the quantitative method is used [11]. Other reasons for the differences between the acquired curves (logarithmic, peaked or exponential) may be unknown differences in the source diameter and the angle of incidence used in each study. The critical thickness of the logarithmic response indicates that there is a maximum thickness detection (MTD) limit for NIRS measurement of body fat. The MTD determines whether or not NIRS can be used to detect and monitor body fat, particularly low body fat (undernutrition). The thickness of fat in real neonates has been reported as 3.0 to 5.0 mm in normal full-term neonates and only 1.7 to 3.0 mm in low-birth-weight neonates [45]. While past studies only showed and discussed certain aspects of the curve response [7, 8, 11, 12], to our knowledge no study has defined an optimal MTD. Hence, we suggest a phantom experiment testing the emitted source power and the types of NIR equipment used, in order to define an optimum cut-off value (or MTD). The cut-off is reached when the slope of the measured NIR intensity over thickness drops below a defined level and remains small and essentially constant with further increases in thickness. There are several limitations of this study that need improvement and further work. The effect of the volume fraction variation on the NIR spectra is expected to decrease with the introduction of a more complex skin model, where other components can contribute to the overall volume. Since we assumed the anisotropy, g, to be constant at the various concentrations of melanin and water, the effect on the NIR spectra may differ if actual g values for the respective tissue layers are used. However, g values for those conditions (varied melanin and water) were not available in the literature. The third limitation is that we assigned optical parameters from adult skin, so the effect of the volume fraction variation on NIR spectra may be different in newborn skin. Total body water in a newborn infant is generally different from that of an adult: 81% in an infant compared with 73% in an adult [46]. For different skin colours, the differences in the distribution and quantity of melanin pigments in the epidermal layer determine the range of skin colours [47]. The presence of melanin in a newborn infant's epidermis may influence the NIRS results in a different way from the presented model with adult values. This is because melanin production has been found to be lower in newborns, although the number of melanocytes per unit area is quantitatively comparable with adults [48, 49]. Other factors, including the size of cells and of fibre bundles in the skin tissues, may also influence the NIRS results, as past studies found that cells and fibre bundles in neonatal skin tissue are smaller than in adults [50, 51]. With regard to the size of cells and fibre bundles, light penetration in neonatal skin would be expected to be less than in adults because smaller objects reduce forward scattering of the light (the anisotropy value), forward Mie scattering being dependent on object size [52]. Thus more backscattered light (reflected NIR intensity) would be captured.
Conversely, the higher total body water in neonates would decrease the refractive index and the scattering coefficient [34], which leads to a decrease in the backscattered light. The behaviour of the light transport and of the captured backscattered light in comparison with adults could be used to parameterise the optical parameters in a developed model for obtaining new NIR reflection values. However, comparison with optical parameter values from measurements on real neonatal skin tissue is essential. The NIR reflection showed higher sensitivity at lower thicknesses of subcutaneous fat, which shows good potential for the NIRS method in the detection of under-nutrition in newborn infants, particularly in low-income settings where newborns are at risk of under-nutrition and morbidity. Selecting the correct wavelengths for measuring body fat is necessary to circumvent the influence on absorption and scattering that occurs with variation of hydration and skin colour in the skin tissue. Wavelengths within the ranges from 890 to 940 nm and from 1010 to 1100 nm would be those to consider in developing the NIRS device, due to the minimal effect on reflected NIR intensity at varied V water. For different skin colours, meanwhile, the sensitivities and reflected NIR intensity decreased with increasing melanin, which indicates that NIR is sensitive to changes in melanin. Thus, solutions to overcome this limitation in developing a real NIRS device are necessary, including correction by an independent measurement or by a modeling approach. The effectiveness of the suggested solutions in addressing the limitations due to hydration and skin colour could also be validated via an in vitro experiment, where the NIR reflection response would be tested on tissue phantoms made of epoxy resin mimicking real skin tissue with varied water concentration and ink (as melanin) [53]. As we used some optical parameters obtained from adult skin in this study, further studies using equivalent parameters for newborn skin are essential; we urge that the optical properties of neonates first be defined by measurements on real neonatal tissue or by a parameterisation approach in the developed model based on the results from adults. Since the size of melanosomes differs between ethnic groups even when they possess similar values of V melanin, further study of the effect of different hydration and skin colours on the NIR spectra is also suggested for other ethnic groups, including Indian and Chinese, using their skin optical parameters for comparison with Caucasians and Africans. For implementation, our next step is to develop a NIRS body fat device that can compensate for various skin colours, using multiple LEDs at several wavelengths. Abbreviations: NIR, near infrared; NIRS, near infrared spectroscopy; MTD, maximum thickness detection; MCML, Monte Carlo multilayer; GMD, Geant4 Material Database; BF%, body fat percentage. World Health Organization (WHO). http://www.who.int/nutrition/challenges/en/. Accessed 20th Aug 2016. Gustafsson J. Neonatal energy substrate production. Indian J Med Res. 2009;130(5):618–23. Lee SY, Gallagher D. Assessment methods in human body composition. Curr Opin Clin Nutr Metab Care. 2008;11(5):566. Olhager E, Forsum E. Assessment of total body fat using the skinfold technique in full-term and preterm infants. Acta Paediatr. 2006;95(1):21–8. Pietrobelli A, Tatò L. Body composition measurements: from the past to the future. Acta Paediatr. 2005;94(s448):8–13. Wells J, Fewtrell M.
Measuring body composition. Arch Dis Child. 2006;91(7):612–7. Nilubol C, Treerattrakoon K, Mohammed WS. Monte Carlo modeling (MCML) of light propagation in skin layers for detection of fat thickness. Proc SPIE. 2010:77430 J-77430 J-77411. Hwang ID, Shin K, Ho DS, Kim BM. Evaluation of chip LED sensor module for fat thickness measurement using tissue phantoms. EMBS'06 28th annual international conference of the IEEE. IEEE. 2006:5993–6. Hwang ID, Shin K. Fat thickness measurement using optical technique with miniaturized chip LEDs: A preliminary human study. EMBS'07 29th annual international conference of the IEEE 2007. IEEE. 2007;4548–51. Hartmann S, Moschall M, Schäfer O, Stüpmann F, Timm U, Klinger D, Kraitl J, Ewald H. Phantom of human adipose tissue and studies of light propagation and light absorption for parameterization and evaluation of noninvasive optical fat measuring devices. OPJ. 2015;5(02):33. Hong HK, Jo YC, Choi YS, Park HD, Kim BJ. An optical system to measure the thickness of the subcutaneous adipose tissue layer. In: The 8th annual IEEE conference on sensors: IEEE SENSORS 2009. Christchurch: IEEE; 2009. pp. 695–8. Morhard R, Jeffery H, McEwan A. Simulation-based optimization of a near-infrared spectroscopic subcutaneous fat thickness measuring device. EMBC, 36th annual international conference of the IEEE. 2014;510–3. Song S, Kobayashi Y, Fujie MG. Monte-carlo simulation of light propagation considering characteristic of Near-infrared LED and evaluation on tissue phantom. Procedia CIRP. 2013;5:25–30. Yang Y, Soyemi O, Landry M, Soller B. Influence of a fat layer on the near infrared spectra of human muscle: quantitative analysis based on two-layered Monte Carlo simulations and phantom experiments. Opt Express. 2005;13(5):1570–9. Yamamoto K, Niwayama M, Lin L, Shiga T, Kudo N, Takahashi M. Accurate NIRS measurement of muscle oxygenation by correcting the influence of a subcutaneous fat layer. BiOS Europe'97. SPIE. 1998;166–73. Jacques SL. Optical properties of biological tissues: a review. Phys Med Biol. 2013;58(11):R37. Simpson CR, Kohl M, Essenpreis M, Cope M. Near-infrared optical properties of ex vivo human skin and subcutaneous tissues measured using the Monte Carlo inversion technique. Phys Med Biol. 1998;43(9):2465. Mustafa FH, Jones PW, Huvanandana J, McEwan AL. Improvement of near infrared body fat sensing at 45-degree source-detector position angle. In: Biomedical Engineering (BME-HUST), International Conference on 2016 Oct 5. IEEE. pp. 70–74. Glaser AK, Kanick SC, Zhang R, Arce P, Pogue BW. A GAMOS plug-in for GEANT4 based Monte Carlo simulation of radiation-induced light transport in biological media. Biomed Opt Express. 2013;4(5):741–59. Eichenfield LE, Hardaway CA. Neonatal dermatology. Curr Opin Pediatr. 1999;11(5):471–4. Valentin J. Basic anatomical and physiological data for use in radiological protection: reference values. ICRP Publication 89. Ann ICRP. 2002;32(3):1–277. Meglinski IV, Matcher SJ. Quantitative assessment of skin layers absorption and skin reflectance spectra simulation in the visible and near-infrared spectral regions. Physiol Meas. 2002;23(4):741. Meglinski I, Matcher S. Computer simulation of the skin reflectance spectra. Comput Methods Programs Biomed. 2003;70(2):179–86. Petrov GI, Doronin A, Whelan HT, Meglinski I, Yakovlev VV. Human tissue color as viewed in high dynamic range optical spectral transmission measurements. Biomed Opt Express. 2012;3(9):2154–61. Vogel AJ. 
Noninvasive optical imaging techniques as a quantitative analysis of Kaposi's sarcoma skin lesions. Ph.D. University of Maryland; 2007. Hale GM, Querry MR. Optical constants of water in the 200-nm to 200-μm wavelength region. Appl Opt. 1973;12(3):555–63. van Veen RL, Sterenborg H, Pifferi A, Torricelli A, Cubeddu R. Determination of VIS-NIR absorption coefficients of mammalian fat, with time-and spatially resolved diffuse reflectance and transmission spectroscopy. In: Biomedical Topical Meeting. J Opt Soc Am Cogn Med Sci. 2004:SF4. Optical properties spectra.http://omlc.ogi.edu/spectra. Accessed 20th Aug 2016. Thomas LW. The chemical composition of adipose tissue of man and mice. Q J Exp Physiol. 1962;47(2):179–88. Pearce R, Grimmer B. Age and the chemical constitution of normal human dermis. J Invest Dermatol. 1972;58(6):347–61. Donner C, Jensen HW. A Spectral BSSRDF for shading human skin. Rendering techniques 2006, EGSR06. Eurograph Assoc. 2006:409–18. Heymsfield S, Stevens V, Noel R, McManus C, Smith J, Nixon D. Biochemical composition of muscle in normal and semistarved human subjects: relevance to anthropometric measurements. Am J Clin Nutr. 1982;36(1):131–42. Bashkatov AN, Genina EA, Kochubey VI, Stolnitz MM, Bashkatova TA, Novikova OV, Peshkova AY, Tuchin VV. Optical properties of melanin in the skin and skin like phantoms. In: EOS/SPIE European biomedical optics week. SPIE. 2000:219–26. Gurjarpadhye AA. Effect of localized mechanical indentation on skin water content evaluated using OCT. Int J Biomed Image. 2011;2011:17. Roggan A, Dorschel K, Minet O, Wolff D, Muller G. The optical properties of biological tissue in the near infrared wavelength range. Bellingham: Laser-induced interstitial therapy SPIE Press; 1995. p. 10–44. Bashkatov A, Genina E, Kochubey V, Tuchin V. Optical properties of human skin, subcutaneous and mucous tissues in the wavelength range from 400 to 2000 nm. J Phys D Appl Phys. 2005;38(15):2543. Méio BB, Moreira EL. Total body water in newborns. In: Preedy VR, editor. Handbook of anthropometry. New York: Springer; 2012. p. 1121–35. Wassenaar E, Van den Brand J. Reliability of near-infrared spectroscopy in people with dark skin pigmentation. J Clin Monit Comp. 2005;19(3):195–9. Matias AR, Ferreira M, Costa P, Neto P. Skin colour, skin redness and melanin biometric measurements. Comparison study between Antera® 3D, Mexameter® and Colorimeter®. Skin Res Technol. 2015;21(3):346–62. McEwan A, Bian S, Gargiulo G, Morhard R, Jones P, Mustafa F, Bek BE, Jeffery H. SPIE smart structures and materials + nondestructive evaluation and health monitoring. SPIE. 2014;90600A-90600A-90608.41. Mustafa FH, Bek EJ, Huvanandana J, Jones PW, Carberry AE, Jeffery HE, Jin CT, McEwan AL. Length-free near infrared measurement of newborn malnutrition. Sci Rep. 2016;6. Carberry A, Huvanandana J, Mustafa FH, Jones P, Norris C, McEwan A, Jeffery H. Low cost body composition measurement for nutrition assessment using Near Infrared (NIR) light reflection from birth up to 2 years. In: International conference on nutrition and growth 2–4th Mar 2017 (in press). Alaluf S, Atkins D, Barrett K, Blount M, Carter N, Heath A. Ethnic variation in melanin content and composition in photoexposed and photoprotected human skin. Pigment Cell Melanoma Res. 2002;15(2):112–8. Syvitski JP. Principles, methods and application of particle size analysis. Cambridge: University Press; 2007. Lo Y-S, Lu C-C, Chen L-Y, Huang L-Y, Jong Y-J. 
Quantitative measurement of muscle and subcutaneous fat thickness in newborn by real-time ultrasonography: a useful method for site and depth evaluation in vaccination. Kaohsiung J Med Sci. 1992;8(2):75–81. Wang Z, Deurenberg P, Wang W, Pietrobelli A, Baumgartner RN, Heymsfield SB. Hydration of fat-free body mass: new physiological modeling approach. Am J Physiol Endocrinol Metab. 1999;276(6):E995–1003. Montagna W. The structure and function of skin 3E. Amsterdam: Elsevier; 2012. Holbrook K. A histological comparison of infant and adult skin. In: Neonatal skin: structure and function. New York City: Marcel Dekker; 1982. p. 3–31. Mancini A, Lawley L. Structure and function of newborn skin. In: Textbook of Neonatal Dermatology; 2001. p. 18–32. Stamatas GN, Nikolovski J, Luedtke MA, Kollias N, Wiegand BC. Infant skin microstructure assessed in vivo differs from adult skin in organization and at the cellular level. Pediatr Dermatol. 2010;27(2):125–31. Barel AO, Paye M, Maibach HI. Handbook of cosmetic science and technology. Boca Raton: CRC Press; 2014. Bigio IJ, Fantini S. Quantitative biomedical optics: theory, methods, and applications. Cambridge: Cambridge University Press; 2016. Pogue BW, Patterson MS. Review of tissue simulating phantoms for optical spectroscopy, imaging and dosimetry. J Biomed Opt. 2006;11(4):041102. Adams J, Shaw N. A practical guide to bone densitometry in children. Camerton: National Osteoporosis Society; 2004. Niiniviita H, Kiljunen T, et al. Comparison of effective dose and image quality for newborn imaging on seven commonly used CT scanners. Radiation Protection Dosimetry. 2016. FHM and PJ were the corresponding authors and wrote the manuscript. FHM performed the simulations while PJ contributed to the methodologies section. FHM and AM provided the main idea. All authors read and approved the final manuscript. We would like to thank The Bill and Melinda Gates Foundation Grand Challenges Program and the University of Sydney International Scholarship for their financial and academic support. The datasets supporting the conclusions of this article are included as Additional file 3. Financial support was provided by The Bill and Melinda Gates Foundation Grand Challenges Program (OPP1111820) and the University of Sydney International Scholarship (UsydIS) from the University of Sydney. School of Electrical and Information Engineering, Faculty of Engineering, University of Sydney, New South Wales, Australia: Fatin Hamimi Mustafa, Peter W. Jones & Alistair L. McEwan. Correspondence to Fatin Hamimi Mustafa or Peter W. Jones. Additional file 1: Table S1. Comparison of body composition methods using the FOM equation and the measurement concept of each method. The variables in the FOM equation include estimated cost (including equipment set-up), estimated measurement time, requirement for skilled operators, noninvasiveness, mobility, and safety. The FOM should be highest for the best device, which is the NIR method. Additional file 2: Table S2. Literature review of fat measurements using NIRS by simulation and phantom experiment. Additional file 3. Absorption coefficient spectra of water, hemoglobin and pure fat. Mustafa, F.H., Jones, P.W. & McEwan, A.L. Near infrared spectroscopy for body fat sensing in neonates: quantitative analysis by GAMOS simulations.
BioMed Eng OnLine 16, 14 (2017). doi:10.1186/s12938-016-0310-y. Keywords: Near-infrared spectroscopy; Body fat sensing; Fat thickness; GAMOS simulation; Neonates; Under-nutrition detection.
What's the physical significance of the off-diagonal element in the matrix of moment of inertia In classical mechanics, for the rotation of a rigid object, the general problem is to study the rotation about a given axis, so we need to figure out the moment of inertia around some axes. In the 3-dimensional case, we have a matrix (i.e. the moment of inertia tensor) $$ I = \left( \begin{matrix} I_{xx} & I_{xy} & I_{xz}\\ I_{xy} & I_{yy} & I_{yz}\\ I_{zx} & I_{zy} & I_{zz} \end{matrix} \right) $$ I am curious what the physical significance of the matrix elements is. I guess the moment of inertia in element $ij$ is the moment of inertia when the object is rotating about the axis $ij$. For example, $I_{xy}$ is the moment of inertia when the object is rotating about the $xy$ axis and $I_{yy}$ is the moment of inertia when the object is rotating about the $y$ axis; is that correct? When I read further into the text, it then introduces a method to diagonalize the moment of inertia tensor such that the non-vanishing elements appear only on the diagonal. In that case, the text calls the diagonal elements the principal moments of inertia; my question is, what is the physical significance of the principal moments of inertia? moment-of-inertia What, exactly, do you consider the $xy$ axis to be??? In a body-fixed frame you can always choose coordinates to make the inertia tensor diagonal. Then the diagonal components mean what you say - the moment of inertia for rotations about one of the principle axes. In other frames the inertia tensor will be off diagonal. This just represents the fact that you are using coordinates which are rotated relative to the principle axes, which are adapted to the shape of the body. – Michael Brown Apr 12 '13 at 4:30 I think it's reducing 9 rotation axes to 3 components by finding the eigenvectors, so finding the three (orthogonal, likely), uncorrelated axes into which all rotations in the entire space can be decomposed. Looks like it's directly analogous to principal components, about which there is some significant literature: en.wikipedia.org/wiki/Principal_components – Paul Apr 12 '13 at 4:31 The special directions are "principal" not "principle" axes. – DarenW Apr 12 '13 at 5:01 @DarenW Thank you, of course you're right. Derp on my part - I'm in a bad headspace today. :) Unfortunately I can't edit my comment. – Michael Brown Apr 12 '13 at 5:06 I feel like there's still something missing from the responses so far. Consider the analogy with the stress tensor. Yes we can diagonalize it. But the off-diagonal components do have a physical interpretation, as they are shears (compared to the compression/tensile stresses appearing on the diagonal). They are not just artifacts of a poor choice of coordinates. – user10851 Apr 12 '13 at 5:19
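Before the answers, it may help to see numerically where off-diagonal entries come from. The snippet below builds the inertia tensor of a few point masses from the standard definition \(I_{ij} = \sum_k m_k (|\vec r_k|^2 \delta_{ij} - r_{k,i} r_{k,j})\); the masses and positions are made up purely for illustration.

```python
import numpy as np

def inertia_tensor(masses, positions):
    """Inertia tensor of point masses about the origin:
    I_ij = sum_k m_k * (|r_k|^2 * delta_ij - r_k_i * r_k_j)."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, np.asarray(positions, dtype=float)):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# Two point masses placed off the coordinate axes give nonzero off-diagonal terms
masses = [1.0, 2.0]
positions = [[1.0, 1.0, 0.0], [-0.5, 0.3, 0.8]]
print(inertia_tensor(masses, positions))
```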
For a non-spherical object, there is a unique direction along which the object is "longest", that is, which gives the smallest moment of inertia if the object is rotated about an axis in that direction. The material of the object is as close to that axis as it can be, compared to other directions. There's another direction perpendicular to that about which the moment of inertia is maximum. Then finally we have an intermediate amount of moment of inertia in a third direction perpendicular to the previous two. I lied; that "intermediate" moment of inertia may be the same as the minimum one or maximum one, in which case you have some freedom to pick an arbitrary angle for one axis, but never mind this detail for present purposes. A spherical object, of course, has the same moment of inertia about any axis, so is boring. You have freedom to pick axes however you like, but never mind that special case either, since it's not interesting. For the non-special case, we have the unique directions for minimum, maximum, and intermediate moments of inertia. We could name these directions, the 'principal axes', with letters like, oh maybe: 'X', 'Y', and 'Z' and thus have the tensor $$ I = \left( \begin{matrix} I_{xx} & 0 & 0\\ 0 & I_{yy} & 0\\ 0&0 & I_{zz} \end{matrix} \right) $$ These three numbers are physically meaningful, giving a general overall measure of size and mass distribution of the object. But maybe the object is positioned at some crazy angle with respect to things we care about, like our nice level tabletop, our local notion of 'east' and 'north'. So we must rotate the object and its various physical vectors and tensors (and spinors if it's a fermion). An arbitrary rotation is described by three angles (e.g. Euler angles). The fully general $I$ tensor then has six independent quantities. We see nine components, but they count as six due to always being a symmetric tensor. The physical significance of the off-diagonal components is that you're using a coordinate system not aligned with the principal directions of the object. They tell us nothing interesting about the object itself. DarenW This is the correct answer. The only physical significance of an inertia tensor with non-zero off-diagonal elements is that the basis directions of the body coordinate system are not the body's principal axes with regard to rotation. An airplane, for instance, typically has +x axis pointing forward along the center of the fuselage, y pointing out the right wing, and z pointing down. This makes perfect sense from the perspective of the aircraft designer and the pilot, but almost inevitably is not aligned with the rotational principal axes. – David Hammen Jun 10 '16 at 4:05
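To make the diagonalization described in this answer concrete, here is a short numerical check: for a real symmetric inertia tensor, the eigenvalues are the principal moments and the eigenvectors are the principal axes. The numbers in the tensor below are arbitrary and purely illustrative.

```python
import numpy as np

# A symmetric inertia tensor expressed in axes not aligned with the body
I = np.array([[ 5.0, -1.2,  0.4],
              [-1.2,  6.0, -0.7],
              [ 0.4, -0.7,  3.5]])

# Real symmetric => orthonormal eigenbasis: eigenvalues are the principal
# moments, eigenvectors (columns of R) are the principal axes.
principal_moments, R = np.linalg.eigh(I)

# Rotating into that basis diagonalizes the tensor (off-diagonals vanish)
I_principal = R.T @ I @ R
print(principal_moments)
print(np.round(I_principal, 12))
```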
The fundamental equation that contains $I$ is: $\vec{L} = I \vec{\omega}$ where $\vec{L}$ is the angular momentum vector, and $\vec{\omega}$ is the angular velocity vector. The fact that $I$ is a tensor (and not just a scalar) means that $\vec{L}$ and $\vec{\omega}$ do not necessarily point in the same direction. When you want a particular matrix representation of the moment of inertia tensor, you may get off-diagonal elements. However, by its very definition, any matrix representation of $I$ is a symmetric matrix. Any real, symmetric matrix can always be diagonalized by orthogonal matrices. What this physically means is that, for at least one choice of $x$, $y$, $z$ axes, the representation of $I$ will be perfectly diagonal, with all the off-diagonal elements zero. This does not at all mean that the off-diagonal elements are not important. And I will not escape the question of their meaning. There is also the equation describing the dynamics of the whole business: $\frac{d \vec{L}}{ dt} = \vec{\tau}$ This is analogous to $\frac{d\vec{p}}{dt} = \vec{F}$ which describes linear motion; here of course $\vec{\tau}$ is the torque, while $\vec{L}$ is the angular momentum. Substituting $\vec{L} = I \vec{\omega}$ and assuming a time-invariant moment of inertia tensor, this becomes: $I \frac{d\vec{\omega}}{dt} = \vec{\tau}$ Now, the torque you are applying does not really have to coincide with a vector that diagonalizes your $I$ (such vectors are called principal axes). For instance, take the wheel of your car. It is driven by a rod, going through its axis. Let us take that to be the $z$-axis. So, in this case, $\tau_z$ is non-zero, while $\tau_x$ and $\tau_y$ are zero. The above equation is actually three equations; let me show the third one explicitly: $I_{zx} \frac{d\omega_x}{dt} + I_{zy} \frac{d\omega_y}{dt} + I_{zz} \frac{d\omega_z}{dt} = \tau_z$ Now, if the off-diagonal elements $I_{zx}$ and $I_{zy}$ are zero (assuming zero initial conditions) your wheel will just acquire $\omega_z$, that is, it will start rotating nicely about the axis you are applying the torque on. However, if they are not zero... $\omega_x$ and $\omega_y$ will not remain zero. That is, the wheel will tend to start to rotate around the other axes as well! So, you will not get a rotation around the $z$-axis alone, and the wheel will tend to wobble! This is the condition where you take your car to the shop, and have the wheels balanced! So, the off-diagonal elements really mean that any attempt to rotate the body by applying a torque about a given axis will not result in a rotation about just that axis; there will be rotation around the other axes as well. Note that if, for instance, we have $I_{xy} \neq 0$ and $I_{xz} = I_{yz} = 0$, a torque around the $z$ axis will result in a balanced rotation, while a torque around the $x$ or $y$ axes will not... safkan The clearest explanation I've found on the internet so far; thanks! Could you also explain what this person is trying to say answering the same question? – user1717828 Aug 2 '17 at 13:11
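A small numerical experiment illustrates the coupling this answer describes. It uses the answer's simplified equation $I \frac{d\vec{\omega}}{dt} = \vec{\tau}$ exactly as written (constant inertia tensor; the gyroscopic $\vec{\omega} \times I\vec{\omega}$ term of the full Euler equations is deliberately left out to match it), with tensor entries invented purely for illustration.

```python
import numpy as np

# Inertia tensor with nonzero I_zx and I_zy (an "unbalanced wheel"); values are invented
I = np.array([[4.0, 0.0, 0.3],
              [0.0, 4.0, 0.2],
              [0.3, 0.2, 1.5]])

tau = np.array([0.0, 0.0, 1.0])          # torque applied purely about the z (axle) axis

# The answer's simplified dynamics: I dw/dt = tau   =>   dw/dt = I^-1 tau
domega_dt = np.linalg.solve(I, tau)
print(domega_dt)      # all three components nonzero: the wheel also picks up w_x and w_y

# Zeroing the off-diagonal z-row/column entries gives a "balanced" wheel:
I_balanced = I.copy()
I_balanced[2, :2] = 0.0
I_balanced[:2, 2] = 0.0
print(np.linalg.solve(I_balanced, tau))  # now only the w_z component grows
```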
Your guess is wrong, because first of all, there is no "xy"-axis, there's only an "xy"-plane, the plane perpendicular to the z-direction. As you note, your text describes how the matrix can be made diagonal. Not sure how well the text explains this, but the way to get the matrix to be diagonal is to go to a new coordinate system: You started out with $x$, $y$ and $z$ in "some" way. But your initial $x, y$ and $z$ might not be the "natural" axes of your object. If you go to new axes $x', y'$ and $z'$ such that the tensor is diagonal, you are now in a coordinate system where the object will rotate freely only around those new axes. That's why they are called the principal axes. The nice thing about a diagonal matrix is that matrix-vector products in it are very easy to calculate. For example, if you rotate with angular velocity $\omega$ around an axis $\vec \omega = (\omega_x, \omega_y, \omega_z)$, then the rotational energy of that is $$\frac{1}{2} \vec{\omega}^T \cdot I \cdot \vec{\omega}$$ If $I$ is diagonal, this simply becomes $$\frac{1}{2} \omega_x^2 I_{xx} + \frac{1}{2} \omega_y^2 I_{yy} + \frac{1}{2} \omega_z^2 I_{zz}$$ It is, however, correct that $I_{yy}$ is the moment of inertia for rotation around the $y$ axis. The off-diagonal elements would come into play if you don't go to the coordinate system that makes $I$ diagonal and then look at rotations around axes different from the coordinate axes. For example $\vec \omega = (\omega, \omega, 0)$. There, the rotational energy would be $$\frac{\omega^2}{2}(I_{xx} + I_{yy} + 2 I_{xy})$$ So if you want to compute the rotational energy in a coordinate system where $I$ is not diagonal, you get all those pesky off-diagonal matrix elements in there, cluttering up your expression, whereas you can get rid of all of them if you transform the coordinate system so that they are $0$. This diagonalization, btw, is something that comes up very often in all fields that deal with matrices, just because it makes dealing with those matrices so much easier, and the remaining diagonal entries (the "eigenvalues" of that matrix) contain a lot of useful information about the nature of the matrix. Lagerbaer Thanks for your reply. It is not hard to understand the significance of the principal axes and moment of inertia. But like what you said, if I use the off-diagonal matrix, when calculating the rotational kinetic energy, I have to count all those off-diagonal elements, so should there be any physical significance of the off-diagonal entries? Otherwise, how do you understand the cross terms in the energy calculation? – user1285419 Apr 12 '13 at 12:38 I believe $\frac{\omega}{2}$ should be $\frac{1}{2} \omega^2$. – Psi Apr 12 '13 at 13:28
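As a quick numerical check of the two energy expressions in this answer, the snippet below evaluates \(\tfrac{1}{2} \vec{\omega}^T I \vec{\omega}\) for \(\vec \omega = (\omega, \omega, 0)\) and compares it with the expanded form containing the cross term; the tensor entries are arbitrary example values.

```python
import numpy as np

I = np.array([[3.0, 0.5, 0.0],     # arbitrary symmetric inertia tensor with I_xy != 0
              [0.5, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

w = 2.0
omega = np.array([w, w, 0.0])      # rotation about an axis in the xy-plane

E_quadratic = 0.5 * omega @ I @ omega                          # (1/2) w^T I w
E_expanded = 0.5 * w**2 * (I[0, 0] + I[1, 1] + 2 * I[0, 1])    # the answer's expanded form
print(E_quadratic, E_expanded)     # identical: the off-diagonal I_xy supplies the cross term
```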
Equidistribution of saddle connections on translation surfaces Journal of Modern Dynamics, 2019, 14: 87-120. doi: 10.3934/jmd.2019004 Benjamin Dozier Mathematics Department, Stony Brook University, Stony Brook, NY 11794-3651, USA Received September 05, 2017 Revised December 20, 2017 Published March 2019 Fund Project: Supported in part by NSF grant DGE-114747. Fix a translation surface $ X $, and consider the measures on $ X $ coming from averaging the uniform measures on all the saddle connections of length at most $ R $. Then, as $ R\to\infty $, the weak limit of these measures exists and is equal to the area measure on $ X $ coming from the flat metric. This implies that, on a rational-angled billiard table, the billiard trajectories that start and end at a corner of the table are equidistributed on the table. We also show that any weak limit of a subsequence of the counting measures on $ S^1 $ given by the angles of all saddle connections of length at most $ R_n $, as $ R_n\to\infty $, is in the Lebesgue measure class. The proof of the equidistribution result uses the angle result, together with the theorem of Kerckhoff-Masur-Smillie that the directional flow on a surface is uniquely ergodic in almost every direction. Keywords: Translation surfaces, billiards, equidistribution, Teichmüller dynamics. Mathematics Subject Classification: Primary: 37E35; Secondary: 32G15. Citation: Benjamin Dozier. Equidistribution of saddle connections on translation surfaces. Journal of Modern Dynamics, 2019, 14: 87-120. doi: 10.3934/jmd.2019004 J. S. Athreya, Quantitative recurrence and large deviations for Teichmuller geodesic flow, Geom. Dedicata, 119 (2006), 121-140. doi: 10.1007/s10711-006-9058-z. M. Boshernitzan, G. Galperin, T. Krüger and S. Troubetzkoy, Periodic billiard orbits are dense in rational polygons, Trans. Amer. Math. Soc., 350 (1998), 3523-3535. doi: 10.1090/S0002-9947-98-02089-3. R. Bowen, The equidistribution of closed geodesics, Amer. J. Math., 94 (1972), 413-423. doi: 10.2307/2374628. J. Chaika, Homogeneous approximation for flows on translation surfaces, preprint, 2011, arXiv: 1110.6167. B. Dozier, Convergence of Siegel–Veech constants, Geometriae Dedicata, (2018), 1–12. doi: 10.1007/s10711-018-0332-7. A. Eskin and H. Masur, Asymptotic formulas on flat surfaces, Ergodic Theory and Dynamical Systems, 21 (2001), 443-478. doi: 10.1017/S0143385701001225. A. Eskin, G. Margulis and S. Mozes, Upper bounds and asymptotics in a quantitative version of the Oppenheim conjecture, Ann. of Math. (2), 147 (1998), 93-141. doi: 10.2307/120984. A. Eskin, M. Mirzakhani and A. Mohammadi, Isolation, equidistribution, and orbit closures for the SL(2, $\mathbb{R}$) action on moduli space, Ann. of Math. (2), 182 (2015), 673-721. doi: 10.4007/annals.2015.182.2.7. A. Eskin, Counting problems in moduli space, Handbook of Dynamical Systems, Vol. 1B, Elsevier B. V., Amsterdam, 2006, 581–595. doi: 10.1016/S1874-575X(06)80034-2. R. H. Fox and R. B. Kershner, Concerning the transitive properties of geodesics on a rational polyhedron, Duke Math. J., 2 (1936), 147-150. doi: 10.1215/S0012-7094-36-00213-2. S. Kerckhoff, H. Masur and J. Smillie, Ergodicity of billiard flows and quadratic differentials, Ann. of Math. (2), 124 (1986), 293-311. doi: 10.2307/1971280.
H. Masur, Lower bounds for the number of saddle connections and closed trajectories of a quadratic differential, in Holomorphic Functions and Moduli, Vol. I (Berkeley, CA, 1986), Math. Sci. Res. Inst. Publ., vol. 10, Springer, New York, 1988, 215–228. doi: 10.1007/978-1-4613-9602-4_20. H. Masur, The growth rate of trajectories of a quadratic differential, Ergodic Theory and Dynamical Systems, 10 (1990), 151-176. doi: 10.1017/S0143385700005459. L. Marchese, R. Treviño and S. Weil, Diophantine approximations for translation surfaces and planar resonant sets, preprint, 2016, arXiv: 1502.05007v2. A. Nevo, Equidistribution in measure-preserving actions of semisimple groups: Case of $SL_2(\mathbb{R})$, preprint, 2017, arXiv: 1708.03886. W. A. Veech, Teichmüller curves in moduli space, Eisenstein series and an application to triangular billiards, Invent. Math., 97 (1989), 553-583. doi: 10.1007/BF01388890. W. A. Veech, Siegel measures, Ann. of Math. (2), 148 (1998), 895-944. doi: 10.2307/121033. Y. Vorobets, Periodic geodesics on generic translation surfaces, in Algebraic and Topological Dynamics, Contemp. Math., 385, Amer. Math. Soc., Providence, RI, 2005, 205–258. doi: 10.1090/conm/385/07199. A. Wright, Translation surfaces and their orbit closures: An introduction for a broad audience, EMS Surv. Math. Sci., 2 (2015), 63-108. doi: 10.4171/EMSS/9. A. N. Zemljakov and A. B. Katok, Topological transitivity of billiards in polygons, Mat. Zametki, 18 (1975), 291-300. A. Zorich, Flat surfaces, in Frontiers in Number Theory, Physics, and Geometry, I, Springer, Berlin, 2006, 437–583. doi: 10.1007/978-3-540-31347-2_13. Figure 1. Saddle connections of length at most $R = 7$ on a genus two translation surface (opposite sides are identified), in units where the height of the figure is approximately 2. The thickness of each saddle connection is drawn inversely proportional to its length (so the total amount of "paint" used to draw a saddle connection is independent of its length). This choice of thickness is meant to represent the measures $\mu_s$, which are all probability measures, in Theorem 1.1. That theorem says that, as the length bound $R$ goes to infinity, the picture will be uniformly colored. This picture was generated with the help of Ronen Mukamel's $\texttt{triangulated\_surfaces}$ SAGE package. Figure 2. Opposite sides of the polygon are identified to give a genus two translation surface. A cylinder is shown, together with a long saddle connection contained in that cylinder. Figure 3. Regions used in proof of Lemma 2.2. Figure 4. Proof of Lemma 4.1. The red points are group $A_1$, while the blue are $A_2$. Figure 5. Adding a saddle connection to a complex, in proof of Proposition 5.4. Figure 6. Comparing averages for Lemma 5.8 (Shadowing).
The relationship between sleep and depression and bipolar disorder in children and young people Monica Comsa, Kirstie N. Anderson, Aditya Sharma, Vanishri C. Yadav, Stuart Watson Journal: BJPsych Open / Volume 8 / Issue 1 / January 2022 Published online by Cambridge University Press: 14 January 2022, e27 Sleep difficulties are often reported in practice, and are part of the diagnostic criteria for depression and bipolar disorder. To inform the understanding of the relationship between sleep and both depression and bipolar disorder. We conducted a narrative literature review of affective disorders and sleep difficulties in children and young people.
Specific sleep disorders, such as parasomnias, narcolepsy and sleep-related movement disorders, are associated with depression, whereas insomnia, obstructive sleep apnoea and circadian rhythm disorders are associated with both depression and bipolar disorder in children and young people. Conversely, children and young people with depression can present with a number of sleep difficulties, and these are associated with higher depression severity and greater fatigue, suicidal ideation, physical complaints, pain and decreased concentration. Sleep disturbances among adolescents with bipolar disorder can affect the severity of depressive and manic symptoms, are a poor prognostic indicator and have been associated with social and academic impairment. Antidepressants and antipsychotics can directly affect sleep architecture, which clinicians need to be aware of. Non-pharmacological interventions for sleep problems could prevent and/or minimise the risk of relapse in affective disorders. Sleep difficulties can occur before, during and after an episode of depression or bipolar disorder, and have a higher prevalence in affective disorders compared with the general population. A multi-modal approach would include the treatment of both the affective and specific sleep disorder. Further research is needed in this field to understand the impact of combined interventions on clinical outcomes. Steven C. Anderson Journal: Iranian Studies / Volume 31 / Issue 3-4 / Summer Fall 1998 Print publication: Summer Fall 1998 The Principal Users of the Encyclopaedia are Likely to be Students and scholars of Iranian culture, history, literature, geography, and area studies. Most of these will not be looking so much for technical articles in their own specialties as for general introductory entries in ancillary areas. I hope that another category of user of the fauna and other natural history entries, for which some forty articles have now been published, will be the student or scholar beginning studies of Iranian natural history looking for summary and review articles and bibliographic sources. For such users, the EIr can perform an invaluable service, as there is no other existing source in any language attempting to pull together the knowledge of Iranian natural history as a whole. To serve the needs of this diverse audience, entries about individual species of native animals ought to include some description or definition of the creature, a brief summary of its natural history, viz., its habits and behavior, its habitat, its status in the biotic community (as prey, predator, relative abundance, etc.), its general distribution, and its range within Iran (and the other areas covered by the EIr); cultural information that applies, such as conservation, economic importance, history of use or human interactions, appearance in literature, folklore, and mythology, should be discussed. Evolution of severe acute respiratory coronavirus virus 2 (SARS-CoV-2) seroprevalence among employees of a US academic children's hospital during coronavirus disease 2019 (COVID-19) pandemic Brian T. Fisher, Anna Sharova, Craig L. K. Boge, Sigrid Gouma, Audrey Kamrin, Jesse Blumenstock, Sydney Shuster, Lauren Gianchetti, Danielle Collins, Elikplim Akaho, Madison E. Weirick, Christopher M. McAllister, Marcus J. Bolton, Claudia P. Arevalo, Eileen C. Goodwin, Elizabeth M. Anderson, Shannon R. Christensen, Fran Balamuth, Audrey R. Odom John, Yun Li, Susan Coffin, Jeffrey S. Gerber, Scott E. 
Hensley Journal: Infection Control & Hospital Epidemiology , First View Published online by Cambridge University Press: 02 December 2021, pp. 1-9 To describe the cumulative seroprevalence of severe acute respiratory coronavirus virus 2 (SARS-CoV-2) antibodies during the coronavirus disease 2019 (COVID-19) pandemic among employees of a large pediatric healthcare system. Design, setting, and participants: Prospective observational cohort study open to adult employees at the Children's Hospital of Philadelphia, conducted April 20–December 17, 2020. Employees were recruited starting with high-risk exposure groups, utilizing e-mails, flyers, and announcements at virtual town hall meetings. At baseline, 1 month, 2 months, and 6 months, participants reported occupational and community exposures and gave a blood sample for SARS-CoV-2 antibody measurement by enzyme-linked immunosorbent assays (ELISAs). A post hoc Cox proportional hazards regression model was performed to identify factors associated with increased risk for seropositivity. In total, 1,740 employees were enrolled. At 6 months, the cumulative seroprevalence was 5.3%, which was below estimated community point seroprevalence. Seroprevalence was 5.8% among employees who provided direct care and was 3.4% among employees who did not perform direct patient care. Most participants who were seropositive at baseline remained positive at follow-up assessments. In a post hoc analysis, direct patient care (hazard ratio [HR], 1.95; 95% confidence interval [CI], 1.03–3.68), Black race (HR, 2.70; 95% CI, 1.24–5.87), and exposure to a confirmed case in a nonhealthcare setting (HR, 4.32; 95% CI, 2.71–6.88) were associated with statistically significant increased risk for seropositivity. Employee SARS-CoV-2 seroprevalence rates remained below the point-prevalence rates of the surrounding community. Provision of direct patient care, Black race, and exposure to a confirmed case in a nonhealthcare setting conferred increased risk. These data can inform occupational protection measures to maximize protection of employees within the workplace during future COVID-19 waves or other epidemics. Perinatal mental healthcare in Northern Ireland: challenges and opportunities D. Mongan, J. Lynch, J. Anderson, L. Robinson, C. Mulholland Journal: Irish Journal of Psychological Medicine , First View Published online by Cambridge University Press: 29 November 2021, pp. 1-6 Perinatal mental health is a vital component of public mental health. The perinatal period represents the time in a woman's life when she is at the highest risk of developing new-onset psychiatric disorders or relapse of an existing mental illness. Optimisation of maternal mental health in the perinatal period is associated with both short- and long-term benefits not only for the mother, but also for her infant and family. However, perinatal mental health service provision remains variable across the world. At present in Northern Ireland, 80% of women do not have access to specialist community perinatal mental health services, and without access to a mother and baby unit, mothers who require a psychiatric admission in the postnatal period are separated from their baby. However, following successful campaigns, funding for development of specialist perinatal mental health community teams has recently been approved. 
In this article, we discuss the importance of perinatal mental health from a public health perspective and explore challenges and opportunities in the ongoing journey of specialist service development in Northern Ireland. Insights from the Evaluations of the NIH Centers for Accelerated Innovation and Research Evaluation and Commercialization Hubs Programs Benjamin J. Anderson, Olena Leonchuk, Alan C. O'Connor, Brooke K. Shaw, Amanda C. Walsh Journal: Journal of Clinical and Translational Science / Accepted manuscript Published online by Cambridge University Press: 22 November 2021, pp. 1-26 Interplay between the Genetics of Personality Traits, severe Psychiatric Disorders, and COVID-19 Host Genetics in the Susceptibility to SARS-CoV-2 Infection - ADDENDUM Urs Heilbronner, Fabian Streit, Thomas Vogl, Fanny Senner, Sabrina K. Schaupp, Daniela Reich-Erkelenz, Sergi Papiol, Mojtaba Oraki Kohshour, Farahnaz Klöhn-Saghatolislam, Janos L. Kalman, Maria Heilbronner, Katrin Gade, Ashley L. Comes, Monika Budde, Till F. M. Andlauer, Heike Anderson-Schmidt, Kristina Adorjan, Til Stürmer, Adrian Loerbroks, Manfred Amelang, Eric Poisel, Jerome Foo, Stefanie Heilmann-Heimbach, Andreas J. Forstner, Franziska Degenhardt, Jörg Zimmermann, Jens Wiltfang, Martin von Hagen, Carsten Spitzer, Max Schmauss, Eva Reininghaus, Jens Reimer, Carsten Konrad, Georg Juckel, Fabian U. Lang, Markus Jäger, Christian Figge, Andreas J. Fallgatter, Detlef E. Dietrich, Udo Dannlowski, Bernhardt T. Baune, Volker Arolt, Ion-George Anghelescu, Markus M. Nöthen, Stephanie H. Witt, Ole A. Andreassen, Chi-Hua Chen, Peter Falkai, Marcella Rietschel, Thomas G. Schulze, Eva C. Schulte Journal: BJPsych Open / Volume 7 / Issue 6 / November 2021 Published online by Cambridge University Press: 18 November 2021, e206 The impact of coronavirus disease 2019 (COVID-19) response on hospital infection prevention programs and practices in the southeastern United States Sonali D. Advani, Andrea Cromer, Brittain Wood, Esther Baker, Kathryn L. Crawford, Linda Crane, Linda Roach, Polly Padgette, Elizabeth Dodds-Ashley, Ibukunoluwa C. Kalu, David J. Weber, Emily Sickbert-Bennett, Deverick J. Anderson, for the Centers for Disease Control and Prevention Epicenters Program Initial assessments of coronavirus disease 2019 (COVID-19) preparedness revealed resource shortages and variations in infection prevention policies across US hospitals. Our follow-up survey revealed improvement in resource availability, increase in testing capacity, and uniformity in infection prevention policies. Most importantly, the survey highlighted an increase in staffing shortages and use of travel nursing. The ASKAP Variables and Slow Transients (VAST) Pilot Survey Tara Murphy, David L. Kaplan, Adam J. Stewart, Andrew O'Brien, Emil Lenc, Sergio Pintaldi, Joshua Pritchard, Dougal Dobie, Archibald Fox, James K. Leung, Tao An, Martin E. Bell, Jess W. Broderick, Shami Chatterjee, Shi Dai, Daniele d'Antonio, Gerry Doyle, B. M. Gaensler, George Heald, Assaf Horesh, Megan L. Jones, David McConnell, Vanessa A. Moss, Wasim Raja, Gavin Ramsay, Stuart Ryder, Elaine M. Sadler, Gregory R. Sivakoff, Yuanming Wang, Ziteng Wang, Michael S. Wheatland, Matthew Whiting, James R. Allison, C. S. Anderson, Lewis Ball, K. Bannister, D. C.-J. Bock, R. Bolton, J. D. Bunton, R. Chekkala, A. P Chippendale, F. R. Cooray, N. Gupta, D. B. Hayman, K. Jeganathan, B. Koribalski, K. Lee-Waddell, Elizabeth K. Mahony, J. Marvil, N. M. McClure-Griffiths, P. Mirtschin, A. Ng, S. Pearce, C. Phillips, M. A. 
Voronkov Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021 Published online by Cambridge University Press: 12 October 2021, e054 The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified. Body mass index and age are associated with ventricular end-diastolic pressure in adults with a Fontan circulation Mary Howell, William E. Anderson, Jorge Alegria, Joseph Paolillo, Matthew C. Schwartz Journal: Cardiology in the Young , First View Published online by Cambridge University Press: 07 October 2021, pp. 1-6 Systemic ventricular end-diastolic pressure is an important haemodynamic variable in adult patients with Fontan circulation. Risk factors associated with elevated end-diastolic pressure have not been clearly identified in this population. All patients > 18 years with Fontan circulation who underwent cardiac catheterisation at our centre between 1/08 and 3/19 were included. Relevant patient variables were extracted. Univariate and multivariate general linear models were analysed to identify variables associated with end-diastolic pressure. Forty-two patients were included. Median age was 24.0 years (20.9–29.0) with a body mass index of 23.7 kg/m2 (21.5–29.7). 10 (23.8%) patients had a systemic right ventricle. The median (Interquartile range) and mean pulmonary artery pressure were 11.0 mmHg (9.0–12.0) and 16.0 mmHg (13.0–18.0), respectively. On univariate analysis, end-diastolic pressure was positively associated with body mass index (p < 0.01), age > 25 years (p = 0.04), symptoms of heart failure (p < 0.01), systemic ventricular systolic pressure (p = 0.03), pulmonary artery mean pressure (p < 0.01), and taking diuretics (p < 0.01) or sildenafil (p < 0.01). End-diastolic pressure was negatively associated with aortic saturation (p < 0.01). On multivariate analysis, end-diastolic pressure was positively associated with age ≥ 25 years (p < 0.01), and body mass index (p = 0.04). In a cohort of adult patients with Fontan circulation undergoing catheterisation, end-diastolic pressure was positively associated with age ≥ 25 years and body mass index on multivariate analysis. 
Maintaining a healthy body mass index may offer haemodynamic benefit in adults with Fontan physiology. Interplay between the genetics of personality traits, severe psychiatric disorders and COVID-19 host genetics in the susceptibility to SARS-CoV-2 infection The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, with its impact on our way of life, is affecting our experiences and mental health. Notably, individuals with mental disorders have been reported to have a higher risk of contracting SARS-CoV-2. Personality traits could represent an important determinant of preventative health behaviour and, therefore, the risk of contracting the virus. We examined overlapping genetic underpinnings between major psychiatric disorders, personality traits and susceptibility to SARS-CoV-2 infection. Linkage disequilibrium score regression was used to explore the genetic correlations of coronavirus disease 2019 (COVID-19) susceptibility with psychiatric disorders and personality traits based on data from the largest available respective genome-wide association studies (GWAS). In two cohorts (the PsyCourse (n = 1346) and the HeiDE (n = 3266) study), polygenic risk scores were used to analyse if a genetic association between, psychiatric disorders, personality traits and COVID-19 susceptibility exists in individual-level data. We observed no significant genetic correlations of COVID-19 susceptibility with psychiatric disorders. For personality traits, there was a significant genetic correlation for COVID-19 susceptibility with extraversion (P = 1.47 × 10−5; genetic correlation 0.284). Yet, this was not reflected in individual-level data from the PsyCourse and HeiDE studies. We identified no significant correlation between genetic risk factors for severe psychiatric disorders and genetic risk for COVID-19 susceptibility. Among the personality traits, extraversion showed evidence for a positive genetic association with COVID-19 susceptibility, in one but not in another setting. Overall, these findings highlight a complex contribution of genetic and non-genetic components in the interaction between COVID-19 susceptibility and personality traits or mental disorders. Hospital-acquired influenza in the United States, FluSurv-NET, 2011–2012 through 2018–2019 Charisse N. Cummings, Alissa C. O'Halloran, Tali Azenkot, Arthur Reingold, Nisha B. Alden, James I. Meek, Evan J. Anderson, Patricia A. Ryan, Sue Kim, Melissa McMahon, Chelsea McMullen, Nancy L. Spina, Nancy M. Bennett, Laurie M. Billing, Ann Thomas, William Schaffner, H. Keipp Talbot, Andrea George, Carrie Reed, Shikha Garg To estimate population-based rates and to describe clinical characteristics of hospital-acquired (HA) influenza. Cross-sectional study. US Influenza Hospitalization Surveillance Network (FluSurv-NET) during 2011–2012 through 2018–2019 seasons. Patients were identified through provider-initiated or facility-based testing. HA influenza was defined as a positive influenza test date and respiratory symptom onset >3 days after admission. Patients with positive test date >3 days after admission but missing respiratory symptom onset date were classified as possible HA influenza. Among 94,158 influenza-associated hospitalizations, 353 (0.4%) had HA influenza. The overall adjusted rate of HA influenza was 0.4 per 100,000 persons. 
Among HA influenza cases, 50.7% were 65 years of age or older, and 52.0% of children and 95.7% of adults had underlying conditions; 44.9% overall had received influenza vaccine prior to hospitalization. Overall, 34.5% of HA cases received ICU care during hospitalization, 19.8% required mechanical ventilation, and 6.7% died. After including possible HA cases, prevalence among all influenza-associated hospitalizations increased to 1.3% and the adjusted rate increased to 1.5 per 100,000 persons. Over 8 seasons, rates of HA influenza were low but were likely underestimated because testing was not systematic. A high proportion of patients with HA influenza were unvaccinated and had severe outcomes. Annual influenza vaccination and implementation of robust hospital infection control measures may help to prevent HA influenza and its impacts on patient outcomes and the healthcare system. The Evolutionary Map of the Universe pilot survey Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov Published online by Cambridge University Press: 07 September 2021, e046 We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here. A broadband radio view of transient jet ejecta in the black hole candidate X-ray binary MAXI J1535–571 Jaiverdhan Chauhan, J. C. A. Miller-Jones, G. E. Anderson, A. Paduano, M. Sokolowski, C. Flynn, P. J. Hancock, N. Hurley-Walker, D. L. Kaplan, T. D. Russell, A. Bahramian, S. W. Duchesne, D. Altamirano, S. Croft, H. A. Krimm, G. R. Sivakoff, R. Soria, C. M. Trott, R. B. Wayth, V. Gupta, M. Johnston-Hollitt, S. J. Tingay We present a broadband radio study of the transient jets ejected from the black hole candidate X-ray binary MAXI J1535–571, which underwent a prolonged outburst beginning on 2017 September 2. 
We monitored MAXI J1535–571 with the Murchison Widefield Array (MWA) at frequencies from 119 to 186 MHz over six epochs from 2017 September 20 to 2017 October 14. The source was quasi-simultaneously observed over the frequency range 0.84–19 GHz by UTMOST (the Upgraded Molonglo Observatory Synthesis Telescope) the Australian Square Kilometre Array Pathfinder (ASKAP), the Australia Telescope Compact Array (ATCA), and the Australian Long Baseline Array (LBA). Using the LBA observations from 2017 September 23, we measured the source size to be $34\pm1$ mas. During the brightest radio flare on 2017 September 21, the source was detected down to 119 MHz by the MWA, and the radio spectrum indicates a turnover between 250 and 500 MHz, which is most likely due to synchrotron self-absorption (SSA). By fitting the radio spectrum with a SSA model and using the LBA size measurement, we determined various physical parameters of the jet knot (identified in ATCA data), including the jet opening angle ( $\phi_{\rm op} = 4.5\pm1.2^{\circ}$ ) and the magnetic field strength ( $B_{\rm s} = 104^{+80}_{-78}$ mG). Our fitted magnetic field strength agrees reasonably well with that inferred from the standard equipartition approach, suggesting the jet knot to be close to equipartition. Our study highlights the capabilities of the Australian suite of radio telescopes to jointly probe radio jets in black hole X-ray binaries via simultaneous observations over a broad frequency range, and with differing angular resolutions. This suite allows us to determine the physical properties of X-ray binary jets. Finally, our study emphasises the potential contributions that can be made by the low-frequency part of the Square Kilometre Array (SKA-Low) in the study of black hole X-ray binaries. Characterisation of age and polarity at onset in bipolar disorder Janos L. Kalman, Loes M. Olde Loohuis, Annabel Vreeker, Andrew McQuillin, Eli A. Stahl, Douglas Ruderfer, Maria Grigoroiu-Serbanescu, Georgia Panagiotaropoulou, Stephan Ripke, Tim B. Bigdeli, Frederike Stein, Tina Meller, Susanne Meinert, Helena Pelin, Fabian Streit, Sergi Papiol, Mark J. Adams, Rolf Adolfsson, Kristina Adorjan, Ingrid Agartz, Sofie R. Aminoff, Heike Anderson-Schmidt, Ole A. Andreassen, Raffaella Ardau, Jean-Michel Aubry, Ceylan Balaban, Nicholas Bass, Bernhard T. Baune, Frank Bellivier, Antoni Benabarre, Susanne Bengesser, Wade H Berrettini, Marco P. Boks, Evelyn J. Bromet, Katharina Brosch, Monika Budde, William Byerley, Pablo Cervantes, Catina Chillotti, Sven Cichon, Scott R. Clark, Ashley L. Comes, Aiden Corvin, William Coryell, Nick Craddock, David W. Craig, Paul E. Croarkin, Cristiana Cruceanu, Piotr M. Czerski, Nina Dalkner, Udo Dannlowski, Franziska Degenhardt, Maria Del Zompo, J. Raymond DePaulo, Srdjan Djurovic, Howard J. Edenberg, Mariam Al Eissa, Torbjørn Elvsåshagen, Bruno Etain, Ayman H. Fanous, Frederike Fellendorf, Alessia Fiorentino, Andreas J. Forstner, Mark A. Frye, Janice M. Fullerton, Katrin Gade, Julie Garnham, Elliot Gershon, Michael Gill, Fernando S. Goes, Katherine Gordon-Smith, Paul Grof, Jose Guzman-Parra, Tim Hahn, Roland Hasler, Maria Heilbronner, Urs Heilbronner, Stephane Jamain, Esther Jimenez, Ian Jones, Lisa Jones, Lina Jonsson, Rene S. Kahn, John R. Kelsoe, James L. Kennedy, Tilo Kircher, George Kirov, Sarah Kittel-Schneider, Farah Klöhn-Saghatolislam, James A. Knowles, Thorsten M. Kranz, Trine Vik Lagerberg, Mikael Landen, William B. Lawson, Marion Leboyer, Qingqin S. 
Li, Mario Maj, Dolores Malaspina, Mirko Manchia, Fermin Mayoral, Susan L. McElroy, Melvin G. McInnis, Andrew M. McIntosh, Helena Medeiros, Ingrid Melle, Vihra Milanova, Philip B. Mitchell, Palmiero Monteleone, Alessio Maria Monteleone, Markus M. Nöthen, Tomas Novak, John I. Nurnberger, Niamh O'Brien, Kevin S. O'Connell, Claire O'Donovan, Michael C. O'Donovan, Nils Opel, Abigail Ortiz, Michael J. Owen, Erik Pålsson, Carlos Pato, Michele T. Pato, Joanna Pawlak, Julia-Katharina Pfarr, Claudia Pisanu, James B. Potash, Mark H Rapaport, Daniela Reich-Erkelenz, Andreas Reif, Eva Reininghaus, Jonathan Repple, Hélène Richard-Lepouriel, Marcella Rietschel, Kai Ringwald, Gloria Roberts, Guy Rouleau, Sabrina Schaupp, William A Scheftner, Simon Schmitt, Peter R. Schofield, K. Oliver Schubert, Eva C. Schulte, Barbara Schweizer, Fanny Senner, Giovanni Severino, Sally Sharp, Claire Slaney, Olav B. Smeland, Janet L. Sobell, Alessio Squassina, Pavla Stopkova, John Strauss, Alfonso Tortorella, Gustavo Turecki, Joanna Twarowska-Hauser, Marin Veldic, Eduard Vieta, John B. Vincent, Wei Xu, Clement C. Zai, Peter P. Zandi, Psychiatric Genomics Consortium (PGC) Bipolar Disorder Working Group, International Consortium on Lithium Genetics (ConLiGen), Colombia-US Cross Disorder Collaboration in Psychiatric Genetics, Arianna Di Florio, Jordan W. Smoller, Joanna M. Biernacka, Francis J. McMahon, Martin Alda, Bertram Müller-Myhsok, Nikolaos Koutsouleris, Peter Falkai, Nelson B. Freimer, Till F.M. Andlauer, Thomas G. Schulze, Roel A. Ophoff Journal: The British Journal of Psychiatry / Volume 219 / Issue 6 / December 2021 Published online by Cambridge University Press: 25 August 2021, pp. 659-669 Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools. To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics. Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts. Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO. AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. 
Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses. Challenges in hospital-acquired coronavirus disease 2019 (COVID-19) surveillance and attribution of infection source Sarah S. Lewis, Ibukunoluwa C. Kalu, Jessica Seidelman, Deverick J. Anderson, Rebekah W. Moehring, Becky A. Smith, for the Centers for Disease Control and Prevention Epicenters Program Published online by Cambridge University Press: 02 August 2021, pp. 1-4 We performed surveillance for hospital-acquired COVID-19 (HA-COVID-19) and compared time-based, electronic definitions to real-time adjudication of the most likely source of acquisition. Without real-time adjudication, nearly 50% of HA-COVID-19 cases identified using electronic definitions were misclassified. Both electronic and traditional contact tracing methods likely underestimated the incidence of HA-COVID-19. Implications of new technologies for future food supply systems S. Asseng, C. A. Palm, J. L. Anderson, L. Fresco, P. A. Sanchez, F. Asche, T. M. Garlock, J. Fanzo, M. D. Smith, G. Knapp, A. Jarvis, A. Adesogan, I. Capua, G. Hoogenboom, D. D. Despommier, L. Conti, K. A. Garrett Journal: The Journal of Agricultural Science / Volume 159 / Issue 5-6 / July 2021 The combination of advances in knowledge, technology, changes in consumer preference and low cost of manufacturing is accelerating the next technology revolution in crop, livestock and fish production systems. This will have major implications for how, where and by whom food will be produced in the future. This next technology revolution could benefit the producer through substantial improvements in resource use and profitability, but also the environment through reduced externalities. The consumer will ultimately benefit through more nutritious, safe and affordable food diversity, which in turn will also contribute to the acceleration of the next technology. It will create new opportunities in achieving progress towards many of the Sustainable Development Goals, but it will require early recognition of trends and impact, public research and policy guidance to avoid negative trade-offs. Unfortunately, the quantitative predictability of future impacts will remain low and uncertain, while new shocks with unexpected consequences will continue to interrupt current and future outcomes. However, there is a continuing need for improving the predictability of shocks to future food systems especially for ex-ante assessment for policy and planning. Murchison Widefield Array rapid-response observations of the short GRB 180805A G. E. Anderson, P. J. Hancock, A. Rowlinson, M. Sokolowski, A. Williams, J. Tian, J. C. A. Miller-Jones, N. Hurley-Walker, K. W. Bannister, M. E. Bell, C. W. James, D. L. Kaplan, Tara Murphy, S. J. Tingay, B. W. Meyers, M. Johnston-Hollitt, R. B. Wayth Published online by Cambridge University Press: 10 June 2021, e026 Here we present stringent low-frequency (185 MHz) limits on coherent radio emission associated with a short-duration gamma-ray burst (SGRB). Our observations of the short gamma-ray burst (GRB) 180805A were taken with the upgraded Murchison Widefield Array (MWA) rapid-response system, which triggered within 20 s of receiving the transient alert from the Swift Burst Alert Telescope, corresponding to 83.7 s post-burst. The SGRB was observed for a total of 30 min, resulting in a $3\sigma$ persistent flux density upper limit of 40.2 mJy beam–1.
Transient searches were conducted at the Swift position of this GRB on 0.5 s, 5 s, 30 s and 2 min timescales, resulting in $3\sigma$ limits of 570–1 830, 270–630, 200–420, and 100–200 mJy beam–1, respectively. We also performed a dedispersion search for prompt signals at the position of the SGRB with a temporal and spectral resolution of 0.5 s and 1.28 MHz, respectively, resulting in a $6\sigma$ fluence upper-limit range from 570 Jy ms at DM $=3\,000$ pc cm–3 ( $z\sim 2.5$ ) to 1 750 Jy ms at DM $=200$ pc cm–3 ( $z\sim 0.1)$ , corresponding to the known redshift range of SGRBs. We compare the fluence prompt emission limit and the persistent upper limit to SGRB coherent emission models assuming the merger resulted in a stable magnetar remnant. Our observations were not sensitive enough to detect prompt emission associated with the alignment of magnetic fields of a binary neutron star just prior to the merger, from the interaction between the relativistic jet and the interstellar medium (ISM) or persistent pulsar-like emission from the spin-down of the magnetar. However, in the case of a more powerful SGRB (a gamma-ray fluence an order of magnitude higher than GRB 180805A and/or a brighter X-ray counterpart), our MWA observations may be sensitive enough to detect coherent radio emission from the jet-ISM interaction and/or the magnetar remnant. Finally, we demonstrate that of all current low- frequency radio telescopes, only the MWA has the sensitivity and response times capable of probing prompt emission models associated with the initial SGRB merger event. Genetically determined variations of selenoprotein P are associated with antioxidant, muscular, and lipid biomarkers in response to Brazil nut consumption by patients using statins Lígia Moriguchi Watanabe, Ana C. Bueno, Livia F. de Lima, Rafael Ferraz-Bannitz, Renata Dessordi, Mariana P. Guimarães, Maria C. Foss-Freitas, Fernando Barbosa, Jr., Anderson M. Navarro Journal: British Journal of Nutrition , First View Published online by Cambridge University Press: 05 May 2021, pp. 1-8 Several single nucleotide polymorphisms (SNPs) could indirectly, as well directly, influence metabolic parameters related to health effects in response to selenium (Se) supplementation. This study aimed to investigate whether the selenoprotein SNPs were associated with the response of Se status biomarkers to the Brazil nut consumption in patients using statins and if the variation in Se homoeostasis could affect antioxidant protection, lipid profile, muscle homoeostasis and selenoproteins mRNA. The study was performed in the Ribeirão Preto Medical School University Hospital. Thirty-two patients using statins received one unit of Brazil nut daily for 3 months. Body composition, blood Se concentrations, erythrocyte glutathione peroxidase (GPX) activity, total cholesterol, low-density lipoprotein (LDL), high-density lipoprotein (HDL), triacylglycerol (TAG), creatine kinase (CK) activity and gene expression of GPX1 and selenoprotein P (SELENOP) were evaluated before and after Brazil nut consumption. The volunteers were genotyped for SNP in GPX1 (rs1050450) and SELENOP (rs3877899 and rs7579). SNPs in selenoproteins were not associated with plasma and erythrocyte Se, but SNPs in SELENOP influenced the response of erythrocyte GPX activity and CK activity, TAG and LDL after Brazil nut consumption. Also, Brazil nut consumption increased GPX1 mRNA expression only in subjects with rs1050450 CC genotype. 
SELENOP mRNA expression was significantly lower in subjects with rs7579 GG genotype before and after the intervention. Thus, SNP in SELENOP could be associated with interindividual differences in Se homeostasis after Brazil nut consumption, emphasising the involvement of genetic variability in response to Se consumption towards health maintenance and disease prevention. Early Science from POSSUM: Shocks, turbulence, and a massive new reservoir of ionised gas in the Fornax cluster C. S. Anderson, G. H. Heald, J. A. Eilek, E. Lenc, B. M. Gaensler, Lawrence Rudnick, C. L. Van Eck, S. P. O'Sullivan, J. M. Stil, A. Chippendale, C. J. Riseley, E. Carretti, J. West, J. Farnes, L. Harvey-Smith, N. M. McClure-Griffiths, Douglas C. J. Bock, J. D. Bunton, B. Koribalski, C. D. Tremblay, M. A. Voronkov, K. Warhurst Published online by Cambridge University Press: 23 April 2021, e020 We present the first Faraday rotation measure (RM) grid study of an individual low-mass cluster—the Fornax cluster—which is presently undergoing a series of mergers. Exploiting commissioning data for the POlarisation Sky Survey of the Universe's Magnetism (POSSUM) covering a ${\sim}34$ square degree sky area using the Australian Square Kilometre Array Pathfinder (ASKAP), we achieve an RM grid density of ${\sim}25$ RMs per square degree from a 280-MHz band centred at 887 MHz, which is similar to expectations for forthcoming GHz-frequency ${\sim}3\pi$-steradian sky surveys. These data allow us to probe the extended magnetoionic structure of the cluster and its surroundings in unprecedented detail. We find that the scatter in the Faraday RM of confirmed background sources is increased by $16.8\pm2.4$ rad m−2 within 1 $^\circ$ (360 kpc) projected distance to the cluster centre, which is 2–4 times larger than the spatial extent of the presently detectable X-ray-emitting intracluster medium (ICM). The mass of the Faraday-active plasma is larger than that of the X-ray-emitting ICM and exists in a density regime that broadly matches expectations for moderately dense components of the Warm-Hot Intergalactic Medium. We argue that forthcoming RM grids from both targeted and survey observations may be a singular probe of cosmic plasma in this regime. The morphology of the global Faraday depth enhancement is not uniform and isotropic but rather exhibits the classic morphology of an astrophysical bow shock on the southwest side of the main Fornax cluster, and an extended, swept-back wake on the northeastern side. Our favoured explanation for these phenomena is an ongoing merger between the main cluster and a subcluster to the southwest. The shock's Mach angle and stand-off distance lead to a self-consistent transonic merger speed with Mach 1.06. The region hosting the Faraday depth enhancement also appears to show a decrement in both total and polarised radio emission compared to the broader field. We evaluate cosmic variance and free-free absorption by a pervasive cold dense gas surrounding NGC 1399 as possible causes but find both explanations unsatisfactory, warranting further observations. Generally, our study illustrates the scientific returns that can be expected from all-sky grids of discrete sources generated by forthcoming all-sky radio surveys. Balloon dilatation versus surgical valvotomy for congenital aortic stenosis: a propensity score matched study Aortic Coarctation Benjamin C. Auld, Julia S. Donald, Naychi Lwin, Kim Betts, Nelson O. Alphonso, Prem S. Venugopal, Robert N. Justo, Cameron J. Ward, Igor E. Konstantinov, Tom R. 
Karl, Benjamin W. Anderson Journal: Cardiology in the Young / Volume 31 / Issue 12 / December 2021 Published online by Cambridge University Press: 16 April 2021, pp. 1984-1990 Balloon valvuloplasty and surgical aortic valvotomy have been the treatment mainstays for congenital aortic stenosis in children. Choice of intervention often differs depending upon centre bias with limited relevant, comparative literature. This study aims to provide an unbiased, contemporary matched comparison of these balloon and surgical approaches. Retrospective analysis of patients with congenital aortic valve stenosis who underwent balloon valvuloplasty (Queensland Children's Hospital, Brisbane) or surgical valvotomy (Royal Children's Hospital, Melbourne) between 2005 and 2016. Patients were excluded if pre-intervention assessment indicated ineligibility to either group. Propensity score matching was performed based on age, weight, and valve morphology. Sixty-five balloon patients and seventy-seven surgical patients were included. Overall, the groups were well matched with 18 neonates/25 infants in the balloon group and 17 neonates/28 infants in the surgical group. Median age at balloon was 92 days (range 2 days – 18.8 years) compared to 167 days (range 0 days – 18.1 years) for surgery (rank-sum p = 0.08). Mean follow-up was 5.3 years. There was one late balloon death and two early surgical deaths due to left ventricular failure. There was no significant difference in freedom from reintervention at latest follow-up (69% in the balloon group and 70% in the surgical group, p = 1.0). Contemporary analysis of balloon aortic valvuloplasty and surgical aortic valvotomy shows no difference in overall reintervention rates in the medium term. Balloon valvuloplasty performs well across all age groups, achieving delay or avoidance of surgical intervention.
Research article | Open Access | Published: 30 May 2017
In vitro quality evaluation of leading brands of ciprofloxacin tablets available in Bangladesh
Md. Sahab Uddin (ORCID: orcid.org/0000-0002-0805-7840), Abdullah Al Mamun, Md. Saddam Hossain, Md. Asaduzzaman, Md. Shahid Sarwar, Mamunur Rashid & Oscar Herrera-Calderon
BMC Research Notes, volume 10, Article number: 185 (2017)
Ciprofloxacin is a broad-spectrum antibiotic that acts against a number of bacterial infections. The study was carried out to examine the in vitro quality control tests for ten leading brands of ciprofloxacin hydrochloride 500 mg tablet formulation, registered in Bangladesh by the Directorate General of Drug Administration. The quality control parameters of ten different brands of ciprofloxacin hydrochloride 500 mg tablets were determined by weight variation, friability, hardness, disintegration, dissolution and assay tests. All the tablets were evaluated for conformity with United States Pharmacopoeia-National Formulary (USP-NF) and British Pharmacopoeia (BP) standards. Among the ten brands, Brand C had the lowest mean weight variation (1.59%) and Brand E had the highest (3.32%). For the friability test, Brand F had the lowest mean friability (0.27%) and Brand G had the highest (0.54%). The lowest and highest mean hardness were found in Brand G (4.49 kg/cm²) and Brand F (7.13 kg/cm²), respectively. The disintegration times for the ten brands of ciprofloxacin tablets were in the following order: Brand G (8.19 min) < Brand C (9.25 min) < Brand E (9.61 min) < Brand D (10.11 min) < Brand B (11.07 min) < Brand A (12.15 min) < Brand H (13.68 min) < Brand I (14.59 min) < Brand J (16.32 min) < Brand F (17.49 min). In the dissolution test, the mean percentage of drug release was not less than 80% in 45 min for four brands (Brand E, 81.52%; Brand D, 86.44%; Brand G, 86.82% and Brand C, 94.12%), so they met the BP standard; as per the USP-NF standard, six brands (Brand B, 75.62%; Brand A, 76.18%; Brand E, 81.52%; Brand D, 86.44%; Brand G, 86.82% and Brand C, 94.12%) released not less than 75% of the drug and therefore also complied. The percentages of drug content of the ten brands of ciprofloxacin tablets were obtained in the following sequence: Brand H (96.84%) < Brand J (97.34%) < Brand D (98.15%) < Brand I (98.47%) < Brand E (99.37%) < Brand F (100.28%) < Brand B (100.38%) < Brand A (100.54%) < Brand G (101.39%) < Brand C (101.46%). All of the brands met the BP and USP-NF specifications for assay. First-order, Higuchi and Korsmeyer–Peppas kinetic models fit the release data for all ten brands. The present study revealed that all of the leading brands of this tablet met the quality control parameters as per pharmacopoeial specifications except the dissolution test for four brands (Brand J, Brand H, Brand I, and Brand F).
Ciprofloxacin is an antibiotic in a group of drugs called fluoroquinolones [1]. It was discovered by Bayer, Germany, in 1981. The Food and Drug Administration (FDA) approved this drug in 1987 for use in the United States as the first oral broad-spectrum antibiotic [2]. It is one of the most important medications needed in a basic health care system and is included on the World Health Organization's (WHO) list of essential medicines [3]. It is available as a generic medication and is not very expensive [4, 5]. Generic drugs must be chemically and biopharmaceutically equivalent to the innovator drug.
Quality parameters such as strength, purity, content uniformity, disintegration time (DT) and dissolution rate must be identical for pharmaceuticals that are chemically and biopharmaceutically equivalent [6]. Generic drugs not only decrease health care costs [2] but may also decrease the quality of the drugs. The quality of generic drugs is in doubt in poor and developing as well as industrialized countries. There are a number of cases related to substandard and counterfeit drugs. The composition and ingredients of substandard drugs do not meet the correct scientific specifications; for this reason they are ineffective and often dangerous to the patient. Counterfeit drugs may include products with the correct ingredients but fake packaging, with the wrong ingredients, without active ingredients or with insufficient active ingredients [7]. It is believed that the health hazards of counterfeit drugs are greater than those of substandard drugs [8]. Substandard and counterfeit drugs are a major cause of morbidity, mortality and loss of public confidence in drugs and health structures [9]. WHO has estimated that approximately 10% of the global pharmaceuticals market consists of counterfeit drugs, but this estimate increases to 25% in developing countries, and may exceed 50% in certain countries [10]. The FDA estimates that up to 25% of the drugs consumed in poor countries are substandard or counterfeit [11]. China and India are recognized as the leading countries in the production of counterfeit drugs and bulk active ingredients used for counterfeiting worldwide [12]. Several studies showed that counterfeit pharmaceuticals sourced in China and India were detected in 42 and 33 countries, respectively [13]. Almuzaini et al. showed that the prevalence of substandard or counterfeit medicines is 12.2–44.5% in Lao PDR, Tanzania, Cambodia and Uganda; 18–48% in Indonesia, Nigeria and Cameroon; and 11–44% in Myanmar, Cambodia, Lao PDR, Ghana, Kenya, Tanzania, Uganda, Madagascar, Mali, Mozambique and Zimbabwe [14]. In 2009, 25 children in Bangladesh died after taking paracetamol syrup contaminated with poisonous diethylene glycol [15, 16]. Substandard and counterfeit drugs are not limited to poor and developing countries but are also intensely noticeable in developed countries. In 2007–2008, 149 Americans died due to the use of an adulterated blood thinner, heparin, that had been legally imported. In 2012, contaminated steroids killed 11 people and sickened another 100 people in the US. In another case, vials of the cancer medicine Avastin were found to contain no active ingredient [17]. A WHO study found that 28% of antibiotics and 20–90% of antimalarial drugs failed quality specifications [18]. A pharmaceutical must satisfy certain standards to be claimed a quality drug. The main criteria for the quality of any drug in dosage form are its safety, potency, efficacy, stability, patient acceptability and regulatory compliance [19]. To ensure the safety and efficacy of pharmaceutical products, the quality of a pharmaceutical must be reliable and reproducible from batch to batch [20]. To ensure the requisite quality, drug manufacturers are required to test their products during and after manufacturing, and at various intervals during the shelf-life of the product [21]. WHO supports the practice of prescribing generic medicines to reduce the cost of the health care system, but this should be supported with adequate evidence for the substitution of one brand for another [22].
Comparison of bioequivalence between the generic products and the innovator product is one of the major challenges and prime requirements for a generic marketing authorization [23]. Several studies showed that switching from a branded to a generic medicine might result in changes in the pharmacokinetic/pharmacodynamic profile, leading to subtherapeutic concentrations, therapeutic failure and/or adverse reactions [24]. Bioequivalence studies for generic products are essential to detect any significant difference in the rate and extent to which the therapeutic ingredient becomes available at the site of drug action when administered under uniform conditions in an adequately designed study [25]. Dissolution testing serves as an indicator for identifying bioavailability problems [26]. Biopharmaceutically as well as chemically equivalent drug products must have the same quality, strength, purity, content uniformity, disintegration and dissolution rates [27]. In vitro quality control (QC) of pharmaceutical products is a fixed set of investigations carried out during production (in-process quality control tests) and after production (finished-product quality control tests) as per official pharmacopoeias and regulatory agencies. QC tests help avoid doubts regarding the safety, potency, efficacy and stability of pharmaceuticals [28]. The prevalence of substandard and/or counterfeit medicines is significantly higher in poor and developing countries. As ciprofloxacin is a widely used antibiotic in Bangladesh, the objective of this study was to assess the quality of different leading brands of ciprofloxacin hydrochloride 500 mg tablet formulation commercially available in the market of Bangladesh.
Drugs and chemicals
Ten commercially available leading brands of ciprofloxacin hydrochloride tablets, each with a label claim of 500 mg, were purchased from various retail pharmacies of Dhaka city in Bangladesh. Detailed information about the brands is shown in Table 1. The samples were blindly named Brand A, Brand B, Brand C, Brand D, Brand E, Brand F, Brand G, Brand H, Brand I and Brand J. Standard ciprofloxacin hydrochloride powder equivalent to ciprofloxacin 200 mg was obtained from Modern Pharmaceuticals Ltd, Dhaka, Bangladesh. Unless otherwise specified, all other chemicals were of analytical grade.
Table 1: Label information of ten leading brands of ciprofloxacin tablets
Instruments used in this study were mortar, pestle, electronic balance (Model: D455007359, Shimadzu Corp.), Roche friabilator (Model: 902, Intech REV), Monsanto hardness tester (Model: Mht-20, Campbell Elec.), USP disintegration apparatus (Model: LTD-DV, Intech), USP dissolution apparatus (Model: VDA-6DR, Veego Instruments Cor.) and ultraviolet (UV) spectrophotometer (Model: UV-1800, Shimadzu Corp.).
In vitro quality control tests
Weight variation test
According to the USP-NF, the weight variation test was run by weighing 20 tablets for each of the ten brands individually using an electronic balance, calculating the average weight and comparing the individual tablet weights to the average. The difference in the two weights was used to calculate weight variation by using the following formula [19, 29, 30]:
$${\text{Weight variation}} = ({\text{I}}_{\text{w}} - {\text{A}}_{\text{w}})/{\text{A}}_{\text{w}} \times 100\%$$
where Iw = individual weight of the tablet and Aw = average weight of the tablet.
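As a concrete illustration of this calculation, the following minimal Python sketch applies the weight variation formula and the USP-NF acceptance rule restated in the next paragraph (not more than 2 of the 20 tablets deviating from the mean by more than 5%). The tablet weights and function names are hypothetical, not data from the study.

```python
# Hypothetical illustration of the USP-NF weight variation calculation.
# The tablet weights below are made-up numbers, not measurements from the study.

def weight_variation(individual_weights):
    """Return the percent deviation of each tablet from the batch mean."""
    avg = sum(individual_weights) / len(individual_weights)
    return [(w - avg) / avg * 100 for w in individual_weights]

def passes_usp_weight_variation(individual_weights, limit_pct=5.0, max_outliers=2):
    """USP-NF rule: not more than 2 tablets may deviate from the mean by more than 5%."""
    deviations = weight_variation(individual_weights)
    outliers = sum(1 for d in deviations if abs(d) > limit_pct)
    return outliers <= max_outliers

if __name__ == "__main__":
    weights_mg = [748, 752, 760, 739, 755, 750, 747, 753, 749, 751,
                  744, 758, 746, 752, 750, 749, 757, 741, 754, 748]  # 20 hypothetical tablets
    print(weight_variation(weights_mg)[:3])
    print("Complies with USP-NF:", passes_usp_weight_variation(weights_mg))
```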
The tablet complies with the test if not more than 2 of the individual weights deviate from the average weight by more than the 5% [19, 29, 30]. Friability test For this test Roche friabilator was used. Twenty tablets from each of the ten brands were weighed and placed in the friabilator and then operated at 25 rpm for 4 min. The tablets were then dedusted and weighed. The difference in the two weights was used to calculate friability by using the following formula [19, 31]: $${\text{Friability}} = ({\text{I}}_{\text{w}} - {\text{F}}_{\text{w}} )/{\text{I}}_{\text{w}} \times 100\%$$ where, Iw = Total Initial weight of the tablets and Fw = Total final weight of the tablets. The tablet complies with the test according to USP-NF if tablets loss less than 1% of their weight [19, 31]. Hardness test For this test Monsanto hardness tester was used. Ten tablets were randomly selected from each of the ten brands and tested. This test measures the pressure required to break diametrically placed tablets by applying pressure with coiled spring [19, 32]. In-house acceptable limit for this test is 6 ± 2 kg/cm2. Disintegration test For this test USP disintegration apparatus was used. To test for DT, one tablet was placed in each tube for each brand and the basket rack was positioned in a 1000 ml vessel containing 900 ml of water maintained at 37 ± 2 °C, so that the tablets remained 2.5 cm below the surface of the liquid on their upward movement and descent not closer than 2.5 cm from the bottom of the beaker. A standard motor driven device was used to move the basket assembly containing the tablets up and down through a distance of 5–6 cm at a frequency of 28–32 cycles per minute. Perforated plastic discs were used to prevent the floating of tablets. The apparatus was operated for 30 min [19, 29]. To comply with the USP-NF standards, the tablets must disintegrate and all particles must pass through the 10-mesh screen within 30 min. If any residue remains, it must have a soft mass with no palpably firm core [25, 29]. Dissolution test For this test USP dissolution apparatus was used. To test for dissolution, one tablet was placed in each vessel (6 vessels) for each brand, containing 900 ml of 0.1 M hydrochloric acid (HCl) as a dissolution medium maintained at 37 ± 0.5 °C. The rotational speed of the apparatus was held constant at 50 rpm. A sample of 5 ml was withdrawn at a fixed time intervals (15, 30, 45 and 60 min) and this was immediately replaced with the same volume of fresh test media [25, 29, 33]. The sample was filtered and 1 ml of filtrate was taken and diluted to 50 ml with distilled water. So the solution was 50 times diluted. The absorbance of the diluted filtrate was determined spectrophotometrically at the wavelength of 276 nm, using 0.1 M HCl as blank. The percentage of drug release at each interval was calculated by using standard ciprofloxacin. As per USP-NF tablets meet with this test if not less than 75% dissolves in 45 min. According to BP tablet comply with this test if not less than 80% dissolves in 45 min [19, 29, 33]. Assay test Analysis of drug potency in tablets helps to determine the strength or content of drug in a dosage form. 100 mg of standard ciprofloxacin hydrochloride powder was weighed and dissolved in 10 ml of distilled water and diluted up to 100 ml to get 1000 µg/ml concentration of standard stock solution. From this stock solution 10 ml was taken to another 100 ml volumetric flask and diluted to get 100 µg/ml of drug concentration. 
Then, using this stock solution various other concentrations were prepared like 5, 10, 15, 20, 25 and 30 µg/ml. Absorbance values of these concentrations were measured at 276 nm by using UV spectrophotometer and standard graph was plotted by taking absorbance values on Y-axis and concentration values on X-axis. For this test tablets from each brand were crushed into fine powder and sufficient amount of powder was weighed so that the amount contains 100 mg of active ciprofloxacin and dissolved in 100 ml 0.1 M HCl and further dilution was made to obtain 100 µg/ml for each brand. Then 4 ml of each brand made up to 100 ml with 0.1 M HCl and the absorbance of each brand was taken at 276 nm against the blank [21]. The USP-NF specification is that the content of ciprofloxacin hydrochloride should not be less than 90% and not more than 110%, while BP specifies that the content should not be less than 95% and not more than 105% [29, 33]. Drug release kinetics To evaluate the kinetics of drug release from the tablets the results of in vitro drug release study of formulations were fitted with various kinetic equations like zero-order, first-order, Higuchi and Korsmeyer–Peppas model [34]. The equations of different release kinetics are given below: $${\text{Zero-order kinetics:}}\;{\text{Q}}_{\text{t}} = {\text{Q}}_{0} + {\text{K}}_{0} {\text{t}}$$ $${\text{First-order kinetics:}}\;\log {\text{Q}}_{\text{t}} = \log {\text{Q}}_{0} + {\text{K}}_{1} {\text{t}}/2.303$$ $${\text{Higuchi kinetics: Q}}_{\text{t}} = {\text{K}}_{\text{h}} {\text{t}}^{1/2}$$ $${\text{Korsmeyer}}{-}{\text{Peppas kinetics: Q}}_{\text{t}} /{\text{Q}}_{ 0} = {\text{Kt}}^{\text{n}}$$ where, K0, K1 and Kh indicates zero-order, first-order and Higuchi rate constants respectively, Qt/Q0 means fraction of drug released at time t, K means rate constant and n means release exponent. The kinetics that gives high regression coefficient (R2) value is considered as the best fit model [34,35,36]. All the results were expressed as mean ± SD. The results of dissolution test were analyzed by one-way analysis of variance (ANOVA) followed by Post Hoc t test. Microsoft Excel 2010 (Roselle, IL, USA) was used for the statistical and graphical evaluations. A probability of P < 0.05 was considered as significant. The tablet complies with the weight variation test if not more than 2 of the individual weights deviate from the average weight by more than the 5 percent. The mean results for weight variation for ten brands obtained were in the following order: Brand C (1.59%) < Brand I (1.95%) < Brand G (2.12%) < Brand H (2.34%) < Brand D (2.36%) < Brand A (2.39%) < Brand J (2.44%) < Brand F (2.79%) < Brand B (2.99%) < Brand E (3.32%), given in Fig. 1. Among all tablets mean highest weight variation was found in Brand E, 3.32% and the lowest was found in Brand C, 1.59%. This means that all the brands complied with the compendial specifications. Results of weight variation test of ten leading brands of ciprofloxacin tablets. Results were expressed as mean ± SD (n = 20/brand) Figure 2 showed the mean results of friability for ten brands of ciprofloxacin tablet in the subsequent order: Brand F (0.27%) < Brand I (0.28%) < Brand J (0.31%) < Brand A (0.39%) < Brand D (0.48%) < Brand H (0.51%) < Brand B (0.52%) < Brand E (0.53%) < Brand C (0.54%) < Brand G (0.54%). Thus, the brand most likely to lose particles during handling was Brand G, 0.54%, while the least likely to lose particles was Brand F, 0.27%. 
Friability for all the brands was below 1% and they complied with the compendial specifications. Results of friability test of ten leading brands of ciprofloxacin tablets. Results were expressed as mean ± SD (n = 20/brand) The mean hardness results (Fig. 3) for ten brands obtained were in the specified order: Brand G (4.49 kg/cm2) < Brand C (5.12 kg/cm2) < Brand E (5.94 kg/cm2) < Brand B (6.35 kg/cm2) < Brand D (6.45 kg/cm2) < Brand H (6.55 kg/cm2) < Brand A (6.64 kg/cm2) < Brand J (6.73 kg/cm2) < Brand I (7.06 kg/cm2) < Brand F (7.12 kg/cm2). From Fig. 3, it can be seen that Brand F, 7.12 kg/cm2 had the highest hardness value while Brand G, 4.49 kg/cm2 had the lowest value. Results of hardness test of ten leading brands of ciprofloxacin tablets. Results were expressed as mean ± SD (n = 10/brand) Disintegration could be directly related to dissolution and subsequent bioavailability of a drug. The DT for ten brands of ciprofloxacin tablet obtained were in the succeeding order: Brand G (8.19 min) < Brand C (9.25 min) < Brand E (9.61 min) < Brand D (10.11 min) < Brand B (11.07 min) < Brand A (12.15 min) < Brand H (13.68 min) < Brand I (14.59 min) < Brand J (16.32 min) < Brand F (17.49 min). Highest DT was found in Brand F, 17.49 min and lowest was found in Brand G, 8.19 min. All the brands complied with the compendial specifications for this test given in Fig. 4. Results of disintegration test of ten leading brands of ciprofloxacin tablets. Results were expressed as mean ± SD (n = 6/brand) The calibration curve of standard ciprofloxacin is given in Fig. 5 (y = 0.1212x + 0.2931, R2 = 0.997). The percentages of drug release for ten brands of ciprofloxacin tablet in 45 min were in the resulting order: Brand F (61.87%) < Brand I (65.77%) < Brand H (69.36%) < Brand J (72.86%) < Brand B (75.62%) < Brand A (76.18%) < Brand E (81.52%) < Brand D (86.44%) < Brand G (86.82%) < Brand C (94.12%), shown in Fig. 6. According to BP the percentages of drug release at 45 min were less than 80% for Brand F, Brand I, Brand H, Brand J, Brand B and Brand A. But among these six brands Brand B and Brand A met the USP-NF standard. Standard calibration curve for ciprofloxacin hydrochloride Results of dissolution profile of ten leading brands of ciprofloxacin tablets. Results were expressed as mean ± SD (n = 6/brand) The percentage of the drug per tablet was then measured using the calibration curve represented in Fig. 5 (y = 0.1212x + 0.2931, R2 = 0.997). The percentages of the drug content of the ten brands of ciprofloxacin tablet were obtained in the stated sequence: Brand H (96.84%) < Brand J (97.34%) < Brand D (98.15%) < Brand I (98.47%) < Brand E (99.37%) < Brand F (100.28%) < Brand B (100.38%) < Brand A (100.54%) < Brand G (101.39%) < Brand C (101.46%). All of the brands met the BP and USP-NF specifications (Fig. 7) for assay. Results of drug content of ten leading brands of ciprofloxacin tablets. Results were expressed as mean ± SD (n = 20/brand) The statistical evaluation (ANOVA) of dissolution test given in Table 2 showed that there was significant variation (P < 0.05) found among ten brands of ciprofloxacin tablets. The kinetics of drug release of the proposed brands (Brand A, Brand B, Brand C, Brand D, Brand E, Brand F, Brand G, Brand H, Brand I and Brand J) were treated in different kinetics model such as zero-order, first-order, Higuchi and Korsmeyer–Peppas mentioned in Table 3. 
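As an illustration of how this kinetic treatment can be carried out, the sketch below fits each of the models listed above by simple linear regression and compares the regression coefficients; the dissolution data and the exact regression choices are assumptions for demonstration, not the authors' analysis.

```python
# Minimal sketch: compare release-kinetics models by linear regression and R^2
import numpy as np

t = np.array([15.0, 30.0, 45.0, 60.0])        # sampling times, min
q = np.array([42.0, 63.0, 81.0, 92.0])        # % drug released (hypothetical)

def fit(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return slope, intercept, r2

models = {
    "zero-order (Q vs t)":          fit(t, q),
    "first-order (log Q vs t)":     fit(t, np.log10(q)),
    "Higuchi (Q vs sqrt t)":        fit(np.sqrt(t), q),
    "Korsmeyer-Peppas (log-log)":   fit(np.log10(t), np.log10(q / 100.0)),
}
for name, (k, b, r2) in models.items():
    print(f"{name:30s} slope = {k: .4f}  intercept = {b: .4f}  R^2 = {r2:.4f}")
# the model with the highest R^2 is taken as the best-fit kinetics; the
# Korsmeyer-Peppas slope plays the role of the release exponent n
```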
Table 2 Results of ANOVA for dissolution test of ten leading brands of ciprofloxacin tablets Table 3 Kinetics of drug release from ten leading brands of ciprofloxacin tablets Quality is not an accident, it is the result of intelligent effort [37]. The quality of pharmaceuticals is under great risks in developing countries especially in Bangladesh. There are several factors related to bad quality among these uses of substandard raw material and lacks of facility are most prominent. Therefore, it is necessary to check the quality. Pharmacopeial testing confirms these properties according to fixed standards. Different brands of ciprofloxacin hydrochloride tablets were obtained from different retail pharmacy outlets within Dhaka City and were subjected to weight variation, friability, hardness, disintegration, dissolution and assay tests. The weight of the tablet is the amount of granules which contains the labeled amount of the therapeutic ingredient. A large weight variation precludes good content uniformity. Due to a variety of reasons tablets may be excessively overweight or underweight. Patients receiving the overdose or underdose tablet, experiences unpredictable therapeutic response [38]. The tablet complies with this test if not more than 2 of the individual weights deviate from the average weight by more than the 5 percent. The mean results of weight variation for ten brands obtained were in the following order: Brand C (1.59%) < Brand I (1.95%) < Brand G (2.12%) < Brand H (2.34%) < Brand D (2.36%) < Brand A (2.39%) < Brand J (2.44%) < Brand F (2.79%) < Brand B (2.99%) < Brand E (3.32%) revealed in Fig. 1. Among ten brands of ciprofloxacin tablet, all of the brands met the specification, highest weight variation was seen in Brand E and the lowest was in Brand C. In the study of quality control and in vitro bioequivalence studies on four brands of ciprofloxacin tablets commonly sold in Uyo Metropolis, Nigeria, Jackson et al. also reported similar results [6]. Friability of tablet is the capacity to withstand shock and abrasion in packaging, handling and shipping. Friable tablets no longer have sharp edges, consequent pharmaceutical elegance and patient acceptance. Tablet friability results in weight loss of tablets which may affect the therapeutic response [39]. The mean results of friability for ten brands obtained were in the subsequent order: Brand F (0.27%) < Brand I (0.28%) < Brand J (0.31%) < Brand A (0.39%) < Brand D (0.48%) < Brand H (0.51%) < Brand B (0.52%) < Brand E (0.53%) < Brand C (0.54%) < Brand G (0.54%) exposed in Fig. 2. The results for friability were below 1% for ten brands of ciprofloxacin tablets, which met the specification. Salih and Hamam in the study of comparative in vitro evaluation of generic ciprofloxacin hydrochloride tablets showed that all the four different manufacturing brands of ciprofloxacin tablets complied with USP-NF requirements for this test [40]. Tablets require a certain amount of hardness to withstand mechanical shocks of handling in manufacturing, packaging and shipping. In addition, tablets should be able to withstand reasonable abuse when in the hands of consumers. Adequate tablet hardness and resistance to powdering are necessary requisites for consumer acceptance. More recently, this relationship of hardness to tablet disintegration and perhaps more significantly, to the drug dissolution release rate, has become apparent [41]. 
Tablet hardness can be attributed to the difference in properties of excipients employed in the manufacture of the different brands. Hardness values did not correlate with friability values [42]. The mean results of hardness for ten brands obtained were in the resulting order: Brand G (4.49 kg/cm2) < Brand C (5.12 kg/cm2) < Brand E (5.94 kg/cm2) < Brand B (6.35 kg/cm2) < Brand D (6.45 kg/cm2) < Brand H (6.55 kg/cm2) < Brand A (6.64 kg/cm2) < Brand J (6.73 kg/cm2) < Brand I (7.06 kg/cm2) < Brand F (7.12 kg/cm2) exhibited in Fig. 3. But the Figs. 2 and 3 showed that highest friable brand, Brand G has lowest hardness and lowest friable brand, Brand F has highest hardness. All of the tablets met the in-house specification for this test. In the bioequivalence studies on some selected brands of ciprofloxacin hydrochloride tablets in the Nigerian market with ciproflox® as innovator brand Ayodeji et al. showed that all of the brands comply with the specification for this test [43]. Before absorption of drug takes place in the body, it must be in solution form. For most tablets the first important step toward solution is the breakdown of the tablet into smaller particles or granules, a process known as disintegration [44]. Disintegration must be directly related to dissolution and subsequent bioavailability of a drug [45]. The DTs for ten brands were under 30 min. The mean results of DT for ten brands obtained were in the aforementioned order: Brand G (8.19 min) < Brand C (9.25 min) < Brand E (9.61 min) < Brand D (10.11 min) < Brand B (11.07 min) < Brand A (12.15 min) < Brand H (13.68 min) < Brand I (14.59 min) < Brand J (16.32 min) < Brand F (17.49 min) shown in Fig. 4. As per results shown, Brand G has lowest DT and Brand F has highest DT, but all of the brands meet the compendial requirements. Similar findings were reported by Kahsay and Egziabher [46]. When a drug is administered orally in the form of the tablet, the absorption of the tablet depends on how fast it goes into solution, i.e., absorption of a drug is totally dependents on the dissolution of the tablet. Dissolution is a rate limiting step prior to absorption. The rate of dissolution is directly related to the efficacy of the tablet products, as well as to bioavailability difference between formulations [47]. To meet the BP standard the percentages of drug release at 45 min must be not less than 80 and 75% according to USP-NF standard. The percentages of drug release for ten brands of ciprofloxacin tablet in 45 min were in the succeeding order: Brand F (61.87%) < Brand I (65.77%) < Brand H (69.36%) < Brand J (72.86%) < Brand B (75.62%) < Brand A (76.18%) < Brand E (81.52%) < Brand D (86.44%) < Brand G (86.82%) < Brand C (94.12%), presented in Fig. 6. Among ten brands the percentages of drug release were more than 80% for four brands (Brand E, Brand D, Brand G and Brand C) and less than 80% for six brands (Brand F, Brand I, Brand H, Brand J, Brand B and Brand A) as per BP standard. But according to USP-NF standard six brands (Brand B, Brand A, Brand E, Brand D, Brand G and Brand C) complied with the specification and remaining four brands (Brand F, Brand I, Brand H and Brand J) did not comply with the specification given in Fig. 6. In the study on in vitro quality assessment and bioequivalence studies on four brands of ciprofloxacin tablets, marketed in Ambo, Ethiopia, Fereja et al. stated that out of four brands of ciprofloxacin tablets one brand failed to meet the dissolution profile [48]. 
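For reference, the conversion from a measured absorbance to the percent of drug released, as used in such dissolution calculations, can be scripted in a few lines. The sketch below is only an illustration: it uses the calibration curve and the dilution scheme reported above, neglects the small correction for the replaced 5 ml aliquots, and the absorbance values are hypothetical.

```python
# Minimal sketch: percent release from absorbance measured at 276 nm
def percent_release(absorbance, dilution=50.0, medium_ml=900.0, label_mg=500.0):
    conc_diluted = (absorbance - 0.2931) / 0.1212   # ug/ml, inverted calibration y = 0.1212x + 0.2931
    conc_vessel = conc_diluted * dilution           # ug/ml in the dissolution vessel
    dissolved_mg = conc_vessel * medium_ml / 1000.0 # total drug in solution, mg
    return 100.0 * dissolved_mg / label_mg

for a in (0.90, 1.05, 1.20):                        # hypothetical absorbances
    print(f"A = {a:.2f} -> {percent_release(a):5.1f}% released")
```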
Analysis of the assay of the drug is very important to determine the presence, absence, or quantity of one or more components in the dosage form [19]. In this study, ciprofloxacin hydrochloride tablets were assayed by a UV spectroscopic technique due to the lack of other instrumentation. A number of reports suggest UV spectroscopy for the analysis of ciprofloxacin hydrochloride tablets [49]. In fact, there is no major problem in assaying ciprofloxacin tablets by the UV spectroscopic technique instead of high-performance liquid chromatography. The percentages of the drug content of the ten brands of ciprofloxacin tablet were obtained in the following sequence: Brand H (96.84%) < Brand J (97.34%) < Brand D (98.15%) < Brand I (98.47%) < Brand E (99.37%) < Brand F (100.28%) < Brand B (100.38%) < Brand A (100.54%) < Brand G (101.39%) < Brand C (101.46%), as displayed in Fig. 7. All of the brands complied with the BP and USP-NF specifications for the assay test. The highest percentage of drug content was obtained for Brand C (101.46%), whereas the lowest was obtained for Brand H (96.84%), as given in Fig. 7. Usman et al., in their evaluation of dissolution testing for ciprofloxacin (500 mg) tablets (post-market surveillance of different brands available in Ras Al Khaimah, UAE), showed that all ciprofloxacin tablets complied with the content assay test [34]. To compare the quality of all ten brands used in the study, ANOVA was performed. Results presented in Table 2 indicate that there are significant differences in the release patterns of the different brands at P < 0.05, since the F value (2.660375) is higher than the F crit value (2.124029). Table 3 shows the different kinetic models that were used, with the various parameters plotted for the determination of R2. It shows that all of the mentioned kinetic models fit all brands. However, first-order kinetics described the drug dissolution with R2 approaching 1 for Brand F and Brand H, the Higuchi model for Brand F and Brand G, and the Korsmeyer–Peppas model for Brand A, Brand B and Brand D. Zero-order kinetics was not the best fit for any brand. Among the ten brands, only Brand F and Brand H were best fitted by first-order kinetics. For Brand F and Brand G, the Higuchi model was the best fit. The Korsmeyer–Peppas model was the best fit for Brand A, Brand B and Brand D. The present study clearly demonstrated that all of the leading brands of ciprofloxacin hydrochloride tablets met the criteria laid down in the official monographs for in vitro quality control tests, except for the dissolution test in the case of four brands (Brand F, Brand I, Brand H and Brand J). However, each brand should meet the dissolution criteria to be therapeutically effective. Healthcare professionals should therefore focus on quality rather than on gift items to ensure better health for people. This will ultimately force the pharmaceutical industry to invest more in quality, ensuring better pharmaceuticals for the betterment of the health sector of the country. Hamam HAB. Comparative in vitro evaluation of generic ciprofloxacin hydrochloride tablets. World J Pharm Pharm Sci. 2014;3(12):388–90. Hailu GS, Gutema GB, Asefaw AA, Hussen DA, Hadera MG. Comparative assessment of the physicochemical and in vitro bioavailability equivalence of cotrimoxazole tablets marketed in Tigray, Ethiopia. Int J Pharm Sci Res. 2011;2(12):3210–8. Anonymous. 19th WHO model list of essential medicines. 
http://www.who.int/medicines/publications/essentialmedicines/EML2015_8-May-15.pdf. Accessed 12 Oct 2015. Anonymous. Ciprofloxacin hydrochloride. http://www.drugs.com/monograph/ciprofloxacin-hydrochloride.html. Accessed 12 Oct 2015. Hamilton Richard J. Tarascon pharmacopoeia. 15th ed. Burlington: Jones & Bartlett Publishers; 2014. Akpabio E, Jackson C, Ugwu C, Etim M, Udofia M. Quality control and in vitro bioequivalence studies on four brands of ciprofloxacin tablets commonly sold in Uyo Metropolis, Nigeria. J Chem Pharm Res. 2011;3(3):734–41. Anonymous. Counterfeit medicine. http://www.fda.gov/Drugs/ResourcesForYou/Consumers/BuyingUsingMedicineSafely/CounterfeitMedicine/. Accessed 12 Oct 2015. Caudron JM, Ford N, Henkens M, Mace C, Kiddell Monroe R, Pinel J. Substandard medicines in resource-poor settings: a problem that can no longer be ignored. Trop Med Int Health. 2008;13(8):1062–72. Cockburn R, Newton PN, Agyarko EK, Akunyili D, White NJ. The global threat of counterfeit drugs: why industry and governments must communicate the dangers. Plos Med. 2005;2(4):100. Wambui KJ. The effects of counterfeits on pharmaceutical distribution and retailing in Mombasa county, Kenya. http://erepository.uonbi.ac.ke/xmlui/bitstream/handle/11295/60692/Kabiru_The%20effects%20of%20counterfeits%20on%20pharmaceutical%20distribution%20and%20retailing%20in%20Mombasa%20county,%20Kenya.pdf?sequence=3&isAllowed=y. Accessed 12 Oct 2015. Anonymous. Substandard and counterfeit medicines. http://www.who.int/mediacentre/factsheets/2003/fs275/en/. Accessed 12 Oct 2015. Khan A, Ghilzai N. Counterfeit and substandard quality of drugs: the need for an effective and stringent regulatory control in India and other developing countries. Indian J Pharmacol. 2007;39(4):206–7. Barnes K. New counterfeit report highlights worrying trends. http://www.outsourcing-pharma.com/Contract-Manufacturing/New-counterfeit-report-highlights-worrying-trends. Accessed 21 Nov 2015. Almuzaini T, Choonara I, Sammons H. Substandard and counterfeit medicines: a systematic review of the literature. BMJ Open. 2013;3(e002923):2. Hanif M, Mobarak MR, Ronan A, Rahman D, Donovan JJ, Bennish ML. Fatal renal failure caused by diethylene glycol in paracetamol elixir: the Bangladesh epidemic. Br Med J. 1995;311:88. Anonymous. Rid's syrup unauthorised, toxic element found. http://archive.thedailystar.net/newDesign/news-details.php?nid=99261. Accessed 12 Oct 2015. Anonymous. Counterfeit medications. https://en.wikipedia.org/wiki/Counterfeit_medications. Accessed 21 Nov 2015. Ahmad K. WHO fights fake pharmaceuticals. Lancet Infect Dis. 2006;6:195. Uddin MS, Mamun AA, Tasnu T, Asaduzzaman M. In-process and finished products quality control tests for pharmaceutical tablets according to pharmacopoeias. J Chem Pharm Res. 2015;7(9):180–5. Uddin MS, Mamun AA, Akter N, Sarwar MS, Rashid M, Amran MS. Pharmacopoeial standards and specifications for pharmaceutical oral liquid preparations. Arch Curr Res Int. 2016;3(2):1–3. Uddin MS, Mamun AA, Rashid M, Assaduzzaman M. In-process and finished products quality control tests for pharmaceutical capsules according to pharmacopoeias. Br J Pharm Res. 2016;9(2):1–2. Fahmy S, Abu-Gharbieh E. In vitro dissolution and in vivo bioavailability of six brands of ciprofloxacin tablets administered in rabbits and their pharmacokinetic modeling. Biomed Res Int. 2014;2014:8. Wedel C. 
Global development strategy for generic medicinal products with regard to bioequivalence studies–special focus on the biowaiver approach in Canada, Australia and Brazil. 2012. http://dgra.de/media/pdf/studium/masterthesis/master_wedel_c.pdf. Accessed 5 Feb 2016. Crawford P, Feely M, Guberman A, Kramer G. Are there potential problems with generic substitution of antiepileptic drugs? A review of issues. Seizure. 2006;15:165–76. Shargel L, Wu-Pong S, Yu ABC. Applied biopharmaceutics & pharmacokinetics. 6th ed. New York: McGraw-Hill; 2012. Shah V. Dissolution: a quality control test vs a bioequivalent test. Dissolution Technol. 2001;8(4):1–2. Hassali MA, Thambyappa J, Saleem F, Haq N, Aljadhey V. Generic substitution in Malaysia: recommendations from a systematic review. J App Pharm Sci. 2012;2(8):159–60. Sufian MA, Uddin MS, Islam MT, Zahan T, Hossain K, Uddin GMS, Mamun AA. Quality control parameters of parenteral pharmaceuticals based on pharmacopoeias. Indo Am J P Sci. 2016;3(12):1624-1638. United States Pharmacopeial Convention. United States pharmacopoeia 33-national formulary 28. Great Britain: Stationery Office; 2010. Uduma EO, Ayodeji AA, Rosemary CA, Okorie O, Christian CO. Bioequivalence studies on some selected brands of ciprofloxacin hydrochloride tablets in the Nigerian market with ciproflox® as innovator brand. J App Pharm Sci. 2011;1(6):81–3. Swarbrick J. Encyclopedia of pharmaceutical technology. 3rd ed. New York: Informa Healthcare; 2007. Tangri P, Mamgain P, Shaffi, Verma AML, Lakshmayya. In process quality control: a review. Int J Ind Pharm Bio Sci. 2012;1(1):49–51. British Pharmacopoeia Commission. British pharmacopoeia. 8th ed. Great Britain: Stationery Office; 2014. Usman S, Alam A, Suleiman R, Awad K, Abudeek I. Evaluation of dissolution testing for ciprofloxacin (500 mg) tablets: post market surveillance of different brands available in Ras Al Khaimah (UAE). Int J Biopharm. 2014;5(1):65–72. Costa P, Lobo JMS. Modeling and comparison of dissolution profiles. Euro J Pharm Sci. 2001;13:123–33. Shaikh HK, Kshirsagar RV, Patil SG. Mathematical models for drug release characterization: a review. World J Pharm Pharm Sci. 2015;4(4):324–38. Uddin MS, Mamun AA, Rashid M, Asaduzzaman M. In-process and finished products quality control tests for pharmaceutical capsules according to pharmacopoeias. Br J Pharm Res. 2016;9(2):2. Gennaro AR. Remington: the science and practice of pharmacy. 19th ed. New York: Lippincott Williams & Wilkins; 2000. Uddin MS. Quality control of pharmaceuticals: compendial standards and specifications. Germany: Scholars' Press; 2017. Salih H, Hamam B. Comparative in vitro evaluation of generic ciprofloxacin hydrochloride tablets. World J Pharm Pharm Sci. 2014;38(12):388–96. Lachman L, Lieberman H, Kanig JL. The theory and practice of industrial pharmacy. 3rd ed. Philadelphia: Lea & Febiger; 1986. Merchant HA, Shoiab HM, Tazeen J, Yousuf RI. A once daily tablet formulation and in vitro release evaluation of cepfodoxime using hydroxypropyl methylcellulose: a technical note. AAPS Pharm Sci Tech. 2006;7(3):78. Uduma EO, Ayodeji AA, Rosemary CA, Ogbonna O, Christian CO. Bioequivalence studies on some selected brands of ciprofloxacin hydrochloride tablets in the Nigerian market with ciproflox® as innovator brand. J App Pharm Sci. 2011;01(06):80–4. Aulton ME, Taylor K. Aulton's pharmaceutics: the design and manufacture of medicines. 4th ed. New York: Churchill livingstone Elsevier; 2013. Niazi SK. Handbook of bioequivalence testing. 1st ed. 
New York: Informa Healthcare; 2007. Kahsay G, Egziabher AG. Quality assessment of the commonly prescribed antimicrobial drug, ciprofloxacin tablets, marketed in Tigray, Ethiopia. Momona Ethiop J Sci. 2010;2(1):93–107. Allen LV, Popovich NG, Ansel HC. Ansel's pharmaceutical dosage forms and drug delivery systems. 9th ed. New York: Lippincott Williams & Wilkins; 2011. Fereja TH, Tufa SB. In vitro quality assessment and bioequivalence studies on four brands of Ciprofloxacin tablets, marketed in Ambo, Ethiopia. Int J Pharm Sci. 2015;5(2):1007–12. Nijhu RS, Jhanker YM, Sutradhar KB. Development of an assay method for simultaneous determination of ciprofloxacin and naproxen by UV spectrophotometric method. Stamford J Pharm Sci. 2011;4(1):84–90. This work was carried out in collaboration between all authors. Author MSU designed the study, wrote the protocol, managed the analyses of the study and prepared the draft of the manuscript. Authors MSU, AAM and MSH carried out the tests and managed the literature searches. Author MA participated in data analysis and interpretation. MSS performed statistical and graphical evaluations. Author MR and OHC reviewed the scientific contents of the manuscript. All authors read and approved the final manuscript. The authors wish to thank the anonymous reviewer(s)/editor(s) of this article for their constructive reviews. The authors are grateful to the Department of Pharmacy, Southeast University, Dhaka, Bangladesh for providing research facilities. This work was self-funded. Department of Pharmacy, Southeast University, Dhaka, Bangladesh: Md. Sahab Uddin, Abdullah Al Mamun, Md. Saddam Hossain, Md. Asaduzzaman & Mamunur Rashid. Department of Pharmacy, Noakhali Science and Technology University, Noakhali, Bangladesh: Md. Shahid Sarwar. Department of Pharmacy, University of Rajshahi, Rajshahi, Bangladesh: Mamunur Rashid. Academic Department of Pharmaceutical Sciences, Faculty of Pharmacy and Biochemistry, Universidad Nacional San Luis Gonzaga de Ica, Ica, Peru: Oscar Herrera-Calderon. Correspondence to Md. Sahab Uddin.
Non-symmetric pinning of topological defects in living liquid crystals
Nuris Figueroa-Morales, Mikhail M. Genkin, Andrey Sokolov & Igor S. Aranson
Communications Physics volume 5, Article number: 301 (2022)
Topological defects, such as vortices and disclinations, play a crucial role in spatiotemporal organization of equilibrium and non-equilibrium systems. The defect immobilization or pinning is a formidable challenge in the context of the out-of-equilibrium system, like a living liquid crystal, a suspension of swimming bacteria in lyotropic liquid crystal. Here we control the emerged topological defects in a living liquid crystal by arrays of 3D-printed microscopic obstacles (pillars). Our studies show that while −1/2 defects may be easily immobilized by the pillars, +1/2 defects remain motile. Due to attraction between oppositely charged defects, positive defects remain in the vicinity of pinned negative defects, and the diffusivity of positive defects is significantly reduced. Experimental findings are rationalized by computational modeling of living liquid crystals. Our results provide insight into the engineering of active systems via targeted immobilization of topological defects. Point topological defects are singularities of the orientational field. They are topologically stable entities that form when a certain continuum symmetry is broken, for example at a phase transition1. The examples include Abrikosov vortices in type-II superconductors2, quantized vortices in superfluid Helium3, point disclinations in nematic liquid crystals4, skyrmions in ferromagnets5, and even cosmic strings6. Near the symmetry-breaking phase transition, the system can be universally described by the Ginzburg-Landau equation for the corresponding order parameter7. Various strategies of superconducting vortex pinning were proposed, like the creation of artificial periodic defect arrays in superconducting films, e.g., holes8,9 or magnetic nanodots10. It is tempting to apply a similar strategy to control the spatiotemporal response of active matter11,12,13,14. As it was pointed out by de Gennes15, there is a deep analogy between Abrikosov vortices and half-integer defects in liquid crystals in 2D. However, the defect motion in active systems is more subtle than at equilibrium. 
Dynamics of topological defects at equilibrium is relatively simple: their mutual motion and annihilation minimize the free energy. In non-equilibrium systems, such as active nematics, exemplified by cytoskeletal extracts16,17,18,19,20, cells tissues21,22, or living liquid crystals (LLC)23, the entire concept of thermodynamics is in question. Half-integer topological defects exhibit rich spatiotemporal behavior, like persistent creation and annihilation of disclination pairs, the onset of long-range dynamic order24,25,26, etc. Furthermore, activity makes the dynamics of individual defects non-symmetric: + 1/2 defects drift spontaneously while isolated −1/2 defect remain at rest24,25,27. Thus, defect pinning in active systems is more subtle, and little is known about how active defects can be immobilized28,29. Among the realm of active nematic-like systems16,17,22,30,31, a suspension of swimming bacteria mixed with a liquid crystal, a living liquid crystal11,23 displays the guidance of bacteria along the nematic director23,32,33,34, transport of cargo along bacterial trajectories35, and dynamic self-assembly of bacterial clusters36. Bacteria swim away from the cores of −1/2 defects and accumulate in the cores of +1/2 topological defects25 (Fig. 1a). The system is simple in preparation and amenable to effective computational modeling25,37,38,39. Fig. 1: Schematics of experiment and defect snapshots. a Nematic field in the vicinity of +1/2 and −1/2 topological defects. b A schematic view of 3D model of a square lattice of pillars on a glass slide. c A side view of the microscopic pillar inside the experimental microfluidic chamber. d Bright field microscope image of a living liquid crystal in the presence of pillars lattice. Scale bar is 100 μm. e Snapshot illustrating position of bacteria around the pillar. Scale bar is 10 μm. f Reconstructed nematic field lines and topological defects in the vicinity of the pillar, the observation area 100 μm × 100 μm. Scale bar is 20 μm. To investigate pinning of active topological defects, we conduct experiments with a realization of living liquid crystal: motile bacteria Bacillus subtilis suspended in lyotropic liquid crystal disodium cromoglycate (DSCG). The measurements are performed in a Hele-Shaw-type cell geometry with 3D printed microobstacle arrays. We show that − 1/2 disclinations can be successfully pinned by the obstacles whereas + 1/2 defects remain mobile. Furthermore, we have found that pinning of negative defects results in the overall reduction of mobility of the positive ones. The experimental findings are supported by computational analysis based on models of living liquid crystal developed in25,37. Overall, we obtained good agreement between our theory and the experiment. Our results stimulate new strategies for control and manipulation of active matter via targeting topological defects, in systems where topological charge can be manipulated by designing arbitrary arrays of specifically shaped microscopic irregularities. Experimental observations The micro-chamber for measurements contains a square lattice of 20 μm-tall microscopic pillars resembling negatively curved triangles raising from the glass substrate, alongside a pillar-free control area [see Fig. 1b–e, Supplementary Video 1, and Methods for experimental details]. LLC was sandwiched between the bottom glass slide and a thin film of PDMS supported by the 3D-printed pillars. 
In this configuration oxygen permeates through the PDMS film, promoting the motility of aerobic bacteria and enabling activity of LLC. The dynamic of LLC is captured by a Prosilica digital camera (1600 × 1200 pixels, 10 frames per second) using bright-field microscopy. A custom MATLAB script reconstructs the nematic field lines of LLC from local bacterial orientations on the snapshots. These field lines allow identification of topological defects, which can then be tracked [Fig. 1f and Supplementary Video 2]. Note that although this method depends on non-zero local concentration of bacteria to identify field lines and defects, at these high concentrations the entire space is occupied by bacteria at almost all times. Negative nematic defects residing on the pillars is the most distinct feature of active nematics with obstacles, Fig. 1f. The fraction of pillars occupied by −1/2 defects is as high as 0.94, starting from a few seconds after flow in the measurement chamber has settled to zero. Supplementary Video 3 shows a pillar that is initially not occupied by a defect, then, a − 1/2 defect in its vicinity drifts and settles on the pillar. The average filling fraction was obtained by average over all the pillars and the entire duration of the experiment – 6.2 minutes. However, the instant filling fraction can be smaller in some moments of time. The observed negative charging of pillars is attributed to several factors. The first factor may be considered in the context of equilibrium physics: if the curvature of the obstacle surface is relatively small, the liquid crystal (DSCG) tends to align parallel to the surface40,41,42. This minimizes the free energy by reduction of anchoring energy at a smaller cost of bending deformation. Therefore, orienting action of the pillar facilitates the formation and pinning of a negative defect. However, this effect is not dominant in the phenomenon that we here show. For an equivalent passive system (liquid crystal without bacteria) negative charging of pillars takes place only after several (~20) minutes of relaxation of the nematic director, while at previous times the nematic orientation is dominated by the initial flow established when the micro-chamber is closed [see Supplementary Note 1, Figs. S1, S2, and Video 4]. Additional experiments on a non-active nematic containing 2 μm-long gold rods, Figs. S1 and S2, show non-organized orientation of rods around pillars, also demonstrating that the pillars themselves do not template − 1/2 defects in the corresponding timescale. An important question here is how the shape of the pillars affects the pinning of negative defects. One may think that only the triangular shape enables the trapping of negative defects by forcing LC to align along the surface. However, even for circular obstacles, the planar alignment of LC along the circular pillar is not necessarily stable. Changing shape from triangular to circular does not guarantee that the nematic alignment remains planar along the surface. We reconstructed the nematic field around triangular star-like, round, and square pillars in a passive liquid crystal to obtain experimental confirmation of this statement (see Fig. S3). One observes no significant difference in the defects distribution around different shapes' pillars. In fact, due to competition between bending and splay deformations, circular alignment often becomes distorted, leading to a configuration with defects (usually a pair of defects)43. 
This effect is pronounced when the surface anchoring is not too strong (as in our experiment). Moreover, bacteria do not necessarily move on circular orbits in the vicinity of the pillar. A study in ref. 14 demonstrates that if the concentration of bacteria is above a critical value, the bacteria self-organize their flow, corresponding to a negative (saddle-like) defect. For the active system (liquid crystal with swimming bacteria), a coupling between bacteria trajectories and a local nematic field orientation23 contributes to the second factor. Collisions of swimming bacteria with obstacles leads to strong hydrodynamic trapping along the edges44,45: bacteria swim parallel to the sides of the pillars and align the nematic field in a shape resembling a negative defect. The role of this factor could be controlled by bacterial activity. An additional contribution comes from the activity of the nematic liquid crystal but does not require triangular shape. Significantly less mobile negative defects are attracted to stationary interfaces25 introduced by pillars. The defect pinning is also affected by the pillar size. Our previous study, ref. 37, shows that the topological charge of isotropic inclusion increases linearly with its size. A similar effect should occur for the pillars. However, the pinning strength should vanish when the pillar size becomes smaller than the defect's core size, that, in turn, of the order of the bacterial length of 5 μm. While the majority of negative defects are pinned to pillars, [Fig. 2a, c], positive defects tend to remain at a small distance from the pillars, [Fig. 2b, c and Figs. S4, S5]. A spatial distribution of positive defects clearly indicates three well-defined peaks outside the pillar sides. At large distances from the pillars, the distribution is almost homogeneous. There is a weak depletion of the negative defect density at a distance of about 20 μm from a pillar. This effect could be due to the low mobility of negative defects and their strong attraction to the pillar. This configuration is somewhat similar to the distribution of positive charges forming a layer around a negatively charged colloidal particle46. Fig. 2: Defect and topological charge distributions. Heat map of the average concentration of negative (a) and positive (b) defects. Color bars represent the defect concentrations. The center of the pillar is located at (X = 0, Y = 0). c Probability distribution function (PDF) of defect concentrations vs distance from the pillars r. The half-distance between pillars is 73 μm. d Topological charge density inside the pillar lattice (X < 500 μm) and in unconstrained area (X > 500 μm). Average values are Cin = (1.9 ± 0.7) × 10−4 μm−2 and Cout = (1.5 ± 0.5) × 10−4 μm−2 respectively. Color bars represent the charge densities. e Average topological charge density around a pillar. Pinned negative defects do not annihilate with positive defects in their vicinity. Spontaneous nematic charging of pillars increases fluctuations of topological charge, depicted in Fig. 2d and e. The concentration of ± 1/2 defects in the area with pillars increased roughly by 30% for our experimental conditions in comparison with the basal concentration in the pillar-free region. To understand how pillars modify the activity of topological defects, we compute the mean squared displacement (MSD) and the average speed of defects, Fig. 3. 
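Before presenting those results, a minimal sketch of how such a moving-average MSD can be estimated from tracked defect positions is shown below; the trajectory is hypothetical, and the averaging convention follows the definition given in the next paragraph.

```python
# Minimal sketch: moving-average MSD of one tracked defect (hypothetical data,
# not the authors' MATLAB pipeline); positions sampled at equal time steps
import numpy as np

def msd(track, max_lag=None):
    """track: (T, 2) array of (x, y) defect positions."""
    n = len(track)
    max_lag = max_lag or n // 10            # restrict lags to ~1/10 of the track length
    lags = np.arange(1, max_lag + 1)
    out = np.empty(len(lags))
    for i, lag in enumerate(lags):
        disp = track[lag:] - track[:-lag]
        out[i] = np.mean(np.sum(disp ** 2, axis=1))   # <|s(t+dt)-s(t)|^2>_t
    return lags, out

rng = np.random.default_rng(0)
steps = rng.normal(0.0, 1.2, size=(600, 2))           # hypothetical displacements, um
track = np.cumsum(steps, axis=0)                      # random-walk-like trajectory
lags, m = msd(track)
print(m[:5])                                          # MSD for the first few lags, um^2
```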
The MSD is computed as \(\langle |\mathbf{s}(t+\Delta t)-\mathbf{s}(t)|^{2}\rangle_{t}\), where s(t) is the (x, y) position of the defect at time t, and 〈…〉t denotes the average along each trajectory. For a valid statistical representation Δt is limited to 1/10 of the total duration of the track. The analysis of the experimental data [Fig. 3a] shows that defects in the vicinity of pillars (small value of r) have a reduced MSD compared with free defects (larger value of r), see Fig. S5. Fig. 3: Activity of topological defects. a The mean squared displacement (MSD) of defects (obtained by moving average along the trajectory) as a function of time interval (Δt) and the average distance to the nearest pillar (r). Inset: The MSD vs r in semilogarithmic scale for two time intervals Δt. Each point represents the MSD of a defect. b The MSD of defects as a function of time near pillars (25 μm < r < 35 μm, solid line) and far from pillars (r > 140 μm, dashed line). Shaded regions represent the uncertainty resulting from averaging different MSD of defects, red for positive and blue for negative defects, correspondingly. The uncertainty is computed as the standard error of the mean (SEM). The corresponding values of diffusion coefficients are: D+ = 12.2 ± 0.2 μm2 s−1, D− = 5.8 ± 0.2 μm2 s−1 far from a pillar and D+ = 6.3 ± 0.2 μm2 s−1, D− = 4.8 ± 0.2 μm2 s−1 in the pillar vicinity. c Average defect speeds measured over the entire duration of the track as a function of their average distance to the nearest pillar. Error bars show standard deviation (thin lines) and the standard error of the mean (thick lines). Positive and negative defects are created and annihilated in pairs. For our experimental conditions, the distribution of defect lifetimes in the area located far from pillars can be fitted as an exponential decay ~e−t/τ, where τ = (4.8 ± 0.1) s. The distribution of τ is shown in Fig. S6. The typical value of τ observed in the experiments is not long enough to accurately extract diffusion coefficients from defect trajectories. At the same time, the collected experimental data allow us to quantify the differences between the MSD of the near-pillar defects (25 μm < r < 35 μm) and unbound defects (r > 140 μm). The comparison of the linear slopes of the temporal evolution of the MSD presented in Fig. 3b shows that the motility of both positive and negative defects located in the vicinity of pillars is reduced. The difference in motility is especially noticeable for positive defects. The defect mobility is characterized by their linear speed: the average speed increases with the distance from the pillar, Fig. 3c. Computational results To support our findings and extend the analysis beyond experimental limitations we conducted computational studies. We analyzed the system in the framework of a continuous model for living liquid crystals developed previously25,34,37. See Methods and Supplementary Note 2 for details. In this two-dimensional model, the dynamics of a liquid crystal is described by the Beris–Edwards equations for the nematic tensor Q, and the hydrodynamic velocity \(\vec{v}\). These equations are coupled with equations for the bacterial orientation tensor P, and concentrations of bacteria c+, c− moving in opposite directions along the nematic field. Pillars are modeled as isotropic tactoids (normal inclusions), where the nematic order parameter Q is strongly suppressed by modulation of the Landau-de Gennes (LdG) coefficient (see ref. 37). 
The spatially-modulated LdG coefficient introduces the shapes and positions of the pillars, and the conditions at the interface are simulated by adding a term that anchors the nematic tensor parallel to the surface of the pillars. We solved Eqs. (1)–(7) in a square 512 × 512 μm2 domain with four identical pillars arranged in 2 × 2 squared grid [see Fig. 4a]. Defect positions were determined using a custom defect detection algorithm37 and recorded every 0.1s of equivalent simulations time units and later used to track defects. Fig. 4: Simulations of the computational model. a Nematic field around four star-like pillars. Director orientation is shown with thick lines and amplitude of the order parameter with color. The color bar represents the magnitude of the order parameter. Negative (b) and positive (c) defect concentrations around a pillar. The pillar contour is depicted by white dots. The color bars represents the defect concentrations. d The probability density function of positive and negative defects vs the distance from the pillar center. e MSD vs time for positive and negative defects near pillars (25 μm < r < 50 μm) and far from pillars (r > 140 μm). Shaded regions represent the standard error of the mean value (SEM), red for positive and blue for negative defects, correspondingly. f Average defect speeds as a function of their average distance to the nearest pillar, calculated for Δt = 8 s. Error bars show standard deviation. The graininess of b, c is due to the coarse binning of the defect probability densities. We obtained good agreement between theory and experiment, comparing experimental Figs. 2, 3 and computational Fig. 4. Numerical simulations confirm that negative defects reside on the pillar center, while positive defects accumulate at d ≈ 15 − 20 μm distance from the pillar center [Fig. 4a–d], in agreement with the experimental findings [Figs. 2 and 3] and Supplementary experimental Videos 3, 4 and simulation Videos 5, 6. Like in the experiment, the distribution of positive defects has maxima near the concave segments of the pillar, Figs. 2b and 4c. The proximity of pillars also leads to a reduction of defect mobility (Fig. 4e, f). The effective diffusivity of positive and negative defects drops from D+ = 24.7 μm2 s−1, D− = 5.6 μm2 s−1 to D+ = 13.9 μm2 s−1 and D− = 4 μm2 s−1. Since the motion of defects is not a purely normal diffusion, the provided values are given for a typical lifetime of defects Δt = 5–8 s. We have found from the simulations that while positive defect accumulation near a pillar is robust, the defect distributions are sensitive to the director anchoring details. For example, if the anchoring is relatively strong and planar, the bend dominates the splay, as shown in supplementary Fig. S7a. Therefore, it will be more energetically favorable for the defect to bind to the concave segment of the pillar, as in the experiment and in our simulations. The opposite case of weak planar/hybrid anchoring is sketched in supplementary Fig. S7b: splay deformations are dominant at the vertices, and the defects preferentially bind near the tips. This situation can be possibly realized by treating the pillars with a substance favoring a homeotropic anchoring47. Similar to the experiment, there is also a depression of the negative defect density at a certain distance from the pillar; compare computational Fig. 4d and experimental Fig. 2c. To extend our study beyond experimental conditions, we numerically investigate the effect of circular pillars [see Fig. 
5 and Supplementary Videos 7, 8]. While defect trapping by pillars is robust, the form of the probability distributions depends on the pillar's shape. Like in the triangular case, Fig. 4b, c, the distribution of negative defects for circular pillars has a maximum in the center while positive defects have a minimum, Fig. 5b, c. However, the defect distributions for circular pillars are axisymmetric, and the distribution for negative defects is about twice wider than that for triangular pillars. The defect mobility in the proximity of pillars does not exhibit significant differences compared to the triangular case, see Fig. S8. Fig. 5: Simulations for pillars of circular cross-section. a The probability density function of positive and negative defects vs the distance from the pillar center. The concentration of negative (b) and positive (c) defects, the pillars' contours are depicted by the white lines. The color bars represent the defect concentrations. The graininess of (b), (c) is due to the coarse binning of the defect probability densities. Fokker–Planck model As an alternative mathematical description of the system, we model the defect dynamics with a one-dimensional probabilistic model derived from the Langevin defect dynamics, see Methods and ref. 37. A somewhat similar approach was later considered in ref. 48. Here the deterministic drift force is a sum of inter-defects interaction forces and the force on tactoid's (pillar's) surface that prevents defects from escaping the tactoid region. Stochastic forces depend on defects' diffusivities, where positive defects are more mobile than the negative ones, in agreement with the experiment, Figs. 3a and 5a. The corresponding system of two Fokker–Planck (FP) equations for positive and negative defects densities is solved numerically, see Methods. The model provides a qualitative insight into the experiment. The steady-state concentration of positive and negative defects are shown in Fig. S9. This approach explains defects clustering at pillar's boundary: less active negative defects tend to cluster inside the pillar, while more mobile positive defects escape the potential barrier and spread across the entire domain. The model predicts that the concentration of positive defects approaches the background value faster than that for the negative ones. The discrepancy for the defect concentrations inside the pillar is due to the ambiguity of defect identification in isotropic phase: Unlike the continuous model, in 1D simulations the defects were not allowed to annihilate. Our work provides a new strategy for tuning the physical properties of active nematics. We demonstrate that microscopic obstacles robustly pin −1/2 topological defects, while the +1/2 defects remain mobile while some of them are trapped in the vicinity of pillars. Our experiments and simulations show that the pinning of negative defects also results in the overall reduction of motility of the positive defects. The experimental findings are supported by computational analysis based on the model of a living liquid crystal. Further numerical studies suggest that the observed phenomenon is not sensitive to the shape of obstacles: qualitatively similar behavior was observed for an array of round obstacles. In addition, we have shown that artificial imperfections can change the overall balance between positive and negative defects. As a result, the active fluid becomes topologically charged, with materials properties that are likely to differ from a "neutral" active fluid. 
Our work opens up an important future direction on the experimental study of a topologically charged fluid. In the context of equilibrium physics, the interplay between correlated disorder and vortex matter results in a variety of nontrivial glassy states49,50. An intriguing question is whether topological defects in active systems form states similar to "spin glasses"51 or whether their interaction with the disorder is very different. Pillars manufacturing An array of pillars is 3D-printed on a glass slide by direct laser lithography using a Nanoscribe Photonic Professional GT system. The material used for printing is the high-resolution negative photoresist IP-Dip manufactured by Nanoscribe. The exposed photoresist is developed with propylene glycol monomethyl ether acetate (PGMEA) from Sigma Aldrich for 20 min and then rinsed with isopropanol. For best experimental conditions the pillars are made 20 μm tall. Living liquid crystal preparation Bacillus subtilis (strain 1085) bacteria initially grown on a Lysogeny Broth (Sigma Aldrich) agar plate are transferred to Terrific Broth (TB) liquid growth medium and kept at 30 °C for ≈8−12 h. The experiments are performed with a population of bacteria in the early logarithmic phase of growth. The bacteria are concentrated by centrifugation and mixed with a liquid crystal to achieve a final bacterial concentration of ≈5 × 10⁹ cells/cm³. The liquid crystal is obtained by mixing disodium cromoglycate (DSCG) purchased from Spectrum Chemicals with TB at a concentration of 20% by weight. The final concentration of DSCG after mixing with the concentrated bacterial suspension is 11.5% by weight. Image acquisition and processing The dynamics of the system was examined via an inverted microscope Olympus IX71 and recorded by a monochrome camera Prosilica GX 1660 (resolution 1600 × 1200 pixels) at 10 frames per second. The images were processed in MATLAB. The director field was reconstructed from the local bacterial orientations. For finding the bacterial orientation, we used a gradient method and assumed that the largest variation of image intensity in the area around a single bacterium is perpendicular to the bacterial body. The director field was interpolated for the areas with no bacteria. Advection-diffusion computational model of a living liquid crystal We used the Beris–Edwards (BE) equations for the nematodynamics coupled with two advection-diffusion equations for bacterial concentrations37. The first BE equation describes the evolution of the tensorial order parameter Q: $$(\partial_t + \vec{v}\cdot\nabla)\mathbf{Q} - \mathbf{S} - \Gamma\mathbf{H} + \mathbf{F}_{\mathrm{anch}} = 0,$$ where \(\vec{v}\) is the fluid velocity, tensor S describes the nematic flow alignment, H is the tensorial molecular field and Γ is the relaxation rate of the director; see Supplementary Note 2 for the definitions and Table S1 for the parameters used in simulations. The molecular field H is a variational derivative of the Landau-de Gennes free energy and takes the following form: $$\mathbf{H} = a\mathbf{Q} - c\,\mathbf{Q}\,\mathrm{Tr}\,\mathbf{Q}^{2} + K\nabla^{2}\mathbf{Q},$$ where a and c are the Landau-de Gennes coefficients and K is the elastic constant (a one-constant approximation is used). We model pillars as isotropic tactoids. 
Pillars of the desired shape are created by prescribing a negative value of the coefficient a in the region of the pillars, strongly suppressing the magnitude of the order parameter. a is positive in the rest of the domain, which is in the nematic phase. The coefficient \(c = {\rm const} > 0\) everywhere in the domain. The equilibrium magnitude of the order parameter remains zero in the isotropic tactoids, while for the nematic phase \(q_{\rm eq} = \sqrt{a/c} > 0\). The last term in Eq. (1) imposes strong planar alignment on the pillars' surfaces. Similar to ref. 37, this term only alters the director orientation and does not change the amplitude of the order parameter: $$\mathbf{F}_{\rm anch} = 4\xi_{\rm anch}\,\mathbf{Q}\,\mathbf{R}_{\pi/2}\,{\rm Tr}\!\left(\mathbf{Q}\,(\vec{f}_e\vec{f}_e - \mathbf{I}/2)\,\mathbf{R}_{\pi/2}\right)\mathcal{I}(\vec{r})$$ here ξanch is the anchoring strength, Rπ/2 is a π/2 rotation matrix, \(\vec{f}_e\) is a vector parallel to the pillar's surface, and \(\mathcal{I}(\vec{r})\) is the indicator function: \(\mathcal{I}(\vec{r})\) is one near the pillars' surfaces and zero everywhere else. As shown in ref. 25, this form leads to relaxation of the director orientation towards the vector \(\vec{f}_e\) as follows: $$\dot{\theta} = 4\xi_{\rm anch}\, q^2 \sin(2\phi - 2\theta),$$ where q is the amplitude of the order parameter, ϕ is the orientation angle of the vector \(\vec{f}_e\), and θ is the director orientation angle. In addition to Eq. (1), the system includes the following equations: $$\nabla\cdot\left(\boldsymbol{\sigma}_{\rm a} + \boldsymbol{\sigma}_{\rm s} + \boldsymbol{\sigma}_{\rm act} + \boldsymbol{\sigma}_{\rm visc} - p\,\mathbf{I}\right) - \zeta\vec{v} = 0.$$ $$\partial_t\mathbf{P} = a_p\mathbf{P} - 4c_p\mathbf{P}^3 - \frac{\mathbf{F}_{\rm Q}}{\tau_0} + D_{\rm p}\nabla^2\mathbf{P}$$ $$\begin{array}{l}\partial_t c^+ + \nabla\cdot\left(V_0\vec{p}\,c^+ + \vec{v}\,c^+\right) = -\frac{c^+ - c^-}{\tau} + D_{\rm c}\nabla^2 c^+,\\ \partial_t c^- + \nabla\cdot\left(-V_0\vec{p}\,c^- + \vec{v}\,c^-\right) = -\frac{c^- - c^+}{\tau} + D_{\rm c}\nabla^2 c^-\end{array}$$ Equation (5) is the balance of linear momentum. The stress includes the elastic σs (symmetric) and σa (antisymmetric) contributions and the viscous contribution σvisc, see Supplementary Note 2 for the definitions. σact is the active stress, which depends on the bacterial concentration, p is the fluid pressure, and \(\zeta\vec{v}\) is the viscous friction, which depends on the sample thickness. Equation (6) describes the evolution of the bacterial orientation tensor \(\mathbf{P} = |\mathbf{P}|\,(\vec{p}\vec{p} - \mathbf{I}/2)\). The first two terms on the right-hand side control the amplitude ∣P∣. Similar to Eq. 
(3), the third term aligns P with the nematic field Q37: $$\mathbf{F}_{\rm Q} = 4\mathbf{P}\,\mathbf{R}_{\pi/2}\,{\rm Tr}\left(\mathbf{P}\mathbf{Q}\,\mathbf{R}_{\pi/2}\right)$$ here τ0 is the alignment time of a bacterium with respect to the imposed nematic direction (about one second), and Dp is the bacterial orientational diffusion coefficient. Eqs. (7) account for the concentrations c± of bacteria that swim parallel and antiparallel to the orientation vector \(\vec{p}\). τ is the bacterial reversal rate, Dc is the concentration diffusion coefficient, and V0 is the magnitude of the bacterial velocity. We positioned four identical pillars in a 2 × 2 square grid (see Fig. 4(a)). Their shape and size were similar to the experimental ones. Their boundaries in polar coordinates were described by \(f(r) = \frac{r_0}{0.2 + |\cos(1.5\theta)|}\), where we set r0 = 2.5 μm. This formula was used to create a spatially modulated Landau-de Gennes coefficient a(x, y), which was negative inside and positive outside the pillar regions. Additional details of this computational model can be found in ref. 37. Simulation parameters are listed in Table S1. Simplified Fokker–Planck model We model the transport of defects with the 1D Langevin equation37. We assume that the system contains equal numbers of positive and negative defects N±, residing either in the nematic phase or on the pillar, which is modeled by an isotropic tactoid. We assume that well-separated defects interact similarly to electrical charges (those with the same topological charge repel, those with opposite charges attract) and that the interaction strength decays with distance as 1/x. Thus, two contributions control the defect dynamics: one due to defect interactions and another due to a barrier at the tactoid's surface. Our computational 1D domain \(x \in [0; L]\) consists of a small tactoid region \(x \in [0; a]\), with the rest occupied by a nematic medium. The potential barrier prevents the defects from escaping the isotropic phase and has the form of a step function: $$U(x) = \left\{\begin{array}{ll} 0, & {\rm for}\quad 0 < x < a,\\ A, & {\rm for}\quad x > a,\end{array}\right.$$ The Langevin equations for the positions xi of the individual defects can be cast as follows: $$\begin{array}{l}\partial_t x_i^+ = U'(x_i^+) + \mu\left(\sum_{j=0}^{N}\frac{1}{x_j^- - x_i^+} - \sum_{j=0,\,j\ne i}^{N}\frac{1}{x_j^+ - x_i^+}\right) + \xi^+,\\ \partial_t x_i^- = U'(x_i^-) + \mu\left(\sum_{j=0}^{N}\frac{1}{x_j^+ - x_i^-} - \sum_{j=0,\,j\ne i}^{N}\frac{1}{x_j^- - x_i^-}\right) + \xi^-,\end{array}$$ where μ is the interaction strength and ξ± are random forces with magnitudes D±. Following ref. 37, we introduce the probability density distribution functions of positive and negative defects P±. Then, the sums in Eq. (10) can be cast as integrals over P±. 
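As a bridge between Eq. (10) and its Fokker–Planck counterpart, the Langevin dynamics can also be integrated directly. The sketch below is a minimal Euler–Maruyama discretization, not the authors' code: the parameter values, the smoothed step potential, the softened 1/x kernel, and the drift sign convention (taken to match the −U′(x) drift of the Fokker–Planck equations that follow) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the values used in the paper)
L, a = 10.0, 1.0              # domain length and tactoid size
A, mu = 5.0, 0.05             # barrier height and defect interaction strength
D_plus, D_minus = 0.2, 0.05   # noise magnitudes: positive defects are more mobile
N, dt, steps = 20, 1.0e-3, 20000

def dU(x, w=0.05):
    # derivative of a smoothed step potential: U = 0 inside the tactoid (x < a), A outside
    return A / (4.0 * w) / np.cosh((x - a) / (2.0 * w)) ** 2

def inv(d, eps=0.05):
    # softened 1/x kernel to avoid numerical blow-up at close encounters (sketch only)
    return d / (d ** 2 + eps ** 2)

def interaction(x_self, x_opposite):
    # opposite charges attract, like charges repel, as in Eq. (10)
    drift = np.empty_like(x_self)
    for i, xi in enumerate(x_self):
        drift[i] = np.sum(inv(x_opposite - xi)) - np.sum(inv(np.delete(x_self, i) - xi))
    return drift

xp = rng.uniform(0.0, L, N)   # positions of +1/2 defects
xm = rng.uniform(0.0, L, N)   # positions of -1/2 defects
for _ in range(steps):
    fp = -dU(xp) + mu * interaction(xp, xm)
    fm = -dU(xm) + mu * interaction(xm, xp)
    xp = np.clip(xp + dt * fp + np.sqrt(2.0 * D_plus * dt) * rng.standard_normal(N), 0.0, L)
    xm = np.clip(xm + dt * fm + np.sqrt(2.0 * D_minus * dt) * rng.standard_normal(N), 0.0, L)

Histogramming the positions xp and xm over many steps yields one-dimensional analogues of the defect probability densities discussed above.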
The corresponding Fokker–Planck equations for the probability density distributions of positive (P+) and negative (P−) defects are then of the form37: $$\begin{array}{l}\partial_t P^+(x,t) = D^+\partial_{xx}P^+ - \partial_x\left[\left(-U'(x) + \mu\int_0^L \frac{P^+(x',t) - P^-(x',t)}{x - x'}\,dx'\right)P^+(x,t)\right],\\ \partial_t P^-(x,t) = D^-\partial_{xx}P^- - \partial_x\left[\left(-U'(x) - \mu\int_0^L \frac{P^+(x',t) - P^-(x',t)}{x - x'}\,dx'\right)P^-(x,t)\right],\end{array}$$ where D± are the diffusivities of positive and negative defects, respectively. The diffusivities are expressed via the magnitudes of the noise terms ξ±. We assume that D+ > D−, reflecting the fact that positive defects are more mobile (see Fig. 3(c)). We are looking for a stationary solution to Eqs. (11). Integrating both equations, we obtain: $$\begin{array}{l}D^+\left(\partial_x P^+ + R^+\right) = \left(-U'(x) + \mu\int_0^L \frac{P^+(x',t) - P^-(x',t)}{x - x'}\,dx'\right)P^+(x,t),\\ D^-\left(\partial_x P^- + R^-\right) = \left(-U'(x) - \mu\int_0^L \frac{P^+(x',t) - P^-(x',t)}{x - x'}\,dx'\right)P^-(x,t),\end{array}$$ where R± are the integration constants to be determined. Eqs. (12) can be transformed using the general formula for the solution of a first-order inhomogeneous ordinary differential equation (integrating factor): $$P^\pm(x) = s_a^\pm(x)\left(C^\pm - s_b^\pm(x)\right),$$ where C± are another pair of integration constants and the following notations are introduced: $$z(x) = \int_0^L \left(P^+(x') - P^-(x')\right)\log\left\vert x - x'\right\vert\,dx',$$ $$s_a^\pm(x) = \exp\left[\int_0^x\left(-\frac{U'(x)}{D^\pm} \pm \frac{\mu}{D^\pm}\int_0^L\frac{P^+(x',t) - P^-(x',t)}{x - x'}\,dx'\right)dx\right]\exp\left[\frac{1}{D^\pm}\left(-U(x) \pm z(x)\right)\right],$$ $$s_b^\pm(x) = \int_0^x\frac{R^\pm}{s_a^\pm(x')}\,dx',$$ To find the integration constants R±, we use the Neumann boundary condition at the right boundary, \(\partial_x P^\pm(L) = 0\) (which also accounts for probability reflection at the boundary, since the potential is constant at the boundaries). Substituting it into Eq. (12), we obtain: $$R^\pm = \pm\frac{\mu P^\pm(L)}{D^\pm}\int_0^L\frac{P^+(x') - P^-(x')}{L - x'}\,dx'$$ To find the integration constants C±, we use the normalization condition for the positive and negative defect densities, \(\int_0^L P^\pm(x)\,dx = L P_0\). From Eq. (13) we find: $$C^\pm = \frac{L P_0 + \int_0^L s_a^\pm(x)\,s_b^\pm(x)\,dx}{\int_0^L s_a^\pm(x)\,dx}$$ We obtain P±(x) using an iterative relaxation method. Given initial guesses \(P_0^\pm(x)\), we calculate tentative \(\tilde{P}_0^\pm(x)\) using Eq. (13). 
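For concreteness, a sketch of how this self-consistent system could be iterated numerically is given below. It is not the authors' implementation: the grid, the parameter values, the regularization of the 1/(x − x′) and log∣x − x′∣ kernels, and the use of the closed exponential form exp[(−U(x) ± μz(x))/D±] for s_a± (one possible reading of the definition above, with the interaction strength μ written out explicitly) are all assumptions of this sketch. The under-relaxation update, with a weight α close to one, is the one described next.

import numpy as np

# Illustrative discretization and parameters (assumptions, not the paper's values)
L, a, A, mu, P0 = 10.0, 1.0, 5.0, 0.05, 1.0
D = {"+": 0.2, "-": 0.05}            # D+ > D-: positive defects are more mobile
Nx = 400
x = np.linspace(0.0, L, Nx)
dx = x[1] - x[0]
U = np.where(x > a, A, 0.0)          # step potential defined above

def relax_step(Pp, Pm, alpha=0.999):
    dP = Pp - Pm
    # z(x): integral of (P+ - P-) log|x - x'| dx', with the diagonal regularized
    z = (np.log(np.abs(x[:, None] - x[None, :]) + 1e-12) @ dP) * dx
    tentative = {}
    for sign, s, P in (("+", 1.0, Pp), ("-", -1.0, Pm)):
        s_a = np.exp((-U + s * mu * z) / D[sign])      # assumed closed form of s_a
        # R from the Neumann condition at x = L (endpoint excluded from the kernel)
        R = s * mu * P[-1] / D[sign] * np.sum(dP[:-1] / (L - x[:-1])) * dx
        s_b = np.cumsum(R / s_a) * dx                  # running integral of R / s_a
        C = (L * P0 + np.sum(s_a * s_b) * dx) / (np.sum(s_a) * dx)   # normalization
        tentative[sign] = s_a * (C - s_b)              # Eq. (13)
    # under-relaxation toward the tentative solution
    return alpha * Pp + (1 - alpha) * tentative["+"], alpha * Pm + (1 - alpha) * tentative["-"]

Pp = np.full(Nx, P0)
Pm = np.full(Nx, P0)
for _ in range(20000):
    Pp, Pm = relax_step(Pp, Pm)

With D+ > D−, such a scheme is expected to reproduce, qualitatively, the behavior described above: a pronounced accumulation of the less mobile negative defects inside the tactoid, with the positive density staying closer to the background value.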
We then update the probability densities by a weighted average of the previous and tentative values: $$P_n^\pm(x) = \alpha P_{n-1}^\pm(x) + (1-\alpha)\tilde{P}_{n-1}^\pm(x),$$ where α = 0.999. This method rapidly converges to the stationary solutions \(P_*^\pm(x)\). The results are shown in Fig. S9. While the results generally agree with the experiment, especially for round pillars, there are also some discrepancies. The main difference is that positive defects also peak inside the tactoid, although the positive peak amplitude is smaller than the negative one. This can be attributed to an oversimplification of the Fokker–Planck model. For example, mutual annihilation can lead to depletion of positive defects inside the tactoid. The data that support the plots within this paper and other findings of this study are available from the authors upon request. The code to carry out the simulations is available from the corresponding author upon request. Mermin, N. D. The topological theory of defects in ordered media. Rev. Mod. Phys. 51, 591 (1979). Blatter, G., Feigel'man, M. V., Geshkenbein, V. B., Larkin, A. I. & Vinokur, V. M. Vortices in high-temperature superconductors. Rev. Mod. Phys. 66, 1125 (1994). Salomaa, M. M. & Volovik, G. E. Quantized vortices in superfluid 3He. Rev. Mod. Phys. 59, 533 (1987). Kléman, M. Defects in liquid crystals. Rep. Prog. Phys. 52, 555 (1989). Foster, D. et al. Two-dimensional skyrmion bags in liquid crystals and ferromagnets. Nat. Phys. 15, 655–659 (2019). Hindmarsh, M. B. & Kibble, T. W. B. Cosmic strings. Rep. Prog. Phys. 58, 477 (1995). Aranson, I. S. & Kramer, L. The world of the complex Ginzburg-Landau equation. Rev. Mod. Phys. 74, 99 (2002). Baert, M., Metlushko, V. V., Jonckheere, R., Moshchalkov, V. V. & Bruynseraede, Y. Composite flux-line lattices stabilized in superconducting films by a regular array of artificial defects. Phys. Rev. Lett. 74, 3269 (1995). Harada, K. et al. Direct observation of vortex dynamics in superconducting films with regular arrays of defects. Science 274, 1167–1170 (1996). Martin, J. I., Vélez, M., Nogues, J. & Schuller, I. K. Flux pinning in a superconductor by an array of submicrometer magnetic dots. Phys. Rev. Lett. 79, 1929 (1997). Aranson, I. S. Bacterial active matter. Rep. Prog. Phys. 85, 076601 (2022). Gompper, G. et al. The 2020 motile active matter roadmap. J. Phys. Condens. Matter 32, 193001 (2020). Nishiguchi, D., Aranson, I. S., Snezhko, A. & Sokolov, A. Engineering bacterial vortex lattice via direct laser lithography. Nat. Commun. 9, 4486 (2018). Reinken, H. et al. Organizing bacterial vortex lattices by periodic obstacle arrays. Commun. Phys. 3, 76 (2020). Brochard-Wyart, F., Prost, J. & Bok, J., P-G De Gennes' Impact On Science-Volume I: Solid State And Liquid Crystals, Vol. 18 (World Scientific, 2009). Sanchez, T., Chen, D. T. N., DeCamp, S. J., Heymann, M. & Dogic, Z. Spontaneous motion in hierarchically assembled active matter. Nature 491, 431 (2012). Kumar, N., Zhang, R., de Pablo, J. J. & Gardel, M. L. Tunable structure and dynamics of active liquid crystals. Sci. Adv. 4, eaat7779 (2018). Guillamat, P., Ignés-Mullol, J. & Sagués, F. Control of active liquid crystals with a magnetic field. Proc. Natl Acad. Sci. 113, 5498–5502 (2016). Ellis, P. W. et al. Curvature-induced defect unbinding and dynamics in active nematic toroids. Nat. Phys. 14, 85–90 (2018). Zhang, R., Kumar, N., Ross, J. L., Gardel, M. L. 
& De Pablo, J. J. Interplay of structure, elasticity, and dynamics in actin-based nematic materials. Proc. Natl Acad. Sci. 115, E124–E133 (2018). Duclos, G. et al. Spontaneous shear flow in confined cellular nematics. Nat. Phys. 14, 728–732 (2018). Kawaguchi, K., Kageyama, R. & Sano, M. Topological defects control collective dynamics in neural progenitor cell cultures. Nature 545, 327–331 (2017). Zhou, S., Sokolov, A., Lavrentovich, O. D. & Aranson, I. S. Living liquid crystals. Proc. Natl Acad. Sci. 111, 1265–1270 (2014). Giomi, L., Bowick, M. J., Ma, X. & Marchetti, M. C. Defect annihilation and proliferation in active nematics. Phys. Rev. Lett. 110, 228101 (2013). Genkin, M. M., Sokolov, A., Lavrentovich, O. D. & Aranson, I. S. Topological defects in a living nematic ensnare swimming bacteria. Phys. Rev. X 7, 011029 (2017). DeCamp, S. J., Redner, G. S., Baskaran, A., Hagan, M. F. & Dogic, Z. Orientational order of motile defects in active nematics. Nat. Mater. 14, 1110 (2015). Pismen, L. M. Dynamics of defects in an active nematic layer. Phys. Rev. E 88, 050502 (2013). Aranson, I. S. Harnessing medium anisotropy to control active matter. Acc. Chem. Res. 51, 3023–3030 (2018). Aranson, I. S. Topological defects in active liquid crystals. Physics-Uspekhi 62, 892 (2019). Keber, F. C. et al. Topology and dynamics of active nematic vesicles. Science 345, 1135–1139 (2014). Saw, ThuanBeng et al. Topological defects in epithelia govern cell death and extrusion. Nature 544, 212–216 (2017). Kumar, A., Galstian, T., Pattanayek, S. K. & Rainville, S. The motility of bacteria in an anisotropic liquid environment. Mol. Cryst. Liq. Cryst. 574, 33–39 (2013). Mushenheim, P. C., Trivedi, R. R., Weibel, D. B. & Abbott, N. L. Using liquid crystals to reveal how mechanical anisotropy changes interfacial behaviors of motile bacteria. Biophys. J. 107, 255–265 (2014). Turiv, T. et al. Polar jets of swimming bacteria condensed by a patterned liquid crystal. Nat. Phys. 16, 481–487 (2020). Sokolov, A., Zhou, S., Lavrentovich, O. D. & Aranson, I. S. Individual behavior and pairwise interactions between microswimmers in anisotropic liquid. Phys. Rev. E 91, 013009 (2015). Mushenheim, P. C., Trivedi, R. R., Tuson, H. H., Weibel, D. B. & Abbott, N. L. Dynamic self-assembly of motile bacteria in liquid crystals. Soft Matter 10, 88–95 (2014). Genkin, M. M., Sokolov, A. & Aranson, I. S. Spontaneous topological charging of tactoids in a living nematic. N J. Phys. 20, 043027 (2018). Zhang, R., Roberts, T., Aranson, I. S. & De Pablo, J. J. Lattice Boltzmann simulation of asymmetric flow in nematic liquid crystals with finite anchoring. J. Chem. Phys. 144, 084905 (2016). Thampi, S. P., Golestanian, R. & Yeomans, J. M. Velocity correlations in an active nematic. Phys. Rev. Lett. 111, 118101 (2013). Nazarenko, V. G. et al. Surface alignment and anchoring transitions in nematic lyotropic chromonic liquid crystal. Phys. Rev. Lett. 105, 017801 (2010). Tone, C. M., De Santo, M.P., Buonomenna, M.G., Golemme, G. & Ciuchi, F. Dynamical homeotropic and planar alignments of chromonic liquid crystals. Soft Matter 8, 8478–8482 (2012). Bowick, M. J. & Giomi, L. Two-dimensional matter: order, curvature and defects. Adv. Phys. 58, 449–563 (2009). Lapointe, C. P., Mason, T. G. & Smalyukh, I. I. Shape-controlled colloidal interactions in nematic liquid crystals. Science 326, 1083–1086 (2009). Sipos, O., Nagy, K., Di Leonardo, R. & Galajda, P. Hydrodynamic trapping of swimming bacteria by convex walls. Phys. Rev. Lett. 114, 258104 (2015). 
Figueroa-Morales, N. et al. Living on the edge: transfer and traffic of E. coli in a confined flow. Soft Matter 11, 6284–6293 (2015). Behrens, S. H. & Grier, D. G. The charge of glass and silica surfaces. J. Chem. Phys. 115, 6716–6721 (2001). Zhou, S. et al. Dynamic states of swimming bacteria in a nematic liquid crystal cell with homeotropic alignment. N J. Phys. 19, 055006 (2017). Shankar, S. & Marchetti, M. C. Hydrodynamics of active defects: from order to chaos to defect ordering. Phys. Rev. X 9, 041047 (2019). Hwa, T., Le Doussal, P., Nelson, D. R. & Vinokur, V. M. Flux pinning and forced vortex entanglement by splayed columnar defects. Phys. Rev. Lett. 71, 3545 (1993). Nelson, D. R. & Vinokur, V. M. Boson localization and correlated pinning of superconducting vortex arrays. Phys. Rev. B 48, 13060 (1993). Mézard, M., Parisi, G. & Virasoro, M. A. Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications, Vol. 9 (World Scientific Publishing Company, 1987). N.F.M. and I.S.A. are supported by the NSF PHY-1707900 and PHY-2140010. A.S. is supported by the U.S. DOE, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. The authors are grateful to Prof. Ivan Smalyukh and Prof. Noel Clark for useful discussions and assistance with measurements. Department of Biomedical Engineering, The Pennsylvania State University, University Park, PA, 16802, USA Nuris Figueroa-Morales & Igor S. Aranson Department of Physics, University of Colorado Boulder, 390 UCB, Boulder, CO, 80309, USA Nuris Figueroa-Morales Materials Science Division, Argonne National Laboratory, Argonne, IL, 60439, USA Nuris Figueroa-Morales & Andrey Sokolov Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, 11724, USA Mikhail M. Genkin Andrey Sokolov Igor S. Aranson N.F.M., M.G., A.S., and I.S.A. designed the research, N.F.M. performed the experiments, N.F.M. and A.S. developed data analysis methods, M.G. and I.S.A. developed the theory and simulations, N.F.M., M.G., A.S., and I.S.A. wrote the manuscript. I.S.A. supervised the project. Correspondence to Igor S. Aranson. Communications Physics thanks Chenhui Peng and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Figueroa-Morales, N., Genkin, M.M., Sokolov, A. et al. Non-symmetric pinning of topological defects in living liquid crystals. Commun Phys 5, 301 (2022). https://doi.org/10.1038/s42005-022-01077-w
{{About|the philosophy, economic theory and history of social credit|political parties|Social Credit Party (disambiguation)}} {{Use dmy dates|date=July 2012}} '''Social credit''' is an [[interdisciplinary]] [[Distributism|distributive]] philosophy developed by [[C. H. Douglas]] (1879–1952), a British engineer, who wrote a book by that name in 1924. It encompasses the fields of [[economics]], [[political science]], [[history]], [[accounting]], and [[physics]]. Its policies are designed, according to Douglas, to disperse economic and political power to individuals. Douglas wrote, "Systems were made for men, and not men for systems, and the interest of man which is [[Personal development|self-development]], is above all systems, whether theological, political or economic."<ref>{{cite book |title=Economic Democracy, Fifth Authorised Edition |location=Epsom, Surrey, England |last=Douglas |first= C.H.|year=1974 |publisher=Bloomfield Books |isbn=0-904656-06-3 |pages=18 |url=http://www.archive.org/details/econdemocracy00dougiala |accessdate=12 11 2008 }}</ref> Douglas said that Social Crediters want to build a new civilization based upon "absolute economic security" for the individual, where "they shall sit every man under his vine and under his [[Figs in the Bible|fig tree]]; and none shall make them afraid."<ref name="Douglas">{{Cite news |last=Douglas |first=C.H. |publication-date=1954-55 |year=1954 |title=Cover |periodical=The Douglas Quarterly Review |series=The Fig Tree, New Series |publication-place=Belfast, Northern Ireland |publisher=K.R.P. Publications |volume=1 |issue=June |at=Cover |postscript=<!--None--> }}</ref><ref name="ReferenceB">{{sourcetext|source=Bible|version=King James|book=Micah|chapter=4|verse=4}}</ref> In his words, "what we really demand of existence is not that we shall be put into somebody else's [[Utopia]], but we shall be put in a position to construct a Utopia of our own."<ref name="The Necessity for a National rather than an International Financial System">{{Cite journal |last=Douglas |first=C.H. |publication-date=1933 |title=Major C.H. Douglas Speaks |publication-place=Sydney |publisher=Douglas Social Credit Association |pages=41 |postscript=<!--None--> }}</ref> It was while he was reorganising the work at Farnborough, during World War I, that Douglas noticed that the weekly total costs of goods produced was greater than the sums paid out to individuals for [[wage]]s, [[Salary|salaries]] and [[dividend]]s. This seemed to contradict the theory put forth by classic [[Ricardian economics]], that all costs are distributed simultaneously as [[purchasing power]]. Troubled by the seeming disconnect between the way money flowed and the objectives of industry ("delivery of goods and services", in his view), Douglas set out to apply [[engineering]] methods to the economic system. Douglas collected data from over a hundred large British businesses and found that in nearly every case, except that of companies heading for [[bankruptcy]], the sums paid out in salaries, wages and dividends were always less than the total costs of goods and services produced each week: [[consumer]]s did not have enough income to buy back what they had made. 
He published his observations and conclusions in an article in the ''English Review'', where he suggested: "That we are living under a system of accountancy which renders the delivery of the nation's goods and services to itself a technical impossibility."<ref>"The Delusion of Super-Production", C.H. Douglas, ''English Review'', December 1918.</ref> He later formalized this observation in his A+B theorem. Douglas proposed to eliminate this gap between total prices and total incomes by augmenting consumers' [[purchasing power]] through a National Dividend and a Compensated Price Mechanism. According to Douglas, the true purpose of [[Production (economics)|production]] is [[Consumption (economics)|consumption]], and production must serve the genuine, freely expressed interests of consumers. In order to accomplish this objective, he believed that each citizen should have a beneficial, not direct, inheritance in the communal [[Capital (economics)|capital]] conferred by complete access to consumer goods assured by the National Dividend and Compensated Price.<ref name="Douglas CP&D">{{cite book |title=Credit-Power and Democracy |last=Douglas |first=C.H. |year=1933 |publisher=The Social Credit Press |location=Melbourne, Australia |pages=4, 108| url=http://www.archive.org/details/creditpowerdemoc00douguoft |accessdate=12 11 2008 }}</ref> Douglas thought that consumers, fully provided with adequate [[purchasing power]], will establish the policy of [[Manufacturing|production]] through exercise of their monetary vote.<ref name="Douglas CP&D">{{cite book |title=Credit-Power and Democracy |last=Douglas |first=C.H. |year=1933 |publisher=The Social Credit Press |location=Melbourne, Australia |pages=89–91 }}</ref> In this view, the term [[economic democracy]] does not mean [[workers' control|worker control]] of industry, but democratic control of credit.<ref name="Douglas CP&D">{{cite book |title=Credit-Power and Democracy |last=Douglas |first=C.H. |year=1933 |publisher=The Social Credit Press |location=Melbourne, Australia |pages=4–9 }}</ref> Removing the policy of production from [[Financial institution|banking institutions]], government, and industry, Social Credit envisages an "[[aristocracy]] of producers, serving and [[Accreditation|accredited]] by a democracy of consumers."<ref name="Douglas CP&D">{{cite book |title=Credit-Power and Democracy |last=Douglas |first=C.H. |year=1933 |publisher=The Social Credit Press |location=Melbourne, Australia |pages=95 }}</ref> The policy proposals of social credit attracted widespread interest in the decades between the world wars of the twentieth century because of their relevance to economic conditions of the time. Douglas called attention to the excess of production capacity over consumer purchasing power, an observation that was also made by [[John Maynard Keynes]] in his book, ''[[The General Theory of Employment, Interest and Money]]''.<ref name="Keynes General Theory">{{cite book |title=The General Theory of Employment, Interest and Money |last=Keynes |first=John M. |year=1936 |publisher=MacMillan & Co Ltd. |location=London, England |pages=32, 98–100, 370–371 |isbn=1-56000-149-6}}</ref> While Douglas shared some of Keynes' criticisms of classical economics, his unique remedies were disputed and even rejected by most economists and bankers of the time. Remnants of Social Credit still exist within [[Social Credit Party (disambiguation)|social credit parties]] throughout the world, but not in the purest form originally advanced by Major C. H. Douglas. 
==Economic theory== ===Factors of production and value=== Douglas disagreed with [[classical economists]] who recognised only three [[factors of production]]: [[Land (economics)|land]], [[Labour (economics)|labour]] and [[Physical capital|capital]]. While Douglas did not deny the role of these factors in production, he saw the "[[Intangible cultural heritage|cultural inheritance of society]]" as the primary factor. He defined cultural inheritance as the knowledge, techniques and processes that have been handed down to us incrementally from the origins of civilization (i.e. [[Progress (history)|progress]]). Consequently, mankind does not have to keep "[[reinventing the wheel]]". "We are merely the administrators of that cultural inheritance, and to that extent the cultural inheritance is the property of all of us, without exception.<ref>Douglas, C.H. (22 January 1934). "[http://www.alor.org/Library/MonopolisticIdea.htm#1a The Monopolistic Idea]" address at Melbourne Town Hall, Australia. The Australian League of Rights: Melbourne. Retrieved 28 February 2008. </ref> [[Adam Smith]], [[David Ricardo]] and [[Karl Marx]] claimed that [[labour theory of value|labour creates all value]]. While Douglas did not deny that all costs ultimately relate to labour charges of some sort (past or present), he denied that the present labour of the world creates all wealth. Douglas carefully distinguished between [[value (economics)|value]], [[historical cost|costs]] and [[price]]s. He claimed that one of the factors leading to a misdirection of thought in terms of the nature and function of money was economists' obsession over values and their relation to prices and incomes.<ref> {{cite book |title=Social Credit |last=Douglas |first=C.H. |year=1973 |publisher=Gordon Press |location=New York | pages = 60 | url = http://douglassocialcredit.com/resources/resources/social_credit_by_ch_douglas.pdf |isbn=0-9501126-1-5}} </ref> While Douglas recognized [[use value|"value in use"]] as a legitimate theory of values, he also saw values as subjective and not capable of being measured in an objective manner. Thus he rejected the idea of the role of money as a standard, or measure, of value. Douglas believed that money should act as a medium of communication by which consumers direct the distribution of production. ===Economic sabotage=== Closely associated with the concept of cultural inheritance as a factor of production is the social credit theory of economic sabotage. While Douglas believed the cultural heritage factor of production is primary in increasing wealth, he also believed that economic sabotage is the primary factor decreasing it. The word wealth derives from the Old English word ''wela'', or "well-being", and Douglas believed that all production should increase personal well-being. Therefore, production that does not directly increase personal well-being is waste, or economic sabotage. <blockquote>The economic effect of charging all the waste in industry to the consumer so curtails his purchasing power that an increasing percentage of the product of industry must be exported. The effect of this on the worker is that he has to do many times the amount of work which should be necessary to keep him in the highest standard of living, as a result of an artificial inducement to produce things he does not want, which he cannot buy, and which are of no use to the attainment of his internal standard of well-being.<ref name="dl.lib.brown.edu">{{Cite news |last=Douglas |first=C.H. 
|publication-date=1919 |title=A Mechanical View of Economics |periodical=The New Age |series=1373 |publication-place=38 Cursitor Street, London |publisher=The New Age Press |volume=XXIV |issue=9 |pages=136 |url=http://dl.lib.brown.edu/pdfs/1140814692791748.pdf |format=PDF|accessdate=2008-03-14 |postscript=<!--None-->}}</ref></blockquote> By modern methods of accounting, the consumer is forced to pay for all the costs of production, including waste. The economic effect of charging the consumer with all waste in industry is that the consumer is forced to do much more work than is necessary. Douglas believed that wasted effort could be directly linked to confusion in regards to the purpose of the economic system, and the belief that the economic system exists to provide employment in order to distribute goods and services. <blockquote>But it may be advisable to glance at some of the proximate causes operating to reduce the return for effort ; and to realise the origin of most of the specific instances, it must be borne in mind that the existing economic system distributes goods and services through the same agency which induces goods and services, i.e., payment for work in progress. In other words, if production stops, distribution stops, and, as a consequence, a clear incentive exists to produce useless or superfluous articles in order that useful commodities already existing may be distributed. This perfectly simple reason is the explanation of the increasing necessity of what has come to be called economic sabotage ; the colossal waste of effort which goes on in every walk of life quite unobserved by the majority of people because they are so familiar with it ; a waste which yet so over-taxed the ingenuity of society to extend it that the climax of war only occurred in the moment when a culminating exhibition of organised sabotage was necessary to preserve the system from spontaneous combustion.<ref>{{cite book |title=Economic Democracy, Fifth Authorised Edition |location=Epsom, Surrey, England |last=Douglas |first= C.H.|year=1974 |publisher=Bloomfield Books |isbn=0-904656-06-3 |pages=74 |url=http://www.archive.org/details/econdemocracy00dougiala |accessdate=12-11-2008 }}</ref></blockquote> ===Purpose of an economy=== Douglas claimed there were three possible policy alternatives with respect to the economic system: <blockquote>1. The first of these is that it is a disguised Government, of which the primary, though admittedly not the only, object is to impose upon the world a system of thought and action. 2. The second alternative has a certain similarity to the first, but is simpler. It assumes that the primary objective of the industrial system is the provision of employment. 3. And the third, which is essentially simpler still, in fact, so simple that it appears entirely unintelligible to the majority, is that the object of the industrial system is merely to provide goods and services.<ref>{{cite web|url=http://www.alor.org/Library/Warning%20Democracy.htm#1a |title=Warning Democracy |accessdate=2008-12-18 |author=C.H. Douglas |publisher=Australian League of Rights}}</ref></blockquote> Douglas believed that it was the third policy alternative upon which an economic system should be based, but confusion of thought has allowed the industrial system to be governed by the first two objectives. 
If the purpose of our economic system is to deliver the maximum amount of goods and services with the least amount of effort, then the ability to deliver goods and services with the least amount of employment is actually desirable. Douglas proposed that unemployment is a logical consequence of machines replacing labour in the productive process, and any attempt to reverse this process through policies designed to attain full employment directly sabotages our cultural inheritance. Douglas also believed that the people displaced from the industrial system through the process of mechanization should still have the ability to consume the fruits of the system, because he suggested that we are all inheritors of the cultural inheritance, and his proposal for a national dividend is directly related to this belief. ===The creditary nature of money=== Douglas criticized classical economics because many of the theories are based upon a [[Barter|barter economy]], whereas the modern economy is a monetary one. Initially, money originated from the productive system, when cattle owners punched leather discs which represented a head of cattle. These discs could then be exchanged for corn, and the corn producers could then exchange the disc for a head of cattle at a later date. The word "pecuniary"<ref>[http://www.billcasselman.com/unpublished_works/cow_words_one.htm billcasselman.com]</ref> comes from the Latin ''pecunia'', originally and literally meaning "cattle" (related to ''pecus'', meaning "beast").<ref>{{Cite journal |last=Pollock |first=Fredrick |publication-date=1996 |title=The History of English Law Before the Time of Edward I |publisher=Lawbook Exchange Ltd |pages=151 |postscript=<!--None--> }}</ref> Today, the productive system and the monetary system are two separate entities. Douglas demonstrated that loans create [[Deposit account|deposits]], and presented [[mathematical proof]] in his book ''Social Credit''.<ref>{{cite web |url=http://www.mondopolitico.com/library/socialcredit/p2c1.htm |title=The Working of the Money System |accessdate=2008-02-27 |author=C.H. Douglas |work=Social Credit |publisher=Mondo Politico}}</ref> Bank credit comprises the vast majority of money, and is created every time a bank makes a loan.<ref>{{cite web |url=http://www.bankofcanada.ca/wp-content/uploads/2010/11/canada_money_supply.pdf |title=The Bank in Brief: Canada's Money Supply |accessdate=2008-02-28 |publisher=Bank of Canada}}</ref> Douglas was also one of the first to understand the creditary nature of money. The word [[Credit (finance)|credit]] derives from the Latin ''credere'', meaning "to believe". "The essential quality of money, therefore, is that a man shall believe that he can get what he wants by the aid of it."<ref>{{Cite journal |last=Douglas |first=C.H. |publication-date=22 April 1927 |title=Engineering, Money and Prices |place=Institution of Mechanical Engineers |publisher=Warning Democracy |pages=15 |url=http://www.alor.org/Library/Warning%20Democracy.htm#1a |accessdate=2008-02-28 |postscript=<!--None--> }}</ref> According to economists, money is a [[medium of exchange]]. Douglas argued that this may have once been the case when the majority of wealth was produced by individuals who subsequently exchanged it with each other. But in modern economies, [[division of labour]] splits production into multiple processes, and wealth is produced by people working in association with each other. 
For instance, an automobile worker does not produce any wealth (i.e., the automobile) by himself, but only in conjunction with other auto workers, the producers of roads, gasoline, insurance, etc. In this view, wealth is a pool upon which people can draw, and money becomes a [[Voucher|ticketing system]]. The efficiency gained by individuals cooperating in the productive process was coined by Douglas as the "[[unearned increment]] of association" – historic accumulations of which constitute what Douglas called the cultural heritage. The means of drawing upon this pool is money distributed by the banking system. Douglas believed that money should not be regarded as a commodity but rather as a ticket, a means of distribution of production.<ref name="UseOfMoney">Douglas, C.H. (13 February 1934). "[http://www.alor.org/Library/TheUseofMoney.htm#1a The Use of Money]" address at St. James' Theatre, Christchurch, New Zealand. The Australian League of Rights: Melbourne. Retrieved 28 February 2008.</ref> "There are two sides to this question of a ticket representing something that we can call, if we like, a value. There is the ticket itself – the money which forms the thing we call '[[effective demand]]' – and there is something we call a price opposite to it."<ref name="UseOfMoney"/> Money is effective demand, and the means of reclaiming that money are prices and taxes. As real capital replaces labour in the process of modernization, money should become increasingly an instrument of distribution. The idea that money is a medium of exchange is related to the belief that all wealth is created by the current labour of the world, and Douglas clearly rejected this belief, stating that the cultural inheritance of society is the primary factor in the creation of wealth, which makes money a distribution mechanism, not a medium of exchange. Douglas also claimed the problem of production, or [[scarcity]], had long been solved. The new problem was one of distribution. However; so long as orthodox economics makes scarcity a value, banks will continue to believe that they are creating value for the money they produce by making it scarce.<ref>{{cite book |title=Social Credit |last=Douglas |first=C.H. |year=1973 |publisher=Gordon Press |location=New York |pages=47 |url=http://www.mondopolitico.com/library/socialcredit/p1c2.htm |isbn=0-9501126-1-5 }}</ref> Douglas criticized the banking system on two counts: # for being a form of government which has been [[Centralization|centralizing]] its power for centuries, and # for claiming ownership of the money they create. The former Douglas identified as being anti-social in policy.<ref>{{cite web|url=http://douglassocialcredit.com/resources/resources/possible_social_credit_in_alberta.pdf|format=PDF|title=FIRST INTERIM REPORT ON THE POSSIBILITIES OF THE APPLICATION OF SOCIAL CREDIT PRINCIPLES TO THE PROVINCE OF ALBERTA|accessdate=2008-12-18 |author=C.H. Douglas |publisher=Social Credit Secretariat}}</ref> The latter he claimed was equivalent to claiming ownership of the nation.<ref>Douglas, C.H. (24 November 1936). "[http://www.alor.org/Library/Dictatorshipbytaxation.htm#1a Dictatorship by Taxation]" address at Ulster Hall, Belfast. The Australian: Melbourne. Retrieved 28 February 2008.</ref> According to Douglas, money is merely an [[Abstract object|abstract]] representation of the real credit of the community, which is the ability of the community to deliver goods and [[service (economics)|services]], when and where they are required. 
==The A + B theorem== In January 1919, ''A Mechanical View of Economics'' by C.H. Douglas was the first article to appear in the ''New Age'', edited by [[Alfred Richard Orage|A.R. Orage]], critiquing the methods by which economic activity is typically measured: <blockquote>It is not the purpose of this short article to depreciate the services of accountants; in fact, under the existing conditions probably no body of men has done more to crystallise the data on which we carry on the business of the world; but the utter confusion of thought which has undoubtedly arisen from the calm assumption of the book-keeper and the accountant that he and he alone was in a position to assign positive or negative values to the quantities represented by his figures is one of the outstanding curiosities of the industrial system; and the attempt to mould the activities of a great empire on such a basis is surely the final condemnation of an out-worn method.</blockquote> In 1920, Douglas presented the A + B theorem in his book, ''Credit-Power and Democracy'', in critique of accounting methodology pertinent to income and prices. In the fourth, Australian Edition of 1933, Douglas states: <blockquote>A factory or other productive organization has, besides its economic function as a producer of goods, a financial aspect – it may be regarded on the one hand as a device for the distribution of purchasing-power to individuals through the media of wages, salaries, and dividends; and on the other hand as a manufactory of prices – financial values. From this standpoint, its payments may be divided into two groups: :Group A: ''All payments made to individuals (wages, salaries, and dividends).'' :Group B: ''All payments made to other organizations (raw materials, bank charges, and other external costs).'' Now the rate of flow of purchasing-power to individuals is represented by A, but since all payments go into prices, the rate of flow of prices cannot be less than A+B. The product of any factory may be considered as something which the public ought to be able to buy, although in many cases it is an intermediate product of no use to individuals but only to a subsequent manufacture; but since A will not purchase A+B; a proportion of the product at least equivalent to B must be distributed by a form of purchasing-power which is not comprised in the description grouped under A. It will be necessary at a later stage to show that this additional purchasing power is provided by loan credit (bank overdrafts) or export credit.<ref name="Douglas CP&D">{{cite book |title=Credit-Power and Democracy |last=Douglas |first=C.H. |year=1933 |publisher=The Social Credit Press |location=Melbourne, Australia |pages=22–23 }}</ref></blockquote> Beyond [[empirical]] evidence, Douglas claims this [[deductive]] [[theorem]] demonstrates that total prices rise faster than total incomes when regarded as a [[Stock and flow|flow]]. In his pamphlet entitled "The New and the Old Economics", Douglas describes the cause of "B" payments: <blockquote>I think that a little consideration will make it clear that in this sense an overhead charge is any charge in respect of which the actual distributed purchasing power does not still exist, and that practically this means any charge created at a further distance in the past than the period of cyclic rate of circulation of money. 
There is no fundamental difference between tools and intermediate products, and the latter may therefore be included.<ref>{{cite book |title=The New and the Old Economics |last=Douglas |first=C.H. |url=http://douglassocialcredit.com/resources/resources/New%20and%20Old%20Economics--C%20H%20Douglas.pdf |publisher=Tidal Publications |location=Sydney, n.d. }}</ref></blockquote> In 1932, Douglas estimated the cyclic rate of circulation of money to be approximately three weeks. The cyclic rate of circulation of money measures the amount of time required for a loan to pass through the productive system and return to the bank. This can be calculated by determining the amount of [[Clearing (finance)|clearings]] through the bank in a year divided by the average amount of [[deposit account|deposits]] held at the banks (which varies very little). The result is the number of times money must turnover in order to produce these [[Clearing house (finance)|clearing house]] figures. In a testimony before the Alberta Agricultural Committee of the Alberta Legislature in 1934, Douglas said: <blockquote>Now we know there are an increasing number of charges which originated from a period much anterior to three weeks, and included in those charges, as a matter of fact, are most of the charges made in, respect of purchases from one organization to another, but all such charges as capital charges (for instance, on a railway which was constructed a year, two years, three years, five or ten years ago, where charges are still extant), cannot be liquidated by a stream of purchasing power which does not increase in volume and which has a period of three weeks. The consequence is, you have a piling up of debt, you have in many cases a diminution of purchasing power being equivalent to the price of the goods for sale.<ref name="The Douglas System">{{Cite journal |last=Douglas |first=C.H. |publication-date=1934 |title=The Douglas System of Social Credit: Evidence taken by the Agricultural Committee of the Alberta Legislature, Session 1934 |publication-place=Edmonton |publisher=Legislative Assembly of Alberta |pages=90 |postscript=<!--None--> }}</ref></blockquote> According to Douglas, the major consequence of the problem he identified in his A+B theorem is exponentially increasing debt. Further, he believed that society is forced to produce goods that consumers either do not want or cannot afford to purchase. The latter represents a favorable [[balance of trade]], meaning a country exports more than it imports. But not every country can pursue this objective at the same time, as one country must import more than it exports when another country exports more than it imports. Douglas proposed that the long-term consequence of this policy is a [[trade war]], typically resulting in real war – hence, the social credit admonition, "He who calls for Full-Employment calls for War!", expressed by the [[Social Credit Party of Great Britain and Northern Ireland]], led by [[John Hargrave]]. The former represents excessive capital production and/or military build-up. Military buildup necessitates either the violent use of weapons or a superfluous accumulation of them. Douglas believed that excessive capital production is only a temporary correction, because the cost of the capital appears in the cost of consumer goods, or taxes, which will further exacerbate future gaps between income and prices. <blockquote>In the first place, these capital goods have to be sold to someone. They form a reservoir of forced exports. 
They must, as intermediate products, enter somehow into the price of subsequent ultimate products and they produce a position of most unstable equilibrium, since the life of capital goods is in general longer than that of consumable goods, or ultimate products, and yet in order to meet the requirements for money to buy the consumable goods, the rate of production of capital goods must be continuously increased.<ref>{{Cite news |last=Douglas |first=C.H. |publication-date=1925 |title=A + B AND THE BANKERS |periodical=The New Age |publication-place=38 Cursitor Street, London |publisher=The New Age Press |url=http://douglassocialcredit.com/resources/resources/A+B%20and%20the%20Bankers--CH%20Douglas,%20New%20Age%201925.pdf|accessdate=2010-08-08}}</ref></blockquote> ===The A + B theorem and a cost accounting view of inflation=== The replacement of labour by capital in the productive process implies that overhead charges (B) increase in relation to income (A), because "'B' is the financial representation of the lever of capital".<ref name="Douglas CP&D">{{cite book |title=Credit-Power and Democracy |last=Douglas |first=C.H. |year=1933 |publisher=The Social Credit Press |location=Melbourne, Australia |pages=25 }}</ref> As Douglas stated in his first article, "The Delusion of Superproduction":<ref name="douglassocialcredit.com">{{cite web |url=http://douglassocialcredit.com/resources/resources/the_delusion_of_super-production_douglas.pdf |format=PDF|title=The Delusion of Superproduction | accessdate=2008-12-11 |author=C.H. Douglas |work=The Delusion of Superproduction |publisher=The English Review |date=December 1918}}</ref> <blockquote>The factory cost – not the selling price – of any article under our present industrial and financial system is made up of three main divisions-direct labor cost, material cost and overhead charges, the ratio of which varies widely, with the "modernity" of the method of production. For instance, a sculptor producing a work of art with the aid of simple tools and a block of marble has next to no overhead charges, but a very low rate of production, while a modern screw-making plant using automatic machines may have very high overhead charges and very low direct labour cost, or high rates of production. Since increased industrial output per individual depends mainly on tools and method, it may almost be stated as a law that intensified production means a progressively higher ratio of overhead charges to direct labour cost, and, apart from artificial reasons, this is simply an indication of the extent to which machinery replaces manual labour, as it should.</blockquote> If overhead charges are constantly increasing relative to income, any attempt to stabilize or increase income is met with rising prices. If income is constant or increasing, and overhead charges are continuously increasing due to technological advancement, then prices, which equal income plus overhead charges, must also increase. Further, any attempt to stabilize or decrease prices must be met by falling incomes according to this analysis. As the [[Phillips Curve]] demonstrates, inflation and unemployment are trade-offs, unless prices are reduced from monies derived from outside the productive system. According to Douglas's A+B theorem, the systemic problem of rising prices, or inflation, is not "too much money chasing too few goods", but is the increasing rate of overhead charges in production due to the replacement of labour by capital in industry combined with a policy of full employment. 
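A purely illustrative numerical example (not taken from Douglas's own writings) may make the cost-accounting argument concrete. Suppose that during one costing period a factory distributes A = £100 to individuals as wages, salaries and dividends, and makes B = £50 of payments to other organisations for materials, machinery charges and bank interest. The prices it must eventually recover are at least A + B = £150, so the incomes it has itself distributed can meet only
: <math> \frac{A}{A+B} = \frac{100}{150} = \frac{2}{3} </math>
of those prices. If mechanisation later raises the overhead component so that B grows to £100 while A remains £100, the fraction falls to 1/2. On Douglas's reasoning the widening gap must be bridged by new loan credit or by exports, and recorded prices rise relative to distributed incomes even though no additional money is "chasing" the goods.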
Douglas did not suggest that inflation cannot be caused by too much money chasing too few consumer goods, but according to his analysis this is not the only cause of inflation, and inflation is systemic according to the rules of cost accountancy given overhead charges are constantly increasing relative to income. In other words inflation can exist even if consumers have insufficient purchasing power to buy back all of production. Douglas claimed that there were two limits which governed prices, a lower limit governed by the cost of production, and an upper limit governed by what an article will fetch on the open market. Douglas suggested that this is the reason why deflation is regarded as a problem in orthodox economics because bankers and businessmen were very apt to forget the lower limit of prices. ===Compensated price and national dividend=== Douglas proposed to eliminate the gap between purchasing power and prices by increasing consumer purchasing power with credits which do not appear in prices in the form of a price rebate and a dividend. Formally called a "Compensated Price" and a "National (or Consumer) Dividend", a National Credit Office would be charged with the task of calculating the size of the rebate and dividend by determining a national [[balance sheet]], and calculating [[Aggregate data|aggregate]] production and consumption statistics. The price rebate is based upon the observation that the real cost of production is the mean rate of consumption over the mean rate of production for an equivalent period of time. : <math> \text{real cost (production)} = M \cdot \cfrac{\int_{T_1}^{T_2} \frac{dC}{dt} \, dt}{\int_{T_1}^{T_2} \frac{dP}{dt} \, dt}</math> where * ''M'' = money distributed for a given programme of production, * ''C'' = consumption, * ''P'' = production. The physical cost of producing something is the materials and [[Capital (economics)|capital]] that were consumed in its production, plus that amount of consumer goods labour consumed during its production. This total consumption represents the physical, or real, cost of production. : <math> \text{true price } (\$) = \text{cost } (\$) \cdot \dfrac{\text{consumption } (\$) + \text{depreciation } (\$)}{\text{credit } (\$) + \text{production } (\$)}</math> where * Consumption = cost of consumer goods, * Depreciation = depreciation of real capital, * Credit = Credit Created, * Production = cost of total production Since fewer inputs are consumed to produce a unit of output with every improvement in process, the real cost of production falls over time. As a result, prices should also fall with the progression of time. "As society's capacity to deliver goods and services is increased by the use of plant and still more by scientific progress, and decreased by the production, maintenance, or depreciation of it, we can issue credit, in costs, at a greater rate than the rate at which we take it back through prices of ultimate products, if capacity to supply individuals exceeds desire."<ref name="Douglas CP&D">{{cite book |title=Credit-Power and Democracy |last=Douglas |first=C.H. |year=1933 |publisher=The Social Credit Press |location=Melbourne, Australia |pages=132 }}</ref> Based on his conclusion that the real cost of production is less than the financial cost of production, the Douglas price rebate (Compensated Price) is determined by the ratio of consumption to production. 
Since consumption over a period of time is typically less than production over the same period of time in any industrial society, the real cost of goods should be less than the financial cost. For example, if the money cost of a good is $100, and the ratio of consumption to production is 3/4, then the real cost of the good is $100(3/4) = $75. As a result, if a consumer spent $100 for a good, the National Credit Authority would rebate the consumer $25. The good costs the consumer $75, the retailer receives $100, and the consumer receives the difference of $25 via new credits created by the National Credit Authority. The National Dividend is justified by the displacement of labour in the productive process due to technological increases in productivity. As human labour is increasingly replaced by machines in the productive process, Douglas believed people should be free to consume while enjoying increasing amounts of leisure, and that the Dividend would provide this [[Economic freedom|freedom]]. ===Critics of the A + B theorem and rebuttal=== Critics of the theorem, such as J.M. Pullen, Hawtrey and J.M Keynes argue there is no difference between A and B payments. Other critics, such as Gary North, argue that social credit policies are inflationary. "The A + B theorem has met with almost universal rejection from academic economists on the grounds that, although B payments may be made initially to "other organizations," they will not necessarily be lost to the flow of available purchasing power. A and B payments overlap through time. Even if the B payments are received and spent before the finished product is available for purchase, current purchasing power will be boosted by B payments received in the current production of goods that will be available for purchase in the future."<ref>{{cite news |title=Major Douglas and Social Credit: A Reappraisal |last=Pullen |first=J. M. |author2=G. 0. Smith |year=1997 |publisher=Duke University Press |pages=219}}</ref> A.W. Joseph replied to this specific criticism in a paper given to the Birmingham Actuarial Society, "Banking and Industry": <blockquote>Let A1+B1 be the costs in a period to time of articles produced by factories making consumable goods divided up into A1 costs which refer to money paid to individuals by means of salaries, wages, dividends, etc., and B1 costs which refer to money paid to other institutions. Let A2, B2 be the corresponding costs of factories producing capital equipment. The money distributed to individuals is A1+A2 and the cost of the final consumable goods is A1+B1. If money in the hands of the public is to be equal to the costs of consumable articles produced then A1+A2 = A1+B1 and therefore A2=B1. Now modern science has brought us to the stage where machines are more and more taking the place of human labour in producing goods, i.e. A1 is becoming less important relatively to B1 and A2 less important relatively to B2.</blockquote> <blockquote>In symbols if B1/A1 = k1 and B2/A2 = k2 both k1 and k2 are increasing.</blockquote> <blockquote>Since A2=B1 this means that (A2+B2)/(A1+B1)= (1+k2)*A2/(1+1/k1)*B1 = (1+k2)/(1+1/k1) which is increasing.</blockquote> <blockquote>Thus in order that the economic system should keep working it is essential that capital goods should be produced in ever increasing quantity relatively to consumable goods. As soon as the ratio of capital goods to consumable goods slackens, costs exceed money distributed, i.e. 
the consumer is unable to purchase the consumable goods coming on the market."</blockquote> And in a reply to Dr. Hobson, Douglas restated his central thesis: "To reiterate categorically, the theorem criticised by Mr. Hobson: the wages, salaries and dividends distributed during a given period do not, and cannot, buy the production of that period; that production can only be bought, i.e., distributed, under present conditions by a draft, and an increasing draft, on the purchasing power distributed in respect of future production, and this latter is mainly and increasingly derived from financial credit created by the banks." <ref>{{cite book |title=The Douglas Theory; a reply to Mr. J.A. Hobson |last=Douglas |first=C.H.|year=1922 |publisher=London: Cecil Palmer |pages=5 |url=http://www.archive.org/details/douglastheoryrep00doug/}}</ref> Incomes are paid to workers during a multi-stage program of production. According to the convention of accepted orthodox rules of accountancy, those incomes are part of the financial cost and price of the final product. For the product to be purchased with incomes earned in respect of its manufacture, all of these incomes would have to be saved until the product's completion. Douglas argued that incomes are typically spent on past production to meet the present needs of living, and will not be available to purchase goods completed in the future – goods which must include the sum of incomes paid out during their period of manufacture in their price. Consequently, this does not liquidate the financial cost of production inasmuch as it merely passes charges of one accountancy period on as mounting charges against future periods. In other words, according to Douglas, supply does not create enough demand to liquidate all the costs of production. Douglas denied the validity of [[Say's Law]] in economics. While John Maynard Keynes referred to Douglas as a "private, perhaps, but not a major in the brave army of heretics",<ref name="Keynes 1936 publisher=MacMillan & Co Ltd">{{cite book |title=The General Theory of Employment, Interest and Money |last=Keynes |first=John M. |year=1936 |publisher=MacMillan & Co Ltd. |location=London, England |url=http://www.marxists.org/reference/subject/economics/keynes/general-theory/ |isbn=1-56000-149-6}}</ref> he did state that Douglas "is entitled to claim, as against some of his orthodox adversaries, that he at least has not been wholly oblivious of the outstanding problem of our economic system."<ref name="Keynes 1936 publisher=MacMillan & Co Ltd"/> While Keynes said that Douglas's A+B theorem "includes much mere mystification", he reaches a similar conclusion to Douglas when he states: <blockquote>Thus the problem of providing that new capital-investment shall always outrun capital-disinvestment sufficiently to fill the gap between net income and consumption, presents a problem which is increasingly difficult as capital increases. New capital-investment can only take place in excess of current capital-disinvestment if future expenditure on consumption is expected to increase. Each time we secure to-day's equilibrium by increased investment we are aggravating the difficulty of securing equilibrium to-morrow.</blockquote><ref name="Keynes 1936 publisher=MacMillan & Co Ltd"/> The criticism that social credit policies are inflationary is based upon what economists call the [[quantity theory of money]], which states that the quantity of money multiplied by its velocity of circulation equals total purchasing power. 
Douglas was quite critical of this theory stating, "The velocity of the circulation of money in the ordinary sense of the phrase, is – if I may put it that way – a complete myth. No additional purchasing power at all is created by the velocity of the circulation of money. The rate of transfer from hand-to-hand, as you might say, of goods is increased, of course, by the rate of spending, but no more costs can be canceled by one unit of purchasing power than one unit of cost. Every time a unit of purchasing power passes through the costing system it creates a cost, and when it comes back again to the same costing system by the buying and transfer of the unit of production to the consuming system it may be cancelled, but that process is quite irrespective of what is called the velocity of money, so the categorical answer is that I do not take any account of the velocity of money in that sense."<ref name="Douglas 1933">{{Cite news |last=Douglas |first=C.H. |publication-date=1933 |year=1933 |title=The Birmingham Debate|periodical=The New Age| volume =Vol. LII, No. 23| url=http://douglassocialcredit.com/resources/resources/Douglas-Hawtrey%20Birmingham%20debate%20Final%20Edit.pdf |postscript=<!--None-->}}</ref> The Alberta Social Credit government published in a committee report what was perceived as an error in regards to this theory: "The fallacy in the theory lies in the incorrect assumption that money 'circulates', whereas it is issued against production, and withdrawn as purchasing power as the goods are bought for consumption."<ref>{{cite web |url=http://www.geocities.com/socredus/compendium/alberta-march-1945.txt |title=The Alberta Post-War Reconstruction Committee Report of the Subcommittee on Finance |accessdate=2008-03-01 |work=Simple Text |archiveurl=http://www.webcitation.org/query?url=http://www.geocities.com/socredus/compendium/alberta-march-1945.txt&date=2009-10-26+02:52:50|archivedate=2009-10-26}}</ref> Other critics argue that if the gap between income and prices exists as Douglas claimed, the economy would have collapsed in short order. They also argue that there are periods of time in which purchasing power is in excess of the price of consumer goods for sale. Douglas replied to these criticisms in his testimony before the Alberta Agricultural Committee: <blockquote>What people who say that forget is that we were piling up debt at that time at the rate of ten millions sterling a day and if it can be shown, and it can be shown, that we are increasing debt continuously by normal operation of the banking system and the financial system at the present time, then that is proof that we are not distributing purchasing power sufficient to buy the goods for sale at that time; otherwise we should not be increasing debt, and that is the situation.<ref name="The Douglas System"/></blockquote> ==Political theory== C.H. Douglas defined democracy as the "will of the people", not rule by the majority,<ref name ="ND">{{cite web|url=http://www.alor.org/Library/NatureofDemocracy.htm#1a |title=The Nature of Democracy |accessdate=2008-04-13 |author=C.H. Douglas |publisher=Australian League of Rights}}</ref> suggesting that social credit could be implemented by any political party supported by effective public demand. Once implemented to achieve a realistic integration of means and ends, party politics would cease to exist. 
Traditional [[ballot box]] democracy is incompatible with Social Credit, which assumes the right of individuals to choose freely one thing at a time, and to contract out of unsatisfactory associations. Douglas advocated what he called the "responsible vote", where anonymity in the voting process would no longer exist. "The individual voter must be made individually responsible, not collectively taxable, for his vote."<ref name="RC">{{cite web |url=http://www.alor.org/Library/RealisticConstitutionalism.htm#1a |title=Realistic Constitutionalism |accessdate=2008-02-28 |author=C.H. Douglas |publisher=Australian League of Rights}}</ref> Douglas believed that party politics should be replaced by a "union of electors" in which the only role of an elected official would be to implement the popular will.<ref name=sten>{{Cite journal | url = http://books.google.com/?id=_RqEg6BLilUC&pg=PA12&lpg=PA12&dq=%22union+of+electors%22+douglas | title = Social Discredit: Anti-Semitism, Social Credit, and the Jewish Response | isbn = 9780773520103 | author1 = Stingel | first1 = Janine | date = 2000-02-24}}</ref> Douglas believed that the implementation of such a system was necessary as otherwise the government would be the tool of international financiers. Douglas also opposed the [[secret ballot]] arguing that it led to electoral irresponsibility, calling it a "Jewish" technique used to ensure [[Barabbas]] was freed leaving Christ to be crucified.<ref name=sten/> Douglas considered the constitution an organism, not an organization.<ref name="RC">{{cite web|url=http://www.alor.org/Library/RealisticConstitutionalism.htm#1a |title = Realistic Constitutionalism |accessdate=2008-04-13 |author=C.H. Douglas |publisher=Australian League of Rights}}</ref> In this view, establishing the [[Superior (hierarchy)|supremacy]] of [[common law]] is essential to ensure protection of [[individual rights]] from an all-powerful parliament. Douglas also believed the effectiveness of [[Her Majesty's Government|British government]] is structurally determined by application of a Christian concept known as [[Trinitarianism]]: "In some form or other, sovereignty in the [[British Isles]] for the last two thousand years has been Trinitarian. Whether we look on this Trinitarianism under the names of King, Lords and Commons or as Policy, Sanctions and Administration, the Trinity-in-Unity has existed, and our national success has been greatest when the balance (never perfect) has been approached."<ref name="RC"/> Opposing the formation of Social Credit parties, C.H. Douglas believed a group of elected amateurs should never direct a group of competent experts in technical matters.<ref>Douglas, C.H. (7 March 1936). "[http://www.alor.org/Library/Approachtoreality.htm#1a The Approach to Reality]" address at Westminster. Australian League of Rights: Melbourne. Retrieved 28 February 2008.</ref> While experts are ultimately responsible for achieving results, the goal of politicians should be to pressure those experts to deliver policy results desired by the populace. According to Douglas, "the proper function of Parliament is to force all activities of a public nature to be carried on so that the individuals who comprise the public may derive the maximum benefit from them. Once the idea is grasped, the criminal absurdity of the [[party system]] becomes evident."<ref>Douglas, C.H. (30 October 1936). "[http://www.alor.org/Library/Tragedyofhumaneffort.htm#1a The Tragedy of Human Effort]" address at Central Hall, Liverpool. 
Australian League of Rights: Melbourne. Retrieved on 2008–02-28.</ref> ==History== C.H. Douglas was a [[civil engineer]] who pursued his higher education at [[Cambridge University]]. His early writings appeared most notably in the British intellectual journal ''[[The New Age]]''. The editor of that publication, [[Alfred Richard Orage|Alfred Orage]], devoted ''The New Age'' and later ''The New English Weekly'' to the promulgation of Douglas's ideas until his death on the eve of his BBC speech on social credit, 5 November 1934, in the ''Poverty in Plenty'' Series. Douglas's first book, ''Economic Democracy'', was published in 1920, shortly after his article ''The Delusion of Super-Production''<ref name="douglassocialcredit.com"/> appeared in 1918 in the ''English Review''. Among Douglas's other early works were ''The Control and Distribution of Production'', ''Credit-Power and Democracy'', ''Warning Democracy'' and ''The Monopoly of Credit''. Of considerable interest is the evidence he presented to the Canadian House of Commons Select Committee on Banking and Commerce<ref>{{cite web |url=http://douglassocialcredit.com/resources/resources/major_douglas-testimony_ottawa_1923.pdf |format=PDF|title=Select Committee on Banking and Commerce | accessdate=2008-12-11 | year=1923}}</ref> in 1923, to the British Parliamentary [[Macmillan Committee|Macmillan Committee on Finance and Industry]] in 1930, which included exchanges with economist [[John Maynard Keynes]], and to the Agricultural Committee of the [[Legislative Assembly of Alberta|Alberta Legislature]] in 1934 during the term of the [[United Farmers of Alberta]] Government in that [[Provinces and territories of Canada|Canadian province]]. The writings of C.H. Douglas spawned a worldwide movement, most prominent in the British Commonwealth, with beachheads in Europe and activities in the United States where Orage, during his sojourn there, promoted Douglas's ideas. In the United States, the New Democracy group was headed by the American author [[Gorham Munson]] who contributed a major book on social credit titled ''Aladdin's Lamp: The Wealth of the American People''. While Canada and [[New Zealand]] had electoral successes with "social credit" political parties, the movement in England and Australia was primarily devoted to pressuring existing parties to implement social credit. This function was performed especially by Douglas's social credit secretariat in England and the [[Australian League of Rights|Commonwealth Leagues of Rights]] in Australia. Douglas continued writing and contributing to the secretariat's journals, initially social credit and shortly thereafter ''the social crediter'' (which continues to be published by the Secretariat) for the remainder of his lifetime, concentrating more on political and philosophical issues in his later years. ===Political history=== In early years of the movement, [[Labour Party (UK)|Labour Party]] leadership resisted pressure from Trade unionists to implement social credit, as hierarchical views of [[Fabian socialism]], economic growth and [[full employment]], were incompatible with the National Dividend and abolishment of [[wage slavery]] suggested by Douglas. 
In an effort to discredit the social credit movement, one leading Fabian, Sidney Webb, is said to have declared that he didn't care whether Douglas was technically correct or not – they simply did not like his policy.<ref>{{cite web| url=http://www.richardccook.com/wp-content/uploads/2009/01/ch-douglas-1879-52-1979-centenary.pdf|last=Lee |first=Jeremy |publication-date=July 1972 |date=July 1972 |title=C.H. Douglas The Man and the Vision |publisher=Australian League of Rights |page=6}}</ref> In 1935 the first [[Social Credit Party of Alberta|"Social Credit"]] government was elected in [[Alberta]], Canada under the leadership of [[William Aberhart]]. A book by Maurice Colbourne entitled ''The Meaning of Social Credit'' convinced Aberhart that the theories of C.H. Douglas were essential for Alberta's recovery from the [[Great Depression]]. Aberhart added a heavy dose of [[fundamentalist Christianity]] to Douglas' theories; the [[Canadian social credit movement]], which was largely nurtured in Alberta, thus acquired a strong [[social conservatism|social conservative]] tint that it retains to this day. Having counselled the previous [[United Farmers of Alberta]] provincial government, Douglas became an advisor to Aberhart, but withdrew shortly after due to strategic differences. Aberhart sought orthodox counsel with respect to the Province's finances, and the strained correspondence between them was published by Douglas in his book, ''The Alberta Experiment''.<ref>{{cite book |title=The Alberta Experiment |last=Douglas |first=C.H. |year=1937 |publisher=Eyre and Spottiswoode |location=London |isbn=0-949667-18-8 }}</ref> While the [[Premier of Alberta|Premier]] wanted to balance the provincial budget, Douglas argued the whole concept of a "[[balanced budget]]" was inconsistent with Social Credit principles. Douglas stated that, under existing rules of financial cost accountancy, balancing all budgets within an economy simultaneously is an arithmetic impossibility.<ref name="Douglas 346-7">{{cite web| url=http://www.cooperativeindividualism.org/douglas-c-h_fallacy-of-a-balanced-budget.html|last=Douglas |first=C.H. |publication-date=28 July 1932 |date=28 July 1932 |title=The Fallacy of a Balanced Budget |work=The New English Weekly |pages=346–7 }}</ref> In a letter to Aberhart, Douglas stated:<ref name="Douglas 346-7"/> <blockquote>This seems to be a suitable occasion on which to emphasise the proposition that a Balanced Budget is quite inconsistent with the use of Social Credit (i.e., Real Credit – the ability to deliver goods and services 'as, when and where required') in the modern world, and is simply a statement in accounting figures that the progress of the country is stationary, i.e., that it consumes exactly what it produces, including [[capital asset]]s. The result of the acceptance of this proposition is that all [[capital appreciation]] becomes quite automatically the property of those who create and issue of money [i.e., the banking system] and the necessary unbalancing of the Budget is covered by Debts.</blockquote> Douglas sent two other expert social credit technical advisors from the United Kingdom, L. Denis Byrne and George F. Powell. But all attempts to pass social credit legislation were ruled [[ultra vires]] by the [[Supreme Court of Canada]] and [[Privy Council]] in London. Based on the monetary theories of [[Silvio Gesell]], William Aberhart issued a currency substitute known as [[prosperity certificates]]. 
But these [[scrip]]s actually depreciated in value the longer they were held,<ref>{{cite web |url=http://www.glenbow.org/exhibitions/online/libhtm/prosp.htm |title=Prosperity Certificate |accessdate=2008-02-27 |author=Glenbow Museum |publisher=Glenbow Museum}}</ref> and Douglas openly criticized the idea: <blockquote>Gesell's theory was that the trouble with the world was that people saved money so that what you had to do was to make them spend it faster. Disappearing money is the heaviest form of continuous taxation ever devised. The theory behind this idea of Gesell's was that what is required is to stimulate trade – that you have to get people frantically buying goods – a perfectly sound idea so long as the objective of life is merely trading.<ref>{{cite web |url=http://www.alor.org/Library/Approachtoreality.htm#1a |title=The Approach to Reality |accessdate=2008-02-27 |author=C.H. Douglas |publisher=The Australian League of Rights }}</ref></blockquote> Under [[Ernest Manning]], who succeeded Aberhart after his untimely death, the [[Alberta Social Credit Party]] gradually departed from its origins and became popularly identified as a [[Right-wing politics|right wing]] [[Populism|populist]] movement. In the Secretariat's journal, ''An Act for the Better Management of the Credit of Alberta'',<ref>{{Cite news |last=Douglas |first=C.H. |publication-date=8 February 1947 |publication-place=Liverpool |year=1947 |volume=17 |issue=23 |title=An Act for the Better Management of the Credit of Alberta |periodical=The Social Crediter |publisher=K.R.P. Publications Ltd. |postscript=<!--None--> }}</ref> Douglas published a critical analysis of the Social Credit movement in Alberta,<ref>{{Cite news |last=Douglas |first=C.H. |publication-date=28 August 1947 |publisher=K.R.P. Publications Ltd. |publication-place=Liverpool |volume=20 |issue=26 |year=1947 |title=Social Credit in Alberta |periodical=The Social Crediter |postscript=<!--None--> }}</ref><ref>{{Cite news |last=Douglas |first=C.H. |publication-date= 4–11 September 1947 |year=1947 |publisher=K.R.P. Publications Ltd. |publication-place=Liverpool |volume=21 |issue=1,2 |title=Social Credit in Alberta |periodical=The Social Crediter |postscript=<!--None--> }}</ref> in which he said, "The Manning administration is no more a Social Credit administration than the British government is Labour". Manning accused Douglas and his followers of antisemitism, and went about purging all of the so-called "Douglasites" from the Party. The [[British Columbia Social Credit Party]] won power in 1952 in the province to Alberta's west, but had little in common with Douglas or his theories. Social credit parties also enjoyed some national electoral success in Canada. The [[Social Credit Party of Canada]] was founded with support from Western Canada, and eventually built another base of support in [[Quebec]]. Social Credit also did well at the national level in [[Social Credit Party (New Zealand)|New Zealand]], where it was the country's third party for almost 30 years. ==Philosophy== Douglas described Social Credit as "the policy of a philosophy", and warned against viewing it solely as a scheme for monetary reform.<ref>{{cite web |url=http://www.alor.org/Library/Policyofaphilosophy.htm#1a |title=The Policy of a Philosophy |accessdate=2008-03-01 |author=C.H. 
Douglas |publisher=Australian League of Rights |archiveurl = http://web.archive.org/web/20070904045608/http://www.alor.org/Library/Policyofaphilosophy.htm#1a <!-- Bot retrieved archive --> |archivedate = 2007-09-04}}</ref> He called this philosophy "practical Christianity" and stated that its central issue is the [[Incarnation (Christianity)|Incarnation]]. Douglas believed that there was a [[Biblical Canon|Canon]] which ran through the universe, and [[Jesus Christ]] was the Incarnation of this Canon. However, he also believed that Christianity remained ineffective so long as it remained [[Trancendentalism|transcendental]]. Religion, which derives from the Latin word ''religare'' (to "bind back"), was intended to be a binding back to reality.<ref>{{cite book |url=http://www.alor.org/Library/BrieffortheProsecution.htm#1a |title=Brief for the Prosecution |author=C.H. Douglas |publisher=Veritas Publishing Co. Pty, Ltd|isbn=0-949667-80-3}}</ref> Social Credit is concerned with the incarnation of Christian principles in our organic affairs. Specifically, it is concerned with the principles of association and how to maximize the increments of association which redound to satisfaction of the individual in society – while minimizing any decrements of association.<ref>{{cite book |title=The ABC of Social Credit |author=E. S. Holter|publisher=Vancouver: Institute of Economic Democracy, Sixth Printing, Dec.1978 | isbn=0-920392-24-5 |year=1978}}</ref> The goal of Social Credit is to maximize [[Immanence|immanent]] [[sovereignty]]. Social credit is consonant with the Christian doctrine of [[salvation]] through [[Grace (Christianity)|unearned grace]], and is therefore incompatible with any variant of the doctrine of salvation through works. Works need not be of Purity in intent or of desirable consequence and in themselves alone are as "filthy rags". For instance, the present system makes destructive, obscenely wasteful wars a virtual certainty – which provides lots of "work" for everyone. Social credit has been called the Third Alternative to the futile [[Left-right politics|Left-Right Duality]].<ref>{{cite book |title=Aladdin's Lamp: The Wealth of the American People |last=Munson |first=Gorham |year=1945 |publisher=Creative Age Press |location=New York }}</ref> Although Douglas defined social credit as a philosophy with Christian roots, he did not envision a Christian [[theocracy]]. Douglas did not believe that religion should be thrust upon anyone through force of law or external compulsion. Practical Christian society is Trinitarian in structure, based upon a constitution where the constitution is an organism changing in relation to our knowledge of the nature of the universe.<ref name="RC"/> "The progress of human society is best measured by the extent of its creative ability. 
Imbued with a number of natural gifts, notably reason, memory, understanding and free will, man has learned gradually to master the secrets of nature, and to build for himself a world wherein lie the potentialities of peace, security, liberty and abundance."<ref>{{cite book | title= Alberta Post-War Reconstruction Committee Report of the Subcommittee on Finance | year = 1945}}</ref> Douglas said that social crediters want to build a new civilization based upon absolute economic security for the individual – where "they shall sit every man under his vine and under his [[Figs in the Bible|fig tree]]; and none shall make them afraid."<ref name="Douglas" /><ref name="ReferenceB"/> In keeping with this goal, Douglas was opposed to all forms of taxation on real property. This set social credit at variance from the land-taxing recommendations of [[Henry George]].<ref>{{cite book |title=The Land for the (Chosen) People Racket|last=Douglas |first=C.H.|year=1943 |publisher=KRP Publications Ltd |location=London }}</ref></blockquote> Social credit society recognizes the fact that the relationship between man and God is unique.<ref>{{cite book|title= Why I am a Social Crediter| last=Monahan| first=Bryan | pages= 3 |isbn=0-85855-001-6|year= 1971|publisher= Tidal Publications|location= Sydney}}</ref> In this view, it is essential to allow man the greatest possible freedom in order to pursue this relationship. Douglas defined freedom as the ability to choose and refuse one thing at a time, and to contract out of unsatisfactory associations. Douglas believed that if people were given the economic security and leisure achievable in the context of a social credit dispensation, most would end their service to [[Mammon]] and use their free time to pursue spiritual, intellectual or cultural goals leading to self-development.<ref>{{cite web |url=http://www.alor.org/Library/UseofSocialCredit.htm#1a |title=The Use of Social Credit}}</ref> Douglas opposed what he termed "the pyramid of power". [[Totalitarianism]] reflects this pyramid and is the antithesis of social credit. It turns the government into an end instead of a means, and the individual into a means instead of an end – ''Demon est deus inversus'' – "the Devil is God upside down." Social credit is designed to give the individual the maximum freedom allowable given the need for association in economic, political and social matters.<ref>{{cite book|title= Why I am a Social Crediter| last=Monahan| first=Bryan | pages=7 |publisher=Tidal Publications |isbn=0-85855-001-6|year= 1971}}</ref> Social Credit elevates the importance of the individual and holds that all institutions exist to serve the individual – that the State exists to serve its citizens, not that individuals exist to serve the State.<ref>{{cite book |title=Economic Democracy |last=Douglas |first=C.H. |year=1920 |pages=33 |isbn=0-904656-00-4 |publisher=Heritage for Institute of Economic Democracy |location=Melbourne}}</ref> Douglas emphasized that all policy derives from its respective philosophy and that "Society is primarily [[Metaphysics|metaphysical]], and must have regard to the organic relationships of its prototype."<ref name="ReferenceA">C.H. Douglas letter to L.D. Byrne, 28 March 1940</ref> Social credit rejects [[Dialectical Materialism|dialectical materialistic]] philosophy.<ref name="ReferenceA"/> "The tendency to argue from the particular to the general is a special case of the sequence from materialism to collectivism. 
If the universe is reduced to molecules, ultimately we can dispense with a catalogue and a dictionary; all things are the same thing, and all words are just sounds – molecules in motion."<ref>{{cite web |url=http://www.alor.org/Library/BrieffortheProsecution.htm#1a |title=Brief for the Prosectution |accessdate=2009-03-29 |author=C.H. Douglas }}</ref> Douglas divided philosophy into two schools of thought that he labeled the "classical school" and the "modern school", which are broadly represented by philosophies of [[Aristotle]] and [[Francis Bacon]] respectively. Douglas was critical of both schools of thought, but believed that "the truth lies in appreciation of the fact that neither conception is useful without the other".<ref>{{cite web |url=http://www.mondopolitico.com/library/socialcredit/p1c1.htm |title=Static and Dynamic Sociology |accessdate=2008-03-01 |author=C.H. Douglas |work=Social Credit |publisher=Mondo Politico}}</ref> ===Relationship to antisemitism=== Social crediters, and Douglas himself, have been criticized for spreading [[antisemitism]]. Douglas was critical of [[Jewish population|"international Jewry"]], especially in his later writings. He asserted that some Jews controlled many of major banks and were involved in an [[Jewish conspiracy|international conspiracy]] to centralize the power of finance. Some people have claimed that Douglas was antisemitic because he was quite critical of Jewish philosophy. In his book entitled ''Social Credit'', he wrote that, "It is not too much to say that one of the root ideas through which Christianity comes into conflict with the conceptions of the Old Testament and the ideals of the pre-Christians era is in respect of this dethronement of abstractionism."<ref name="SocialCreditBook">{{cite book |title=Social Credit |last=Douglas |first=C.H. |year=1973 |publisher=Gordon Press |location=New York |pages=22 |url=http://douglassocialcredit.com/resources/resources/social_credit_by_ch_douglas.pdf |isbn=0-9501126-1-5}}</ref> Douglas was opposed to abstractionist philosophies, because he believed that these philosophies inevitably led to the elevation of [[abstraction]]s, such as the state, over individuals. He also believed that what he called Jewish abstractionist thought tended to lead Jews to communist ideals and emphasis on the group over the individual. John L. Finlay, in his book, ''Social Credit: The English Origins'', wrote, "Anti-Semitism of the Douglas kind, if it can be called anti-Semitism at all, may be fantastic, may be dangerous even, in that it may be twisted into a dreadful form, but it is not itself vicious nor evil."<ref name="Finlay">{{cite book |title=Social Credit: The English Origins |last=Finlay |first=John L |year=1972 |publisher=McGill-Queens University Press |location=Montreal| pages=105 |isbn=978-0-7735-0111-9 |url=http://www.geocities.com/socredus/compendium/finlay-1972.txt |archiveurl=http://www.webcitation.org/query?url=http://www.geocities.com/socredus/compendium/finlay-1972.txt&date=2009-10-26+02:52:52|archivedate=2009-10-26}}</ref> In her book, ''Social Discredit: Anti-Semitism, Social Credit and the Jewish Response'', Janine Stingel claims that "Douglas's economic and political doctrines were wholly dependent on an anti-Semitic conspiracy theory."<ref>{{cite book| title=Social Discredit: Anti-Semitism, Social Credit and the Jewish Response |last=Stingel |first=Janine |page=13 |publisher=McGill-Queen's University Press |location=Montreal | year=2000| isbn=0-7735-2010-4}}</ref> John L. 
Finlay disagrees with Stingel's assertion and argues that, "It must also be noted that while Douglas was critical of some aspects of Jewish thought, Douglas did not seek to discriminate against Jews as a people or race. It was never suggested that the National Dividend be withheld from them."<ref name="Finlay" /> ==Groups influenced by social credit== ===Australia=== * [[Australian League of Rights]] * [[Douglas Credit Party]] * [http://bleedingindebt.com Bleeding in Debt' website, ''Social Credit''] ===Canada=== '''Federal political parties:''' * [[Social Credit Party of Canada]]/[[Canadian social credit movement]] * ''[[Ralliement créditiste]]'' * [[Abolitionist Party of Canada]]/[[Christian Credit Party]] * [[Canadian Action Party]] (active) * [[Global Party of Canada]] '''Provincial political parties:''' * [[Alberta Social Credit Party]] (active) * [[British Columbia Social Credit Party]] (active) * [[Manitoba Social Credit Party]] * [[Social Credit Party of Ontario]] * [[Ralliement créditiste du Québec]] * [[Social Credit Party of Saskatchewan]] '''Organizations:''' * [[Pilgrims of Saint Michael]] * [[Committee on Monetary and Economic Reform]] * ''See also:'' [[Prosperity Certificate]] ===Ireland=== * [[Monetary Reform Party]] ===New Zealand=== * [[Country Party (New Zealand)|Country Party]] * [[Democratic Labour Party (New Zealand)|Democratic Labour Party]] * [[New Zealand Democratic Party for Social Credit]] (active) * [[New Democratic Party (New Zealand)]] * [[Real Democracy Movement (New Zealand)|Real Democracy Movement]] * [[Social Credit Party (New Zealand)]] * [[New Zealand Social Credit Association (Inc)]] [http://www.nzsocialcredit.blogspot.com] ===Solomon Islands=== * [[Solomon Islands Social Credit Party]] (active) ===United Kingdom=== * [[Douglas Social Credit Secretariat]] * [[Social Credit Party of Great Britain and Northern Ireland]] ==Literary figures in social credit== As lack of finance has been a constant impediment to the development of the arts and literature, the concept of economic democracy through Social Credit had immediate appeal in literary circles. Names associated with Social Credit include [[C.M. Grieve]], [[Charlie Chaplin]], [[William Carlos Williams]], [[Ezra Pound]], [[T. S. Eliot]], [[Herbert Read]], [[Aldous Huxley]], [[Storm Jameson]], Eimar O'Duffy, [[Sybil Thorndyke]], [[Bonamy Dobrée]], [[Eric de Maré]] and the American publisher [[James Laughlin]]. [[Hilaire Belloc]] and [[GK Chesterton]] espoused similar ideas. In 1933 Eimar O'Duffy published ''Asses in Clover'', a science fiction fantasy exploration of Social Credit themes. His Social Credit economics book ''Life and Money: Being a Critical Examination of the Principles and Practice of Orthodox Economics with A Practical Scheme to End the Muddle it has made of our Civilisation'', was endorsed by Douglas. [[Robert A. Heinlein]] described a Social Credit economy in his posthumously-published first novel, ''[[For Us, The Living: A Comedy of Customs]]'', and his ''[[Beyond This Horizon]]'' describes a similar system in less detail. In Heinlein's future society, government is not funded by taxation. Instead, government controls the currency and prevents inflation by providing a price rebate to participating business and a guaranteed income to every citizen. 
In his novel ''The Trick Top Hat'', part of his ''[[Schrödinger's Cat trilogy|Schrödinger's Cat Trilogy]]'', [[Robert Anton Wilson]] described the implementation by the President of an alternate future United States of an altered form of Social Credit, in which the government issues a National Dividend to all citizens in the form of "trade aids," which can be spent like money but which cannot be lent at [[interest]] (in order to mollify the banking industry) and which eventually expire (to prevent inflation and hoarding). More recently, [[Richard C. Cook]], an analyst for the [[United States Civil Service Commission|U.S. Civil Service Commission]], [[Food and Drug Administration]], [[NASA]], the [[United States Department of the Treasury|U.S. Treasury Department]], and author of the books ''Challenger Revealed'' and ''We Hold These Truths'', has written several articles relating to Social Credit and [[monetary reform]] at [[Global Research]], an independent research and media group of writers, scholars, journalists and activists. Frances Hutchinson, Chairperson of the Social Credit Secretariat, has co-authored, with Brian Burkitt, a book entitled ''The Political Economy of Social Credit and [[Guild Socialism]]''.<ref>{{cite book| title=Political Economy of Social Credit and Guild Socialism |last=Hutchinson |first=Frances |publisher=Routledge |location=UK |year=1997 |isbn=978-0-415-14709-5}}</ref> ==See also== * [[Basic income]] * [[Citizen's dividend]] * [[Monetary reform]] * [[Social dividend]] ==Notes== {{Reflist|33em}} ==Further reading== * ''Economic Democracy'', by [[C. H. Douglas]] (1920) new edition: December 1974; Bloomfield Books; ISBN 0-904656-06-3 * ''Major Douglas: The Policy of Philosophy'', by John W. Hughes, Edmonton, Brightest Pebble Publishing Company, 2004; first published in Great Britain by Wedderspoon Associates, 2002 * ''Major Douglas and Alberta Social Credit'', by Bob Hesketh, ISBN 0-8020-4148-5 ===Fiction and poetry=== * ''[[For Us, The Living: A Comedy of Customs]]'', by [[Robert A. Heinlein]] * ''[[Beyond This Horizon]]'', by [[Robert A. Heinlein]] * ''[[The Cantos]]'', by [[Ezra Pound]] ==External links== * [https://archive.org/details/econdemocracy00dougiala/ C.H. Douglas's book ''Economic Democracy'' at American Libraries] * [https://archive.org/details/creditpowerdemoc00douguoft/ C.H. Douglas's book ''Credit-Power and Democracy'' at American Libraries] * [https://archive.org/details/controldistribut00douguoft/ C.H. Douglas's book ''The Control and Distribution of Production'' at American Libraries] * [http://www.socialcredit.com.au/books/MonopolyOfCredit.pdf C.H. Douglas's book "The Monopoly of Credit"] * [https://archive.org/details/douglastheoryrep00doug/ C.H. Douglas's work "The Douglas Theory, A Reply to Mr. J.A. Hobson" at American Libraries] * [https://archive.org/details/thesepresentdisc00douguoft/ C.H. Douglas's work, "These Present Discontents" at American Libraries] * [http://douglassocialcredit.com/resources/resources/social_credit_by_ch_douglas.pdf Clifford Hugh Douglas' book, ''Social Credit''] * [https://archive.org/details/cu31924013873223/ Hilderic Cousens, "A New Policy for Labour; an essay on the relevance of credit control" at American Libraries] * [http://www.scribd.com/doc/112942854/Introduction-to-Social-Credit-by-Dr-Bryan-W-Monahan/ Bryan Monahan, "Introduction to Social Credit"] * [http://www.scribd.com/doc/114593844/Money-in-Industry/ M. 
Gordon-Cumming, "Money in Industry"] * [http://www.douglassocialcredit.com DouglasSocialCredit.com] – Social Credit Secretariat * [http://www.alor.org/Library1.htm Australian League of Rights] – online library * [http://www.kibbokift.org/ The Green Shirt Movement for Social Credit] * [http://www.ecn.net.au/~socred/ Social Credit School of Studies] * [http://socialcredit.com.au/ Social Credit Website] * [http://www.socred.org/ Clifford Hugh Douglas Institute] {{Social Credit}} {{History of economic thought}} [[Category:Economic theories]] [[Category:Heterodox economics]] [[Category:Macroeconomics]] [[Category:Monetary economics]] [[Category:Political philosophy]] [[Category:Social credit| ]]
TVT, 2013, Volume 51, Issue 1, Pages 67–72 This article is cited in 4 scientific papers (total in 4 papers) Thermophysical Properties of Materials $P$, $\rho$, $T$-properties and phase equilibria in the water-$n$-hexane system with a low content of water S. M. Rasulov, S. M. Orakova Federal State Institution of Science, Amirkhanov Institute of Physics, Dagestan Branch, Russian Academy of Sciences, Makhachkala, Russia Abstract: Using a piezometer of constant volume, we determined experimentally the $P$, $\rho$, and $T$ properties and the phase equilibria for the binary water-$n$-hexane mixtures with $0.04$, $0.05$, $0.06$, and $0.0673$ mass fraction of $H_2O$ over the density range of $0.067$–$0.607 g/cm^3$, temperature range of $380$–$680$ K, at pressures up to $60$ MPa. The equilibrium lines of the liquid-liquid and liquid-gas transition have been determined. The three-phase line, the line of the azeotrope, and the lower branch of the critical line (all lines are joined at the upper finite critical point) have been plotted in the work. High Temperature, 2013, 51:1, 60–65 UDC: 536.17 Citation: S. M. Rasulov, S. M. Orakova, "$P$, $\rho$, $T$-properties and phase equilibria in the water-$n$-hexane system with a low content of water", TVT, 51:1 (2013), 67–72; High Temperature, 51:1 (2013), 60–65; DOI: https://doi.org/10.1134/S0018151X13010136 Cited by: V. A. Mirskaya, N. V. Ibavov, D. A. Nazarevich, "Experimental investigation of the isochoric heat capacity of the n-heptane–water binary system", High Temperature, 53:5 (2015), 658–667; Bezgomonova E.I., Rasulov A.R., Stepanov G.V., "Liquid-Gas Critical Phenomena in N-Hexane in the Presence of the Liquid Phase of Water", Russ. J. Phys. Chem. B, 9:7 (2015), 1026–1031; Bezgomonova E.I., Saidov S.M., Stepanov G.V., "Isochoric Heat Capacity of An N-Hexane Plus Water System", Russ. J. Phys. Chem. A, 89:1 (2015), 5–9; S. M. Rasulov, S. M. Orakova, I. A. Isaev, "Thermal properties and phase diagrams of water–hydrocarbon systems", High Temperature, 54:2 (2016), 210–214
Talks are at noon on Monday in various rooms in University Hall The next talk: Jan 15 everyone Open problem session in B543 Jan 29 Nathan Ng Mean values of long Dirichlet polynomials A Dirichlet polynomial is a function of the form $A(t)=\sum_{n \le N} a_n n^{-it}$ where $a_n$ is a complex sequence, $N \in \mathbb{N}$, and $t \in \mathbb{R}$. For $T \ge 1$, the mean values $$\int_{0}^{T} |A(t)|^2 \, dt$$ play an important role in the theory of L-functions. I will discuss work of Goldston and Gonek on how to evaluate these integrals in the case that $T < N < T^2$. This will depend on the correlation sums \[ \sum_{n \le x} a_n a_{n+h} \text{ for } h \in \mathbb{N}. \] If time permits, I will discuss a conjecture of Conrey and Keating in the case that $a_n$ corresponds to a generalized divisor function and $N > T$. Feb 12 Ha Tran Reduced Ideals from the Reduction Algorithm (University of Calgary) Reduced ideals of a number field $F$ have inverses of small norms and they form a finite and regularly distributed set in the infrastructure of $F$. Therefore, they can be used to compute the regulator and the class number of a number field [5, 3, 2, 1, 4]. One usually applies the reduction algorithm (see Algorithm 10.3 in [4]) to find them. Ideals obtained from this algorithm are called 1-reduced. There exist reduced ideals that are not 1-reduced. We will show that these ideals have inverses of larger norms among reduced ones. Especially, we represent a sufficient and necessary condition for reduced ideals of real quadratic fields to be obtained from the reduction algorithm. Johannes Buchmann. A subexponential algorithm for the determination of class groups and regulators of algebraic number fields. In Séminaire de Théorie des Nombres, Paris 1988-1989, volume 91 of Progr. Math., pages 27-41. Birkhäuser Boston, Boston, MA, 1990. Johannes Buchmann and H. C. Williams. On the infrastructure of the principal ideal class of an algebraic number field of unit rank one. Math. Comp., 50(182):569-579, 1988. H. W. Lenstra, Jr. On the calculation of regulators and class numbers of quadratic fields. In Number theory days, 1980 (Exeter, 1980), volume 56 of London Math. Soc. Lecture Note Ser., pages 123-150. Cambridge Univ. Press, Cambridge, 1982. René Schoof. Computing Arakelov class groups. In Algorithmic number theory: lattices, number fields, curves and cryptography, volume 44 of Math. Sci. Res. Inst. Publ., pages 447-495. Cambridge Univ. Press, Cambridge, 2008. Daniel Shanks. The infrastructure of a real quadratic field and its applications. In Proceedings of the Number Theory Conference (Univ. Colorado, Boulder, Colo., 1972), pages 217-224. Univ. Colorado, Boulder, Colo., 1972. Feb 26 Andrew Fiori A Geometric Description of Arthur Packets In this talk I will discuss joint work with Clifton Cunningham, Ahamed Moussaui, James Mracek and Bin Xu. I will begin by giving a brief overview of the (conjectural) Langlands Correspondence, focusing in particular on Vogan's geometric reformulation of the local Langlands Correspondence. We will then discuss some geometric objects that arise as part of several conjectures which give geometric interpretations to Arthur packets and their associated stable distributions under the LLC. More specifically we shall discuss equivariant perverse sheaves and their vanishing cycles. 
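(As an aside, not part of any abstract: a minimal numerical sketch of the mean value from the Jan 29 talk above, in the easy regime $T \gg N$ with $a_n = 1$; the parameters below are arbitrary.)

```python
import numpy as np

# Numerical sanity check of the mean value of A(t) = sum_{n<=N} n^{-it}
# (i.e. a_n = 1) against the "diagonal" term T * sum_{n<=N} |a_n|^2 = T*N.
# This is the easy regime T >> N, not the harder T < N < T^2 range of the talk.
N, T = 20, 2000.0
n = np.arange(1, N + 1)
t = np.linspace(0.0, T, 200_001)
A = np.exp(-1j * np.outer(t, np.log(n))).sum(axis=1)  # A(t) at each sample point
mean_value = np.trapz(np.abs(A) ** 2, t)
print(mean_value, T * N)  # the two numbers should be comparable
```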
Mar 5 Amir Akbary On the size of the gcd of $\boldsymbol{a^n-1}$ and $\boldsymbol{b^n-1}$ We review some results, from the last twenty years, on the problem of bounding $${\rm gcd}(a^n-1, b^n-1),$$ as $n$ varies. Here either $a$ and $b$ are integers or $a$ and $b$ are polynomials with coefficients in certain fields. In spite of elementary nature of the problem, the results are depended on tools from Diophantine approximation and Diophantine geometry. Mar 12 Steve Wilson The BGCG Construction (Northern Arizona University) Well, it's not really a construction yet — it's more like a template for constructions. It's a way to take many copies of one tetravalent graph B, the 'base graph', and identify each edge-midpoint with one other according to another graph C, the 'connection graph' to produce a bipartite tetravalent graph. If the identifying is done with caution, wisdom and, um, insouciance, the resulting graph will have lots of symmetry. The cunning of the identifications is related to edge-colorings of the base graph which are themselves nicely symmetric, and we will give several examples where the symmetry can actually be achieved. Mar 19 Peng-Jie Wong On Generalisations of the Titchmarsh divisor problem The study of the asymptotic behaviour of the summatory function of the number of divisors of shifted primes was initiated by Titchmarsh, who showed that under the generalised Riemann hypothesis, one has \[ \sum_{p \le x} \tau(p-a) = x \prod_{p\nmid a} \left( 1+ \frac{1}{p(p-1)}\right)\prod_{p |a} \left( 1- \frac{1}{p}\right) + O\left(\frac{x \log \log x}{\log x}\right), \] where $\tau$ denotes the divisor function. The above formula was first proved unconditionally by Linnik via the dispersion method. Moreover, applying the celebrated Bombieri-Vinogradov theorem, Halberstam and Rodriguez independently gave another proof. In this talk, we shall study the Titchmarsh divisor problem in arithmetic progressions by considering the sum \[ \sum_{\substack{ p \le x \\ p\equiv b \ (\mathrm{mod}\, r)}} \tau(p-a). \] Also, we will try to explain how to obtain an asymptotic formula for the same, uniform in a certain range of the modulus $r$. If time allows, we will discuss a number field analogue of this problem by considering the above sum over primes satisfying Chebotarev conditions. (This is joint work with Akshaa Vatwani.) Mar 26 Joy Morris Cayley index and Most Rigid Representations (MRRs) For any finite group $G$, a natural question to ask is the order of the smallest possible automorphism group for a Cayley graph on $G$. A particular Cayley graph whose automorphism group has this order is referred to as an MRR (Most Rigid Representation), and its Cayley index is the index of the regular representation of $G$ in its automorphism group. Study of GRRs (Graphical Regular Representations, where the full automorphism group is the regular representation of $G$) showed that with the exception of two infinite families and ten individual groups, every group admits a Cayley graph whose MRRs are GRRs, so that the Cayley index is 1. I will present results that complete the determination of the Cayley index for those groups whose Cayley index is greater than 1. This is based on joint work with Josh Tymburski, who was an undergraduate student here at the time. 
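(Again as an aside: a brute-force check of the Titchmarsh sum from the Mar 19 abstract above, for $a=1$; the cut-off $x$ is arbitrary and the constant is quoted only approximately.)

```python
from sympy import primerange, divisor_count

# Compare sum_{p <= x} tau(p-1) with the main term C*x, where
# C = prod_p (1 + 1/(p(p-1))) = zeta(2)zeta(3)/zeta(6) ~ 1.943
# (the a = 1 case, so the product over p | a is empty).
x = 200_000
titchmarsh_sum = sum(divisor_count(p - 1) for p in primerange(2, x + 1))
print(titchmarsh_sum, 1.943 * x)  # rough agreement, improving slowly as x grows
```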
Apr 9 Jean-Marc Deshouillers Values of arithmetic functions at consecutive arguments (University of Bordeaux) We shall place in a general context the following result recently (*) obtained jointly with Yuri Bilu (Bordeaux), Sanoli Gun (Chennai) and Florian Luca (Johannesburg). Theorem. Let $\tau(\cdot)$ be the classical Ramanujan $\tau$-function and let $k$ be a positive integer such that $\tau(n) \neq 0$ for $1 \le n \le k/2$. (This is known to be true for $k < 10^{23}$, and, conjecturally, for all $k$.) Further, let $\sigma$ be a permutation of the set $\{1, \ldots, k\}$. We show that there exist infinitely many positive integers $m$ such that $$\bigl|\tau \bigl( m + \sigma(1) \bigr)\bigr| < \bigl|\tau \bigl(m + \sigma(2) \bigr)\bigr| < \cdots < \bigl| \tau \bigl( m + \sigma(k) \bigr)\bigr| .$$ The proof uses sieve method, Sato-Tate conjecture, recurrence relations for the values of $\tau$ at prime power values. (*) Hopefully to appear in 2018. May 7 Alia Hamieh Non-vanishing of $L$-functions of Hilbert Modular Forms inside the Critical Strip in C630 (University of Northern British Columbia) In this talk, I will discuss recent joint work with Wissam Raji. We show that, on average, the $L$-functions of cuspidal Hilbert modular forms with sufficiently large weight $k$ do not vanish on the line segments $\Im(s)=t_{0}$, $\Re(s)\in(\frac{k-1}{2},\frac{k}{2}-\epsilon)\cup(\frac{k}{2}+\epsilon,\frac{k+1}{2})$. The proof follows from computing the Fourier expansion of a certain kernel function associated with Hilbert modular forms and estimating its first Fourier coefficient. This result is analogous to the case of classical modular forms which was proved by W. Kohnen in 1997. Past semesters: Fall F2007 F2008 F2009 F2010 F2011 F2012 F2013 F2014 F2015 F2016 F2017 Spring S2008 S2009 S2010 S2012 S2013 S2014 S2015 S2016 S2017 Click here for a PDF file of abstracts all talks back to 2007 (approx 280K).
Supersymmetric world from a conservative viewpoint ( ) Tuesday, August 28, 2012 ... / / Alan Guth and inflation Alan Guth of MIT is one of the nine well-deserved inaugural winners of the Milner Prize. He has received $2,999,988 because Milner failed to pay the banking fees (Alan Guth was generous enough not to have sued Yuri Milner for that so far). As far as I know, Alan Guth is the only winner of a prize greater than the Nobel prize who has ever regularly attended a course of mine. ;-) I have taken many pictures of Alan Guth, this is the fuzziest one but I think it's funny to see a young Italian physicist showing a finger to Alan Guth in the New York Subway during our trip to a May 2005 conference at Columbia University. Under the name Alan H. Guth, the SPIRES database offers 73 papers, 51 of which are "citeable". That's fewer than some other famous physicists have but the advantage is that it keeps Alan Guth in the rather elite club of physicists with about 200 citations per average paper. For quite some time, Guth would work on rather typical problems of particle physics, the science of the very small, but he of course became one of the main symbols of modern cosmology, the science of the very large. Note that the LHC probes distances comparable to \(10^{-20}\) meters while the current radius of the visible universe is about \(46\) billion light years which is \(4.4\times 10^{26}\) meters. Every distance scale comes with its own set of physical phenomena, visible objects, and effective laws, and it may look very hard to jump over these 46 orders of magnitude from the very short distance scales to the very long distance scales and become a leader of a different scientific discipline. And indeed, it is rather hard. However, Nature recycles many physical ideas at many places so the "ideological" distance between the short and long distance scales is much shorter than the "numerical" distance indicates. Fundamental physicists are the rulers of the vast interval of distance scales (except for some messy phenomena in the middle where folks such as biologists may take over for a while). And yes, Alan Guth's most famous discovery was a very important piece of "reconciliation" between physics of very short distances and physics of very long distances – a fascinating idea that put their friendship on firmer ground. (We're not talking about quantum gravity here which is what we do if we talk about the "stringy reconciliation"; gravity is treated classically or at most semiclassically in all the discussions about inflation.) Guth was thinking about the Higgs field – a field that became very hot this summer – and he realized it could help to solve some self-evident problems in cosmology. By finding a speedy bridge between the world of the tiny and the world of the large, Guth has also explained where many large numbers comparing cosmology and particle physics such as "the number of elementary particles in the visible Universe" come from. These large numbers were naturally produced during an exponentially, explosively productive ancient era in the life of our Universe, an era in which the Universe acted as "the ultimate free lunch", using Guth's own words. Yes, cosmology has acquired an exemption from the energy conservation law. While people who study inflation usually say that there's nothing such as a free lunch (if they're economists, including Alan G[reenspan]), and they're "mostly" right, their colleague Alan Guth knows better. Two papers by this author have over 1,000 citations. 
The pioneering 1980 paper on cosmic inflation has collected over 4,000 citations so far; Guth's 1982 paper with S.Y. \(\pi\) on fluctuations in new (i.e. non-Guth) inflation stands at 1,300+ now. Three more papers above 250 citations are about scalar fields, phase transitions, and false vacuum bubbles. All the papers are on related topics but they're inequivalent. Old inflation: first look at the paper Of course, I want to focus on his most famous paper whose content began to be discovered in 1979, The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems (scanned PDF via KEK, full text) Rotate the PDF above in the clockwise direction; these commands are available via the right click in the Chrome built-in PDF reader, too. Some people make a breakthrough but they present the idea in a confusing way and other people have to clean the discovery. I think that Guth's paper is different. It may be immediately used in the original form. It highlights the awkward features of the old-fashioned Big Bang Theory in a very modern way, pretty much the same one that people would talk about today, 30 years later; it sketches the basic strategy how to solve them; and it lists some undesirable predictions of his model of "old inflation" that could perhaps be solved by future modifications. If you read the paper in a certain way, you might conclude that everyone else who did research on inflation was just solving some homework exercises vaguely or sharply defined by Guth. He or she was filling holes in a skeleton constructed by Alan Guth. Problems of TBBT If you use the term "The Big Bang Theory" in the less popular sense – i.e. if you are talking about a cosmological theory, not a CBS sitcom – you will find out that despite all the advantages, the theory has some awkward features (unlike Sheldon Cooper who doesn't have any). Alan Guth correctly identified two main problems of TBBT: the horizon problem and the flatness problem. I am no historian and at the end of 1979, I was affiliated with a kindergarten so I can't tell you how much people were confused about the disadvantages of TBBT in the late 1970s. But it's clear that Alan Guth wasn't confused. The horizon problem The horizon problem is the question why the cosmic microwave background radiation discovered by Penzias and Wilson in 1964 seems to have a uniform temperature around 2.7 kelvins, with the relative accuracy of 0.001% or so, even though the places in different directions of the heaven where the photons were originally emitted couldn't possibly have communicated with each other because they were too far from each other and the speed of light is the universal cosmic speed limit (for relative speed of two information-carrying objects moving past each other). The limitations on speed are relevant because the Penrose causal diagram of the spacetime in TBBT looks like the picture above. Observers and signals have to move along timelike or lightlike trajectories which are, by the definition of the Penrose diagram, lines on the Penrose causal diagram that are "more vertical than horizontal", i.e. at most 45 degrees away from the vertical direction. But at the moment of the Big Bang, and this moment is depicted as the lowest horizontal "plate" on the picture, the Universe had to be created and there was no prehistory that would allow the different places of the ancient Universe to agree about a common temperature. 
You might object that the Universe "right after \(t=0\)" was smaller so it could have been easier to communicate (shorter distances have to be surpassed). But you also had a shorter time between \(t=0\) and the other small value of \(t\) and if you study these things quantitatively, you will realize that the latter point (shortage of time) actually becomes more important than the smallness of the distances (because the distances go like \(a\sim t^k\) for \(k\lt 1\) so they're "more constant" than the time, relatively speaking), so to agree about a common temperature, two places in the Universe would need an ever higher speed of communication which surely exceeds the speed of light \(c\). In other words, two events at \(t=0\), the horizontal plate at the bottom of the picture, had "no common ancestors" i.e. no other events in the intersection of their past light cones – because there was no "past" before the Big Bang (sorry, Bogdanov brothers and others) – so it's puzzling why the temperature is uniform if the different regions of the Universe were "created by God" independently from others. Alan Guth proposed a solution: if the Universe has ever been exponentially expanding for a long enough time i.e. by a high enough factor, the Penrose diagram effectively becomes much taller – it looks like we are adding a whole "pre-Big-Bang prehistory" below the bottom plate at the picture above – and suddenly there is enough room to prepare the thermal equilibrium by the exchange of heat. So with this "taller" Penrose diagram, the equal magnitude of temperature in different directions is no longer mysterious: it is a result of a relatively long period of thermalization i.e. exchange of heat that inevitably erases the temperature differences. To successfully achieve this goal (and especially the "flatness goal" to be discussed later), we need a certain amount of time for the thermalization: the universe has to increase about\[ e^{62} \approx 10^{27} \] times, i.e. a billion of billions of billions of times. Appreciating that \(2.718\) is a more natural base of exponentials than ten (because Nature has \(e\) and not ten fingers as humans or two fingers as the discrete physicists), physicists say that there had to be at least \(62\) \(e\)-foldings. An \(e\)-folding is a period of time during an exponential expansion in which linear distances increase \(e\) times. The required minimum varies but 60-65 is what people usually consider the minimum (but there's nothing wrong with thousands of \(e\)-foldings, either, and many models on the market actually predict even higher numbers). I only chose \(62\) so that I could have written "billion of billions of billions". ;-) So the distances \(a\) between two places in the sky as defined by the FRW coordinates, i.e. between two "future galaxies", grew \(10^{27}\) times during inflation; the remaining multiplicative growth was due to the ordinary Big Bang Theory growth (which approximately follows power laws, \(a\sim t^k\)). But because the growth was exponential, the proper time that inflation took was just \(62\) of some basic natural units of time: a natural, small number. You see that the exponential growth is what allows cosmology to "quickly connect" very different distance scales and time scales. If you can expand the distances \(10^{27}\) times very quickly, it's easy to inflate a subatomic object to astronomical distances within a split second. That's cool. 
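To make the arithmetic of the \(e\)-foldings explicit, here is a minimal numerical sketch; the initial and final patch sizes are made-up illustrative values, not numbers taken from any particular inflationary model.

```python
import numpy as np

# Number of e-foldings needed to stretch a patch from a_initial to a_final:
# N = ln(a_final / a_initial); about 62 e-foldings give a factor e^62 ~ 10^27.
a_initial = 1e-26   # metres -- hypothetical pre-inflationary patch size
a_final = 10.0      # metres -- the same patch right after inflation (also hypothetical)
N = np.log(a_final / a_initial)
print(f"e-foldings needed : {N:.1f}")          # ~ 62
print(f"expansion factor  : {np.exp(N):.2e}")  # ~ 1e27
```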
The uniformity of the temperatures suddenly becomes much more natural (even though you could have waved your hands and say that God created different regions of the Universe in similar conditions even if they couldn't have communicated with each other – because He has some universal initial conditions that just hold everywhere). A reader could protest that we cheated because we "explained" the unnatural features of the Universe by using large numbers that are calculated as exponentials and the exponentials themselves are "unnatural". However, the latter assertion is incorrect. The exponentials are actually totally natural in the inflationary context. It's because the FRW equations, Einstein's equations simplified for the case of a homogeneous and isotropic expanding Universe, imply that the distance \(a\) between two future (or already existing) galaxies obeys\[ \ddot a = \dots + \frac{\Lambda c^2}{3} a. \] Einstein's equations control the second time derivative of \(a\) – which emerges from the second derivatives of the metric tensor that is hiding in the curvature tensors – and the equation for the second derivative of the distance \(a\) is analogous to an equation in the Newtonian physics, \(ma=F\), for the acceleration of an object. In the FRW case, the force on the right hand side contains a term proportional to the cosmological constant \(\Lambda\) as well as \(a\) itself. And you may verify that the equation \(\ddot a = K a\) has solutions that are exponentially increasing (or decreasing, but the increasing piece ultimately dominates unless you fine-tune the exponentially growing component exactly to zero). Well, the exponentially increasing/decreasing functions are solutions for \(K\gt 0\) i.e. \(\Lambda\gt 0\), a positive cosmological constant. For \(K\lt 0\), the solutions are sines and cosines because the equations describe a harmonic oscillator. (That's also why a negative cosmological constant \(\Lambda\) would tend to produce a Big Crunch – a sign that the Universe would like to resemble an oscillatory one.) You may see that if you had a spring with a negative (repulsive) spring constant, it would shoot the ball attached on the spring exponentially. It's because the derivative (and the second derivative) of the exponential function is the exponential function (times a different normalization) in general. I hope you know the joke about functions walking on the street. Suddenly, the derivative appears behind the corner. All functions are scared to hell. Only one of them is proudly marching on the sidewalk. The derivative approaches the function and asks: Why aren't you afraid of me? I am \(e^x\), the function answers and moves the derivative by one unit of distance away from itself (because the exponential of the derivative is the shift operator, because of the formula for the Taylor expansion). Sorry if I made the joke unfunny by the more advanced Taylor expansion piece. ;-) Fine. The exponential (the exponentially increasing proper distance between the seeds of galaxies) is a totally natural solution of the basic universal equations – of nothing else than Einstein's equations expressed in a special cosmological context. It's not cheating. It's inevitable physics. Flatness problem Concerning the flatness problem, I may recommend you e.g. this question on the Physics Stack Exchange plus my answer. 
Einstein's equations say that the spatial slice \(t={\rm const}\) through the Universe is a flat 3D space if the average matter density is close to a calculable "critical density", or their ratio \(\Omega=1\). However, it may be derived that \(|\Omega-1|\), the (dimensionless) deviation of the density from the value that guarantees flatness, increases with time during the normal portions of TBBT (which are either radiation-dominated or, later, matter-dominated). Observations today show that the \(t=13.7\) billion years slice is a nearly flat three-dimensional space – the curvature radius is more than 1.5 orders of magnitude longer than the radius of the visible Universe (i.e. the curvature radius is longer than hundreds of billions of light years) – so \(|\Omega-1|\leq 0.01\) or so today. But because this \(|\Omega-1|\) was increasing with time, we find out that when the Universe was just minutes or seconds old (or even younger), \(|\Omega-1|\) had to be much more tiny, something like \(10^{-{\rm dozens}}\). Such a precisely fine-tuned value of the matter density is unnatural because \(|\Omega-1|\) may a priori be anything of order one and it may depend on the region. Our Universe today seems rather accurately flat – I mean the 3D spatial slices – and you would like to see an explanation. You would expect that the flatness is an inevitable outcome of the previous evolution. However, TBBT contradicts this explanation. In TBBT, the deviations from flatness increase with time, so when the Universe was very young, the Universe had to be even closer to exact flatness by dozens of orders of magnitude, so it had to be even more unnatural when it was young than it is today! It had to be unbelievably unnaturally flat. Again, cosmic inflation solves the problem because it reverses the trend. During cosmic inflation, \(|\Omega-1|\) is actually decreasing with time as the Universe keeps on expanding. So a sufficiently long period of inflation is again capable of producing the Universe in an unusually "nearly precisely flat" shape and some of its exponentially great flatness may be wasted in the subsequent power-law, TBBT expansion that makes the flatness less perfect. But the accuracy with which the Universe was flat after inflation was so good that there's a lot of room for wasting. Inflation also solves other problems. For example, it dilutes exotic topological defects such as the magnetic monopoles. If you watch TV, you must have noticed that Sheldon Cooper's discovery of the magnetic monopoles near the North Pole was an artifact of a fraudulent activity of his colleagues. It seems that the number of magnetic monopoles, cosmic strings, and other topologically nontrivial objects in the Universe around us is much lower than what a generic grand unified theory would be willing to predict. Inflation makes the Universe much larger and the density of the topological defects decreases substantially, pretty much to \(O(1)\) defects per visible Universe. It's not too surprising that none of these one or several defects moving somewhere in the visible Universe has managed to hit Sheldon Cooper's devices yet. So Alan Guth realized that the exponentially increasing period is a very natural hypothesis about cosmology beyond (i.e. before) the ordinary Big Bang expansion which helps to explain previously unnatural features of the initial conditions required by the ordinary Big Bang expansion. 
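To get a feeling for how efficiently inflation irons out the curvature: with a nearly constant Hubble rate during inflation, \(|\Omega-1|\propto 1/(aH)^2\) drops like \(e^{-2N}\) after \(N\) e-foldings, so the 62 e-foldings quoted above buy dozens of orders of magnitude of flatness. A back-of-the-envelope Python check (added here as an illustration, under the assumption of a constant \(H\)):

```python
import math

# |Omega - 1| = |k| / (a*H)**2 shrinks by e**(-2N) if H is approximately constant.
N = 62
suppression = math.exp(-2 * N)
print(f"|Omega - 1| is suppressed by a factor of about {suppression:.1e}")  # ~1.4e-54
```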
He also realized that the cosmological constant needed for this exponential expansion may come from a scalar field's potential energy density \(V(\phi)\). That's where his particle physics experience turned out to be precious: it's enough to consider the potential energy for the Higgs field \(V(h)\) and realize that its positive value has the same impact on Einstein's equations as a positive cosmological constant – they're really the same thing, physically speaking, because you may simply move the cosmological constant term \(+\Lambda g_{\mu\nu}\) to the right hand side of Einstein's equations and include it as a part of the stress-energy tensor. And he had to rename the Higgs field to an inflaton.

Guth's original "old inflation" assumes that the inflaton sits at a higher minimum of its potential – a point of its "configuration space" – during inflation and it ultimately jumps to a different place (the place we experience today) where the cosmological constant is vastly lower. Now, the exponential expansion had to be temporary because we know that in the most recent 13.7 billion years, the expansion wasn't exponential but it followed the laws of the Big Bang cosmology. So the state of the Universe had to jump from a place in the configuration space with a large value of \(V(\phi)\) to another place with a tiny value of \(V(\phi)\). In Guth's "old inflation", it would literally be a discontinuous jump.

In a year or two, "new inflation" i.e. "slow-roll inflation" got popular and started to dominate the inflationary literature. In the new picture, the inflaton scalar field continuously rolls down the hill from a maximum/plateau (the upper inflationary-era position is no longer a local minimum of the potential in that "new inflation" picture but it isn't necessarily a catastrophe) it occupies during inflation to the minimum we experience today. When it's near the minimum, its kinetic energy is converted to oscillations of other fields, i.e. particles that become seeds of the galaxies.

The most recent 8 years in cosmology and especially in string theory have shown that "new inflation" may possibly be incompatible with string theory. The very condition of the "slow-rollness", the requirement that the inflaton rolls down (very) slowly which is needed for the inflation to last (very) long, might be incompatible with some rather general inequalities that may follow from string theory. It's the main reason that has revived the interest in the "old inflation": the transition from inflation to the post-inflationary era could have been more discontinuous than "new inflation" has assumed for decades and physicists may be forced to get back to the roots and solve the problems of "old inflation" differently than by the tools that "new inflation" had offered.

Averaged fluctuations of the CMB temperature as a function of the typical angular scale: theory agrees with experiments.

These comments rather faithfully reflect the amount of uncertainty about inflation. The observations of the cosmic microwave background made by the WMAP satellite – and even more recently, the Planck spacecraft – are in excellent, detailed agreement with the theory that needs TBBT as well as the nice, flat initial conditions, as well as some initial fluctuations away from the flatness that are naturally calculable within the inflationary framework. So the pieces probably have to be right.
However, there are many technical details – about the mass scale associated with the inflaton (it may be close to the GUT scale but it may be as low as the electroweak scale: there are even models using the newly discovered Higgs field as the driver of inflation although they need some extra unusual ingredients); about the number of inflaton scalar fields; about their detailed potential; about the question whether any quantum tunneling has occurred when the inflationary era ended; whether the scalar field should be interpreted in a more geometric way (e.g. the distance between branes, some quantity describing the evolving shape of the hidden dimensions etc.); and other things. But it is fair to admit that I would say that exactly the general features that were discussed in Alan Guth's pioneering paper have already been empirically established. The Nobel prize is nevertheless awarded for "much more directly" observed discoveries so it's great that Yuri Milner has created the new prize in which the "theory-driven near-complete certainty" plays a much larger role than it does in Stockholm. And that's the memo. Previous article about the Milner Prize winners: Ashoke Sen Vystavil Luboš Motl v 4:41 PM | | Other texts on similar topics: astronomy, philosophy of science, science and society snail feedback (34) : reader Aran said... Personally I much prefer the "where do flatness and uniformity come from" problems to "WTF is inflation and where did it come from" problem. Aug 28, 2012, 6:41:00 PM ... reader Luboš Motl said... Fine but there is a conceptual difference between these two types of problems, regardless of which one you prefer. The former one is a genuine physics problem while the latter is a learning disorder. Learning disorders are subjective while physics is about objective laws of physics which is why physicists don't even talk about the latter. Instead of WTFing, they just learn it. The question behind your learning disorder is completely irrational. You can "ask" the same thing about every concept in science. WTF is evolution and where it came from, for example. What's the difference between these two questions? Or what's the difference between "WTF is inflation and where did it come from" and "I, Aran, am a complete imbecile", for that matter? reader Luke Lea said... "Fundamental physicists are the rulers of the vast interval of distance scales (except for some messy phenomena in the middle where folks such as biologists may take over for a while)." Do you suppose the physicists will ever take over the play of ideas in the human brain? Will the biologists even get there? Right now the only "effective" theory is what evolutionary biologists call "theory of mind" and the Germans used to call "verstehen." reader Shannon said... Does the fact that the Big Bang is a one-off event makes it unnatural ? A very good question. I think the answer is No. First of all, we don't really know whether the Big Bang is a one-off event; in eternal inflation, "big bangs" are repeating indefinitely. But even if we knew that, it's not unnatural because numbers such as 1 are the most natural numbers one may get. ;-) If there's a formula in the fundamental equations that determines the number of big bangs, then it's natural that the result is either infinity or a number close enough to one. ;-) reader Paul in Boston said... I remember hearing Guth speak at the M.I.T. Physics colloquium about inflation in about 1979. I'm quite sure that he attributed the flatness problem to Robert Dicke. 
I think that the horizon problem was understood by the cosmology community as well. Guth certainly deserves the Nobel Prize now that WMAP has confirmed his ideas. reader Haelfix said... Hi Lubos, what paper(s) talks about string theory potentially excluding new inflation? Any good review articles? Aug 28, 2012, 11:48:00 PM ... reader Dilaton said... Thanks Lumo for this very nice reminder about inflation and the issues it solves, this was very enjoyable to read :-) Some time ago I read this book in German http://www.amazon.com/The-Inflationary-Universe-Alan-Guth/dp/0201328402 It was a lot of fun with all these autobiographic notes etc but therein Alan Guth did not mention that he used to sleep during your string theory lectures neither was a nice picture of his office included :-D What were they discussing when the younger italian physicist showed Alan Guth the finger ...? Now a look forward to the still outstanding 7 articles about the other FPP winners ... ;-) Aug 29, 2012, 1:12:00 AM ... reader David Nataf said... Aran, It may take a while to convince yourself that the flatness and horizon problems are actually problems. I didn't see the problem at first, it's the sort of thing you might take a while to recognize as being a problem. With respect to where inflation comes from, Guth didn't invent particle physics, he was applying particle physics. Finally, inflation makes scientific predictions. Guth didn't know that the cutvature was 0 in 1980, some people were thinking it might be 0.8 because the cosmological constant was ignored. Inflation will be further tested if NASA builds the gravitational wave detector LISA. reader Simple Astronomer said... True story about Alan Guth. When I was in grad school, I admitted to the graduate adviser that I was not sure if I had what it took to be a professional scientist and that I also thought that maybe the other grad students would turnout to be better scientists. The graduate adviser told me not to worry and said that he had the very same thoughts when he was a graduate student, except for one difference. He said that there was one grad student with whom he was sure would be less successful than himself and that student was Alan Guth. I am surprised that you can't see the extreme difference between explanatory power and supporting evidence of inflation and evolution Lubos. One explains pretty much all life on Earth by postulating a simple mechanism and is supported by everything from laboratory experiments to fossils. The other explains barely anything beyond the 2 problems it was invented to explain and has very little to support it. Making up explanations is trivial, the trick is to invent ones that explain much more then they postulate. Inventing a completely novel and utterly different state of the whole damn universe just to explain uniformity and flatness doesn't cut it. It's too much like inventing god to explain where the universe came from. Evolution is the framework without which nothing in biology makes sense (to paraphrase T. Dobzhansky), inflation is an idea without which almost everything in physics makes as much sense as before with the sole exception of flatness and homogeneity of early universe. Still cannot see the difference? Why don't you just admit that you don't understand evolution? Your comment is pure propaganda. 
A person with the opposite bias than yours may equally say that inflation is a framework without which nothing in cosmology makes sense while evolution is just an idea trying to find a reason for the diversity of life forms, but evolution is still unnecessary to understand how any individual organism lives. The reason why I don't admit that I don't understand evolution is that I do understand evolution - and you know it very well. I also understand inflation and I understand that you don't understand inflation and you are a giant obnoxious arrogant demagogic asshole trying to mask your stupidity, too. reader George Christodoulides said... very nice article I find it fascinating too ;-) Hi Haelfix, try to look e.g. at http://arxiv.org/abs/hep-th/0703071 and references and citations thereof (it's probably a wrong way to use the word "thereof" right?). Thanks for the history lesson! It's still important that a competent theorist was able to understand those problems "from a different field", take them seriously, and find a likely explanation. If I were on the committee of Nobel, I would probably also say Yes, it's demonstrated by WMAP and eligible for that prize, too. Still, the acoustic peaks are only "softly linked" to modes in the inflationary Universe; there isn't quite a proof of "uniqueness" of the inflationary explanation of them. I believe in the unity of science. Brains may be studied in science and physics is the "most general" and "most scientific" perspective on all of science. At the end, whoever studies brain is really a brain scientist, whether he is a physicist is a matter of convention. So a decidable question is whether physics training is useful or will become necessary for brain sciences. I don't know. People studying brain surely have to be physicists in some sense - e.g. in the sense that they're smart enough. ;-) But how do you exactly distinguish people who study brain via physics or non-physical science? The former are surely more quantitative but one may be quantitative even if he hadn't studied physics. reader Gene said... Your view of inflation parallels the views of those that just don't get quantum mechanics. They reject it because they are unwilling to actually learn it independently of their prior biases. That intellectual filter blocks all hope of understanding. Yes, inflation seems strange, even outrageous at first but that is not a valid reason to reject it or to disparage its importance. Inflation, like QM, is not going away. I respectfully suggest that you learn more about it before expressing your preferences so strongly. Guth made a profound advance in our understanding of the world we live in. That does matter. reader jaded said... There may be a case for a Nobel for Guth, but I sure hope this does not happen soon. He just won a prize which is nice (perhaps too nice!) and has won various other prizes in the past, like Gruber. There a many worthy people who have not won much. And some even have much more impressive citation records than Guth (his total is only around 9000). Guth has been saying very similar things since 1980 and has given the same kind of talk thousands of times. IMHO, monomania is not necessarily a sign of true greatness. A responsible community should try to spread the wealth around! Darn, now the physics SE user Anixx is heavily trolling about fundamental physics. 
He has started to attach a "metaphysics" tag to perfectly valid physics questions too (some of them are probably off topic) for them to become closed or deleted: http://meta.physics.stackexchange.com/questions/1458/metaphysics-tag/1459#comment3676_1459 Sorry for the off topic, but this is very annoying; I dont wont him to jump at questions about topics the Milner Prize is targeted at. Fundamental physics questions are allowed at physics SE, darn :-(0) !!! Well, I would sometimes use the metaphysics tag myself, but it's undoubtedly controversial. At any rate, you may want to lower the attention paid to Anixx because he or she or it is an idiot, and an irrelevant one. I just reminded myself about his "question", namely idea that extraterrestrial life must be impossible because lightnings guarantee quantum immortality, or some combination of words like that: http://physics.stackexchange.com/questions/7702/does-quantum-mechanics-prohibit-extraterrestrial-life Thanks Lumo, you are right... It seems I just forgot about the rule to "not feed the trolls" ... ;-). David seems to agree with me and as always, dmckee needs to be supervised a little bit in order to prevent him from closing valid fundamental physics questions without any good reason. I regularely have a wary eye on which questions get closed and why, by whom etc ... :-P reader Peter said... Sorry off topic: What is your opinion about this paper by Sabine http://arxiv.org/abs/1208.5874. Do you think is novel? Aug 31, 2012, 10:12:00 AM ... I don't know how to evaluate whether it's novel but what's more important is whether it's right or wrong and it's just wrong. It's complete nonsense to say that one is free to adjust the commutator of a field and its time derivative, send it zero, or establish a new symmetry (a vanishing of a commutator isn't a symmetry in any sense, another completely illogical statement). If we accept that there is a degree of freedom that looks like the metric tensor and if the low-energy effective action is given by the Ricci curvature scalar, and there's a lot of experimental evidence for this theory, the general theory of relativity, then the commutators directly and unambiguously follow from the action. The commutators aren't independent assumptions one may adjust aside from the choice of the action. The commutators *are* given by the action. She has no clue what she's talking about. The commutators or hbar are NOT fudge factors ... Darn, this troll on physics SE is really obstinate :-(0) He has posted several questions on meta to achieve the goal of his horrible attack on fundamental physics and now he is listing questions / topics to be voted off topic and closed/deleted/migrated to philosophy etc ... And David fails to efficiently stop him ... :-((( So it is better to refrain from asking questions about topics the Milner Prize is targeted at for example over there now ... I at least dont dare to ask about cosmology, quantum gravity, beyond the standard model physics, ST etc any more; it would just be a wast of time since this Anixx troll rules :-( At least I still have TRF :-) reader random said... entry to fqxi contest is now closed, you have to dig, but there are a few nuggets in there Sep 2, 2012, 5:19:00 AM ... The topic of this fqxi contest seems silly to me. Why do some people so obstinately want to discard established and correct principles and foundations of physics that have not been (experimentally or theoretically) disprooved? Sep 2, 2012, 11:05:00 AM ... Right, Dilaton, and a great question. 
And the likely answer is that what they find more important is to be as famous as e.g. Einstein rather than to appreciate the things for which Einstein and others are actually famous. Sep 2, 2012, 12:23:00 PM ... There is now a better possibility to get famous (and financially powerful): They could do something useful and win a fundamental physics prize for example ... :-) I think that's a correct observation in some cases, but I think in others it really is driven by a desire to correct misconceptions. The contest is targeted at a broader audience, so it is an opportunity to communicate; whether one wins or not isn't important. I would agree with dilaton up to the point where I begin reading web posts that people are amazed that the latest results of measurements show that space cannot be discretized...and these theories came from "professional" physicists. That just tells me that it is not the laymen that are in need of help here. Sep 2, 2012, 3:43:00 PM ... reader Eugene S said... The passage quoted by Luke Lea had been bugging me, too. Then I re-read the famous More Is Different paper by Philip Anderson. It seems inevitable to go on uncritically to what appears at first sight to be an obvious corollary of reductionism: that if everything obeys the same fundamental laws, then the only scientists who are studying anything fundamental are those who are working on those laws. In practice, that amounts to some astrophysicists, some elementary particle physicists, some logicians and other mathematicians, and few others. ... The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a "constructionist" one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. However, with the clarification in your reply to Luke it appears that your view and Anderson's are compatible after all. Oct 13, 2012, 3:56:00 PM ... Dear Eugene, because reductionism is right, one may reduce the scientific answer to any question about well-defined ongoing phenomena or lab experiments to the fundamental laws of physics. However, what isn't guaranteed to you by the fundamental laws of physics is that you will ask the relevant questions. What does it mean for a question to be relevant? Well, various questions about DNA are relevant even though in the space of possible bound states of atoms, they are extremely special. But they're relevant because we're surrounded by them. So one needs to know what are the relevant questions, and some of them - those depend on our environment, our current time from the big bang, our location in the Universe and on the Earth, and the history of life and humans on Earth - won't be derivable from fundamental physics. But once you describe the environment accurately enough, everything is accessible by methods of fundamental physics.
Problems in Mathematics

by Yu · Published 09/18/2017

An Example of a Matrix that Cannot Be a Commutator
Let $I$ be the $2\times 2$ identity matrix. Then prove that $-I$ cannot be a commutator $[A, B]:=ABA^{-1}B^{-1}$ for any $2\times 2$ matrices $A$ and $B$ with determinant $1$.

by Yu · Published 09/15/2017 · Last modified 01/16/2018

7 Problems on Skew-Symmetric Matrices
Let $A$ and $B$ be $n\times n$ skew-symmetric matrices. Namely $A^{\trans}=-A$ and $B^{\trans}=-B$.
(a) Prove that $A+B$ is skew-symmetric.
(b) Prove that $cA$ is skew-symmetric for any scalar $c$.
(c) Let $P$ be an $m\times n$ matrix. Prove that $P^{\trans}AP$ is skew-symmetric.
(d) Suppose that $A$ is real skew-symmetric. Prove that $iA$ is a Hermitian matrix.
(e) Prove that if $AB=-BA$, then $AB$ is a skew-symmetric matrix.
(f) Let $\mathbf{v}$ be an $n$-dimensional column vector. Prove that $\mathbf{v}^{\trans}A\mathbf{v}=0$.
(g) Suppose that $A$ is a real skew-symmetric matrix and $A^2\mathbf{v}=\mathbf{0}$ for some vector $\mathbf{v}\in \R^n$. Then prove that $A\mathbf{v}=\mathbf{0}$.

Determine a Condition on $a, b$ so that Vectors are Linearly Dependent
Let \[\mathbf{v}_1=\begin{bmatrix} 1 \\ \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} a \\ \end{bmatrix}\] be vectors in $\R^3$. Determine a condition on the scalars $a, b$ so that the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is linearly dependent.

Two Matrices are Nonsingular if and only if the Product is Nonsingular
An $n\times n$ matrix $A$ is called nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements.
(a) If $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
(b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then: The matrix $B$ is nonsingular. The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.)

A Singular Matrix and Matrix Equations $A\mathbf{x}=\mathbf{e}_i$ With Unit Vectors
Let $A$ be a singular $n\times n$ matrix. Let \[\mathbf{e}_1=\begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \mathbf{e}_2=\begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}, \dots, \mathbf{e}_n=\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}\] be unit vectors in $\R^n$. Prove that at least one of the matrix equations \[A\mathbf{x}=\mathbf{e}_i\] for $i=1,2,\dots, n$, must have no solution $\mathbf{x}\in \R^n$.

The Matrix $[A_1, \dots, A_{n-1}, A\mathbf{b}]$ is Always Singular, Where $A=[A_1,\dots, A_{n-1}]$ and $\mathbf{b}\in \R^{n-1}$.
Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector. Then the product $A\mathbf{b}$ is an $n$-dimensional vector. Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$. Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$.
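Before trying to prove the last statement, it can be reassuring to check it numerically; the following NumPy snippet (an illustration added here, not part of the original problem set and not a proof) tests a random example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(3)

# B = [A_1, A_2, A_3, A b]; the last column equals b_1*A_1 + b_2*A_2 + b_3*A_3,
# a linear combination of the first three columns, so B cannot have full rank.
B = np.column_stack([A, A @ b])
print(np.linalg.det(B))            # ~0 (up to floating-point round-off)
print(np.linalg.matrix_rank(B))    # 3, not 4
```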
Prove $\mathbf{x}^{\trans}A\mathbf{x} \geq 0$ and determine those $\mathbf{x}$ such that $\mathbf{x}^{\trans}A\mathbf{x}=0$
For each of the following matrices $A$, prove that $\mathbf{x}^{\trans}A\mathbf{x} \geq 0$ for all vectors $\mathbf{x}$ in $\R^2$. Also, determine those vectors $\mathbf{x}\in \R^2$ such that $\mathbf{x}^{\trans}A\mathbf{x}=0$.
(a) $A=\begin{bmatrix} 4 & 2\\ 2& 1 \end{bmatrix}$. (b) $A=\begin{bmatrix}

The Transpose of a Nonsingular Matrix is Nonsingular
Let $A$ be an $n\times n$ nonsingular matrix. Prove that the transpose matrix $A^{\trans}$ is also nonsingular.

If the Quotient is an Infinite Cyclic Group, then There Exists a Normal Subgroup of Index $n$
Let $N$ be a normal subgroup of a group $G$. Suppose that $G/N$ is an infinite cyclic group. Then prove that for each positive integer $n$, there exists a normal subgroup $H$ of $G$ of index $n$.

Construction of a Symmetric Matrix whose Inverse Matrix is Itself
Let $\mathbf{v}$ be a nonzero vector in $\R^n$. Then the dot product $\mathbf{v}\cdot \mathbf{v}=\mathbf{v}^{\trans}\mathbf{v}\neq 0$. Set $a:=\frac{2}{\mathbf{v}^{\trans}\mathbf{v}}$ and define the $n\times n$ matrix $A$ by \[A=I-a\mathbf{v}\mathbf{v}^{\trans},\] where $I$ is the $n\times n$ identity matrix. Prove that $A$ is a symmetric matrix and $AA=I$. Conclude that the inverse matrix is $A^{-1}=A$.

The Range and Null Space of the Zero Transformation of Vector Spaces
Let $U$ and $V$ be vector spaces over a scalar field $\F$. Define the map $T:U\to V$ by $T(\mathbf{u})=\mathbf{0}_V$ for each vector $\mathbf{u}\in U$.
(a) Prove that $T:U\to V$ is a linear transformation. (Hence, $T$ is called the zero transformation.)
(b) Determine the null space $\calN(T)$ and the range $\calR(T)$ of $T$.

If Generators $x, y$ Satisfy the Relation $xy^2=y^3x$, $yx^2=x^3y$, then the Group is Trivial
Let $x, y$ be generators of a group $G$ with relations \begin{align} xy^2&=y^3x, \tag{1}\\ yx^2&=x^3y. \tag{2} \end{align} Prove that $G$ is the trivial group.

Find the Inverse Linear Transformation if the Linear Transformation is an Isomorphism
Let $T:\R^3 \to \R^3$ be the linear transformation defined by the formula \[T\left(\, \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \,\right)=\begin{bmatrix} x_1+3x_2-2x_3 \\ 2x_1+3x_2 \\ x_2-x_3 \end{bmatrix}.\] Determine whether $T$ is an isomorphism and if so find the formula for the inverse linear transformation $T^{-1}$.

Find the Inverse Matrices if Matrices are Invertible by Elementary Row Operations
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if it exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix.
(a) $A=\begin{bmatrix} 1 & 3 & -2 \\ 2 &3 &0 \\ 0 & 1 & -1 \end{bmatrix}$ 1 & 0 & 2 \\ -1 &-3 &2 \\
Using the Wronskian for Exponential Functions, Determine Whether the Set is Linearly Independent
By calculating the Wronskian, determine whether the set of exponential functions \[\{e^x, e^{2x}, e^{3x}\}\] is linearly independent on the interval $[-1, 1]$.

A One Side Inverse Matrix is the Inverse Matrix: If $AB=I$, then $BA=I$
An $n\times n$ matrix $A$ is said to be invertible if there exists an $n\times n$ matrix $B$ such that $AB=I$, and $BA=I$, where $I$ is the $n\times n$ identity matrix. If such a matrix $B$ exists, then it is known to be unique and called the inverse matrix of $A$, denoted by $A^{-1}$. In this problem, we prove that if $B$ satisfies the first condition, then it automatically satisfies the second condition. So if we know $AB=I$, then we can conclude that $B=A^{-1}$.
Let $A$ and $B$ be $n\times n$ matrices. Suppose that we have $AB=I$, where $I$ is the $n \times n$ identity matrix. Prove that $BA=I$, and hence $A^{-1}=B$.

Inverse Matrix Contains Only Integers if and only if the Determinant is $\pm 1$
Let $A$ be an $n\times n$ nonsingular matrix with integer entries. Prove that the inverse matrix $A^{-1}$ contains only integer entries if and only if $\det(A)=\pm 1$.

Find Inverse Matrices Using Adjoint Matrices
Let $A$ be an $n\times n$ matrix. The $(i, j)$ cofactor $C_{ij}$ of $A$ is defined to be \[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column. Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\Adj(A)=C^{\trans}$. The matrix $\Adj(A)$ is called the adjoint matrix of $A$. When $A$ is invertible, then its inverse can be obtained by the formula \[A^{-1}=\frac{1}{\det(A)}\Adj(A).\] For each of the following matrices, determine whether it is invertible, and if so, then find the inverse matrix using the above formula. 0 &-1 &2 \\ 0 & 0 & 1 (b) $B=\begin{bmatrix}
Community and individual level determinants of infant mortality in rural Ethiopia using data from 2016 Ethiopian demographic and health survey

Setegn Muche Fenta, Girum Meseret Ayenew, Haile Mekonnen Fenta, Hailegebrael Birhan Biresaw & Kenaw Derebe Fentaw

Scientific Reports volume 12, Article number: 16879 (2022)

The infant mortality rate remains unacceptably high in sub-Saharan African countries. Ethiopia has one of the highest rates of infant death. This study aimed to identify individual- and community-level factors associated with infant death in the rural part of Ethiopia. The data for the study was obtained from the 2016 Ethiopian Demographic and Health Survey. A total of 8667 newborn children were included in the analysis. The multilevel logistic regression model was considered to identify the individual- and community-level factors associated with newborn mortality. The random effect model found that 87.68% of the variation in infant mortality was accounted for by individual- and community-level variables. Multiple births (AOR = 4.35; 95%CI: 2.18, 8.69), small birth size (AOR = 1.29; 95%CI: 1.10, 1.52), unvaccinated infants (AOR = 2.03; 95%CI: 1.75, 2.37), unprotected source of water (AOR = 1.40; 95%CI: 1.09, 1.80), and non-latrine facilities (AOR = 1.62; 95%CI: 1.20) were associated with a higher risk of infant mortality. In contrast, delivery in a health facility (AOR = 0.25; 95%CI: 0.19, 0.32), maternal age 35–49 years (AOR = 0.65; 95%CI: 0.49, 0.86), mothers receiving four or more TT injections during pregnancy (AOR = 0.043, 95% CI: 0.026, 0.071), and current breast feeders (AOR = 0.33; 95% CI: 0.26, 0.42) were associated with a lower risk of infant mortality. Furthermore, infant mortality rates were also higher in Afar, Amhara, Oromia, Somali, and Harari than in Tigray. Infant mortality in rural Ethiopia is higher than the national average.
The government and other concerned bodies should mainly focus on multiple births, unimproved breastfeeding culture, and the spacing between the orders of birth to reduce infant mortality. Furthermore, community-based outreach activities and public health interventions focused on improving the latrine facility and source of drinking water as well as the importance of health facility delivery and received TT injections during the pregnancy. The infant mortality rate is the most significant public health predictor, as it represents children's and communities' access to basic health measures such as vaccination, infectious disease care treatment, and proper nutrition1. The Sustainable Development Goals (SDG) child mortality objective seeks to bring an end to preventable deaths of newborns and children under-five years of age by 2030, with all countries aiming to reduce newborn mortality to at least as low as 12 deaths per 1000 live births and under-five mortality to at least as low as 25 deaths per 1000 live births. In 2018, 4.1 million children died worldwide in their first year of life2,3,4,5.More than one million children die in Africa alone before celebrating their first birthday. These values are approximately equivalent to 2,808 deaths per single day or about two deaths every minute. Since the Millennium Development Goals (MDGs) were adopted, the countries of Sub-Saharan Africa have achieved incredible success and increases in infant survival, but the infant mortality rate in Sub-Saharan Africa is still the highest in the global region2,3,4,5. Ethiopia has one of the highest infant mortality rates in the world5,6. In rural Ethiopia, the infant mortality rate is 62 deaths per 1000 live births. This infant mortality is substantially higher than the SDG targets of 12 deaths per 1000 live births5,7,8. The Ethiopian government is working to reduce child deaths and reforms have been made in recent years. In the rural part of Ethiopia, maternal complications during childbirth, immediate exclusive breast-feeding, birth interval, maternal socioeconomic characteristics, and health service seeking actions, etc., are still major challenges9,10. In order to prepare and enforce an initiative and take steps to address the burden of newborn deaths in the rural areas of Ethiopia, identification of the enumeration area specific factors on infant mortality is therefore necessary4,11. Previous studies have concluded that infants born in rural areas are more at risk of death than infants born in urban areas12,13,14,15,16,17. In Ethiopia, infant mortality in rural areas is a major challenge7,9. Understanding the causes of infant mortality in rural regions is crucial if we are to reduce Ethiopia's high infant mortality rate. Furthermore, the estimated infant mortality rate in this country was greater in rural areas (62 per 1000 live births) than in urban areas (54 per 1000 live births) and compared to the national average (48 per 1000 live births)7,18. Although various studies have been undertaken in Ethiopia to investigate infant death rates and risk factors16,17,19,20, little study has been conducted in rural Ethiopia. This lack of epidemiologic study limits our understanding of the determinant to prioritize for evidence-based programming in this high-risk region of infant mortality. Previous studies in Ethiopia on the risk factors for infant mortality were also institutionally focused21,22, and only looked at individual-level factors19,21,22. 
However, community-level variables such as the source of drinking water23, type of toilet facilities23, cluster (enumeration area)23,24, and region23,24 may all have an effect on infant mortality. Furthermore, the Ethiopian Demographic and Health Survey (EDHS) used a multistage cluster sampling procedure in which individuals were nested in clusters and infant mortality was correlated with these clusters7,18. This violates the assumption of independence, which may introduce a significant bias in programmatic implementation by implying that contextual variables are not taken into account in the study. For example, researchers discovered that geographic access to health care has an impact on infant mortality. Contextual variables, such as the region of respondents, enable researchers to investigate how a wide range of environmental factors may influence health and well-being25. To address this, we used a multilevel logistic regression model to examine variables associated with infant mortality at individual and community levels25,26. This study aimed to identify individual- and community-level factors associated with infant death in the rural parts of Ethiopia. Study design and setting This study was conducted in Ethiopia, which is the second-most populated country in Africa, after Nigeria, and is located in the Horn of Africa. Ethiopia has nine regional states (Tigray, Afar, Amhara, Oromiya, Somali, BenishangulGumuz, Southern Nations Nationalities and People (SNNP), Gambela, and Harari) as well as two city administrations (Addis Ababa and Dire Dawa). We used secondary data from the 2016 EDHS. Sampling and data measurements The 2016 EDHS employed stratified and cluster multistage sampling, with the goal of being representative at the regional and national levels in terms of appropriate demographic and health indicators. In the first stage, 645 clusters (202 in urban areas and 443 in rural areas) were selected using a probability proportional to cluster size and independent selection in each sampling stratum. In the second stage, random samples of 18,008 households were drawn from all identified EAs. A total of 15,683 women aged 15–49 were interviewed. Data was collected from 18 January to 27 June 2016. The sample size for EDHS was determined using a multistage sampling procedure that took into account sampling variation7. Study variables Outcome variables The response variable of this study was the status of infant mortality. Infant mortality is defined as the risk of a child dying between birth and their first birthday. This takes a binary outcome; so infant death is classified as either death (1 = if the infant died between birth and their first birthday) or alive (0 = if the infant was alive between birth and their first birthday). The possible predictor variables associated with infant mortality will be categorized as individual-level factors and community-level factors. These variables were chosen based on previous knowledge and existing literature12,13,14,15,16,19 (Table 1). Table 1 Description and measurement of individual- and community-level independent variables. Data management and analysis The variables were extracted from the BR dataset using SPSS software version 21, and then exported to the statistical software R version 3.5.3 for further analysis. Data were weighted after extraction using sampling weight (v005), main sampling unit (v023), and strata (v021) to account for unequal probability of selection and non-response. 
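As a rough illustration of that weighting step (a sketch added for clarity, not the authors' code: the file name and the infant_death column are hypothetical, while v005 and the division by 1,000,000 follow the standard DHS recode convention):

```python
import pandas as pd

# Hypothetical extract of the DHS birth-recode (BR) file; column names are assumptions
# except v005, which in standard DHS recode files holds the sample weight times 1,000,000.
df = pd.read_csv("edhs2016_br_extract.csv")
df["wt"] = df["v005"] / 1_000_000          # rescale to the actual sampling weight

# Weighted share of infants who died (infant_death: 1 = died, 0 = alive; assumed coding)
weighted_share = (df["infant_death"] * df["wt"]).sum() / df["wt"].sum()
print(f"Weighted proportion of infant deaths: {weighted_share:.4f}")
```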
Descriptive analysis was done and sample characteristics were presented in frequency and percentages to show the distribution of respondents by the selected variables. The data in EDHS were not flat and were collected using multistage stratified cluster sampling techniques. To draw valid inferences and conclusions, advanced statistical models such as hierarchical modelling, which consider independent variables measured at the individual and community levels, should be used to account for the clustering effect/dependency. A two-level multilevel logistic regression analysis was used to examine the effects of individual- and community-level characteristics on infant death and to determine the extent to which characteristics at the individual and community levels explain enumeration area variations in infant death in rural Ethiopia. The reason for using multilevel logistic regression model was to account for the hierarchical (correlated) structure of the data. The assumption is that infant and their households are nested within enumeration area (communities). This suggests that infant in households with similar characteristics can have different health outcomes when residing in different communities with different characteristics. The log of the probability of infant death was modeled using a two-level multilevel model as follows: $$ Log\left[ {\frac{{\pi_{ij} }}{{1 - \pi_{ij} }}} \right] = \beta_{0} + \beta_{1} X_{ij} + \beta_{2} Z_{ij} + u_{j} + e_{ij} $$ where, \(i\) and \(j\) are the level 1 (individual) and level 2 (community) units, respectively; \(X\) and \(Z\) refer to individual and community-level variables, respectively; \(\pi_{ij}\) is the probability of infant death for the ith women in the jth community; the \(\beta\) indicates the fixed coefficients. Whereas, \(\beta_{0}\) is the intercept-the effect on the probability of infant death in the absence of influence of predictors; and \(u_{j}\) showed the random effect (effect of the community on infant death for the jth community and \(e_{ij}\) showed random errors at the individual levels. By assuming each community had different intercept (\(\beta_{0}\)) and fixed coefficient (\(\beta\)), the hierarchical (clustered) data nature and the within and between community variations were taken into account. Four models were fitted to identify community and individual level factors associated with infant death. The first model (Model 1 or empty model) contained no explanatory variables, but was fitted to decompose the total variance into its individual- and community-level components. The second model (Model 2) considered only the individual-level variables in order to examine the individual-level effect. The third model (Model 3) considered only the community-level variables in order to examine the effect of community-level factors on infant death, independent of other factors. The fourth model (Model 4) is the full model that incorporated all the individual and community-level variables into the multilevel analysis. Fitting the final model involved two steps. First, stepwise logistic regression analysis was done to identify the key variables associated with infant death. Second, all the variables selected from the stepwise logistic regression models were incorporated into the multilevel modeling. For the result of fixed effect, odds ratio (ORs) with 95% confidence intervals (CIs) was used to declare statistical significance. The P-value ≤ 0.05 has been considered as statistically significant. 
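To make the model equation concrete, the following toy simulation (added for illustration, with made-up coefficients rather than the paper's estimates) generates data with exactly this two-level structure, where a community random intercept \(u_j\) shifts the log-odds of infant death for every infant in community \(j\):

```python
import numpy as np

rng = np.random.default_rng(1)
n_communities, n_per_community = 50, 40

beta0, beta1 = -2.0, 0.5                                 # hypothetical fixed effects
u = rng.normal(0.0, np.sqrt(0.8), size=n_communities)    # community intercepts, variance 0.8

x = rng.binomial(1, 0.3, size=(n_communities, n_per_community))  # a binary individual-level covariate
log_odds = beta0 + beta1 * x + u[:, None]                # the linear predictor of the model above
p = 1.0 / (1.0 + np.exp(-log_odds))                      # pi_ij, the probability of infant death
y = rng.binomial(1, p)                                   # simulated infant-death indicators

print(y.mean())                                          # overall simulated death proportion
```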
The measures of variation (random effects) were summarized using the intra-class correlation coefficient (ICC), the median odds ratio (MOR), and the proportional change in variance (PCV) to measure the variation between enumeration areas (clusters). The ICC measures the correlation between individuals within the same cluster, i.e., the share of the total variation in infant death attributable to differences between clusters, and it was calculated using the formula \( ICC = \frac{V_{A}}{V_{A} + \pi^{2}/3} = \frac{V_{A}}{V_{A} + 3.29} \), where \(V_{A}\) is the estimated cluster-level variance in each model, as described elsewhere27. The total variation attributed to individual- and/or community-level factors in each model was measured by the proportional change in variance (PCV), calculated as \( PCV = \frac{V_{A} - V_{B}}{V_{A}} \), where \(V_{A}\) is the variance of the initial model and \(V_{B}\) is the variance of the model with more terms25. The MOR is the median odds ratio between the individual of higher propensity and the individual of lower propensity when comparing two individuals from two different, randomly chosen clusters; it measures the unexplained cluster heterogeneity, that is, the variation between clusters. It was computed using the formula \( MOR = \exp\left(\sqrt{2 V_{A}} \times 0.6745\right) \approx \exp\left(0.95\sqrt{V_{A}}\right) \), where \(V_{A}\) is the cluster-level variance25,27. The MOR is always greater than or equal to 1. If the MOR is 1, there is no variation between clusters27. The generalized variance inflation factor (GVIF) was used to check for multicollinearity; the findings showed that there was no multicollinearity, because the GVIF for each variable was less than 5. Model comparison was done using the deviance information criterion (DIC), Akaike's information criterion (AIC), and the Bayesian information criterion (BIC). The model with the smallest value of the information criterion was selected as the final model of the analysis27.

Ethical issues
Publicly available EDHS 2016 data were used for this study. Informed consent was taken from each participant, and all identifiers were removed.

Confirmation of methods
The authors confirm that all methods were carried out in accordance with the relevant guidelines and regulations.

Socio-demographic and obstetric characteristics of the study respondents
The total number of infants who participated in this study was 8667. Of these, 6568 (73.8%) were born at home, and 4467 (51.1%) were males. 6509 (75.1%) of mothers were housewives and 6267 (72.3%) had no formal education. About 8439 (97.4%) infants were born as singletons and 5625 (64.9%) infants were born into low-income households. The majority, 7129 (82.3%), of mothers used improved sources of drinking water. About 5350 (61.7%) of mothers did not have antenatal care visits during pregnancy and 5857 (67.6%) of women did not receive a tetanus injection during pregnancy. 4675 (53.9%) of mothers had three or more ever-born children and only 458 (5.3%) infants were not breastfed at all (Table 2). Table 2 Socio-demographic and obstetric characteristics of the study respondents, EDHS 2016.

Factors associated with infant death
The results of the multilevel logistic regression model are summarized in Table 3. The model selection results indicated that Model IV fit the data better than the other, reduced models, since it had the smallest AIC, BIC, and deviance statistics.
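A small Python sketch of the three variance-partition measures defined in the methods above (ICC, PCV, MOR); the variance values used in the demo are placeholders, not estimates from this study.

```python
import math

def icc(cluster_var):
    """Intra-class correlation: share of total variance at the cluster level
    (individual-level variance fixed at pi^2/3 ~ 3.29 for a logistic model)."""
    return cluster_var / (cluster_var + math.pi**2 / 3)

def pcv(var_initial, var_larger_model):
    """Proportional change in variance between an initial model and a model with more terms."""
    return (var_initial - var_larger_model) / var_initial

def mor(cluster_var):
    """Median odds ratio: exp(sqrt(2*V_A)*0.6745), approximately exp(0.95*sqrt(V_A))."""
    return math.exp(math.sqrt(2 * cluster_var) * 0.6745)

# Illustrative cluster-level variances for an empty and a full model (placeholders).
v_empty, v_full = 1.6, 0.2
print(f"ICC = {icc(v_empty):.3f}")
print(f"PCV = {pcv(v_empty, v_full):.3f}")
print(f"MOR = {mor(v_empty):.2f}")
```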
The final selected model (Model IV) showed that antenatal care visits, preceding birth interval, number of TT injections, education level of the mother, family size, vaccination of the child, contraceptive use, child twin status, place of delivery, diarrhea status, size of the child at birth, marital status, number of living children, breastfeeding, region, water source, and latrine facility type had a statistically significant association with infant mortality (Tables 3 and 4). Table 3 Multilevel logistic regression analysis for risk factors of infant mortality in rural Ethiopia, 2016. Table 4 Measures of variation of individual- and community-level risk factors of infant death in rural Ethiopia, EDHS 2016 dataset.

Individual-level factors
The odds of infant death among mothers who had four or more ANC visits during pregnancy were lower (AOR = 0.787, 95% CI: 0.645, 0.961) than among mothers who had no ANC visits during pregnancy. A preceding birth interval of more than 2 years was associated with lower odds of infant death (AOR = 0.724, 95% CI: 0.624, 0.839) compared with a preceding birth interval of less than 2 years. The odds of infant death among multiple births were 4.35 (AOR = 4.350; 95% CI: 2.179, 8.685) times higher as compared to singletons. Infants born to mothers who had attained primary education had lower odds of death (AOR = 0.859; 95% CI: 0.739, 0.998) than infants whose mothers had no formal education. The odds of infant death among infants born at a health facility were lower (AOR = 0.249; 95% CI: 0.193, 0.321) than among infants born at home. Infants who had not received vaccination had a 2.033 (AOR = 2.033; 95% CI: 1.745, 2.370) times higher risk of death than infants who had received vaccination. The odds of infant death were 1.29 (AOR = 1.290; 95% CI: 1.096, 1.519) times higher in infants born with a small birth size compared to infants born with a large birth size. Compared to separated mothers, married mothers had lower odds of infant death (AOR = 0.670; 95% CI: 0.485, 0.925). The odds of infant death among mothers who received four or more TT injections during pregnancy were much lower (AOR = 0.043, 95% CI: 0.026, 0.071) than among mothers who did not receive a TT injection during pregnancy. Families with five or more members were 1.623 times (AOR = 1.623; 95% CI: 1.193, 2.206) more likely to lose an infant than families with four or fewer members. The risk of infant death was reduced by 67.1% (AOR = 0.329; 95% CI: 0.260, 0.418) in mothers who breastfed compared to mothers who did not breastfeed (Table 3).

Community-level factors
Infants living in the Afar (AOR = 2.564; 95% CI: 1.466, 4.487), Amhara (AOR = 3.326; 95% CI: 2.064, 5.361), Oromia (AOR = 12.070; 95% CI: 7.584, 19.21), Somali (AOR = 4.171; 95% CI: 2.501, 6.955), SNNPR (AOR = 4.083; 95% CI: 2.561, 6.509) and Harari (AOR = 7.067; 95% CI: 3.679, 13.575) regional states were more likely to die as compared to infants living in the Tigray regional state. Infants from households without access to a latrine had 62.1% (AOR = 1.621; 95% CI: 1.201, 2.187) higher odds of death compared with infants from households that had an improved latrine facility. The odds of infant death for women who used an unprotected source of water were increased by 40% (AOR = 1.400; 95% CI: 1.087, 1.802) compared to women who used protected water sources (Table 3).
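The percentage statements above are the usual transformation of an adjusted odds ratio into a relative change in odds; the short sketch below shows that arithmetic for two of the reported AORs (the AOR values are taken from the text, while the helper itself is only illustrative).

```python
def odds_change_percent(aor):
    """Relative change in the odds implied by an (adjusted) odds ratio."""
    return (aor - 1.0) * 100.0

for label, aor in [("breastfeeding vs. not breastfeeding", 0.329),
                   ("no latrine vs. improved latrine", 1.621)]:
    change = odds_change_percent(aor)
    direction = "lower" if change < 0 else "higher"
    print(f"{label}: AOR = {aor} -> {abs(change):.1f}% {direction} odds")
```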
Random effects (a measure of variation)
The results of the random-effects models are given in Table 4. Infant mortality rates varied across clusters (communities). The results of the null model (Model I) showed significant variation in infant mortality at the community level. The intra-class correlation (ICC) indicated that 32.93% of the total variation in infant mortality was attributable to differences between communities. After adding both the individual-level and community-level factors to the model (Model IV), there was still significant variation in infant mortality across communities or clusters. About 87.68% of the community-level variation in infant mortality was explained by the full model. Moreover, the MOR confirmed that infant mortality was partly attributable to community-level factors. In the null model, the MOR for infant mortality was 3.344; this showed that the heterogeneity between communities (clustering) was 3.344 times greater than the reference (MOR = 1). When all variables were included in the model, the unexplained community variation in infant deaths was reduced to an MOR of 2.04. This indicated that, even when all factors are considered, the effect of clustering remains statistically significant in the full model (Table 4).

This study aimed to investigate modifiable risk factors for infant mortality in rural Ethiopia. In addition, it explored the variation of rural infant mortality across enumeration areas, which has not been studied so far. From the 2016 EDHS data, 8667 rural infants nested in 443 clusters were included in the analysis. The infant mortality rate in rural Ethiopia in 2016 was 62 deaths per 1000 live births7. This death rate is higher than the 54 deaths per 1000 live births in urban Ethiopia7. This may be due to the prevalence in rural settings of weak infrastructure, low economic status, and restricted flow of information, where the level of risk is estimated to be high. The random-effects model showed that individual- and community-level factors together accounted for about 87.68% of the variation observed for infant mortality. This finding is in line with a study conducted in Ethiopia24. A lower risk of infant death was associated with living in a wealthy household. Previous studies in Ethiopia20, Bangladesh15, a rural district in Indonesia28 and Nigeria29 showed that infant mortality was negatively associated with household income. Infants from high-income families are better able to meet basic needs and services, including health care, quality of life, water quality, and sanitation25. Compared to short birth intervals, long birth intervals were associated with a lower risk of infant death, and the risk of infant death decreased as the preceding birth interval increased. This is supported by study findings in Bangladesh15, Ethiopia19,20,30 and Tanzania31. The reason behind this could be that shorter preceding birth intervals are linked to an increased risk of preterm birth, low birth weight, and IUGR for subsequent births. Furthermore, women have less time to recover from previous births and are less able to provide sustenance for their children, potentially increasing the risk of infant mortality. The study results showed that the odds of infant death among mothers who received TT injections during pregnancy were lower than among mothers who did not receive a TT injection during pregnancy. This finding is in agreement with studies done in Ethiopia11,17,25 and a rural district in Indonesia28. This could be due to the fact that TT injection produces protective antibodies against infant tetanus11.
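As a quick consistency check of the random-effect summaries reported above, the sketch below back-computes the null-model cluster variance implied by the reported ICC (32.93%) and re-derives the MOR from it using the formulas given in the methods; the variance itself is my derivation, not a number reported in the paper.

```python
import math

icc_null = 0.3293                           # ICC reported for the null model
v_a = 3.29 * icc_null / (1 - icc_null)      # invert ICC = V_A / (V_A + 3.29)
mor_exact = math.exp(0.6745 * math.sqrt(2 * v_a))
mor_approx = math.exp(0.95 * math.sqrt(v_a))

print(f"implied cluster variance V_A ~ {v_a:.3f}")
print(f"implied MOR ~ {mor_exact:.3f} (exact) / {mor_approx:.3f} (0.95 approximation); reported: 3.344")
```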
The mother's education level was an important socio-economic predictor of infant death. Educated mothers had lower infant mortality than uneducated mothers. This is similar to studies done in Bangladesh15, Ethiopia19,32, Nigeria29,33 and Brazil34, which found that the infant death rate decreased with an increase in the level of education of the mother. Educated mothers are more likely to be conscious of nutrition, to use contraceptives to space births, and to be aware of childhood diseases and care. This study found that ANC visits were a significant predictor of infant mortality. Women who did not have an ANC visit during pregnancy had a greater risk of infant death than those who did. This finding is consistent with studies conducted in Ethiopia11, a rural district in Indonesia28, Nepal35 and Pakistan36. This could be because antenatal care visits provide health benefits such as iron, folic acid, and tetanus vaccines, which may reduce the risk of infant mortality. Furthermore, ANC provides mothers and newborns with the opportunity to undergo various interventions such as anti-D, childhood vaccines, and nutritional supplementation37. Infant mortality was positively associated with multiple births. The odds of infant death among multiple births were higher as compared to singletons. This finding is in agreement with studies from Ethiopia19,20,24, which showed that the risk of infant death due to multiple births is very high. Multiple births are regarded as putting a strain on a family's finances, affecting the infant's nutrition and health care. Multiple births have also been related to a higher rate of negative perinatal outcomes, such as premature birth and low birth weight. Place of delivery was a significant predictor of infant mortality. Infants born at a health facility (health center) had a lower risk of death than infants born at home. This finding is in agreement with studies done in Ethiopia4, a rural district in Indonesia28 and Nigeria29. This could be explained by the fact that delivery at a health facility promotes the health of women and fetuses by lowering birth complications. Child vaccination was significantly correlated with infant mortality. Non-vaccinated infants had a greater risk of death than those who had been vaccinated. This is supported by other findings in Ethiopia17,38. The possible reason could be that vaccines can prevent infectious diseases that once killed or harmed many infants. Breastfeeding was found to be significantly associated with infant death. Infants who were currently breastfeeding had a lower chance of dying than non-breastfed infants. This is also consistent with previous research conducted in Ethiopia17,20 and Nepal39. This may be because breastfeeding may protect babies from infectious diseases, since breast milk is rich in antibodies and white cells. The results of this study also indicated that infant deaths are significantly affected by family size. The likelihood of infant death increased significantly as family size increased. Similar findings were also found in Ethiopia20. The potential explanation may be that when too many siblings reside at home, infant care may be insufficient and inappropriate. The study also revealed that the number of living children in the household is an important variable affecting infant mortality.
As the number of living children in the household increased, the risk of infant mortality increased significantly. Studies from Ethiopia17, Bangladesh15 and a rural district in Indonesia28 consistently reported that infant mortality increases with an increase in the number of living children. Separated women had a higher risk of infant mortality than married women. This finding is in agreement with studies from Rwanda40 and the United States41. This could be due to socioeconomic issues, cultural norms, and the lifestyle consequences of single motherhood. Similarly, infants born with a small birth size had a higher risk of infant mortality than infants born with a large birth size. This result is in line with previous findings in Ethiopia19,20, Bangladesh14,42 and Indonesia43. Poor nutritional status may have an impact on size at birth, which may in turn affect the risk of newborn mortality19. The study also revealed that infant mortality was influenced by geographic location. Infants born in the regional states of Afar, Amhara, Oromia, Somali, SNNPR, Benishangul-Gumuz, Gambela, Dire Dawa, and Harari were more likely to die than those born in Tigray. This is supported by other findings in Ethiopia24, Bangladesh13 and Nigeria29,33. The potential explanation may be regional disparities in socioeconomic status, health-care coverage, and other amenities. Women who drank from an unimproved source had a greater risk of infant death than women who drank from a safe and protected source. This finding is in line with studies done in Bangladesh15, Pakistan44 and Nigeria45. This could be because protected sources of drinking water are less likely to be polluted and therefore help prevent the spread of water-borne diseases such as cholera. Infants born into families without access to a latrine died at a higher rate than those born into families with better latrine facilities. This is supported by other findings in Pakistan44 and Nigeria45. This may be because access to modern sanitation facilities such as flush toilets reduces the prevalence of diarrhea and ultimately decreases infant deaths23.

Strengths and limitations
The EDHS is a nationally representative household survey with a high response rate, and the findings can be generalized to the national population. The study is based on national survey data that give policy makers and program managers insight into the implementation of effective intervention strategies at both the national and regional levels. In addition, this study applied multilevel modeling to accommodate the hierarchical nature of the EDHS data. Because of the cross-sectional nature of the data, it is difficult to measure causal effects, and it is not possible to know whether the data depend on time or not.

Infant mortality in rural Ethiopia is higher than the national average. This study has demonstrated the importance of both individual- and community-level factors in explaining enumeration-area variations in infant death. This study indicated that antenatal visits, preceding birth interval, number of TT injections, education level of the mother, family size, vaccination of the child, contraceptive use, child twin status, place of delivery, diarrhea status, size of the child at birth, marital status, number of living children, breastfeeding, region, water source, and latrine facility type had a statistically significant association with infant death.
The findings suggest that the government and other stakeholders should mainly focus on multiple births, unimproved breastfeeding culture, and the spacing between the orders of birth to reduce infant mortality. Community-based outreach activities and public health interventions focused on improving the latrine facility and source of drinking water as well as the importance of health facility delivery and received TT injections during the pregnancy. The findings of this study may provide a national perspective on the factors that contribute to infant mortality in rural Ethiopia. Finally, we advised policymakers and governments to prioritize community-level factors over individual factors in order to meet the SDG goals and targets by the end of 2030. This study used EDHS 2016 child data set and extracted the outcome and explanatory variables. Data is publicly available online from (https://dhsprogram.com/Data/). Correspondence and requests for data and materials should be addressed to S.M. AIC: Akaike's information criterion AOR: Adjusted odds ratio CSA: Central Statistical Agency DIC: Deviance information criterion EAs: Enumeration areas EDHS: Ethiopian demographic and health survey ICC: Intra-cluster correlation Median odds ratio PCV: Proportional change in variance SNNPR: Southern Nations, Nationalities, and People Region TT: Estimation, U.N.I.G.f.C.M., Levels & trends in child mortality: report 2017: estimates developed by the UN Inter-Agency Group for Child Mortality Estimation. 2017: United Nations Children's Fund. Estimation, U.N.I.-a.G.f.C.M., et al., Levels & Trends in Child Mortality: Report 2018, Estimates Developed by the. 2018: United Nations Children's Fund. Organization, W.H., World health statistics overview 2019: monitoring health for the SDGs, sustainable development goals. 2019, World Health Organization. Fenta, S. M., Fenta, H. M. & Ayenew, G. M. The best statistical model to estimate predictors of under-five mortality in Ethiopia. J. Big Data 7(1), 1–14 (2020). IGME, U., Levels & Trends in Child Mortality: Report,. Estimates Developed by the UN Inter-Agency Group for Child Mortality Estimation 2017 (United Nations Children's Fund, 2017). Zimmerman, L. A. et al. Evaluating consistency of recall of maternal and newborn care complications and intervention coverage using PMA panel data in SNNPR, Ethiopia. PLoS ONE 14(5), e0216612 (2019). EDHS, Central Statistical Agency: Ethiopia Demographic and Health Survey https://dhsprogram.com/publications/publication-fr328-dhs-final-reports.cfm. 2016. Ruducha, J. et al. How Ethiopia achieved millennium development goal 4 through multisectoral interventions: a countdown to 2015 case study. Lancet Glob. Health 5(11), e1142–e1151 (2017). Gebresilassie, Y., Nyatanga, P. & Gebreselassie, M. Determinants of rural–urban differentials in under-five child mortality in ethiopia. Eur. J. Dev. Res. 33(3), 710–734 (2021). Yalew, M. et al. Time to under-five mortality and its predictors in rural Ethiopia: Cox-gamma shared frailty model. PLoS ONE 17(4), e0266595 (2022). Fenta, S. M. & Fenta, H. M. Risk factors of child mortality in Ethiopia: application of multilevel two-part model. PLoS ONE 15(8), e0237640 (2020). Naz, L. and K.K. Patel, Determinants of infant mortality in Sierra Leone: applying Cox proportional hazards model. Int. J. Soc. Econ. (2020). Nilima, S., Sultana, R. & Ireen, S. Neonatal, infant and under-five mortality: An application of cox proportional Hazard model to BDHS data. J. Asiat. Soc. Bangladesh, Sci. 44(1), 7–14 (2018). 
Vijay, J. & Patel, K. K. Risk factors of infant mortality in Bangladesh. Clin. Epidemiol. Glob. Health 8(1), 211–214 (2020). Rahman, A., et al. Machine Learning Algorithm for Analysing Infant Mortality in Bangladesh. In International Conference on Health Information Science. 2021. Springer. Baraki, A. G. et al. Factors affecting infant mortality in the general population: Evidence from the 2016 Ethiopian demographic and health survey (EDHS); a multilevel analysis. BMC Pregnancy Childbirth 20(1), 1–8 (2020). Fentaw, K. D. et al. Factors associated with post-neonatal mortality in Ethiopia: Using the 2019 Ethiopia mini demographic and health survey. PLoS ONE 17(7), e0272016 (2022). Aalemi, A. K., Shahpar, K. & Mubarak, M. Y. Factors influencing vaccination coverage among children age 12–23 months in Afghanistan: Analysis of the 2015 demographic and health survey. PLoS ONE 15(8), e0236955 (2020). Abate, M. G., Angaw, D. A. & Shaweno, T. Proximate determinants of infant mortality in Ethiopia, 2016 Ethiopian demographic and health surveys: results from a survival analysis. Arch. Public Health 78(1), 1–10 (2020). Mulugeta, S. S. et al. Multilevel log linear model to estimate the risk factors associated with infant mortality in Ethiopia: Further analysis of 2016 EDHS. BMC Pregnancy Childbirth 22(1), 1–11 (2022). Eyeberu, A. et al. Neonatal mortality among neonates admitted to NICU of Hiwot Fana specialized university hospital, eastern Ethiopia, 2020: a cross-sectional study design. BMC Pediatr. 21(1), 1–9 (2021). Hadgu, F. B. et al. Prevalence and factors associated with neonatal mortality at ayder comprehensive specialized hospital, Northern Ethiopia: A cross-sectional study. Pediatric health, med. therapeutics 11, 29 (2020). Fenta, S. M., Biresaw, H. B. & Fentaw, K. D. Risk factor of neonatal mortality in Ethiopia: Multilevel analysis of 2016 demographic and health survey. Trop. Med. Health 49(1), 1–11 (2021). Baraki, A. G. et al. Factors affecting infant mortality in the general population: evidence from the 2016 Ethiopian demographic and health survey (EDHS): A multilevel analysis. BMC Pregnancy Childbirth 20, 1–8 (2020). Kiross, G. T. et al. Individual-, household-and community-level determinants of infant mortality in Ethiopia. PLoS ONE 16(3), e0248501 (2021). Tesema, G. A. & Worku, M. G. Individual-and community-level determinants of neonatal mortality in the emerging regions of Ethiopia: A multilevel mixed-effect analysis. BMC Pregnancy Childbirth 21(1), 1–11 (2021). Austin, P. C. et al. Measures of clustering and heterogeneity in multilevel P oisson regression analyses of rates/count data. Stat. Med. 37(4), 572–589 (2018). Article MathSciNet PubMed Google Scholar Rahayu, S. & Muhaimin, T. inadequate antenatal care visits and risks of infant mortality in rural district. Media Publ. Promosi Kesehat. Indones. (MPPKI) 5(7), 819–823 (2022). Kunnuji, M. et al. Background predictors of time to death in infancy: evidence from a survival analysis of the 2018 Nigeria DHS data. BMC Public Health 22(1), 1–8 (2022). Tesema, G. A. et al. Trends of infant mortality and its determinants in Ethiopia: mixed-effect binary logistic regression and multivariate decomposition analysis. BMC Pregnancy Childbirth 21(1), 1–16 (2021). Ogbo, F. A. et al. Determinants of trends in neonatal, post-neonatal, infant, child and under-five mortalities in Tanzania from 2004 to 2016. BMC Public Health 19(1), 1–12 (2019). Kiross, G. T. et al. 
The effect of maternal education on infant mortality in Ethiopia: A systematic review and meta-analysis. PLoS ONE 14(7), e0220076 (2019). Yaya, S. et al. Prevalence and determinants of childhood mortality in Nigeria. BMC Public Health 17(1), 1–7 (2017). Anele, C. R. et al. The influence of the municipal human development index and maternal education on infant mortality: An investigation in a retrospective cohort study in the extreme south of Brazil. BMC Public Health 21(1), 1–12 (2021). Lamichhane, R. et al. Factors associated with infant mortality in Nepal: A comparative analysis of Nepal demographic and health surveys (NDHS) 2006 and 2011. BMC Public Health 17(1), 1–18 (2017). Article MathSciNet Google Scholar Patel, K. K., Rai, R. & Rai, A. K. Determinants of infant mortality in Pakistan: Evidence from Pakistan demographic and health survey 2017–18. J. Public Health 29(3), 693–701 (2021). Antehunegn, G. & Worku, M. G. Individual-and community-level determinants of neonatal mortality in the emerging regions of Ethiopia: A multilevel mixed-effect analysis. BMC Pregnancy Childbirth 21(1), 1–11 (2021). Gebremichael, S.G. and S.M. Fenta, Factors associated with U5M in the afar region of Ethiopia. Adv. Public Health (2020). Lamichhane, R. et al. Factors associated with infant mortality in Nepal: A comparative analysis of Nepal demographic and health surveys (NDHS) 2006 and 2011. BMC Public Health 17(1), 53 (2017). Article MathSciNet PubMed PubMed Central Google Scholar Mfateneza, E. et al. Application of machine learning methods for predicting infant mortality in Rwanda: Analysis of rwanda demographic health survey 2014–15 dataset. BMC Pregnancy Childbirth 22(1), 1–13 (2022). Orischak, M. et al. Social determinants of infant mortality amongst births to non-hispanic black women. Am. J. Obstet. Gynecol. 226(1), S706 (2022). Razzaque, A. et al. Levels, trends and socio-demographic determinants of infant and under-five mortalities in and around slum areas of Dhaka city Bangladesh. SSM-popul. health 17, 101033 (2022). Wardani, Y., Huang, Y.-L. & Chuang, Y.-C. Factors associated with infant deaths in Indonesia: An analysis of the 2012 and 2017 Indonesia demographic and health surveys. J. Trop. Pediatr. 68(5), p.fmac065 (2022). Asif, M. F. et al. Socio-economic determinants of child mortality in Pakistan and the moderating role of household's wealth index. BMC Pediatr. 22(1), 1–8 (2022). Eke, D. O. & Ewere, F. Levels, trends and determinants of infant mortality in Nigeria: An analysis using the logistic regression model. Earthline J. Math. Sci. 8(1), 17–40 (2022). The authors acknowledge the ICF international for granting access to use the 2016 EDHS data set for this study. The authors received no financial support for the research, authorship, and/or publication of this article. Department of Statistics, Faculty of Natural and Computational Sciences, Debre Tabor University, Debre Tabor, Ethiopia Setegn Muche Fenta, Hailegebrael Birhan Biresaw & Kenaw Derebe Fentaw Research and Technology Transfer Directorate, Amhara Public Health Institute, P.O. Box 477, Bahir Dar, Ethiopia Girum Meseret Ayenew Department of Statistics, College of Science, Bahir DarUniversity, Bahir Dar, Ethiopia Haile Mekonnen Fenta Setegn Muche Fenta Hailegebrael Birhan Biresaw Kenaw Derebe Fentaw S.M. had substantial contributions to the conception and design of this research, involved in the analysis and interpretation of data, and drafted the article. G.M., H.M., H.B. and K.D. designed the study and revised the article. 
All authors read and approved the final article. Correspondence to Setegn Muche Fenta. Fenta, S.M., Ayenew, G.M., Fenta, H.M. et al. Community and individual level determinants of infant mortality in rural Ethiopia using data from 2016 Ethiopian demographic and health survey. Sci Rep 12, 16879 (2022). https://doi.org/10.1038/s41598-022-21438-3
A short proof that B(L_1) is not amenable
Yemon Choi
Proceedings of the Royal Society of Edinburgh: Section A Mathematics
Preprint: 2009.04028v3
Keywords: Amenable Banach algebra, Banach spaces, operator ideals
Non-amenability of ${\mathcal B}(E)$ has been surprisingly difficult to prove for the classical Banach spaces, but is now known for $E= \ell_p$ and $E=L_p$ for all $1\leq p<\infty$. However, the arguments are rather indirect: the proof for $L_1$ goes via non-amenability of $\ell^\infty({\mathcal K}(\ell_1))$ and a transference principle developed by Daws and Runde (Studia Math., 2010). In this note, we provide a short proof that ${\mathcal B}(L_1)$ and some of its subalgebras are non-amenable, which completely bypasses all of this machinery. Our approach is based on classical properties of the ideal of representable operators on $L_1$, and shows that ${\mathcal B}(L_1)$ is not even approximately amenable.
https://doi.org/10.1017/prm.2020.79
Rights statement: The final, definitive version of this article has been published in Proceedings of the Royal Society of Edinburgh Section A: Mathematics, 151 (6), pp. 1758-1767, 2021, © 2020 Cambridge University Press (https://www.cambridge.org/core/journals/proceedings-of-the-royal-society-of-edinburgh-section-a-mathematics/article/short-proof-that-l1-is-not-amenable/5EB9501F273D24ABD38EDEAB8B9433AD). The published version is available under a CC BY-NC-ND 4.0 license; the submitted manuscript is available under a CC BY-NC 4.0 license.
Related activity: An introduction to non-amenability of B(E) (invited talk)
Solvability of the free boundary value problem of the Navier-Stokes equations On the Allen-Cahn equation in the Grushin plane: A monotone entire solution that is not one-dimensional July 2011, 29(3): 803-822. doi: 10.3934/dcds.2011.29.803 On integrable codimension one Anosov actions of $\RR^k$ Thierry Barbot 1, and Carlos Maquera 2, Université d'Avignon et des pays de Vaucluse, LANLG, Faculté des Sciences, 33 rue Louis Pasteur, 84000 Avignon, France Universidade de São Paulo - São Carlos, Instituto de Ciências Matemáticas e de Computação, Av. do Trabalhador São-Carlense 400, 13560-970 São Carlos, SP, Brazil Received January 2010 Revised June 2010 Published November 2010 In this paper, we consider codimension one Anosov actions of $\RR^k,\ k\geq 1,$ on closed connected orientable manifolds of dimension $n+k$ with $n\geq 3$. We show that the fundamental group of the ambient manifold is solvable if and only if the weak foliation of codimension one is transversely affine. We also study the situation where one $1$-parameter subgroup of $\RR^k$ admits a cross-section, and compare this to the case where the whole action is transverse to a fibration over a manifold of dimension $n$. As a byproduct, generalizing a Theorem by Ghys in the case $k=1$, we show that, under some assumptions about the smoothness of the sub-bundle $E^{ss}\oplus E$uu , and in the case where the action preserves the volume, it is topologically equivalent to a suspension of a linear Anosov action of $\mathbb{Z}^k$ on $\TT^{n}$. Keywords: Verjovsky conjecture., Anosov action. Mathematics Subject Classification: Primary: 37C8. Citation: Thierry Barbot, Carlos Maquera. On integrable codimension one Anosov actions of $\RR^k$. Discrete & Continuous Dynamical Systems - A, 2011, 29 (3) : 803-822. doi: 10.3934/dcds.2011.29.803 T. Barbot, Actions de groupes sur les $1$-variétés non séparées et feuilletages de codimension un,, Ann. Fac. Sciences Toulouse, 7 (1998), 559. Google Scholar T. Barbot and C. Maquera, Transitivity of codimension one Anosov actions of $\RR^k$ on closed manifolds,, to appear in Ergod. Th. & Dyn. Syst., (2010). Google Scholar C. Bonatti and R. Langevin, Un exemple de flot d'Anosov transitif transverse à un tore et non conjugué à une suspension,, Ergod. Th. & Dyn. Sys., (1994), 633. doi: 10.1017/S0143385700008099. Google Scholar M. I. Brin, Topological transitivity of one class of dynamical systems, and flows of frames on manifolds of negative curvature,, Funcional. Anal. i Prilozen, 9 (1975), 9. Google Scholar M. I. Brin, The topology of group extensions of C-systems,, Math. Zametki, 3 (1975), 453. Google Scholar J. Franks, Anosov diffeomorphisms,, in, 14 (1970), 61. Google Scholar E. Ghys, Codimension one Anosov flows and suspensions,, Lecture Notes in Math. \textbf{1331}, 1331 (1988), 59. Google Scholar E. Ghys, Groups acting on the circle,, Enseign. Math. (2), 47 (2001), 329. Google Scholar A. Haefliger and G. Reeb, Variétés (non séparées) à une dimension et structures feuilletées du plan,, Enseign. Math., 3 (1957), 107. Google Scholar G. Hector and U. Hirsch, "Introduction to the Geometry of the Foliations. Part B,", Aspects of Mathematics, (1981). Google Scholar M. Hirsch, C. Pugh and M. Shub, Invariant manifolds,, Lecture Notes \textbf{583}, 583 (1977). Google Scholar B. Kalinin and R. Spatzier, On the classification of Cartan actions,, G.A.F.A. Geom. Funct. Anal., 17 (2007), 468. doi: 10.1007/s00039-007-0602-2. Google Scholar A. Katok and R.J. 
Spatzier, First cohomology of Anosov actions of higher rank abelian groups and applications to rigidity,, Inst. Hautes Études Sci. Publ. Math., 79 (1994), 131. doi: 10.1007/BF02698888. Google Scholar A. Katok and J. Lewis, Global rigidity results for lattice actions on tori and new examples of volume-preserving actions,, Israel J. Math., 93 (1996), 253. doi: 10.1007/BF02761106. Google Scholar S. Matsumoto, Codimension one foliations on solvable manifolds,, Comment. Math. Helv., 68 (1993), 633. doi: 10.1007/BF02565839. Google Scholar S. E. Newhouse, On codimension one Anosov diffeomorphisms,, Amer. J. Math., 92 (1970), 761. doi: 10.2307/2373372. Google Scholar J. Plante, Anosov flows,, Amer. J. Math., 94 (1972), 729. doi: 10.2307/2373755. Google Scholar J. Plante, Anosov flows, transversely affine foliations, and a conjecture of Verjovsky,, J. London Math. Soc. (2), 23 (1981), 359. doi: 10.1112/jlms/s2-23.2.359. Google Scholar C. Pugh and M. Shub, Ergodicity of Anosov actions,, Invent. Math., 15 (1972), 1. doi: 10.1007/BF01418639. Google Scholar S. Schwartzman, Asymptotic cycles,, Ann. of Math., 66 (1957), 270. doi: 10.2307/1969999. Google Scholar S. Simic, Codimension one Anosov flows and a conjecture of Verjovsky,, Ergodic Thery Dynam. Systems, 17 (1997), 1211. Google Scholar A. Verjovsky, Codimension one Anosov flows,, Bol. Soc. Mat. Mexicana, 19 (1974), 49. Google Scholar C. T. C. Wall, Surgery on compact manifolds,, in, (1970). Google Scholar João P. Almeida, Albert M. Fisher, Alberto Adrego Pinto, David A. Rand. Anosov diffeomorphisms. Conference Publications, 2013, 2013 (special) : 837-845. doi: 10.3934/proc.2013.2013.837 Brandon Seward. Every action of a nonamenable group is the factor of a small action. Journal of Modern Dynamics, 2014, 8 (2) : 251-270. doi: 10.3934/jmd.2014.8.251 Michael Hutchings. Mean action and the Calabi invariant. Journal of Modern Dynamics, 2016, 10: 511-539. doi: 10.3934/jmd.2016.10.511 Meera G. Mainkar, Cynthia E. Will. Examples of Anosov Lie algebras. Discrete & Continuous Dynamical Systems - A, 2007, 18 (1) : 39-52. doi: 10.3934/dcds.2007.18.39 Helmut Kröger. From quantum action to quantum chaos. Conference Publications, 2003, 2003 (Special) : 492-500. doi: 10.3934/proc.2003.2003.492 Alex Eskin, Gregory Margulis and Shahar Mozes. On a quantitative version of the Oppenheim conjecture. Electronic Research Announcements, 1995, 1: 124-130. Uri Shapira. On a generalization of Littlewood's conjecture. Journal of Modern Dynamics, 2009, 3 (3) : 457-477. doi: 10.3934/jmd.2009.3.457 Michael Hutchings, Frank Morgan, Manuel Ritore and Antonio Ros. Proof of the double bubble conjecture. Electronic Research Announcements, 2000, 6: 45-49. Vitali Kapovitch, Anton Petrunin, Wilderich Tuschmann. On the torsion in the center conjecture. Electronic Research Announcements, 2018, 25: 27-35. doi: 10.3934/era.2018.25.004 G. A. Swarup. On the cut point conjecture. Electronic Research Announcements, 1996, 2: 98-100. Janos Kollar. The Nash conjecture for threefolds. Electronic Research Announcements, 1998, 4: 63-73. Roman Shvydkoy. Lectures on the Onsager conjecture. Discrete & Continuous Dynamical Systems - S, 2010, 3 (3) : 473-496. doi: 10.3934/dcdss.2010.3.473 Joel Hass, Michael Hutchings and Roger Schlafly. The double bubble conjecture. Electronic Research Announcements, 1995, 1: 98-102. Tracy L. Payne. Anosov automorphisms of nilpotent Lie algebras. Journal of Modern Dynamics, 2009, 3 (1) : 121-158. doi: 10.3934/jmd.2009.3.121 Gareth Ainsworth. 
The magnetic ray transform on Anosov surfaces. Discrete & Continuous Dynamical Systems - A, 2015, 35 (5) : 1801-1816. doi: 10.3934/dcds.2015.35.1801 Yong Fang. Thermodynamic invariants of Anosov flows and rigidity. Discrete & Continuous Dynamical Systems - A, 2009, 24 (4) : 1185-1204. doi: 10.3934/dcds.2009.24.1185 S. A. Krat. On pairs of metrics invariant under a cocompact action of a group. Electronic Research Announcements, 2001, 7: 79-86. Alexandre Rocha, Mário Jorge Dias Carneiro. A dynamical condition for differentiability of Mather's average action. Journal of Geometric Mechanics, 2014, 6 (4) : 549-566. doi: 10.3934/jgm.2014.6.549 K. H. Kim and F. W. Roush. The Williams conjecture is false for irreducible subshifts. Electronic Research Announcements, 1997, 3: 105-109. Yakov Varshavsky. A proof of a generalization of Deligne's conjecture. Electronic Research Announcements, 2005, 11: 78-88. Thierry Barbot Carlos Maquera
The role of structural viscoelasticity in deformable porous media with incompressible constituents: Applications in biomechanics MBE Home Influence of Allee effect in prey populations on the dynamics of two-prey-one-predator model August 2018, 15(4): 905-932. doi: 10.3934/mbe.2018041 EEG in neonates: Forward modeling and sensitivity analysis with respect to variations of the conductivity Hamed Azizollahi 1, , Marion Darbas 2,, , Mohamadou M. Diallo 2, , Abdellatif El Badia 3, and Stephanie Lohrengel 4, GRAMFC INSERM U1105, Department of Medicine, Amiens University Hospital, 80000 Amiens, France Laboratoire Amiénois de Mathématique Fondamentale et Appliquée, CNRS UMR 7352, Université de Picardie Jules Verne, 80039 Amiens, France Laboratoire de Mathématiques Appliquées de Compiègne, Sorbonne Université, Université de Technologie de Compiègne, 60205 Compiègne, France Laboratoire de Mathématiques de Reims, EA4535, Université de Reims Champagne-Ardenne, 51687 Reims cedex 2, France * Corresponding authorr: [email protected]. Received July 20, 2017 Accepted October 13, 2017 Published March 2018 Figure(12) / Table(2) The paper is devoted to the analysis of electroencephalography (EEG) in neonates. The goal is to investigate the impact of fontanels on EEG measurements, i.e. on the values of the electric potential on the scalp. In order to answer this clinical issue, a complete mathematical study (modeling, existence and uniqueness result, realistic simulations) is carried out. A model for the forward problem in EEG source localization is proposed. The model is able to take into account the presence and ossification process of fontanels which are characterized by a variable conductivity. From a mathematical point of view, the model consists in solving an elliptic problem with a singular source term in an inhomogeneous medium. A subtraction approach is used to deal with the singularity in the source term, and existence and uniqueness results are proved for the continuous problem. Discretization is performed with 3D Finite Elements of type P1 and error estimates are proved in the energy norm ($H^1$-norm). Numerical simulations for a three-layer spherical model as well as for a realistic neonatal head model including or not the fontanels have been obtained and corroborate the theoretical results. A mathematical tool related to the concept of Gâteau derivatives is introduced which is able to measure the sensitivity of the electric potential with respect to small variations in the fontanel conductivity. This study attests that the presence of fontanels in neonates does have an impact on EEG measurements. Keywords: Electroencephalography in neonates, dipole sources, finite elements, sensitivity analysis, simulations for realistic head models. Mathematics Subject Classification: Primary: 92C50, 65N30, 49K40, 35B30; Secondary: 65N12. Citation: Hamed Azizollahi, Marion Darbas, Mohamadou M. Diallo, Abdellatif El Badia, Stephanie Lohrengel. EEG in neonates: Forward modeling and sensitivity analysis with respect to variations of the conductivity. Mathematical Biosciences & Engineering, 2018, 15 (4) : 905-932. doi: 10.3934/mbe.2018041 Z. Akalin Acar and S. Makeig, Effects of Forward Model Errors on EEG Source Localization, Brain Topogrography, 26 (2013), 378-396. Google Scholar A. Alonso-Rodriguez, J. Camano, R. Rodriguez and A. Valli, Assessment of two approximation methods for the inverse problem of electroencephalography, Int. J. of Numerical Analysis and Modeling, 13 (2016), 587-609. Google Scholar H. 
Azizollahi, A. Aarabi and F. Wallois, Effects of uncertainty in head tissue conductivity and complexity on EEG forward modeling in neonates, Hum. Brain Ma, 37 (2016), 3604-3622. doi: 10.1002/hbm.23263. Google Scholar H. T. Banks, D. Rubio and N. Saintier, Optimal design for parameter estimation in EEG problems in a 3D multilayered domain, Mathematical Biosciences and Engineering, 12 (2015), 739-760. doi: 10.3934/mbe.2015.12.739. Google Scholar M. Bauer, S. Pursiainen, J. Vorwerk, H. Köstler and C. H. Wolters, Comparison Study for Whitney (Raviart-Thomas)-Type Source Models in Finite-Element-Method-Based EEG Forward Modeling, IEEE Trans. Biomed. Eng., 62 (2015), 2648-2656. doi: 10.1109/TBME.2015.2439282. Google Scholar J. Borggaard and V. L. Nunes, Fréchet Sensitivity Analysis for Partial Differential Equations with Distributed Parameters, American Control Conference, San Francisco, 2011. Google Scholar H. Brezis, Functional Analysis, Sobolev Spaces And Partial Differential Equations, Universitext. Springer, New York, 2011. Google Scholar P. G. Ciarlet, The Finite Element Method for Elliptic Problems, North Holland, New York, 1978. Google Scholar M. Clerc and J. Kybic, Cortical mapping by Laplace-Cauchy transmission using a boundary element method, Journal on Inverse Problems, 23 (2007), 2589-2601. doi: 10.1088/0266-5611/23/6/020. Google Scholar M. Clerc, J. Leblond, J. -P. Marmorat and T. Papadopoulo, Source localization using rational approximation on plane sections, Inverse Problems, 28 (2012), 055018, 24 pp. Google Scholar M. Darbas, M. M. Diallo, A. El Badia and S. Lohrengel, An inverse dipole source problem in inhomogeneous media: application to the EEG source localization in neonates, in preparation. Google Scholar A. El Badia and T. Ha Duong, An inverse source problem in potential analysis, Inverse Problems, 16 (2000), 651-663. doi: 10.1088/0266-5611/16/3/308. Google Scholar A. El Badia and M. Farah, Identification of dipole sources in an elliptic equation from boundary measurements, J. Inv. Ill-Posed Problems, 14 (2006), 331-353. doi: 10.1515/156939406777571012. Google Scholar A. El Badia and M. Farah, A stable recovering of dipole sources from partial boundary measurements, Inverse Problems, 26 (2010), 115006, 24pp. Google Scholar Q. Fang and D. A. Boas, Tetrahedral mesh generation from volumetric binary and grayscale images, EEE International Symposium on Biomedical Imaging: From Nano to Macro, (2009), SBI? 09. Boston, Massachusetts, USA, 1142–1145. Google Scholar M. Farah, Problémes Inverses de Sources et Lien avec l'Electro-encéphalo-graphie, Thése de doctorat, Université de Technologie de Compiégne, 2007. Google Scholar O. Faugeras, F. Clément, R. Deriche, R. Keriven, T. Papadopoulo, J. Roberts, T. Viéville, F. Devernay, J. Gomes, G. Hermosillo, P. Kornprobst and D. Lingrand, The Inverse EEG and MEG Problems: The Adjoint State Approach I: The Continuous Case ,Rapport de recherche, 1999. Google Scholar P. Gargiulo, P. Belfiore, E. A. Friogeirsson, S. Vanhalato and C. Ramon, The effect of fontanel on scalp EEG potentials in the neonate, Clin. Neurophysiol, 126 (2015), 1703-1710. doi: 10.1016/j.clinph.2014.12.002. Google Scholar D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order Springer, Berlin, 1977. Google Scholar R. Grech, T. Cassar, J. Muscat, K. P. Camilleri, S. G. Fabri, M. Zervakis, P. Xanthopoulos, V. Sakkalis and B. Vanrumste, Review on solving the inverse problem in EEG source analysis J. NeuorEng. Rehabil. , 5 (2008). 
doi: 10.1186/1743-0003-5-25. Google Scholar H. Hallez, B. Vanrumste, R. Grech, J. Muscat, W. De Clercq, A. Vergult, Y. d'Asseler, K. P. Camilleri, S. G. Fabri, S. Van Huffel and I. Lemahieu, Review on solving the forward problem in EEG source analysis J. NeuorEng. Rehabil., 4 (2007). doi: 10.1186/1743-0003-4-46. Google Scholar M. Hämäläinen, R. Hari, J. Ilmoniemi, J. Knuutila and O. V. Lounasmaa, Magnetoencephalography-theory, instrumentation, and applications to noninvasive studies of the working human brain, Rev. Mod. Phys., 65 (1993), 413-497. Google Scholar F. Hecht, O. Pironneau, A. Le Hyaric and K. Ohtsuka, FreeFem++ Manual, 2014. Google Scholar E. Hernández and R. Rodríguez, Finite element Approximation of Spectral Problems with Neumann Boundary Conditions on curved domains, Math. Comp., 72 (2002), 1099-1115. Google Scholar R. Kress, Linear Integral Equations, Second edition, Applied Mathematical Sciences 82, Spinger-Verlag, 1999. Google Scholar J. Kybic, M. Clerc, T. Abboud, O. Faugeras, R. Keriven and T. Papadopoulo, A common formalism for the integral formulations of the forward EEG Problem, IEEE Transactions on Medical Imaging, 24 (2005), 12-28. doi: 10.1109/TMI.2004.837363. Google Scholar J. Kybic, M. Clerc, O. Faugeras, R. Keriven and T. Papadopoulo, Fast multipole acceleration of the MEG/EEG boundary element method, Physics in Medicine and Biology, 50 (2005), 4695-4710. doi: 10.1088/0031-9155/50/19/018. Google Scholar J. Kybic, M. Clerc, T. Abboud, O. Faugeras, R. Keriven and T. Papadopoulo, Generalized head models for MEG/EEG: Boundary element method beyond nested volumes, Phys. Med. Biol., 51 (2006), 1333-1346. doi: 10.1088/0031-9155/51/5/021. Google Scholar J. Leblond, Identifiability properties for inverse problems in EEG data processing and medical engineering, with observability and optimization issues, Acta Applicandae Mathematicae, 135 (2015), 175-190. doi: 10.1007/s10440-014-9951-7. Google Scholar S. Lew, D. D. Silva, M. Choe, P. Ellen Grant, Y. Okada, C. H. Wolters and M. S. Hämäläinen, Effects of sutures and fontanels on MEG and EEG source analysis in a realistic infant head model, Neuroimage, 76 (2013), 282-293. doi: 10.1016/j.neuroimage.2013.03.017. Google Scholar T. Medani, D. Lautru, D. Schwartz, Z. Ren and G. Sou, FEM method for the EEG forward problem and improvement based on modification of the saint venant's method, Progress In Electromagnetic Research, 153 (2015), 11-22. Google Scholar J. C. de Munck and M. J. Peters, A fast method to compute the potential in the multisphere model, IEEE Trans. Biomed. Eng., 40 (1993), 1166-1174. Google Scholar Odabaee, A. Tokariev, S. Layeghy, M. Mesbah, P. B. Colditz, C. Ramon and S. Vanhatalo, Neonatal EEG at scalp is focal and implies high skull conductivity in realistic neonatal head models, NeuroImage, 96 (2014), 73-80. Google Scholar P. A. Raviart and J. M. Thomas, Introduction á l'Analyse Numérique des Equations aux Dérivées Partielles, Masson, Paris, 1983. Google Scholar N. Roche-Labarbe, A. Aarabi, G. Kongolo, C. Gondry-Jouet, M. Dümpelmann, R. Grebe and F. Wallois, High-resolution electroencephalography and source localization in neonates, Human Brain Mapping, 29 (2008), 167-76. doi: 10.1002/hbm.20376. Google Scholar C. Rorden, L. Bonilha, J. Fridriksson, B. Bender and H. O. Karnath, Age-specific CT and MRI templates for spatial normalization, NeuroImage, 61 (2012), 957-965. doi: 10.1016/j.neuroimage.2012.03.020. Google Scholar M. 
Schneider, A multistage process for computing virtual dipole sources of EEG discharges from surface information, IEEE Trans. on Biomed. Eng., 19, 1-19. Google Scholar M. I. Troparevsky, D. Rubio and N. Saintier, Sensitivity analysis for the EEG forward problem Frontiers in Computational Neuroscience, 4 (2010), p138. doi: 10.3389/fncom.2010.00138. Google Scholar J. Vorwerk, J. H. Cho, S. Rampp, H. Hamer, T. T. Knösche and C. H. Wolters, A guideline for head volume conductor modeling in EEG and MEG, NeuroImage, 100 (2014), 590-607. doi: 10.1016/j.neuroimage.2014.06.040. Google Scholar C. H. Wolters, H. Köstler, C. Möller, J. Härdtlein, L. Grasedyck and W. Hackbusch, Numerical mathematics of the subtraction approach for the modeling of a current dipole in EEG source reconstruction using finite element head models, SIAM J. Sci. Comput., 30 (2007), 24-45. Google Scholar C. H. Wolters, H. Köstler, C. Möller, J. Härdtlein and A. Anwander, Numerical approaches for dipole modeling in finite element method based source analysis, Int. Congress Ser., 1300 (2007), 189-192. doi: 10.1016/j.ics.2007.02.014. Google Scholar Z. Zhang, A fast method to compute surface potentials generated by dipoles within multilayer anisotropic spheres, Phys. Med. Biol., 40 (1995), 335-349. doi: 10.1088/0031-9155/40/3/001. Google Scholar Figure 1.1. Fontanels and skull of a neonate. Figure 2.1. Three-layer head model. Figure 5.1. Behavior of factors RDM and MAG with respect to the eccentricity of the dipole. Different mesh sizes (finest mesh $M_3$). Neonatal three-layer spherical head model without fontanels. Exact reference solution. Figure 5.2. A spherical head model with the main fontanel. Figure 5.3. Errors in $H^1$-norm with respect to the mesh size $h$ in logarithm scale. Three-layer spherical head model with the anterior fontanel (Gaussian behavior for the fontanel conductivity). Numerical reference solution computed on $M_{\tiny{\mbox{ref}}}$. Left: one single source $S = (0, 0, 40mm)$, $\mathbf{q} = (0, 0, J)$. Right: two sources $S_1 = (0, 0, 10mm)$, $S_2 = (0, 10mm, 0)$ with moments $\mathbf{q}_1 = (0, 0, J)$, $\mathbf{q}_2 = (0, J, 0)$. Intensity $J = 10^{-6} A.m^{-2}$. Figure 5.4. Behavior of factors RDM and MAG with respect to the eccentricity dipole position for different meshes. Three-layer spherical model with the anterior fontanel (Gaussian behavior for the fontanel conductivity). Numerical reference solution computed on $M_{\tiny{\mbox{ref}}}$. Figure 6.1. Realistic head model of a neonate. Left: skull and fontanels. Right: mesh of the fontanels. Figure 6.2. The coronal, sagittal and axial plane of the head model and its 3D reconstruction. Figure 6.3. Variations of factors RDM and MAG with respect to different conductivities $(\sigma_{\!f}, \sigma_{skull})$. Four-layer realistic head model. Reference solution computed with the model without fontanels. Figure 6.4. Sensitivity of the electric potential on the scalp with respect to eccentricity. Distance source-interface brain/CSF $\approx 5$mm (left) and $\approx 15$mm (right). Figure 6.5. Sensitivity of the electric potential on the scalp with respect to orientation. Distance source-interface brain/CSF $\approx 15$mm. Left: moment $\mathbf{q} = (0, J, J)$. Right: moment $\mathbf{q} = (J, J, 0)$. Figure 6.6. Sensitivity of the electric potential on the scalp for a deep source. Table 1. Definition of meshes (neonatal three-layer spherical head model). 
Mesh Nodes Tetrahedra Boundary nodes $h_{min}$ [m] $h_{max}$ [m] $M_1$ $102 540$ $594 907$ $16 936$ $8.16 10^{-4}$ $4.81 10^{-3}$ $M_2$ $302 140$ $1\ 855 005$ $23 339$ $6.35 10^{-4}$ $3.07 10^{-3}$ $M_3$ $596 197$ $3 632 996$ $54 290$ $4.1 10^{-4}$ $2.46 10^{-3}$ $M_{\rm ref}$ $2 754 393$ $17 263 316$ $124 847$ $2.5 10^{-4}$ $1.51 10^{-3}$ Table 2. Four-layer realistic head model Mesh Nodes Tetrahedra Boundary faces $h_{min}$ [m] $h_{max}$ [m] $M_{real}$ 108 669 590 878 55 660 $3.4\ 10^{-4}$ $14\ 10^{-3}$ Fredrik Hellman, Patrick Henning, Axel Målqvist. Multiscale mixed finite elements. Discrete & Continuous Dynamical Systems - S, 2016, 9 (5) : 1269-1298. doi: 10.3934/dcdss.2016051 Edward Della Torre, Lawrence H. Bennett. Analysis and simulations of magnetic materials. Conference Publications, 2005, 2005 (Special) : 854-861. doi: 10.3934/proc.2005.2005.854 Peter Monk, Jiguang Sun. Inverse scattering using finite elements and gap reciprocity. Inverse Problems & Imaging, 2007, 1 (4) : 643-660. doi: 10.3934/ipi.2007.1.643 Tianliang Hou, Yanping Chen. Superconvergence for elliptic optimal control problems discretized by RT1 mixed finite elements and linear discontinuous elements. Journal of Industrial & Management Optimization, 2013, 9 (3) : 631-642. doi: 10.3934/jimo.2013.9.631 Eric Dubach, Robert Luce, Jean-Marie Thomas. Pseudo-Conform Polynomial Lagrange Finite Elements on Quadrilaterals and Hexahedra. Communications on Pure & Applied Analysis, 2009, 8 (1) : 237-254. doi: 10.3934/cpaa.2009.8.237 Murat Uzunca, Ayşe Sarıaydın-Filibelioǧlu. Adaptive discontinuous galerkin finite elements for advective Allen-Cahn equation. Numerical Algebra, Control & Optimization, 2021, 11 (2) : 269-281. doi: 10.3934/naco.2020025 Zhangxin Chen, Qiaoyuan Jiang, Yanli Cui. Locking-free nonconforming finite elements for planar linear elasticity. Conference Publications, 2005, 2005 (Special) : 181-189. doi: 10.3934/proc.2005.2005.181 Philip Gerlee, Alexander R. A. Anderson. Diffusion-limited tumour growth: Simulations and analysis. Mathematical Biosciences & Engineering, 2010, 7 (2) : 385-400. doi: 10.3934/mbe.2010.7.385 Daniel Peterseim. Robustness of finite element simulations in densely packed random particle composites. Networks & Heterogeneous Media, 2012, 7 (1) : 113-126. doi: 10.3934/nhm.2012.7.113 Jaroslaw Smieja, Malgorzata Kardynska, Arkadiusz Jamroz. The meaning of sensitivity functions in signaling pathways analysis. Discrete & Continuous Dynamical Systems - B, 2014, 19 (8) : 2697-2707. doi: 10.3934/dcdsb.2014.19.2697 Ruotian Gao, Wenxun Xing. Robust sensitivity analysis for linear programming with ellipsoidal perturbation. Journal of Industrial & Management Optimization, 2020, 16 (4) : 2029-2044. doi: 10.3934/jimo.2019041 Kazimierz Malanowski, Helmut Maurer. Sensitivity analysis for state constrained optimal control problems. Discrete & Continuous Dynamical Systems, 1998, 4 (2) : 241-272. doi: 10.3934/dcds.1998.4.241 Martial Agueh, Reinhard Illner, Ashlin Richardson. Analysis and simulations of a refined flocking and swarming model of Cucker-Smale type. Kinetic & Related Models, 2011, 4 (1) : 1-16. doi: 10.3934/krm.2011.4.1 Martin Burger, Peter Alexander Markowich, Jan-Frederik Pietschmann. Continuous limit of a crowd motion and herding model: Analysis and numerical simulations. Kinetic & Related Models, 2011, 4 (4) : 1025-1047. doi: 10.3934/krm.2011.4.1025 Hailing Xuan, Xiaoliang Cheng. Numerical analysis and simulations of a frictional contact problem with damage and memory. 
Mathematical Control & Related Fields, 2021 doi: 10.3934/mcrf.2021037 Marcin Studniarski. Finding all minimal elements of a finite partially ordered set by genetic algorithm with a prescribed probability. Numerical Algebra, Control & Optimization, 2011, 1 (3) : 389-398. doi: 10.3934/naco.2011.1.389 P.K. Newton. The dipole dynamical system. Conference Publications, 2005, 2005 (Special) : 692-699. doi: 10.3934/proc.2005.2005.692 Qi Wang, Jingyue Yang, Feng Yu. Boundedness in logistic Keller-Segel models with nonlinear diffusion and sensitivity functions. Discrete & Continuous Dynamical Systems, 2017, 37 (9) : 5021-5036. doi: 10.3934/dcds.2017216 Z. Jackiewicz, B. Zubik-Kowal, B. Basse. Finite-difference and pseudo-spectral methods for the numerical simulations of in vitro human tumor cell population kinetics. Mathematical Biosciences & Engineering, 2009, 6 (3) : 561-572. doi: 10.3934/mbe.2009.6.561 Seung-Yeal Ha, Shi Jin. Local sensitivity analysis for the Cucker-Smale model with random inputs. Kinetic & Related Models, 2018, 11 (4) : 859-889. doi: 10.3934/krm.2018034 Hamed Azizollahi Marion Darbas Mohamadou M. Diallo Abdellatif El Badia Stephanie Lohrengel
CommonCrawl
Multipoint channel charting-based radio resource management for V2V communications Hanan Al-Tous ORCID: orcid.org/0000-0001-8353-62891, Tushara Ponnada1, Christoph Studer2 & Olav Tirkkonen1 EURASIP Journal on Wireless Communications and Networking volume 2020, Article number: 132 (2020) Cite this article We consider a multipoint channel charting (MPCC) algorithm for radio resource management (RRM) in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication systems. A massive MIMO (mMIMO) infrastructure network performs logical localization of vehicles to a MPCC, based on V2I communication signals. Combining logical distances given by channel charting with V2V measurements, the network trains a function to predict the quality of a direct V2V communication link from observed V2I communication signals. In MPCC, the network uses machine learning techniques to learn a logical radio map from V2I channel state information (CSI) samples transmitted from unknown locations. The network extracts CSI features, constructs a dissimilarity matrix between CSI samples, and performs dimensional reduction of the CSI feature space. Here, we use Laplacian Eigenmaps (LE) for dimensional reduction. The resulting MPCC is a two-dimensional map where the spatial distance between a pair of vehicles is closely approximated by the distance in the MPCC. In addition to V2I CSI, the network acquires V2V channel quality information for vehicles in the training set and develops a link quality predictor. MPCC provides a mapping for any vehicle location in the training set. To use MPCC for cognitive RRM of V2I and V2V communications, network management has to find logical MPCC locations for vehicles not in the training set, based on newly acquired V2I CSI measurements. For this, we develop an extension of LE-based MPCC to out-of-sample CSI samples. We evaluate the performance of link quality prediction for V2V communications in a mMIMO millimeter-wave scenario, in terms of the relative error of the predicted outage probability. Communication technologies are becoming integrated in vehicles for safety applications, such as blind spot warning and forward collision warning, as well as for non-safety-related applications such as toll collection and infotainment [1]. The dedicated short-range communication (DSRC) protocol can be used both for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, and provides a coverage range of about 1 km and achieves data rates in the range of 2–6 Mbps [2]. 4G-LTE connectivity below 6 GHz can be used for V2I, achieving a data rate of up to 100 Mbps [3]. Next generation vehicles are expected to become automated and to contain hundreds of sensor nodes. The increase in the number of sensors will generate a huge amount of data that can be utilized for different applications. It is expected that autonomous cars will comprise 75% of total traffic on the road by the year 2040. There are many benefits of sharing rich sensor data with other vehicles and infrastructure. However, this will require exchanging a large amount of data, from tens to thousands of megabits per second. The state-of-the-art vehicular communication standard DSRC is not sufficient to handle such high data rates in next generation vehicles [4]. The large bandwidth channels at millimeter-wave (mm-Wave) are a promising candidate to realizing high data rates and is of prime interest for 5G and Beyond 5G (B5G) communication [5]. 
Massive MIMO (mMIMO) is another promising technology for 5G and B5G, with the potential to provide high spectral and power efficiency. In a mMIMO cell, each base station (BS) has a large number of antennas, which can provide a simultaneous use of the resource (e.g., frequency and/or time slots) for multiple user equipments (UEs) in the cell [5–7]. Furthermore, the high spatial resolution exploited by the large-scale antenna arrays used at the mMIMO BSs can be used for many applications, such as UE positioning and environment mapping [8–10]. In [11], test results of mm-Wave for V2V and V2I communications are reported. The results are promising, while it is indicated that much research is still needed to develop the physical (PHY) and medium access control (MAC) layers for mm-Wave systems to provide a reliable basis for V2V and V2I communication. A key challenge in developing mm-Wave systems is the potential for rapid channel dynamics; mm-Wave propagation suffers from high path loss, reduced diversity, and increased effect of blockage by obstacles [12]. mm-Wave BSs have to use beamforming for transmission in order to increase the signal-to-noise ratio, reaching a radius of up to 200 m. Hence, hundreds of BSs will be needed to cover large spaces. Modeling, measuring, and predicting the radio channel characteristics of mm-Wave systems for V2V communications are the currently active research areas [3, 13]. Successful deployment of mm-Wave systems requires new management procedures to handle resource-constrained devices, radio resource management, heterogeneous networking, and computing infrastructures [4, 5, 14, 15]. The level of channel variability in mm-Wave has widespread implications for virtually every aspect of V2V communications. Motivated by the burgeoning progress of artificial intelligence (AI) and its breakthroughs in a variety of domains, the B5G research community is currently seeking solutions from machine learning (ML) for intelligent control of PHY and MAC layers of future networks. B5G networks are expected to be intelligent enough to adapt to very dynamic topologies, intensive computation and storage applications, and diverse Quality of Service (QoS) requirements [16–19]. To efficiently manage B5G networks and to perform cognitive networking tasks, the network states which include the spatial distribution and trajectories of the UEs, neighborhood relationships among the UEs, and handover boundaries among neighboring cells need to be estimated. A novel ML framework called channel charting (CC) based on the massive amounts of channel state information (CSI) available at the base stations is proposed for a single-cell MIMO system in [20]. CC applies unsupervised ML techniques to create a radio map of the cell served by the BS, which preserves the neighborhood relations of UEs, using features that characterize the large-scale fading effects of the channel. The obtained CC can be used for local radio resource management (RRM) in the cell. However, cell edge UEs may not be accurately located in the chart due to their low signal-to-noise ratio (SNR) at the cell edge. In [21], a multipoint CC (MPCC) framework is proposed to support advanced multicell RRM and to accurately map cell edge UEs. First, each BS generates its own dissimilarity matrix between the users it can decode; then, the dissimilarity matrices are fused and used to construct the MPCC. 
The trustworthiness and continuity measures show that the proposed MPCC is capable to preserve the neighborhood structure between UEs in the network. MPCC-based approach entails more computational efforts compared to other approach at the BSs to compute the dissimilarity matrix between the UEs seen by the same BS. In this paper, we consider MPCC in V2I networks, where vehicular UEs communicate with infrastructure BSs. Using only uplink radio channel features, a logical MPCC map is constructed for the network. Furthermore, some of the UEs have the capability of V2V communications. To enable V2V connectivity prediction, radio link quality information of V2V pairs is collected and used to build a link quality prediction (LQP) model utilizing the MPCC distance between V2V pairs. To use MPCC for online RRM, it is important to generalize the chart, allowing the incorporation of new data to an existing MPCC and/or to estimate the features related to a location in the chart. As the radio channel features of a UE can change rapidly in a small distance, it is important to accurately estimate the MPCC location of data from a UE that was not included in the training data set (out-of-sample UE). In this paperFootnote 1, an extension-of-MPCC (EMPCC) to out-of-sample data points is considered. This is a general framework that is needed to implement any online RRM function using CC. This paper investigates V2V link quality prediction based on an MPCC approach. MPCC-based LQP for V2I/V2V consists of two phases: an offline training and online usage phase. In the training phase, V2I and V2V radio channel features of a large number of UEs are used to construct the MPCC and LQP model, respectively. In the online phase, given the radio features of active vehicles (UEs), the EMPCC algorithm is used to map the UEs to CC locations. Based on the CC distance and LQP model, the possibility of V2V communication for a given pair of vehicles is evaluated. All simulation and modeling are performed in an mm-Wave context, lending credibility for the considered solutions for mm-Wave-based V2I/V2V. It is worth noting here that the proposed MPCC-based LQP for V2I/V2V is not restricted to mm-Wave communications and can be used for other radio frequencies. In LQP based on MPCC, neither physical location information, downlink channel measurement at the vehicular terminals, nor V2V measurements are needed for predicting V2V connectivity. Advanced power allocation and beam alignment algorithms for V2V communications can be then designed based on LQP and MPCC. The remainder of this paper is organized as follows. Section 2 presents the system model of V2I and V2V communications. In Section 4, the MPCC and LQP and EMPCC frameworks are presented. Numerical results are presented and discussed in Section 5. Finally, conclusions are drawn in Section 6. We adopt the following notation: matrices and vectors are set in upper and lower boldface, respectively. (·)T, (·)∗, (·)H, |·|, ||·||p denote the transpose, the conjugate, the Hermitian, the absolute value, and the p-norm, respectively. Tr(A) denotes the trace of matrix A. Calligraphic letters denote sets, e.g., \(\mathcal {G}\), and \(|\mathcal {G}|\) denotes the cardinality of \(\mathcal {G}\). \(\mathbb {R}_{+}\) is the set of non-negative real numbers, \(\mathbb {C}\) is the set of complex numbers, \(\mathbb {C}^{N\times M}\) is the space of N×M matrices and \(\mathbb {E}[\cdot ]\) denotes expectation, and \(\imath =\sqrt {-1}\). 
The system under consideration is schematically shown in Fig. 1. V2I and V2V communication system with B BSs. A V2V/V2I example; a UE communicating with three mMIMO BSs and direct communication between a pair of UEs Each infrastructure BS b=1,…,B has M antenna elements. In the network, two types of UEs are assumed: V2I UEs and V2V UEs. Each UE of V2I type has a single antenna element, whereas UE of V2V type has N+1 antenna elements, one is used for V2I communications, and N antennas for V2V communication. In V2I communications, the base station antenna is at an elevated position, 10–25 m above ground. This is not the case in V2V communications; both the transmit (Tx) and receive (Rx) antennas are at the same height relatively close to the ground level, at some 1–2 m above ground, by having antennas close to the ground level, shadowing effects from other vehicles and surrounding buildings are expected to be stronger. To handle this issue, multiple antennas are used at both the Tx and Rx terminals [3]. Note that UEs of type V2I can have more than one antenna; however, it is shown that one element at the UE can be used to construct an accurate MPCC [21]. The V2I channel vector of UE k=1,…,K using a uniform-linear-array (ULA) at BS b for a coherence bandwidth can be modeled as [23]: $$ \boldsymbol{h}_{b,k}=\sum_{l=1}^{L_{k}}\beta_{b,k}^{(l)}\boldsymbol{a}\big(\phi_{b,k}^{(l)}\big), $$ where Lk is the number of multipath components for the wireless channel between UE k and BS b, \(\phi _{b,k}^{(l)}\) is the direction of arrival of the lth path, \(\beta _{b,k}^{(l)}\) is the complex-valued gain of the lth path, and a(·) is the BS steering vector. For ULA, the steering vector is: $$ \boldsymbol{a}(\phi)=[1,\mathrm{e}^{\imath\frac{2\pi}{\lambda}s\sin(\phi)},\ldots, \mathrm{e}^{\imath\frac{2\pi}{\lambda}(M-1)s\sin(\phi)}]^{T}, $$ where λ is the carrier wavelength, and s is the antenna spacing. The covariance \(\boldsymbol {R}_{b,k}\in \mathbb {C}^{M \times M}\) of the CSI hb,k used to extract the features at BS b becomes: $$ \boldsymbol{R}_{b,k}=\mathbb{E}[\boldsymbol{h}_{b,k}\boldsymbol{h}_{b,k}^{H}]=\boldsymbol{A}_{b,k}\boldsymbol{S}_{b,k}\boldsymbol{A}_{b,k}^{H}, $$ where \(\mathbb {E}\) is the expectation operator, \(\boldsymbol {A}_{b,k}=\left [\boldsymbol {a}\left (\phi _{b,k}^{(1)}\right),\ldots, \boldsymbol {a}\left (\phi _{b,k}^{(L_{k})}\right)\right ]\) is a matrix of array steering vectors, and \(\boldsymbol {S}_{b,k}=\text {diag}\left (\mathbb {E}\left [|\beta _{b,k}^{(1)}|^{2}\right ],\ldots, \mathbb {E}\left [|\beta _{b,k}^{(L_{k})}|^{2}\right ]\right)\) is a diagonal matrix of multipath power components. For V2V communication between UEs i and j, the channel matrix is denoted as \(\boldsymbol {H}_{i,j}\in \mathbb {C}^{N \times N}\), and the channel covariance matrix at receiver terminal j is: $$ \boldsymbol{Q}_{i,j}= \mathbb{E}\left[\boldsymbol{H}_{i,j}\boldsymbol{H}_{i,j}^{H}\right]. $$ The received signal vector at UE j is: $$ \boldsymbol{y}_{i,j}=\sqrt{P}\boldsymbol{H}_{i,j}\boldsymbol{w}_{i,j}x+\boldsymbol{n}_{j}, $$ where x is the transmitted symbol with \( \mathbb {E}[|x|^{2}]=1\), P is the transmitted power, \(\boldsymbol {n}_{j}\in \mathbb {C}^{N\times 1}\) is the received white Gaussian noise, and wi,j is the beamformer weight. Assuming Tx i knows the statistics of the wireless channel, the beamformer weight wi,j is selected as the Eigenvector u corresponding to the largest Eigenvalue of the covariance matrix Qi,j. 
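To make the channel model concrete, the following NumPy sketch builds the ULA steering vector of Eq. (2), the multipath covariance of Eq. (3), and the principal-eigenvector beamformer described above. This is only an illustration: the function names are ours, and the carrier frequency, array size, and per-path powers/angles are hypothetical placeholder values rather than parameters of the paper's simulation study.

```python
import numpy as np

def ula_steering(phi, M, spacing, wavelength):
    """Steering vector a(phi) of an M-element uniform linear array, cf. Eq. (2)."""
    m = np.arange(M)
    return np.exp(1j * 2.0 * np.pi / wavelength * spacing * m * np.sin(phi))

def multipath_covariance(powers, angles, M, spacing, wavelength):
    """Channel covariance R = A S A^H of Eq. (3) from per-path powers and angles of arrival."""
    A = np.column_stack([ula_steering(phi, M, spacing, wavelength) for phi in angles])
    return A @ np.diag(powers) @ A.conj().T

def principal_eigenbeamformer(Q):
    """Beamforming weight: eigenvector of Q associated with its largest eigenvalue."""
    eigvals, eigvecs = np.linalg.eigh(Q)      # Hermitian eigendecomposition, ascending eigenvalues
    return eigvecs[:, -1], eigvals[-1]

# Hypothetical example: 3 paths seen by a 32-antenna BS at 28 GHz with half-wavelength spacing
lam = 3e8 / 28e9
R = multipath_covariance(powers=[1.0, 0.3, 0.1],
                         angles=np.deg2rad([12.0, -40.0, 65.0]),
                         M=32, spacing=lam / 2, wavelength=lam)
w, lam_max = principal_eigenbeamformer(R)     # w: Tx beamformer, lam_max: largest eigenvalue
```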
The average V2V SNR at UE j can be computed as [24]: $$ \Gamma_{i,j}=\frac{P}{\sigma_{n}}\mathbb{E}\left[\text{Tr}\left[\boldsymbol{H}_{i,j}\boldsymbol{w}_{i,j}\boldsymbol{w}_{i,j}^{H}\boldsymbol{H}_{i,j}^{H}\right]\right] =\frac{P}{\sigma_{n}}\lambda_{\max}, $$ where Tr is matrix trace operator and σn is the received noise power. The latter equality holds for the adopted Eigenbeamformer, and λmax is the maximum Eigenvalue of Qi,j. Channel charting Feature extraction and dissimilarity matrix Large-scale effects of wireless channel are caused by reflection, diffraction, and scattering of the physical environment, whereas small-scale effects are caused by multipath propagation and related destructive/constructive addition of signal components. CC is based on the assumption that statistical properties of MIMO channel vary relatively slowly across space, on a length-scale related to the macroscopic distances between scatterers in the channel, not on the small fading length-scale of wavelengths. In this regard, the CSI covariance matrix can be used to capture large-scale effects of the wireless channel based on the assumption that there is a continuous mapping from the spatial location pk of UE k to the covariance CSI Rb,k [20, 21]: $$ \mathcal{H}_{b}:\mathbb{R}^{d}\rightarrow\mathbb{C}^{M \times M};~~\mathcal{H}_{b}(\boldsymbol{p}_{k}) =\boldsymbol{R}_{b,k}. $$ Here, d is the spatial dimension which is either 2 or 3. CC starts by processing the CSI covariance matrix Rb,k into suitable channel features fb,k that capture large-scale properties of the wireless channel. CC then proceeds by using the set of collected features \(\{\boldsymbol {f}_{b,k}\}_{k=1}^{K_{b}}\) for the set of UEs \(\mathcal {K}_{b}=\{1,\ldots,K_{b}\}\) seen by BS b to learn the dissimilarity matrix \(\boldsymbol {D}_{b}\in \mathbb {R}_{+}^{K_{b}\times K_{b}}\). The pairwise dissimilarity [Db]k,m between UEs k and m, for \(k,m \in \mathcal {K}_{b}\) measures the dissimilarity of the radio features between UEs k and m. Different approaches can used to select the channel features and then computing the dissimilarity matrix (see [20, 21]). In this paper, we select the feature vector fb,k based on multipath components [21]: $$ \boldsymbol{f}_{b,k}=\left[\lambda_{b,k}^{(1)},\cdots, \lambda_{b,k}^{(L_{k})},\phi_{b,k}^{(1)},\cdots,\phi_{b,k}^{(L_{k})}\right], $$ where \(\lambda _{b,k}^{(l)}=\mathbb {E}\left [|\beta _{b,k}^{(l)}|^{2}\right ]\). The multipath components (power and phase) \(\left \{\lambda _{b,k}^{(l)}\right \}_{l=1}^{L_{k}}\) and \(\left \{\phi _{b,k}^{(l)}\right \}_{l=1}^{L_{k}}\) of UE k at BS b are estimated from the CSI covariance matrix Rb,k using the multiple signal classification (MUSIC) algorithm [25]. The dissimilarity between two UEs (k,m) is based on identifying multipath components in their feature vectors that are similar. For this, the components of feature vectors are transformed to Cartesian coordinates as [21]: $$ \mathcal{F}\{\boldsymbol{f}_{b,k}\}=\left[\boldsymbol{x}_{b,k}^{(1)}, \cdots, \boldsymbol{x}_{b,k}^{(L_{k})}\right], $$ where \(\boldsymbol {x}_{b,k}^{(l)}=\left [\frac {\cos (\phi _{b,k}^{(l)})}{\sqrt {\lambda _{b,k}^{(l)}}}, \frac {\sin (\phi _{b,k}^{(l)})}{\sqrt {\lambda _{b,k}^{(l)}}}\right ]^{T}\). To cluster multipath components to clusters deemed to be similar, the density-based spatial clustering of applications with noise (DBSCAN) algorithm [26] is used to label the multipath componentsFootnote 2\(\{\mathcal {F}\{\boldsymbol {f}_{b,k}\}\}_{k=1}^{K_{b}}\). 
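One possible implementation of the feature transformation in Eq. (9) and the subsequent DBSCAN labelling is sketched below. It assumes the per-UE multipath powers and angles have already been estimated (e.g., with MUSIC); the function names, data layout, and DBSCAN parameters are illustrative choices, not values prescribed by the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def features_to_cartesian(powers, angles):
    """Eq. (9): map each multipath component (lambda, phi) to a 2-D Cartesian point."""
    p = np.asarray(powers, dtype=float)
    a = np.asarray(angles, dtype=float)
    return np.column_stack([np.cos(a) / np.sqrt(p), np.sin(a) / np.sqrt(p)])

def label_components(per_ue_features, eps=0.5, min_samples=3):
    """Cluster the multipath components of all UEs seen by one BS with DBSCAN.

    per_ue_features: list over UEs of (powers, angles) tuples obtained from MUSIC.
    Returns the stacked 2-D points, the UE index of each point, and its cluster label.
    """
    points, ue_index = [], []
    for k, (powers, angles) in enumerate(per_ue_features):
        x = features_to_cartesian(powers, angles)
        points.append(x)
        ue_index.extend([k] * x.shape[0])
    points = np.vstack(points)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)  # -1 marks noise
    return points, np.asarray(ue_index), labels
```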
This results in a label \(\mathcal {L}\big (\boldsymbol {x}_{b,k}^{(l)}\big)\in \{C_{1},\cdots,C_{N}\}\) for each multipath component, where Cn is the label of the nth cluster. The dissimilarity coefficient between a pair of UEs (k,m) then is computed taking into consideration multipath components of the UEs that are in the same cluster. The pairwise dissimilarity is computed as: $$\begin{array}{*{20}l} [\boldsymbol{D}_{b}]_{k,m}&= \left\{\begin{array}{ll}||\boldsymbol{x}_{b,k}^{(i')}-\boldsymbol{x}_{b,m}^{(j')}||_{2}& \text{if}\ \mathcal{L}\left(\boldsymbol{x}_{b,k}^{(i')}\right)=\mathcal{L}\left(\boldsymbol{x}_{b,m}^{(j')}\right),\\ ||\boldsymbol{x}_{b,k}^{(1)}-\boldsymbol{x}_{b,m}^{(1)}||_{2} & \text{otherwise}, \end{array}\right. \end{array} $$ where \([i',j']=\text {arg}\:\underset {i,j}{\max }\min \big (\lambda _{b,k}^{(i)},\lambda _{b,m}^{(j)}\big)\). Multipoint channel charting MPCC utilizes the different views of the spatially distributed BSs by fusing the BS-specific dissimilarity matrices Db, b=1,…,B into a global dissimilarity matrix D [21]. The benefits of having multiple spatially distributed BSs can be utilized by merging the BS-specific dissimilarity matrices \(\{\boldsymbol {D}_{b}\}_{b=1}^{B}\) into a global dissimilarity matrix D, where the (k,m)th element [D]k,m can be computed as: $$ [\boldsymbol{D}]_{k,m}=\frac{1}{\sum_{b=1}^{B}\omega_{b}(k,m)}\sum_{b=1}^{B}\omega_{b}(k,m) [\boldsymbol{D}_{b}]_{k,m}, $$ where ωb(k,m) is a weighting factor computed as ωb(k,m)= min(γb,k,γb,m)2 and γb,k is the SNR of the wireless link between UE k and BS b. Dimesionality reduction and Laplacian Eigenmaps CC finds in an unsupervised manner a low dimensional channel chart providing logical locations \(\boldsymbol {Z}=\{\boldsymbol {z}_{k}\}_{k=1}^{K}\) for the sample UEs such that neighboring UEs will be neighboring points in the channel chart, i.e., CC preserves the local geometry. The relation between the logical and physical locations is approximative: $$ \left\|\boldsymbol{z}_{k}-\boldsymbol{z}_{m}\right\|\approx\alpha\left\|\boldsymbol{p}_{k}-\boldsymbol{p}_{m}\right\|, \: \text{for}\ k,m\in \mathcal{K}, $$ where α is a scaling factor. Note that the UE spatial location P is not known and BS location is not needed; CC is computed solely based on the dissimilarity matrix. A channel chart is constructed using an unsupervised ML framework that processes the dissimilarity matrix, and manifold learning is used to dimensionally reduce the CSI feature space [20]. For a given dissimilarity matrix, different dimension reduction techniques have been proposed in the literature. The performance of a given technique is problem dependent, as discussed in [27]. The single-cell CC problem has been solved using principle component analysis (PCA), Sammon's mapping (SM), and autoencoder reduction techniques in [20], whereas the MPCC is solved using SM, Laplacian Eigenmaps (LE), and t-Distributed Stochastic Neighbor Embedding (t-SNE) in [21]. Recently, neural networks have been used successfully for dimensionality reduction as in [28, 29]. LE is a computationally efficient non-linear dimensionality reduction technique based on the graph Laplacian. It preserves neighborhood properties and clustering connections [30]. LE constructs a graph from neighborhood information of the dissimilarity matrix. 
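The fusion step of Eq. (11) amounts to an SNR-weighted average of the per-BS dissimilarity matrices. A minimal sketch is given below; it assumes every UE pair is observed with a non-zero weight by at least one BS, and the function name and argument layout are ours.

```python
import numpy as np

def fuse_dissimilarities(D_per_bs, snr_per_bs):
    """Eq. (11): fuse BS-specific dissimilarity matrices into a global matrix.

    D_per_bs   : list of (K, K) dissimilarity matrices, one per BS
    snr_per_bs : list of length-K arrays of uplink SNRs gamma_{b,k}
    """
    K = D_per_bs[0].shape[0]
    weighted_sum = np.zeros((K, K))
    weight_total = np.zeros((K, K))
    for D_b, snr_b in zip(D_per_bs, snr_per_bs):
        snr_b = np.asarray(snr_b, dtype=float)
        w_b = np.minimum.outer(snr_b, snr_b) ** 2   # omega_b(k, m) = min(gamma_bk, gamma_bm)^2
        weighted_sum += w_b * D_b
        weight_total += w_b
    return weighted_sum / weight_total              # assumes weight_total > 0 for every pair
```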
The LE problem is expressed as [30]: $$\begin{array}{*{20}l} &\underset{\boldsymbol{Z}}{\text{minimize}}\:\text{Tr}\big(\boldsymbol{Z}^{T}\boldsymbol{L}\boldsymbol{Z}\big), \end{array} $$ $$\begin{array}{*{20}l} & \text{subject to}~\boldsymbol{Z}^{T}\boldsymbol{S}\boldsymbol{Z}=\boldsymbol{I}_{d+1}, \end{array} $$ where \(\boldsymbol {Z}=\left [\boldsymbol {z}_{1}^{T},\ldots,\boldsymbol {z}_{K}^{T}\right ]^{T}\) represents the optimization variables (CC locations) in a matrix form, Id is the identity matrix of order d, L is the graph Laplacian matrix, and S is the degree matrix. The graph Laplacian matrix is computed as: $$ \boldsymbol{L}= \boldsymbol{S}-\boldsymbol{W}, $$ where W is the weight matrix. The degree matrix S can be constructed using the dissimilarity matrix either by an ε-neighborhood, i.e., nodes k and m are connected by an edge if [D]k,m≤ε, or by N nearest neighbors, i.e., nodes k and m are connected by an edge if m is among the N nearest neighbors (N smallest dissimilarity values of the kth row of D) of k or k is among the N nearest neighbors (N smallest dissimilarity values of the mth row of D) of m. The weight matrix can be constructed using the dissimilarity matrix either by a simple approach, if nodes k and m are connected, [ W]k,m=1, otherwise [ W]k,m=0 or by using the heat kernel with temperature T, if nodes k and m are connected, \([\!\boldsymbol {W}]_{k,m}={e}^{-\frac {[\boldsymbol {D}]_{k,m}^{2}}{T}}\), otherwise [W]k,m=0. The temperature T can be selected based on the statistics of the dissimilarity matrix. The Laplacian matrix is a symmetric positive-semidefinite matrix. Every row sum and column sum of L is zero, consequently λ0=0 is the smallest Eigenvalue of L, and v0=[1,…,1]T satisfies Lv0=0. In addition, the elements of an Eigenvector sum to zero, i.e., \(\sum _{k=1}^{K}[\boldsymbol {v}_{i}]_{k}=0\) for i=1,…,K−1. The solution of (13) can be obtained in closed form as the solution of a generalized Eigenvector problem based on KKT conditions [30]. The CC locations are obtained by finding the d+1 Eigenvectors corresponding to d+1 smallest Eigenvalues. An example of a connected graph of five nodes is shown in Fig. 2. The dissimilarity matrix is computed using the true Euclidean distance, three nearest neighbors are used to compute the degree matrix, and the heat kernel temperature is set T=1. The true location of the nodes is shown in the top subfigure. LE is used to find the logical location of the nodes. The second and third Eigenvectors preserve the local neighborhood information as shown in the middle subfigure, whereas the forth and fifth Eigenvectors maximize the difference between the nodes as shown in the bottom subfigure. The neighborhood information is not preserved using Eigenvectors corresponding to the largest Eigenvalues. A connected graph of five nodes. Top: two-dimensional true location of the nodes. Middle: LE logical location of the nodes using the first and second Eigenvectors. Bottom: logical location using the third and fourth Eigenvectors Algorithm 1 summarizes how the CC locations can be obtained using LE. Out-of-sample extension Since the MPCC is constructed by processing the data of all UEs from all BSs, it is computationally expensive to repeat the MPCC process if an out-of-sample data item is available, and needs to be inserted into the chart. If the original MPCC is based on a sufficient number of samples, it is expected that the out-of-sample data will not change the MPCC positions of the original samples. 
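Algorithm 1 can be sketched in a few lines of NumPy/SciPy. The version below uses the N-nearest-neighbour graph with heat-kernel weights and solves the generalized eigenproblem of Eqs. (13)–(14); it assumes the resulting graph has no isolated nodes, so that the degree matrix is positive definite. The parameter values are illustrative, and this is a sketch rather than the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(D, n_neighbors=25, T=0.05, d=2):
    """Sketch of Algorithm 1: channel-chart locations Z from a dissimilarity matrix D."""
    K = D.shape[0]
    W = np.zeros((K, K))
    order = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]     # N nearest neighbours, skipping self
    for k in range(K):
        W[k, order[k]] = np.exp(-D[k, order[k]] ** 2 / T)   # heat-kernel weights
    W = np.maximum(W, W.T)      # k and m are connected if either is a neighbour of the other
    S = np.diag(W.sum(axis=1))  # degree matrix
    L = S - W                   # graph Laplacian, Eq. (14)
    # Generalized eigenproblem L v = lambda S v; eigenvalues are returned in ascending order
    _, V = eigh(L, S)
    return V[:, 1:d + 1]        # drop the constant eigenvector belonging to lambda_0 = 0
```

The two returned columns correspond to the eigenvectors of the smallest non-trivial eigenvalues, i.e., the role played by the second and third Eigenvectors in the five-node example of Fig. 2.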
Here, we address out-of-sample extension of MPCC in this sense, aiming to estimate the location of the new sample on the MPCC, to be used for RRM functions, such as V2V LQP. It is worth mentioning that the same feature extraction should be used for the out-of-sample data items as for the original samples, and the data-driven dissimilarity measure found for the original samples should be used to measure dissimilarity of the out-of-sample items to the original samples. For an out-of-sample UE j, the CSI covariance matrix Rb,j at BS b is used to find the feature vector \(\mathcal {F}\{\boldsymbol {f}_{b,j}\}=\left [\boldsymbol {x}_{b,j}^{(1)},\ldots,\boldsymbol {x}_{b,j}^{(L_{j})}\right ]\). The cluster label for an out-of-sample multipath component is determined based on the cluster label of the nearest multipath component on the original data set, i.e., \(\mathcal {L}\big (\boldsymbol {x}_{b,j}^{(l)}\big)= \mathcal {L}\big (\boldsymbol {x}_{b,m}^{(l')}\big)\) where \([m,l']=\text {arg}\underset {k,n}{\min } ||\boldsymbol {x}_{b,k}^{(n)}-\boldsymbol {x}_{b,j}^{(l)}||_{2}\), \(k\in \mathcal {K}_{b}\), and n=1,…,Lk. The out-of-sample dissimilarity element [Db]j,m at BS b is computed using (10), and then, the global dissimilarity is computed using (11). The relation between MPCC and EMPCC is shown in Fig. 3. Main steps of MPCC and EMPCC. EMPCC uses the CC locations of the offline training set and the dissimilarity measure learned by MPCC In [31], a generalized framework for out-of-sample extension is proposed for several algorithms, providing that these algorithms learn Eigenfunctions of a data-dependent kernel. The out-of-sample mapping can be formulated as an optimization problem, where the objective is to find a normalized kernel function that minimizes the mean squared error. The normalized kernel vector is used as a weight vector to find the out-of-sample mapping. For LE, the normalized kernel function (weight) is computed as [31]: $$ \hat{W}(k,i)=\frac{1}{K}\frac{W(k,i)}{\sqrt{\mathbb{E}_{x}[W(k,x)]\mathbb{E}_{y}[W(i,y)]}},\: k,i=1,\ldots,K, $$ where W(k,i)=[W]k,i and the expectation is taking with respect to the original data set. The EMPCC position of an out-of-sample data z(j) for j∉{1,…,K}, i=1,…,K, and d=2 can be computed as: $$ \boldsymbol{z}(j)=\left[\sum_{k=1}^{K}\hat{W}(j,k)\hat{\boldsymbol{v}}_{1}(k),\sum_{k=1}^{K}\hat{W}(j,k)\hat{\boldsymbol{v}}_{2}(k)\right]~, $$ where the weight \(\hat {W}(j,i)\) for j∉{1,…,K} is computed based on the dissimilarity of the radio features of UE j with respect to the radio features of all UEs in the original set, and the Eigenvectors \(\hat {\boldsymbol {v}}_{1}\) and \(\hat {\boldsymbol {v}}_{2}\) are computed based on the normalized weighting matrix \(\hat {\boldsymbol {W}}\) of the original data set. The resulting EMPCC method is summarized in Algorithm 2. MPCC-based V2V link quality prediction Radio maps can be utilized for RRM functionalities. To construct radio maps, either the physical or the logical location of the UEs in the radio environment and the corresponding CSIs are needed. The physical location can be obtained either by a global navigation satellite system (GNSS) such as GPS or by a triangulation approach. Triangulation can be used for only LOS communications with at least three BSs. The locations of the BSs need to be known, whereas CC has the advantage of being able to be used for both LOS and NLOS communications without the need to know the BS locations. 
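Returning to Algorithm 2, a compact sketch of the out-of-sample mapping of Eqs. (15)–(16) is given below. It assumes that the training-set weight matrix, the eigenvectors used for the embedding, and the heat-kernel weights of the new UE towards the training UEs (computed with the same data-driven dissimilarity) are already available; the function name and argument layout are ours.

```python
import numpy as np

def empcc_position(w_new, W_train, V_train):
    """Eqs. (15)-(16): chart location of a single out-of-sample UE.

    w_new   : (K,)   heat-kernel weights between the new UE and the K training UEs
    W_train : (K, K) weight matrix of the training set
    V_train : (K, d) eigenvectors v_hat_1, ..., v_hat_d used for the embedding
    """
    K = W_train.shape[0]
    train_row_mean = W_train.mean(axis=1)     # estimate of E_y[W(i, y)] on the training set
    new_mean = w_new.mean()                   # estimate of E_x[W(j, x)] for the new UE
    w_hat = w_new / (K * np.sqrt(new_mean * train_row_mean))   # normalised kernel, Eq. (15)
    return w_hat @ V_train                    # weighted sum of eigenvector entries, Eq. (16)
```

In an online RRM loop, such a function would be called once per newly observed V2I CSI sample, leaving the original chart untouched.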
CC can be used with a single BS; however, using more BSs improves the CC accuracy. CC has the advantage of replacing the timely and costly measurement campaign in GNSS fingerprinting-based algorithms by heavily processing ML algorithms (i.e., unsupervised learning plays a key role of mapping radio features to logical locations and preserving neighborhood relations) at the BSs, which has the advantage of being able to be applied for large-scale areas and in an automated manner when the radio environment changes. The back-haul cost of CC is less than the back-haul of GNSS fingerprinting, since the location information is not transmitted. Table 1 compares CC-based radio maps with GNSS-based fingerprinting and triangulation-based fingerprinting in terms of communication scenario, BS location, back-haul load, and computational cost at UEs and BSs. Table 1 Benefits and costs of CC-based RRM We consider V2I/V2V RRM based on large-scale radio features, i.e., the covariance matrices. A large data set of radio features of V2I is processed to obtain a channel chart of logical locations. In the training phase, the network control unit selects pairs of UEs that have the capability for V2V communications, and asks them to establish connection and measure the link quality. The vehicular terminals then feedback the average SNR of V2V communication to the network. The control unit constructs a LQP model based on the knowledge of CC locations of the vehicular terminals and the received average SNR of V2V pairs. The RRM framework consists of an offline training phase where MPCC and LQP are generated and an online phase where the MPCC and LQP are used to predict connectivity of UEs in the network. In the online phase, out-of-sample extension of MPCC is used to place vehicles to the MPCC, and the LQP model is used to predict V2V connectivity. The block diagram of the considered method to predict V2V connectivity is shown in Fig. 4. RRM for V2I/V2V communication systems. Offline phase: the MPCC and LQP models are constructed. Online phase: network management uses EMPC and LQP to predict V2V connectivity Link quality prediction model In wireless communications, the optimal transmission scheme is adaptively selected based on the estimated CSI. Due to the high-mobility nature of V2V, directivity, and blockage of mm-Wave bands, link quality prediction of V2V is a challenging problem. Generally, analytical and theoretical models for LQP are based on simplified bounding assumptions, which cannot be used in practical scenarios. Here, we consider a data-driven probabilistic LQP model, utilizing the MPCC locations and average SNR of a large set of V2V pairs. The LQP of V2V communications is determined by the average SNR at the receiving terminal. The most important characteristic of a V2V channel is whether there is a connection or not. To proceed with predicting connectivity, we assume that there is an SNR threshold for successful reception. Knowing the SNR statistics for V2V communication with a given MPCC distance, one may then predict the probability of the V2V link being in outage with respect to this SNR threshold. The channel charting distance \(d^{(\mathcal {C})}_{i,j}\) between UE i and UE j is defined using the Euclidean distance of MPCC locations zi and zj as: $$ d^{(\mathcal{C})}_{i,j}=|\boldsymbol{z}_{i}-\boldsymbol{z}_{j}|_{2}. 
$$ The MPCC distance \(d^{(\mathcal {C})}_{i,j}\) of the V2V pairs is quantized into a grid with G points, \(\mathcal {D}=\{d_{0}^{\mathcal {C}},\ldots, d_{G-1}^{\mathcal {C}}\}\), such that \(d^{(\mathcal {C})}_{i,j}\) is assigned to grid point g if \(d^{\mathcal {C}}_{g-1}\leq d^{(\mathcal {C})}_{i,j}< d^{(\mathcal {C})}_{g}\). The outage probability for CC grid distance \(d^{(\mathcal {C})}_{g}\) can then be estimated as: $$ \mathcal{O}\left(\gamma_{th}| d^{(\mathcal{C})}_{g}\right)=\mathsf{Pr}\left(\Gamma\leq \gamma_{th}| d^{\mathcal{C}}_{g}\right), $$ where γth is an SNR threshold determined for reliable communication at a rate required by the network, and Γ is the average SNR of a V2V communication pair belonging to the sample set with MPCC distance quantized to \(d^{\mathcal {C}}_{g}\). The outage probability for distances \(d_{g}\in \mathcal {D}\) is empirically computed using the measured SNR of V2V UEs. Simulation results and discussion A multicell mm-Wave scenario is considered as discussed in [21]. The simulation parameters are shown in Table 2. The UE locations are generated on the streets of a Manhattan grid as shown in Fig. 5. In [21], a ray-tracing mm-Wave cellular channel model was created following the principles of [32, 33]. Here, we use this channel model for V2I and further generalize it to a V2V model. The channel simulator models the path loss experienced by the multipath components using the free-space path loss model with power inversely proportional to the square of the distance. The reflections from obstacles, i.e., the walls, are modeled such that the reflection coefficients are based on Fresnel's equations. The typical value for the wall relative permittivity is between 4 and 6. The channel for each link is then calculated using the ray-traced paths with the path loss, reflection losses, and antenna gain accounted for in the channel. The multipath gain \(\beta _{b,k}^{(l)}\) is computed as: $$ \beta_{b,k}^{(l)}= e^{\imath\psi_{l}}\sqrt{G_{0} \, \rho\, d_{l}^{-2} \, g_{1}(\theta_{l}) \, g_{2}(\phi_{l}) \, \prod_{i=1}^{R}\left|r^{(i)}_{l}\right|^{2}}, $$ Simulated scenario: streets in a Manhattan grid with 10 BSs labeled by numbers and sampled UE locations marked by colors Table 2 Simulation parameters [21] where G0=10−6.14 is the omnidirectional path gain at a reference distance of 1 m; ρ is the transmit power; ψl is the phase modeled as a uniform random variable \(\psi _{l}\sim \mathcal {U}(0,2\pi)\); dl is the propagation distance in meters; gl(θl) and g2(ϕl) are the antenna gain for an angle of departure θl at the UE and angle of arrival ϕl at the BS, respectively; R is the number of reflections that the lth multipath component undergoes; and \(r_{l}^{(i)}\) is the ith reflection coefficient. For an LOS path, R=1 and \(r_{l}^{(1)} = 1\). A scenario showing the propagation paths for multipath components using the ray-tracing model is shown in Fig. 6. A UE location has LOS communication with one BS (BS−LOS) and a NLOS communication with another BS (BS−NLOS). The SNR observed at BS−LOS which is at a distance of 43.01 m is obtained as 38 dB. The SNR at BS−NLOS which is at a distance of 235.7 m is calculated as −36.83 dB. A scenario showing the propagation paths and MPCS for a UE location with LOS and NLOS BSs Performance of out-of-sample extension algorithm First, we investigate the performance of EMPCC, which inserts out-of-sample UEs to the chart. There are K UEs, and the number of neighboring UEs used to construct the graph for LE is denoted by N. 
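Going back to Eq. (19), the complex gain of a single ray-traced path can be computed as sketched below. The antenna gains and Fresnel reflection coefficients are produced by the ray tracer and are therefore passed in as placeholders; the function name and interface are ours, not part of the channel simulator of [21].

```python
import numpy as np

G0 = 10 ** (-6.14)   # omnidirectional path gain at the 1 m reference distance, as in Eq. (19)

def multipath_gain(rho, d, g_tx, g_rx, reflection_coeffs, rng=None):
    """Eq. (19): complex gain of one ray-traced multipath component.

    rho               : transmit power
    d                 : propagation distance in metres
    g_tx, g_rx        : antenna gains at the departure and arrival angles
    reflection_coeffs : complex reflection coefficients along the path (empty or [1] for LOS)
    """
    rng = np.random.default_rng() if rng is None else rng
    psi = rng.uniform(0.0, 2.0 * np.pi)                    # phase ~ U(0, 2*pi)
    refl = np.prod(np.abs(reflection_coeffs) ** 2) if len(reflection_coeffs) else 1.0
    return np.exp(1j * psi) * np.sqrt(G0 * rho * d ** -2 * g_tx * g_rx * refl)
```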
The number of UEs for which EMPCC is used is denoted by J. Two scenarios are considered to evaluate the performance of EMPCC. In scenario I, the MPCC is generated based on the channel features of K UE locations. Then, J UE locations are removed at random, and EMPCC is used for mapping the J locations to the chart. In scenario II, J UE locations are selected at random and the MPCC is generated based on the channel features of K−J UE locations. EMPCC is used for mapping the J locations to the chart. Both Laplacian Eigenmaps based on a conflict graph and LE based on a weighted graph are used for channel charting. An example instance for LE-based MPCC/EMPCC for different parameters is shown in Figs. 7 and 8. For Fig. 7, the parameters are K=500, J=100, N=25, and B=4 BSs labeled as {1,3,5,7}. We select a reference point in the first quadrant for K and K−J MPCC to avoid the possibility of rotation or flipping of the EMPCC compared to MPCC. Two-dimensional channel chart for 4 BSs. The channel chart location of out-of-sample UEs is in black color. Left: re-inserting removed sample. Right: inserting new sample Two-dimensional channel chart for 10 BSs. The channel chart location of out-of-sample UEs is in black color. Left: re-inserting removed sample. Right: inserting new sample In Fig. 8, the parameters are K=5000, N=250, J=500, and B=10. The J out-of-sample locations are accurately mapped by EMPCC. The performance of MPCC/EMPCC is evaluated using continuity (CT) and trustworthiness (TW) measures as shown in Table 3. For a discussion on these measures, see [34]. CT and TW are computed by considering 50 nearest neighbors. For MPCC, all K UEs are used to generate the chart, whereas for EMPCC, the chart is constructed by K−J UEs and the EMPCC is used to position the reaming J UEs. For weighted Gaussian kernel, T=0.05. The CT and TW and measures of EMPCC are comparable to MPCC, indicating that the out-of-sample extension methodology in EMPCC works. Table 3 Comparison of MPCC and EMPCC in terms of TW and CT measures considering 50 neighbors, weighted LE (w-LE) with T=0.05 is considered Performance of link quality prediction For V2V link quality prediction, the MPCC is constructed based on V2I communications. For this, we consider a scenario with K=5000 UEs and B=10 BSs, in the Manhattan grid considered above. The LQP model is constructed based on the SNR of V2V pairs with the corresponding Euclidean charting distance computed using the MPCC locations. To construct the V2V channels, 1,000,000 random pairs of UEs are selected among the chart locations. The V2V mm-Wave channels are generated by generalizing the ray-tracing channel model of [21] in the same environment where the MPCC is constructed, and the average SNRs for V2V communications are computed as in (6). Figure 9 shows a scatter plot of the average SNR of the V2V pairs as function of physical and chart distances. As expected, the SNR of a V2V link decreases with increasing physical distance, and the relation of SNR with chart distance also captures this. This figure indicates that MPCC preserves the distance-SNR relation. It can be seen from Fig. 9 that at smaller distances, when charting distance \(d^{(\mathcal {C})}<75\), the probability that an average SNR of a V2V link is below a SNR threshold of γth=25 dB is zero, so for this charting distance, V2V communication is guaranteed to be successful with high data rates, or the transmitted power can be reduced to reduce the interference to other terminals. 
Scatter plot of the average SNR of V2V links as a function of true distance (left plot) and CC distance (right plot) Using the collected data of the average SNRs and the physical distances, a benchmark LQP model is constructed. The outage probabilities \(\mathcal {O}_{P}(\gamma _{th}| d^{(\mathcal {P})}_{g})\) are empirically computed for the true location UEs, for different SNR thresholds γth using a grid distance \(d^{(\mathcal {P})}_{g}\). Using the collected data of the average SNRs and the corresponding CC distances, a LQP model is constructed. The outage probabilities \(\mathcal {O}_{CC}(\gamma _{th}| d^{(\mathcal {C})}_{g})\) of (18) are empirically computed for the chart UEs, for different SNR thresholds γth. The trained outage probability model for different physical and CC distances and different thresholds γth is shown in Fig. 10. The left plot represents the benchmark LQP model that can be used to predict the outage probability of an out-of-sample V2V pair by knowing the true distance. The right plot represents the LQP model that can be used to predict the outage probability of an out-of-sample V2V pair by just knowing the EMPCC (out-of-sample chart) distance between them. The CC LQP relation as a function of the CC distance is similar to the benchmark LQP as a function of true distance. Outage probability for different SNR threshold values as a function of true distance (left plot) and CC distance (right plot) To estimate the performance of LQP in the online RRM phase, a test set of J=1000 out-of-sample V2I UEs was generated. The large-scale radio features of V2I channels are used to map these out-of-sample UEs to the existing chart using the EMPCC algorithm. Again, 1,000,000 V2V pairs are constructed at random from these out-of-sample UEs. The V2V mm-Wave channels and the V2V SNRs are generated in the same way as that for the chart UEs. The true outage probabilities as the function of physical and chart distances are then constructed for this test set. As a result, we get the outage probabilities of out-of-sample UEs as \(\mathcal {O}_{OS}\left (\gamma _{th}| d^{(\mathcal {P})}_{g}\right)\) and \(\mathcal {O}_{OS}\left (\gamma _{th}| d^{(\mathcal {C})}_{g}\right)\), respectively. Note that for comparison to the LQP model, the same quantization grid \(d^{(\mathcal {C})}_{g}\) and \(d^{(\mathcal {P})}_{g}\) are used for the physical and chart distances of the test set, as for the original trained UEs, respectively. The true outage probabilities can be compared to the ones predicted by the trained LQP. The relative mean square error for LQP of the outage probability from the data of chart UEs for a given γth at chart distance \(d^{(\mathcal {C})}_{g}\) is given by: $$ \delta_{\mathcal{C},g}^{2} = \frac{\left(\mathcal{O}_{OS}\left(\gamma_{th}| d^{(\mathcal{C})}_{g}\right) - \mathcal{O}_{CC}\left(\gamma_{th}| d^{(\mathcal{C})}_{g}\right)\right)^{2} }{\left(\mathcal{O}_{CC}(\gamma_{th}| d^{(\mathcal{C})}_{g}\right)^{2}}. $$ Similarly, the relative mean square error for the benchmark LQP of the outage probability from the data of UEs for a given γth at physical distance \(d^{(\mathcal {P})}_{g}\) is denoted as \(\delta _{\mathcal {P},g}^{2}\). Figure 11 shows the error \(\delta _{\mathcal {C},g}^{2}\) for different SNR thresholds as a function of the CC distance and the error \(\delta _{\mathcal {P},g}^{2}\) as a function of the physical distance. 
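The LQP model of Eq. (18) is essentially a per-bin empirical outage estimate over the quantized chart distance, and Eq. (20) compares the test-set outage curve against it. A sketch of both steps is shown below; the distance grid, SNR threshold, and function names are illustrative and the binning convention is an assumption.

```python
import numpy as np

def empirical_outage(cc_dist, snr, grid, snr_threshold):
    """Eq. (18): empirical outage probability per quantized chart-distance bin.

    cc_dist : (P,)   chart distances of the sampled V2V pairs
    snr     : (P,)   measured average V2V SNRs (same units as snr_threshold)
    grid    : (G+1,) increasing bin edges; samples outside the grid are ignored
    """
    bins = np.digitize(cc_dist, grid) - 1
    outage = np.full(len(grid) - 1, np.nan)
    for g in range(len(grid) - 1):
        in_bin = bins == g
        if in_bin.any():
            outage[g] = np.mean(snr[in_bin] <= snr_threshold)
    return outage

def relative_sq_error(outage_test, outage_model):
    """Eq. (20): relative squared error between test-set and trained outage curves."""
    return (outage_test - outage_model) ** 2 / outage_model ** 2
```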
The largest relative mean square error \(\delta _{\mathcal {C},g}^{2}=3.0\%\) is observed for threshold γth=6 dB, for a CC distance larger than 150. This indicates that the trained LQP model provides reliable prediction of the outage probability of the out-of-sample UEs just based on CSI of the V2I links. The relative error \(\delta _{\mathcal {P},g}\) based on the true distance is smaller than the relative error \(\delta _{\mathcal {C},g}\) based on the chart distance. Relative mean-square error of prediction \(\delta _{\mathcal {P},g}^{2}\) (left plot) and \( \delta _{\mathcal {C},g}^{2}\) (right plot). The small values of \(\delta _{\mathcal {P},g}^{2}\) and \( \delta _{\mathcal {C},g}^{2}\) indicate that the LQP model provides reliable prediction of the outage probability of the out-of-sample UEs We have presented the concept of link quality prediction for V2V communications in dynamic environments based on multipoint channel charting. For this, the physical locations of neither the vehicles nor the base stations are required. We have considered a network controlled V2V approach, where vehicles communicate with infrastructure BSs, and the large-scale radio frequency features of the V2I channels have been used to map vehicles to a logical map. A network control unit has been used to manage the selection and collection of enough SNR samples of V2V channels and to construct a LQP model. In order to use the prediction in online RRM, the channel charting principle has to be extended to out-of-sample data CSI features, related to out-of-sample vehicle locations. For this, a MPCC has been constructed first using an original data set of V2I CSIs. The multipath components of the new CSI samples have been estimated at each BS and then processed using the data-driven dissimilarity computation as the original set. The dissimilarity vector of the out-of-sample vehicle has been used to generate the weighting vector for out-of-sample mapping. The resulting EMPCC algorithm has been used to map out-of-sample vehicles to the chart. The trustworthiness and continuity performance measures have been used to evaluate the EMPCC, and we found that out-of-sample extension works in a reliable manner. The method has wide applicability in cognitive RRM, where predictions of vehicle connectivity parameters would be used. Here, we have used the channel chart to predict V2V connectivity. Based on the Euclidean chart distance, the probability of outage of V2V communication between two out-of-sample vehicles has been predicted. This can be used by the network to identify which vehicles may communicate over direct V2V links. The only input for this prediction is the V2I CSI of the two involved vehicles, as measured by the infrastructure base stations. In simulation modeling of a mm-Wave network, the LQP was found to perform well, with a typical relative mean square error of <2%. In future work, the locations of V2V pair, not only the chart distance, are going to be used to improve the LQP model. An advanced LQP model based on deep learning will also be considered to predict the SNR of the link given the CC locations of the V2V pairs. Using channel charting for multihop V2V communication is another RRM problem that can be considered, i.e., selecting the relaying nodes for V2V communication to achieve a desired link quality. 
Advanced mm-Wave channel models, in which the blockage probability, density, size, and speed of vehicles are taken into consideration, are important components when verifying channel charting-based RRM in such challenging scenarios. Data sharing not applicable to this article as no data sets were generated or analyzed during the current study. Part of the results of this paper were presented in [22]. A non-linear transformation can be applied to make clusters of multipath components separable. AI: B5G: Beyond 5G BS: CSI: Channel state information CT: DSRC: Dedicated short-range communication EMPCC: Extension-of-MPCC LE: Laplacian Eigenmaps LQP: Link quality prediction mMIMO: mm-Wave: Millimeter-wave MPCC: PHY: Radio resource management Rx: SM: Sammon's mapping t-SNE: t-Distributed Stochastic Neighbor Embedding TW: UEs: User equipments V2I: Vehicle-to-infrastructure V2V: Vehicle-to-vehicle w-LE: Weighted-LE V. Va, T. Shimizu, G. Bansal, R. Heath, Millimeter Wave Vehicular Communications: A Survey (Now Publishers Inc., Hanover, 2016). J. Kenney, Dedicated short-range communications (DSRC) standards in the United States. Proc. IEEE. 99(7), 1162–1182 (2011). M. Giordani, A. Zanella, M. Zorzi, in Proc. of the 6th International Conference on Modern Circuits and Systems Technologies, (MOCAST). Millimeter wave communication in vehicular networks: challenges and opportunities, (2017), pp. 1–6. https://doi.org/10.1109/mocast.2017.7937682. J. Choi, V. Va, N. Gonzalez-Prelcic, R. Daniels, C. Bhat, R. Heath, Millimeter-Wave vehicular communication to support massive automotive sensing. IEEE Commun. Mag.54(12), 160–167 (2016). S. Busari, K. Huq, S. Mumtaz, L. Dai, J. Rodriguez, Millimeter-Wave massive MIMO communication for future wireless systems: a survey. IEEE Commun. Surv. Tuts.20(2), 836–869 (2018). E. Bjornson, E. G. Larsson, T. Marzetta, Massive MIMO: ten myths and one critical question. IEEE Trans. Commun.54(2), 114–123 (2016). S. Yang, L. Hanzo, Fifty years of MIMO detection: the road to large-scale MIMOs. IEEE Commun. Surv. Tuts.17(4), 1941–1988 (2015). A. Shahmansoori, G. Garcia, G. Destino, G. Seco-Granados, H. Wymeersch, Position and orientation estimation through Millimeter-Wave MIMO in 5G systems. IEEE Trans. Wirel. Commun.17(3), 1822–1835 (2018). F. Guidi, A. Guerra, D. Dardari, A. Clemente, R. Errico, in Proc. of IEEE Globecom Workshops, (GC Wkshps). Environment mapping with Millimeter-Wave massive arrays: system design and performance, (2016), pp. 1–6. https://doi.org/10.1109/glocomw.2016.7848895. N. Garcia, H. Wymeersch, E. Larsson, A. Haimovich, M. Coulon, Direct localization for massive MIMO. IEEE Trans. Signal Process.65(10), 2475–2487 (2017). A. Kato, K. Sato, M. Fujise, ITS wireless transmission technology. technologies of Millimeter-Wave inter-vehicle communications: Propagation characteristics. J. Commun. Res. Lab.48(4), 99–110 (2001). M. Giordani, M. Polese, A. Roy, D. Castor, M. Zorzi, A tutorial on beam management for 3GPP NR at mmWave frequencies. IEEE Commun. Surv. Tuts.21(1), 173–196 (2019). M. Giordani, T. Shimizu, A. Zanella, T. Higuchi, O. Altintas, M. Zorzi, Path loss models for V2V mmWave communication: performance evaluation and open challenges. CoRR (2019). https://doi.org/10.1109/cavs.2019.8887792. M. Giordani, M. Mezzavilla, A. Dhananjay, S. Rangan, M. Zorzi, in Proc. of the 22th European Wireless Conference. Channel dynamics and SNR tracking in millimeter wave cellular systems, (2016), pp. 1–8. L. Liang, H. Ye, G. Y. 
Li, Toward intelligent vehicular networks: a machine learning framework. IEEE Internet Things J.6(1), 124–135 (2019). M. G. Kibria, K. Nguyen, G. P. Villardi, O. Zhao, K. Ishizu, F. Kojima, Big data analytics, machine learning, and artificial intelligence in next-generation wireless networks. IEEE Access. 6:, 32328–32338 (2018). D. Gündüz, P. de Kerret, N. Sidiropoulos, D. Gesbert, C. Murthy, M. Schaar, Machine learning in the air. CoRR. abs/1904.12385: (2019). C. Zhang, P. Patras, H. Haddadi, Deep learning in mobile and wireless networking: a survey. IEEE Commun. Surv. Tuts., 1 (2019). https://doi.org/10.1109/comst.2019.2904897. Z. Jiang, S. Chen, A. Molisch, R. Vannithamby, S. Zhou, Z. Niu, Exploiting wireless channel state information structures beyond linear correlations: a deep learning approach. IEEE Commun. Mag.57(3), 28–34 (2019). C. Studer, S. Medjkouh, E. Gonultaş, T. Goldstein, O. Tirkkonen, Channel charting: locating users within the radio environment using channel state information. IEEE Access. 6:, 47682–47698 (2018). J. Deng, S. Medjkouh, N. Malm, O. Tirkkonen, C. Studer, in Proc. of 52nd Asilomar Conference on Signals, Systems, and Computers. Multipoint channel charting for wireless networks, (2018), pp. 286–290. https://doi.org/10.1109/acssc.2018.8645281. T. Ponnada, H. Al-Tous, O. Tirkkonen, C. Studer, in Proc. of 14th EAI International Conference on Cognitive Radio Oriented Wireless Networks, (Crowncom). An out-of-sample extension for wireless multipoint channel charting, (2019). https://doi.org/10.1007/978-3-030-25748-4_16. M. Akdeniz, Y. Liu, M. Samimi, S. Sun, S. Rangan, T. Rappaport, E. Erkip, Millimeter wave channel modeling and cellular capacity evaluation. IEEE J. Sel. Areas Commun.32(6), 1164–1179 (2014). A. Goldsmith, Wireless Communications (Cambridge University Press, New York, NY, USA, 2005). R. Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag.34(3), 276–280 (1986). M. Ester, H. Kriegel, J. Sander, X. Xu, in Proc. of the Second International Conference on Knowledge Discovery and Data Mining, (KDD). A density-based algorithm for discovering clusters in large spatial databases with noise, (1996), pp. 226–231. L. Maaten, E. Postma, H. Herik, Dimensionality reduction: a comparative review. J Mach Learn Res. 10:, 66–71 (2009). P. Huang, O. Castaneda, E. Gonultaş, S. Medjkouh, O. Tirkkonen, T. Goldstein, C. Studer, in Proc. of the IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications, (SPAWC). Improving channel charting with representation-constrained Autoencoders, (2019), pp. 1–5. https://doi.org/10.1109/spawc.2019.8815478. E. Lei, O. Castaneda, O. Tirkkonen, T. Goldstein, C. Studer, Siamese neural networks for wireless positioning and channel charting. ArXiv. abs/1909.13355: (2019). https://doi.org/10.1109/allerton.2019.8919897. M. Belkin, P. Niyogi, Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput.15(6), 1373–1396 (2003). Y. Bengio, J. Paiement, P. Vincent, O. Delalleau, N. L. Roux, M. Ouimet, in Proc. of the 16th International Conference on Neural Information Processing Systems, (NIPS). Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering (MIT PressCambridge, MA, USA, 2003), pp. 177–184. S. Hur, S. Baek, B. Kim, J. Park, A. F. Molisch, K. Haneda, M. Peter, in Proc. of the 9th European Conference on Antennas and Propagation, (EuCAP). 28 GHz channel modeling using 3D ray-tracing in urban environments, (2015), pp. 
1–5. M. Samimi, T. Rappaport, in Proc. of IEEE International Conference on Communications, (ICC). 3-D statistical channel model for millimeter-wave outdoor mobile broadband communications, (2015), pp. 2430–2436. https://doi.org/10.1109/icc.2015.7248689. J. Venna, S. Kaski, in Proc. of the International Conference on Artificial Neural Networks, (ICANN). Neighborhood preservation in nonlinear projection methods: an experimental study, (2001), pp. 485–491. https://doi.org/10.1007/3-540-44668-0_68. This work was funded in part by the Academy of Finland (grant 319484). The work of C. Studer was supported in part by Xilinx Inc. and by the US NSF under grants ECCS-1408006, CCF-1535897, CCF-1652065, CNS-1717559, and ECCS-1824379. Department of Communications and Networking, Aalto University, Espoo, Finland Hanan Al-Tous, Tushara Ponnada & Olav Tirkkonen School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA Hanan Al-Tous Tushara Ponnada Olav Tirkkonen OT proposed the idea and revised this paper. HA and TP wrote the manuscript and participated in the simulation. CS gave some suggestions and participated in the paper revision. All authors have contributed to this research work. All authors have read and approved the final manuscript. Correspondence to Hanan Al-Tous. Al-Tous, H., Ponnada, T., Studer, C. et al. Multipoint channel charting-based radio resource management for V2V communications. J Wireless Com Network 2020, 132 (2020). https://doi.org/10.1186/s13638-020-01723-3
CommonCrawl
Zeta function integral How can I show$$\frac{1}{2\pi i}\oint_{c}\frac{1}{\zeta(s)s(s-1)^2}ds=-1$$ Where C is a closed curve encircling all of the zeros of $\zeta(s)$, Perhaps can someone just help me show it exists (the integral) Doesn't the fact the real parts of the zeros of the zeta function are less then 1 imply its existence? number-theory $\begingroup$ Which zeroes? The trivial ones or the non trivial ones. $\endgroup$ – Mhenni Benghorbal Jan 10 '13 at 1:45 $\begingroup$ both trivial and non trivial $\endgroup$ – Ethan Jan 10 '13 at 1:52 $\begingroup$ Dear Ethan, The trivial zeroes extend all the way along the negative real axis. The trivial zeroes extend all the way up and down along the critical strip (presumably even the critical line). Given this, it's hard to encircle them with a single closed curve. So are you sure you have everything straight? Regards, $\endgroup$ – Matt E Jan 10 '13 at 2:50 $\begingroup$ Not sure at all lol $\endgroup$ – Ethan Jan 10 '13 at 2:54 $\begingroup$ What is the source of this statement? $\endgroup$ – Antonio Vargas Jan 10 '13 at 3:00 Here is the sum of residues at the trivial zeroes of the Zeta function $$ -\sum _{k=1}^{\infty }\,\frac{1}{{2k\zeta}'(-2k)(2k+1)^2} \sim 0.9998418292,$$ where the residue at $s=-2k$ is given by $$ \lim_{s \to -2k}\frac{(s+2k)}{\zeta(s)s(s-1)^2}=-\frac{1}{{2k\zeta}'(-2k)(2k+1)^2}. $$ Note: You need to find a suitable sequence of contours $C_n$. Mhenni BenghorbalMhenni Benghorbal I feel like I messed somthing up here, $$\lim_{n\to \infty}\frac{1}{n}\sum_{k=1}^nf(\frac{k}{n})=\int_{0}^1 f(x) \ dx$$ $$\sum_{k\leq x}\Lambda(k)=\psi(x)$$ $$\sum_{k\leq x}\mu(k)=M(x)$$ $$\frac{1}{n}\sum_{k=1}^n\ln(\frac{k}{n})M(\frac{n}{k})=\frac{\psi(n)}{n}-\frac{\ln(n)}{n},\text{ by Chebyshevs identity}$$ $$\lim_{n\to \infty}\frac{1}{n}\sum_{k=1}^n\ln(\frac{k}{n})M(\frac{n}{k})=\int_{0}^1\ln(x)M(\frac{1}{x}) \ dx=\lim_{n \to \infty} \frac{\psi(n)}{n}-\frac{\ln(n)}{n}$$ $$\int_{0}^1\ln(x)M(\frac{1}{x}) \ dx=1, \text{ by the prime number theorem}$$ $$\frac{1}{2\pi i}\oint_{c}\frac{1}{x^s\zeta(s)s}ds=M(\frac{1}{x}), \text{by Perron's formula}$$ $$\frac{1}{2\pi i}\oint_{c}\frac{\ln(x)}{x^s\zeta(s)s}ds=\ln(x)M(\frac{1}{x})$$ $$\frac{1}{2\pi i}\oint_{c}\int_{0}^1\frac{\ln(x)}{x^s\zeta(s)s} dx \ ds=\int_{0}^1\ln(x)M(\frac{1}{x}) \ dx=1$$ $$\frac{1}{2\pi i}\oint_{c}\int_{0}^1\frac{\ln(x)}{x^s\zeta(s)s} dx \ ds=1$$ $$\int_{0}^1\frac{\ln(x)}{x^s} dx = \frac{-1}{(s-1)^2}, \text{for } \text{ } \Re(s)<1$$ $$\frac{1}{2\pi i}\oint_{c}\int_{0}^1\frac{\ln(x)}{x^s\zeta(s)s} dx \ ds=\frac{1}{2\pi i}\oint_{c}\frac{-1}{\zeta(s)s(s-1)^2} ds=1, \text{ because the zeros of the zeta function satisfy } \text{ } \Re(s)<1$$ $$\frac{1}{2\pi i}\oint_{c}\frac{1}{\zeta(s)s(s-1)^2} ds=-1$$ $\begingroup$ What is $c$, exactly? Also, could you give a citation for that version of Perron's formula? $\endgroup$ – Antonio Vargas Jan 10 '13 at 4:00 $\begingroup$ @Ethan Typically in Perron's formula, you integrate over a line $\text{Real}(s) = c$. Hence, do you mean an integral over the line or is it some closed curve in which case could you clarify, which Perron's formula you are using. $\endgroup$ – user17762 Jan 10 '13 at 4:04 $\begingroup$ I thought I could re-write it as a contour integral over some curve involving the roots of the zeta function, but nvm that, you can just assume the countour integral is a definite integral with lower and uper bound replaced with $c-i\infty$, $c+i\infty$, and c>1, and arbitrary otherwise. 
– Ethan Jan 10 '13 at 4:11
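For readers who want to check the numerical value quoted in the first answer, here is a minimal sketch (an editorial addition, assuming mpmath's `zeta(s, a, derivative)` signature for computing $\zeta'(s)$) that sums the residues at the trivial zeros:

```python
# Minimal numerical check of the residue sum quoted in the first answer,
# assuming mpmath's zeta(s, a, derivative) signature for zeta'(s).
from mpmath import mp, zeta

mp.dps = 30  # working precision

def residue_sum_at_trivial_zeros(K=40):
    """Sum of residues of 1/(zeta(s)*s*(s-1)^2) at s = -2k, k = 1..K."""
    total = mp.mpf(0)
    for k in range(1, K + 1):
        # residue at the simple zero s = -2k is -1/(2k * zeta'(-2k) * (2k+1)^2)
        total += -1 / (2 * k * zeta(-2 * k, 1, 1) * (2 * k + 1) ** 2)
    return total

print(residue_sum_at_trivial_zeros())  # should approach ~0.99984 as K grows
```

The terms shrink rapidly because $|\zeta'(-2k)|$ grows roughly like $(2k)!/(2\pi)^{2k}$, so a few dozen terms suffice.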
CommonCrawl
Oxidised Met147 of human serum albumin is a biomarker of oxidative stress, reflecting glycaemic fluctuations and hypoglycaemia in diabetes Akari Momozono, Yoshio Kodera, Sayaka Sasaki, Yuzuru Nakagawa, Ryo Konno & Masayoshi Shichiri. Scientific Reports volume 10, Article number: 268 (2020). Oxidative stress has been linked to a number of chronic diseases, and this has aroused interest in the identification of clinical biomarkers that can accurately assess its severity. We used liquid chromatography-high resolution mass spectrometry (LC-MS) to show that oxidised and non-oxidised Met residues at position 147 of human serum albumin (Met147) can be accurately and reproducibly quantified with stable isotope-labelled peptides. Met147 oxidation was significantly higher in patients with diabetes than in controls. Least square multivariate analysis revealed that glycated haemoglobin (HbA1c) and glycated albumin (GA) did not significantly influence Met147 oxidation, but the GA/HbA1c ratio, which reflects glycaemic excursions, independently affected Met147 oxidation status. Continuous glucose monitoring revealed that Met147 oxidation strongly correlates with the standard deviation of sensor glucose concentrations and the time spent with hypoglycaemia or hyperglycaemia each day. Thus, glycaemic variability and hypoglycaemia in diabetes may be associated with greater oxidation of Met147. Renal function, high-density lipoprotein-cholesterol and serum bilirubin were also associated with the oxidation status of Met147. In conclusion, the quantification of oxidised and non-oxidised Met147 in serum albumin using our LC-MS methodology could be used to assess the degree of intravascular oxidative stress induced by hypoglycaemia and glycaemic fluctuations in diabetes. Oxidative stress is involved in a number of disease processes, including cardiovascular diseases1,2, diabetes3,4,5,6,7, chronic kidney disease8,9,10, cancer11,12, hypertension2 and neurodegenerative disorders13,14. Oxidative stress is also believed to be associated with ageing-associated disorders15,16. Functional oxidative modification of biomolecules, including intravascular and cellular proteins, may have a causal role in the cellular dysfunctions that are involved in disease pathophysiology17,18. The identification of clinical biomarkers of the severity of exposure to oxidative stress has been the intense focus of many researchers19,20, because they could be used to predict the development of major human diseases. Because the quantification of reactive oxygen species is difficult, given their very short half-lives, the measurement of stable by-products generated under conditions of oxidative stress remains a popular approach to the monitoring of free radical-influenced processes20. Methionine (Met), a sulfur-containing amino acid, is an important antioxidant that contributes to the structure and stability of proteins21. Met is readily oxidised to form Met sulfoxide (MetO), which can be reduced back to Met by MetO reductases22,23,24,25,26. Because of this instability of Met and MetO, their quantification has not been a widely used method for the assessment of the degree of oxidative stress27. 
However, we have recently found that the mass spectral intensity of serum tryptic peptides containing oxidised and non-oxidised Met residues can be very stably and reproducibly measured using liquid chromatography-high resolution mass spectrometry (LC-MS), irrespective of the time the blood sample is left to clot at room temperature before centrifugation or repeated freeze/thaw cycles28. This may be because of the absence of MetO reductase activity in human blood29, although such activity has not been well characterised to date. In the present study, we have quantified the levels of oxidised and non-oxidised Met at position 147 of human serum albumin (Met147), to determine whether the oxidation status of this residue reflects the oxidative stress induced during disease pathophysiology, and ultimately whether this might represent a useful biomarker of oxidative stress. To this end, we have improved our mass spectrometric methodology for the accurate and stable quantification of such residues in clinically-derived samples. Improved methodology for the quantification of oxidised and non-oxidised Met-containing serum tryptic peptides We first sought to improve the accuracy and reproducibility of our previous methodology for the quantification of the oxidation of Met residues in serum tryptic proteins28. We previously found that the ratio of trypsin-digested serum albumin fragments containing oxidised and non-oxidised Met residues at position 147 of human serum albumin, Alb(Met147O) and Alb(Met147), is one of the most promising potential clinical biomarkers of intravascular redox status among the Met-containing tryptic serum proteins identified using a proteomic strategy28. Because the use of stable isotope-labelled peptides is known to enable the accurate quantification of peptide concentrations in biological samples, we synthesized two stable isotope-labelled peptides, SI-Alb(Met147) and SI-Alb(Met147O), corresponding to the tryptic peptides, Alb(Met147) and Alb(Met147O), respectively. Our previous methodology, which did not use stable isotope-labelled peptides, quantified the signal intensity of Alb(Met147O) and Alb(Met147), and employed their ratio as an indicator of oxidised Met. In the current analysis, the serially diluted stable isotope-labelled peptides, SI-Alb(Met147) and SI-Alb(Met147O), were spiked into serum samples from participants prior to tryptic digestion and LC-MS analysis to generate the extracted ion chromatogram (XIC) intensities for the two endogenous tryptic peptides, Alb(Met147) and Alb(Met147O), and the corresponding stable-isotope-labelled peptides, SI-Alb(Met147) and SI-Alb(Met147O) (Fig. 1). The serum concentrations of Alb(Met147) (\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\)) and Alb(Met147O) \(({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})})\) were then extrapolated from the XICs generated using the respective endogenous peptides and the corresponding spiked stable isotope-labelled peptides. The oxidation ratio for Met147 was obtained by dividing \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\) by \({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\). Standard calibration curves were generated using serially diluted serum samples vs. quantified Alb(Met147) or Alb(Met147O), and resulting equations from the regression analyses demonstrated very high degrees of linearity (Supplementary Fig. S1). 
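To make the extrapolation step above concrete, the following minimal sketch (an editorial illustration in Python, not the authors' pipeline; the XIC areas shown are hypothetical placeholders, while the spiked concentrations follow the values given in the Methods) computes the two concentrations and their ratio from XIC areas of the endogenous and stable isotope-labelled peptides:

```python
# Minimal sketch of isotope-dilution quantification from XIC areas.
# XIC areas below are hypothetical placeholders, not values from the study.

def concentration_from_xic(xic_endogenous, xic_spiked_si, c_si_spiked):
    """Endogenous peptide concentration, assuming the endogenous and stable
    isotope-labelled (SI) peptides behave identically in LC-MS, so that
    C_endogenous / C_SI = XIC_endogenous / XIC_SI."""
    return c_si_spiked * xic_endogenous / xic_spiked_si

# Spiked SI-peptide concentrations as in the Methods (umol/L); XIC areas are hypothetical
c_met    = concentration_from_xic(xic_endogenous=8.2e6, xic_spiked_si=5.1e6, c_si_spiked=2.425)
c_met_ox = concentration_from_xic(xic_endogenous=1.3e4, xic_spiked_si=4.7e5, c_si_spiked=0.156)

oxidation_ratio = c_met_ox / c_met   # C_Alb(Met147O) / C_Alb(Met147)
print(f"Met147 oxidation ratio: {oxidation_ratio:.6f}")
```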
This updated method, using stable isotope-labelled peptides, yielded a coefficient of variation (%CV) of 9.8%, while the original method, which used the signal intensity ratio of Alb(Met147O) and Alb(Met147), yielded a %CV of 19.7% (n = 17). Therefore, subsequent quantifications were performed using \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) values determined using the respective stable isotope-containing peptides. Representative extracted ion chromatograms (XICs) of tryptic peptides containing oxidised and non-oxidised Met147 residue and relevant stable isotope-labelled peptides. The stable isotope-labelled peptides, SI-Alb(Met147) and SI-Alb(Met147O), were spiked into the serum prior to trypsin digestion. XICs with charge states of three (a,c) and four (b,d) of endogenous (Alb(Met147) and Alb(Met147O)) (a,b) and stable isotope-labelled peptides (c,d) are presented. Oxidised peptides were magnified 100-fold and 10-fold, respectively, and are shown above the original peaks. We next evaluated the benefits of using L-Met and L-cysteine (L-Cys) to prevent the spontaneous oxidation of Met and the carbamidomethylation of N-terminal amino acid residues. The addition of L-Met after the tryptic digestion of serum samples suppressed the spontaneous oxidation of Alb(Met147). \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\), measured in serum samples from six healthy participants without the addition of L-Met, increased after a week or a month of storage (Fig. 2a–c). In contrast, the addition of excess L-Met to the serum samples markedly suppressed this increase (Fig. 2a–c). We next compared the XICs of Alb(Met147) and SI-Alb(Met147), with and without the use of L-Cys. The addition of excess L-Cys prior to the trypsin digestion of serum samples inhibited carbamidomethylation at the N-terminus of these peptides (Fig. 2d,e), leading to higher XIC values for uncarbamidomethylated Alb(Met147) and SI-Alb(Met147) (Fig. 2f,g). Therefore, we added excess L-Cys and L-Met prior to and immediately after the enzymatic digestion of serum samples in subsequent experiments. Suppressive effects of L-Met and L-Cys on the spontaneous oxidation of Met and the carbamidomethylation of the N-terminal amino acid of Alb(Met147). Serum samples obtained from six healthy participants were digested with trypsin and, after removal of surfactant, stored at −80 °C with (L-Met(+)) or without (L-Met(−)) the addition of excess L-Met for 0 (a), 7 (b) or 28 (c) days before determining \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\). Trypsin-digested serum samples with or without the addition of excess L-Cys were analysed using LC-MS and the XICs of N-terminally carbamidomethylated Alb(Met147) (d), SI-Alb(Met147) (e), uncarbamidomethylated Alb(Met147) (f) and SI-Alb(Met147) (g) were determined. Horizontal bars represent the mean ± SEM. *p < 0.05 vs L-Met(+) or L-Cys(+), #p < 0.005 vs L-Met(+) or L-Cys(+). The effect of long-term storage on Met oxidation was evaluated using SI-Alb(Met147) and SI-Alb(Met147O), and excess L-Cys and L-Met. \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) did not significantly change after 2 years of storage at −80 °C (0.001267 ± 0.0001391, compared with the value obtained immediately after blood withdrawal: 0.001447 ± 0.0002763; n = 6; p = 0.2188). 
We next determined whether the length of the clotting time prior to serum separation affected the spontaneous oxidation of Alb(Met147). Blood samples were allowed to clot at room temperature, and the obtained sera were alkylated and trypsin-digested for subsequent LC-MS analysis. \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\), determined in samples centrifuged after periods of time on the bench of between 10 min and 6 h, did not show any appreciable differences (0.001331 ± 0.00008874, 10 min; 0.001325 ± 0.00006762, 30 min; 0.001221 ± 0.0001424, 1 h; 0.001264 ± 0.0001149, 2 h; 0.001267 ± 0.0001608, 3 h; 0.001312 ± 0.0001276, 6 h; p = 0.5783). Therefore, \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) is a robust and reproducible measurement that is not affected by clotting time. Oxidised Met ratio in diabetes We next measured \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) in 40 healthy volunteers and 124 patients with diabetes (Table 1). \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) levels were significantly higher in the diabetic patients than in the healthy volunteers (Fig. 3). Single regression analysis revealed that age, glycated albumin (GA)/glycated haemoglobin (HbA1c), blood urea nitrogen (BUN), serum creatinine (Cr) and uric acid positively correlated with \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) level, and this negatively correlated with body mass index (BMI), estimated glomerular filtration rate (eGFR) and serum total bilirubin (Table 2). Least square multivariate analysis was undertaken using these statistically significant parameters as explanatory variables, as well as those reported to have an antioxidant activity, and revealed that GA/HbA1c, eGFR, high-density lipoprotein (HDL)-cholesterol and total bilirubin significantly and independently influenced \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) level (Table 3). Table 1 Characteristics of the enrolled subjects. Levels of Met oxidation in diabetic and non-diabetic participants. The serum concentrations of Alb(Met147) (\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\)) and Alb(Met147O) (\({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)) were determined using the XICs generated by LC-MS analyses of Alb(Met147), Alb(Met147O), SI-Alb(Met147) and SI-Alb(Met147O) peptides. \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) was determined in 40 healthy volunteers (Control) and 124 diabetic participants (DM). **p < 0.01 compared with Control. Table 2 Correlations between the serum level of methionine oxidation and other parameters (univariate analyses). Table 3 Multivariate analysis of the relationship between the serum level of methionine oxidation and other participant characteristics. From the study sample of 164 participants, 35 (17 men and 18 women; 28 diabetic and 7 non-diabetic participants; 47.2 ± 15.5 years) had their \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) measured while undergoing continuous glucose monitoring (CGM). The standard deviation (SD), %CV and the mean sensor glucose level (SGL) were calculated over 4–7-day monitoring periods. 
The periods of time during each day the participant was hypoglycaemic (SGL < 70 mg/dl), normoglycaemic (70 mg/dl < SGL < 140 mg/dl) and hyperglycaemic (140 mg/dl < SGL) were also calculated. The SD and %CV significantly correlated with \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) (SD: p = 0.0055, r = 0.4592; %CV: p = 0.0039, r = 0.4751) (Fig. 4a–c) and the lengths of the hypoglycaemic and hyperglycaemic periods positively correlated with \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) (p = 0.0402, r = 0.3484 and p = 0.0138, r = 0.4124, respectively) (Fig. 4d,f). In contrast, the time spent in the normoglycaemic range negatively correlated with \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) (p = 0.0026, r = −0.4927) (Fig. 4e). Relationship between Met oxidation and blood glucose profile, evaluated using continuous glucose monitoring. Continuous glucose monitoring was performed in 35 participants for 4–7 days and the sensor glucose levels (SGL) over the entire monitoring period were used to calculate the standard deviation (SD) (a), % coefficient of variation (%CV) (b) and the mean SGL (c) values. The relative lengths of time with SGL < 70 mg/dL (d), 70–140 mg/dL (e) and >140 mg/dL (f) were plotted against \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\), and the corresponding regression line is shown. Because glycaemic excursions appeared to be closely associated with higher \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\), we next determined whether the administration of a sodium glucose cotransporter 2 inhibitor, which suppresses glycaemic fluctuations, would reduce the oxidation of the Met residue. Indeed, the \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) of 18 diabetic participants (nine men and nine women; 54.3 ± 9.5 years; HbA1c 9.3 ± 1.7%, BMI 32.1 ± 5.6 kg/m2) was significantly lower after either canagliflozin, luseogliflozin or empagliflozin was administered for 28 days (Fig. 5). Effect of sodium glucose cotransporter 2 inhibitor treatment on the Met oxidation status of serum albumin. \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) was determined in 18 type 2 diabetic participants before and after 28 days' oral administration of a sodium glucose cotransporter 2 inhibitor. **p < 0.01, calculated using the Wilcoxon signed-rank test. Finally, we assessed the advantage of this updated methodology over our previous method that simply measured signal intensity ratios of Alb(Met147O) and Alb(Met147) ([Alb(Met147O]/[Alb(Met147)]) without the use of stable isotope-labelled peptides28. We reanalysed all clinical samples pretreated with L-Met and L-Cys to determine [Alb(Met147O]/[Alb(Met147)] and performed exactly the same statistical analyses as described above. [Alb(Met147O]/[Alb(Met147)] were significantly higher in the diabetic patients than in healthy volunteers (Supplementary Fig. S2). Unlike \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\), however, single regression analysis did not reveal the association of [Alb(Met147O]/[Alb(Met147)] with GA/HbA1c or total bilirubin (Supplementary Table S1). 
Multiple regression did not identify total bilirubin as an independent variable influencing [Alb(Met147O]/[Alb(Met147)] (Supplementary Table S2). In patients undergoing continuous glucose monitoring, [Alb(Met147O]/[Alb(Met147)] only showed a positive correlation with %CV of glucose levels and length of time spent with hypoglycaemia, but did not reveal any significant correlation with standard deviation or average glucose concentration, nor the percentage of time spent with normal or high glucose levels (Supplementary Fig. S2). [Alb(Met147O]/[Alb(Met147)] of 18 diabetic participants did not significantly decrease after the oral intake of a sodium glucose cotransporter 2 inhibitor for 28 days (Supplementary Fig. S3). These results demonstrate that the updated quantification approach using stable isotope-labelled peptides is far more sensitive in detecting redox status changes in disease pathophysiology than our previous techniques, which only measured signal intensity ratios, even when excess L-Met and L-Cys were used upon enzyme digestion of serum. In our previously published study, we determined whether the oxidation of Met residues of serum proteins could be used as a clinical marker of oxidative stress, and concluded that the mass spectral intensity ratio of the levels of oxidised and non-oxidised Met residues of some serum tryptic proteins could be quantified stably and reproducibly28. In the present study, we have successfully improved the accuracy and reproducibility of our method for the quantification of the ratio of Met147 to Met147O-containing tryptic peptides. First, we measured the concentrations of serum Alb(Met147) and Alb(Met147O) using stable isotope-labelled peptides by extrapolating the respective XIC data obtained using LC-MS analysis. In our previous method, the mass spectral signal intensity ratio of the serum tryptic peptides with and without MetO was determined28. However, this method was limited in accuracy because of broadening of the elution peak, which likely occurs as the LC column condition deteriorates. The degree of this broadening may differ between oxidised and non-oxidised Met, which could lead to lower accuracy when many samples are analysed. The quantification of peptides containing both oxidised and non-oxidised Met that we used in the present study eliminates this drawback by reducing the effects of column condition on the analysis. Second, the addition of excess L-Met and L-Cys to serum samples prior to the determination of \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) also improved the accuracy of LC-MS quantification. Met is prone to spontaneous oxidation once the surfactant used for digesting the serum proteins is removed prior to LC-MS analysis. The Alb(Met147) in the trypsinised serum samples becomes significantly oxidised from 1 week of storage, but the addition of excess L-Met prior to this prevents the oxidation of Alb(Met147) for at least 1 month. Another difficulty associated with the use of LC-MS technology for the accurate quantification of serum tryptic peptides containing Cys residues is the undesirable carbamidomethylation of peptide N-termini that is associated with the reductive alkylation procedure that prevents disulfide formation. In the present study, we have shown that the addition of excess L-Cys greatly enhances the XIC intensity of non-carbamidomethylated Alb(Met147) and SI-Alb(Met147). 
Furthermore, the addition of excess L-Cys and L-Met did not significantly affect \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) levels, even after serum samples were stored for 2 years at −80 °C. Thus, the addition of excess L-Met and L-Cys eliminates undesirable spontaneous Met oxidation and carbamidomethylation of N-terminal residues of target peptides, which leads to a marked improvement in the accuracy of the quantification of Alb(Met147), Alb(Met147O), SI-Alb(Met147) and SI-Alb(Met147O) using LC-MS. Having successfully established an accurate, stable and reproducible method for the quantification of the oxidation rate of Met residues in blood proteins using a single drop of human serum, we then determined whether this would reflect the redox status associated with conditions that are known to modulate oxidative stress status in experimental diabetes30,31,32,33. In humans, it remains controversial whether glucose fluctuation during diabetes activates oxidative stress34,35,36 and it has yet to be established whether hypoglycaemia affects redox status37. In the current study, we found that \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) was significantly higher in diabetic than in healthy participants. It did not correlate with HbA1c or GA, which reflect mean blood glucose over a period, but significantly correlated with GA/HbA1c ratio, which reflects glycaemic variability38. Therefore, we used CGM to determine whether \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) reflects features of the pathophysiology of diabetes, such as glucose fluctuations and hypoglycaemia. \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) positively correlated with parameters that directly reflect blood glucose fluctuations, such as SD, %CV, duration of hypoglycaemia and duration of hyperglycaemia, and negatively correlated with the length of time the patients were normoglycaemic. In agreement with this, 4 weeks of treatment of 18 type 2 diabetes patients with a sodium glucose cotransporter 2 inhibitor, which reduces glycaemic fluctuation by enhancing urinary glucose excretion, significantly reduced \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\). Least square multiple regression analysis also showed that serum bilirubin, an endogenous antioxidant39,40, HDL-cholesterol and eGFR are independent variables that negatively influence \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\). Taken together, these findings suggest that a variety of pathophysiological factors that have been shown experimentally to affect oxidative stress, such as glucose fluctuation and hypoglycaemia in diabetes, serum bilirubin, and renal function, do indeed affect indicators of oxidative stress in human blood proteins in a clinical setting. To assess the superiority of our updated methodology, we reanalyzed all clinical samples using previous methodology that measures signal intensity ratios of MetO- and Met-containing peptides, and compared the results with those of the new method. [Alb(Met147O]/[Alb(Met147)] levels were not associated with the GA/HbA1c ratio or serum total bilirubin when regression analyses were performed using entire samples. 
In patients undergoing continuous glucose monitoring, [Alb(Met147O]/[Alb(Met147)] only showed a positive correlation with %CV of glucose levels and length of time spent with hypoglycaemia, but did not reveal any significant correlation with the standard deviation or average glucose concentration, nor the percentage of time spent with normal or high glucose levels (Supplementary Fig. S2). In contrast, \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) was associated with all of the measured parameters reflecting glycemic excursion, hypoglycaemia, and hyperglycaemia (Fig. 4), and endogenous factors theoretically affecting oxidative stress status, such as serum bilirubin, HDL cholesterol, and eGFR (Table 3). These results demonstrate that the updated approach is far more sensitive in detecting redox status changes than our previous techniques, which simply measured signal intensity ratios. To date, clinical biomarkers that can detect the intravascular oxidative stress status elicited by blood glucose fluctuations, hypoglycaemia, and hyperglycaemia in diabetes, as well as that affected by endogenous factors such as serum bilirubin and HDL cholesterol, have been unavailable. Our results demonstrate that our upgraded mass spectrometry approach shows a marked improvement in usefulness such that it can detect the intravascular redox status in human disease pathophysiology. In our previous report, we identified 53 trypsin-digested serum peptides that contained MetO28. However, the MS1 XICs of many of these peptides overlapped with those of other enzyme-digested peptides when measured by LC-MS with a short analysis time (<60 min), which prevented detection of their signal intensities. MetO-containing peptides derived from low serum concentration proteins elicit comparatively larger background signals, rendering accurate measurement difficult. As a consequence, we selected five methionine-containing peptides technically quantifiable with relatively high concentrations and little signal overlap28. Met residues already highly oxidised at baseline in healthy subjects, such as Met1181 of complement C3, cannot reflect redox status changes elicited by disease pathophysiology28. Therefore, in the current study, we focused on Met147 of serum albumin as a promising clinical biomarker candidate because Met147 has lower baseline oxidation levels than most other Met residues, such as Met332 or Met363. The oxidation of two Met residues in the rat brain, Met288 and Met572, was also reported41. Interestingly, these Met residues with elevated baseline oxidation levels are located on the surface of rat and human serum albumin molecules. It is speculated that baseline oxidation and/or susceptibility to oxidation depends upon the position of each Met residue in the three-dimensional structure of the protein molecule. Met is highly susceptible to oxidation, and elevated Met(O)-protein levels have been demonstrated in a variety of oxidative stress-related diseases. Further, Met(O) reductase has not been detected in human blood samples29. Therefore, Met(O) in selected Met residues could be used as oxidative stress biomarkers as long as Met- and Met(O)-containing peptides are accurately quantified. In conclusion, our LC/MS methodology appears to be sufficiently accurate and sensitive for the detection of the effects of intravascular redox status in human pathophysiology. 
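The CGM-derived glycaemic variability metrics referred to above (SD, %CV and time spent in the hypo-, normo- and hyperglycaemic ranges) are straightforward to compute from a series of sensor glucose readings; the following minimal sketch (an editorial illustration in Python with hypothetical readings, using the 70 and 140 mg/dL thresholds defined in this study) shows one way to do so:

```python
# Minimal sketch of the CGM-derived metrics used above, computed from a list of
# sensor glucose readings (mg/dL). Thresholds follow the 70/140 mg/dL definitions
# in this study; the example readings are hypothetical.
import statistics

def cgm_metrics(sgl):
    mean_sgl = statistics.mean(sgl)
    sd = statistics.stdev(sgl)
    cv_percent = 100 * sd / mean_sgl
    n = len(sgl)
    time_hypo  = 100 * sum(g < 70 for g in sgl) / n         # % of readings < 70 mg/dL
    time_normo = 100 * sum(70 < g < 140 for g in sgl) / n   # % of readings 70-140 mg/dL
    time_hyper = 100 * sum(g > 140 for g in sgl) / n        # % of readings > 140 mg/dL
    return mean_sgl, sd, cv_percent, time_hypo, time_normo, time_hyper

readings = [95, 110, 160, 210, 180, 140, 90, 65, 75, 130]   # hypothetical SGL samples
print(cgm_metrics(readings))
```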
The study population consisted of 40 healthy volunteers and 124 diabetic patients (94 with type 2 diabetes and 30 with type 1 diabetes) who visited the Kitasato University Hospital between May 2016 and July 2019. A diagnosis of type 2 diabetes was made on the basis of insulin-independence, according to the criteria of the Japan Diabetes Society (patients with anti-glutamic acid decarboxylase autoantibody >1.5 U/ml or serum C-peptide <0.5 ng/ml were excluded)42. Clinical records were reviewed for all the potential participants, and those with acute inflammatory diseases, malignancies, or recent episodes of cerebrovascular or cardiovascular accident were excluded from the study. Serum sample collection Venous blood was collected into vacutainers containing pro-coagulant, allowed to clot at room temperature for approximately 1 h, unless otherwise indicated, and then centrifuged at 1,000 × g for 20 min at room temperature. The separated serum was stored at −80 °C until processing. Patients underwent routine evaluations for the systemic diseases covered by the universal health coverage system in Japan42,43 using electrocardiography (ECG), radiography (chest and abdomen), ultrasonography (neck and/or abdomen), urinalysis, complete blood count, and the measurement of 15 serum biochemical analysis items. The diabetic participants also underwent ophthalmological and neurological testing, the anti-glutamic acid decarboxylase antibody test, and measurements of GA and/or HbA1c, fasting serum insulin, and urinary albumin-to-creatinine ratio. Synthesis of stable isotope-labelled peptides The following two stable isotope-labelled peptides were synthesized by Scrum Inc. (Tokyo, Japan) using L-phenylalanine-N-9-fluorenylmethoxycarbonyl (13C9, 98%; 15N, 98%): SI-Alb(Met147), LVRPEVDVMC(Carbamidomethyl)TAFHDNEETFLK, and SI-Alb(Met147O), LVRPEVDVM(Oxidation)C(Carbamidomethyl)TAFHDNEETFLK, with the underlined amino acids containing the stable isotope. Trypsin digestion of serum proteins Trypsin digestion of serum proteins was performed essentially as described44, but with the following minor modifications. One-hundred-and-ninety-five microlitres of 200 mM triethylammonium bicarbonate, 12 mM sodium deoxycholate, and 12 mM sodium lauryl sulfate was added to 5 μL of thawed serum, and 20 μL of this solution was added to 10 μL of 2.425 μmol/L SI-Alb(Met147) or 0.156 μmol/L SI-Alb(Met147O), the mixture was vortexed, and then it was mixed with 0.8 μL of 500 mM Bond-Breaker TCEPTM Solution (Thermo Fisher Scientific, MA, USA) and 1.2 μL of 200 mM tetraethylammonium tetrahydroborate (Sigma-Aldrich) and incubated at 50 °C for 30 min. The mixture was then incubated in a dark room at room temperature with 2 μl of 375 mM iodoacetamide (Nacalai Tesque, Kyoto, Japan) for 30 min and then with 2 μl of 400 mM L-Cys (Fujifilm Wako Pure Chemical, Japan) for 10 min, after which 2 μl of 100 ng/μL Lys-C and 2 μl of 100 ng/μL trypsin were added and digestion was carried out for 24 h at 37 °C. Eighteen microlitres of the digest was then mixed with 50 μL of 10% acetonitrile (ACN) and 50 μL of 5% trifluoroacetic acid, and the mixture was centrifuged at 19,000 g for 15 min. The supernatant was recovered and 2 μL of 100 mM L-Met (Tokyo Chemical Industry, Tokyo, Japan) was added, prior to LC-MS analysis. Quantification of the oxidised and non-oxidised Met residue-containing tryptic peptides using LC-MS Analysis of the tryptic digests of the serum samples was performed using LC-MS, essentially as described28. 
Tryptic digests of the serum samples were injected onto a 2.0 mm (inner diameter) × 50 mm CAPCELL PACK MGIII-H S3 column attached to a Nanospace SI-2 HPLC system (Shiseido Fine Chemicals, Tokyo, Japan). The column temperature was maintained at 45 °C. The flow rate of the mobile phase was 200 μL/min, and mobile phase A consisted of 0.05% formic acid (FA) and mobile phase B consisted of 0.05% FA/90% ACN. The mobile phase gradient was programmed as follows: 0% B (0–3 min), 0–55.5% B (3–40 min), 55.5–80% B (40–40.1 min) and 80% B (40.1–45 min). Peptides were introduced from the chromatography column either to an LTQ-Orbitrap Discoverer (Thermo Fisher Scientific) or Q-Exactive (Thermo Fisher Scientific). Full-scan MS spectra were acquired using the Orbitrap (m/z 300–2,000) of an LTQ-Orbitrap Discoverer at a mass resolution of 30,000 at an m/z of 400, while full-scan MS spectra were acquired using the Orbitrap (m/z 300–1,200) of a Q-Exactive at a mass resolution of 70,000 at an m/z of 200. The areas of the XIC for each peptide, Alb(Met147O) (m/z = 667.3202, z = 4), SI-Alb(Met147O) (m/z = 672.3338, z = 4), Alb(Met147) (m/z = 663.3215, z = 4) and SI-Alb(Met147) (m/z = 665.8283, z = 4) were defined following XIC analysis of the LC-MS data using Skyline 3.7.0.1131745. \({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) was extrapolated from the XICs generated from Alb(Met147) and SI-Alb(Met147), and \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\) from Alb(Met147O) and SI-Alb(Met147O). The serum \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) data obtained using the LTQ-Orbitrap Discoverer and Q-Exactive were strongly correlated (p < 0.0001, r = 0.937, n = 13) and were corrected using the following equation when required: $$({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}/{{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})})=1.12\times {({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}/{{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})})}_{{\rm{QE}}}-0.010941$$ Continuous glucose monitoring Seven non-diabetic volunteers and 28 diabetic patients underwent continuous glucose monitoring (CGM) using the iPro 2 CGM system (Medtronic Minimed Inc. Northridge, CA)46,47 for 4–7 days and provided an overnight-fasted blood sample during the period. The complete SGL data were used to assess each glucose profile. The SD and %CV were used to assess glycaemic fluctuations, and the maximum, minimum and mean SGL were used to evaluate glycaemic control. The amounts of time over a 24-h period each participant spent in the hypoglycaemic range (SGL < 70 mg/dL), in the normoglycaemic range (70 < SGL < 140 mg/dL) and in the hyperglycaemic range (SGL > 140 mg/dL) were used to analyse their relationships with \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) levels. Data are expressed as mean ± standard deviation unless stated otherwise. Differences in \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) levels according to clotting time were analysed using one-way ANOVA. The Mann-Whitney U test was used to compare \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) values between diabetic and non-diabetic groups. 
The remaining comparisons, evaluating the effects of treatment on \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) level, were performed using the Wilcoxon signed-rank test for paired data. Linear regression models were used to compare the measured values of \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) and to determine the correlations with age, sex, BMI, GA, HbA1c GA/HbA1c and biochemical parameters. Multivariate analyses were performed essentially as described46,47, except that age, sex, BMI, GA/HbA1c, eGFR, uric acid, total bilirubin, HDL-cholesterol and the use of metformin, a statin or an angiotensin-converting enzyme inhibitor/angiotensin II receptor blocker were used as explanatory variables, and \({{\rm{C}}}_{{\rm{Alb}}({{\rm{Met}}}^{147}{\rm{O}})}\)/\({{\rm{C}}}_{{{\rm{Alb}}({\rm{Met}}}^{147})}\) was used as the objective variable. All analyses were performed using GraphPad Prism 5.02 software (GraphPad Software Inc. San Diego, CA, USA) and/or JMP ver. 5.0.1a (SAS, Cary, NC, USA). P < 0.05 was considered to represent statistical significance. The protocol was approved by the Kitasato University Medical School/Hospital Ethics Committee (B17-040, B15-181) and written informed consent was obtained from all participants. All study protocols were performed in accordance with the relevant guidelines and regulations of Kitasato University Medical School as well as the Ethical Guidelines for Medical and Health Research Involving Human Subjects in Japan and under the Code of Ethics of the Helsinki Declaration. All data generated or analyzed during this study are included in this article. Drummond, G. R., Selemidis, S., Griendling, K. K. & Sobey, C. G. Combating oxidative stress in vascular disease: NADPH oxidases as therapeutic targets. Nat. Rev. Drug. Discov. 10, 453–471 (2011). Madamanchi, N. R., Vendrov, A. & Runge, M. S. Oxidative stress and vascular disease. Arterioscler. Thromb. Vasc. Biol. 25, 29–38 (2005). Giugliano, D., Ceriello, A. & Paolisso, G. Oxidative stress and diabetic vascular complications. Diabetes Care 19, 257–267 (1996). Pitocco, D., Tesauro, M., Alessandro, R., Ghirlanda, G. & Cardillo, C. Oxidative stress in diabetes: implications for vascular and other complications. Int. J. Mol. Sci. 14, 21525–21550 (2013). Nishikawa, T. et al. Normalizing mitochondrial superoxide production blocks three pathways of hyperglycaemic damage. Nature 404, 787–790 (2000). Jay, D., Hitomi, H. & Griendling, K. K. Oxidative stress and diabetic cardiovascular complications. Free. Radic. Biol. Med. 40, 183–192 (2006). Yorek, M. A. The role of oxidative stress in diabetic vascular and neural disease. Free. Radic. Res. 37, 471–480 (2003). Cachofeiro, V. et al. Oxidative stress and inflammation, a link between chronic kidney disease and cardiovascular disease. Kidney Int, S4–9 (2008). Daenen, K. et al. Oxidative stress in chronic kidney disease. Pediatr. Nephrol. 34, 975–991 (2019). Himmelfarb, J., Stenvinkel, P., Ikizler, T. A. & Hakim, R. M. The elephant in uremia: oxidant stress as a unifying concept of cardiovascular disease in uremia. Kidney Int. 62, 1524–1538 (2002). Gorrini, C., Harris, I. S. & Mak, T. W. Modulation of oxidative stress as an anticancer strategy. Nat. Rev. Drug. Discov. 12, 931–947 (2013). Sosa, V. et al. Oxidative stress and cancer: an overview. Ageing Res. Rev. 12, 376–390 (2013). Schoneich, C. 
Methionine oxidation by reactive oxygen species: reaction mechanisms and relevance to Alzheimer's disease. Biochim. Biophys. Acta 1703, 111–119 (2005). Glaser, C. B., Yamin, G., Uversky, V. N. & Fink, A. L. Methionine oxidation, alpha-synuclein and Parkinson's disease. Biochim. Biophys. Acta 1703, 157–169 (2005). Shringarpure, R. & Davies, K. J. Protein turnover by the proteasome in aging and disease. Free. Radic. Biol. Med. 32, 1084–1089 (2002). Ruan, H. et al. High-quality life extension by the enzyme peptide methionine sulfoxide reductase. Proc. Natl Acad. Sci. USA 99, 2748–2753 (2002). Liguori, I. et al. Oxidative stress, aging, and diseases. Clin. Interv. Aging 13, 757–772 (2018). Venkataraman, K., Khurana, S. & Tai, T. C. Oxidative stress in aging–matters of the heart and mind. Int. J. Mol. Sci. 14, 17897–17925 (2013). Dalle-Donne, I. et al. Proteins as biomarkers of oxidative/nitrosative stress in diseases: the contribution of redox proteomics. Mass. Spectrom. Rev. 24, 55–99 (2005). Griendling, K. K. & FitzGerald, G. A. Oxidative stress and cardiovascular injury: Part I: basic mechanisms and in vivo monitoring of ROS. Circulation 108, 1912–1916 (2003). Levine, R. L., Mosoni, L., Berlett, B. S. & Stadtman, E. R. Methionine residues as endogenous antioxidants in proteins. Proc. Natl Acad. Sci. USA 93, 15036–15040 (1996). Ezraty, B., Aussel, L. & Barras, F. Methionine sulfoxide reductases in prokaryotes. Biochim. Biophys. Acta 1703, 221–229 (2005). Kim, G., Weiss, S. J. & Levine, R. L. Methionine oxidation and reduction in proteins. Biochim. Biophys. Acta 1840, 901–905 (2014). Oien, D. B. & Moskovitz, J. Substrates of the methionine sulfoxide reductase system and their physiological relevance. Curr. Top. Dev. Biol. 80, 93–133 (2008). Weissbach, H., Resnick, L. & Brot, N. Methionine sulfoxide reductases: history and cellular role in protecting against oxidative damage. Biochim. Biophys. Acta 1703, 203–212 (2005). Zhang, X. H. & Weissbach, H. Origin and evolution of the protein-repairing enzymes methionine sulphoxide reductases. Biol. Rev. Camb. Philos. Soc. 83, 249–257 (2008). Vogt, W. Oxidation of methionyl residues in proteins: tools, targets, and reversal. Free. Radic. Biol. Med. 18, 93–105 (1995). Suzuki, S. et al. Methionine sulfoxides in serum proteins as potential clinical biomarkers of oxidative stress. Sci. Rep. 6, 38299, https://doi.org/10.1038/srep38299 (2016). Glaser, C. B. et al. Studies on the turnover of methionine oxidized alpha-1-protease inhibitor in rats. Am. Rev. Respir. Dis. 136, 857–861 (1987). Singh, P., Jain, A. & Kaur, G. Impact of hypoglycemia and diabetes on CNS: correlation of mitochondrial oxidative stress with DNA damage. Mol. Cell Biochem. 260, 153–159 (2004). Saito, S. et al. Glucose fluctuations increase the incidence of atrial fibrillation in diabetic rats. Cardiovasc. Res. 104, 5–14 (2014). Quagliaro, L. et al. Intermittent high glucose enhances apoptosis related to oxidative stress in human umbilical vein endothelial cells: the role of protein kinase C and NAD(P)H-oxidase activation. Diabetes 52, 2795–2804 (2003). Cardoso, S. et al. Insulin-induced recurrent hypoglycemia exacerbates diabetic brain mitochondrial dysfunction and oxidative imbalance. Neurobiol. Dis. 49, 1–12 (2013). Wentholt, I. M., Kulik, W., Michels, R. P., Hoekstra, J. B. & DeVries, J. H. Glucose fluctuations and activation of oxidative stress in patients with type 1 diabetes. Diabetologia 51, 183–190 (2008). Monnier, L. et al. 
Activation of oxidative stress by acute glucose fluctuations compared with sustained chronic hyperglycemia in patients with type 2 diabetes. JAMA 295, 1681–1687 (2006). Monnier, L. & Colette, C. Glycemic variability: should we and can we prevent it? Diabetes Care 31, S150–154 (2008). Ceriello, A. et al. Evidence that hyperglycemia after recovery from hypoglycemia worsens endothelial function and increases oxidative stress and inflammation in healthy control subjects and subjects with type 1 diabetes. Diabetes 61, 2993–2997 (2012). Ogawa, A. et al. New indices for predicting glycaemic variability. PLoS One 7, e46517 (2012). Stocker, R., Yamamoto, Y., McDonagh, A. F., Glazer, A. N. & Ames, B. N. Bilirubin is an antioxidant of possible physiological importance. Science 235, 1043–1046 (1987). Stocker, R., Glazer, A. N. & Ames, B. N. Antioxidant activity of albumin-bound bilirubin. Proc. Natl Acad. Sci. USA 84, 5918–5922 (1987). Moskovitz, J. Detection and localization of methionine sulfoxide residues of specific proteins in brain tissue. Protein Pept. Lett. 21, 52–55 (2014). Chida, S. et al. Levels of albuminuria and risk of developing macroalbuminuria in type 2 diabetes: historical cohort study. Sci. Rep. 6, 26380, https://doi.org/10.1038/srep26380 (2016). Kamata, Y. et al. Distinct clinical characteristics and therapeutic modalities for diabetic ketoacidosis in type 1 and type 2 diabetes mellitus. J. Diabetes Complications 31, 468–472 (2017). Masuda, T., Tomita, M. & Ishihama, Y. Phase transfer surfactant-aided trypsin digestion for membrane proteome analysis. J. Proteome Res. 7, 731–740 (2008). MacLean, B. et al. Skyline: an open source document editor for creating and analyzing targeted proteomics experiments. Bioinformatics 26, 966–968 (2010). Hayashi, A. et al. Distinct biomarker roles for HbA1c and glycated albumin in patients with type 2 diabetes on hemodialysis. J. Diabetes Complications 30, 1494–1499 (2016). Yoshino, S. et al. Molecular form and concentration of serum alpha2-macroglobulin in diabetes. Sci. Rep. 9, 12927, https://doi.org/10.1038/s41598-019-49144-7 (2019). We thank Rika Kato, Yukiko Kato and Junko Ohashi for their technical assistance. This work was supported in part by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of Japan to M.S. (18H05383), Y.Ko. (17K19926) and by Kitasato University 'Shogaku-Kifu' unrestricted research support for M.S. The funders had no role in the study design, data collection or analysis, decision to publish, or preparation of the manuscript. No additional external funding was received for this study. We also thank Mark Cleasby, PhD, from Edanz Group (www.edanzediting.com/ac) for editing a draft of this manuscript. Department of Endocrinology, Diabetes and Metabolism, Kitasato University School of Medicine, 1-15-1 Kitasato, Minami-ku, Sagamihara, Kanagawa, 252-0374, Japan: Akari Momozono, Sayaka Sasaki & Masayoshi Shichiri. Department of Physics, Kitasato University School of Science, 1-15-1 Kitasato, Minami-ku, Sagamihara, Kanagawa, 252-0373, Japan: Yoshio Kodera, Yuzuru Nakagawa & Ryo Konno. Center for Disease Proteomics, Kitasato University School of Science, 1-15-1 Kitasato, Minami-ku, Sagamihara, Kanagawa, 252-0373, Japan: Yoshio Kodera & Sayaka Sasaki. A.M. and S.S. 
prepared serum samples and established analytical methodology. Y.N., R.K. and Y.K. performed LC-MS analysis. A.M., S.S. and M.S. collected serum samples from participants, evaluated their clinical course and disease pathophysiology, and confirmed the final diagnoses. Y.K. and M.S. designed the study and analysed data. A.M. performed statistical analyses and M.S. confirmed the results. M.S. wrote the manuscript. All authors discussed the results and commented on the manuscript. Correspondence to Masayoshi Shichiri. Supplementary information. Momozono, A., Kodera, Y., Sasaki, S. et al. Oxidised Met147 of human serum albumin is a biomarker of oxidative stress, reflecting glycaemic fluctuations and hypoglycaemia in diabetes. Sci Rep 10, 268 (2020) doi:10.1038/s41598-019-57095-2
CommonCrawl
How often does the sun emit 1 TeV photons? By mspringer on November 27, 2013. I had an interesting question posed to me recently: how frequently does the sun emit photons with an energy greater than 1 TeV? All of you know about the experiments going on at the LHC, where particles are accelerated to an energy which is equivalent to an electron being accelerated through a potential difference of trillions of volts (which is what a "trillion electron volts" - a TeV - is). During the ensuing collisions between particles, high-energy TeV photons are produced. Of course everything is emitting light in the form of blackbody radiation all the time. Human beings emit mostly long-wavelength infrared, hot stoves emit shorter-wavelength infrared and red light, and hotter objects like the sun emit across a broad range of wavelengths which include the entire visible spectrum. Here, from Wikipedia, is the spectrum of the sun: Solar spectrum. This graph is given in terms of wavelength. For light, energy corresponds to frequency, and frequency is inversely proportional to wavelength. Longer wavelength, lower frequency. A TeV is a gigantic amount of energy, which corresponds to a gigantically high frequency and thus a wavelength that would be pegged way the heck off the left end of this chart, almost but not quite exactly at 0 on the x axis. Let me reproduce the same blackbody as the Wikipedia diagram, but cast in terms of frequency: Same spectrum, in terms of frequency. Here the x-axis is in hertz, and the y-axis is spectral irradiance in terms of watts per square meter per hertz. (That makes a difference - it's not just the Wikipedia graph with the x-axis relabeled, although it gives the same watts-per-square-meter value when integrated over the same bandwidth region.) Ok, so what's the frequency of a 1 TeV photon? Well, photon energy is given by E = hf, where h is Planck's constant and f is the frequency. Plugging in, a 1 TeV photon has a frequency of about 2.4 x 10^26 Hz. That's way off the right end of the graph. Thus you might think the answer is zero - the sun never emits such high-energy photons. But then again that tail never quite reaches zero, and there's a lot of TeVs per watt, and there's a lot of square meters on the sun... So to find out more exactly, let's take a look at the actual equation which gave us that chart: Planck's law for blackbody radiation: $latex \displaystyle B(f) = \frac{ 2 h f^3}{c^2} \frac{1}{e^\frac{h f}{kT} - 1}&s=2$ So you'd integrate that from 2.4 x 10^26 Hz to infinity if you wanted to find how many watts per square meter the sun emits at those huge frequencies. (Here k is Boltzmann's constant, which is effectively the scale factor that converts from temperature to energy.) That's kind of an ugly integral, but we can simplify it. That $latex e^\frac{h f}{kT}$ term? It's indescribably big. The hf term is 1 TeV, and the kT is about 0.45 eV (which is a "typical" photon energy emitted by the sun), so the exponential is on the order of e^2200000000000. (The number of particles in the observable universe is maybe 10^80 or so, for comparison.) Subtracting 1 from that gigantic number is absolutely meaningless, so we can drop it and end up with: $latex \displaystyle B(f) = \frac{ 2 h f^3}{c^2} e^{-\frac{h f}{kT}}&s=2$ which means the answer in watts per square meter is $latex \displaystyle I = \int_{a}^{\infty}\frac{ 2 h f^3}{c^2} e^{-\frac{h f}{kT}} \, df&s=2$ where "a" is the 1 TeV lower cutoff (in Hz). 
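If you want to check the two key numbers above before evaluating the integral, here's a minimal sketch (my own back-of-the-envelope code, not part of the original post) of the 1 TeV photon frequency and the exponent hf/kT:

```python
# Minimal sketch of the numbers used above: the frequency of a 1 TeV photon and
# the size of the exponent hf/kT that kills the blackbody tail.
h  = 6.626e-34      # Planck constant, J*s
eV = 1.602e-19      # joules per electron volt

E_photon = 1e12 * eV            # 1 TeV in joules
f = E_photon / h                # photon frequency
kT = 0.45 * eV                  # "typical" solar photon energy quoted above

print(f"frequency  = {f:.3g} Hz")          # ~2.4e26 Hz
print(f"hf/kT      = {E_photon/kT:.3g}")   # ~2.2e12, i.e. a suppression of exp(-2.2e12)
```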
That exponential term now has a negative sign, so it's on the order of e^-2200000000000. I'd say this is a safe place to stop and say "The answer is zero, the sun has never and will never emit photons of that energy through blackbody processes." But let's press on just to be safe. That expression above can be integrated pretty straightforwardly. I let Mathematica do it for me: $latex \displaystyle I = e^{-\frac{a h}{k T}}\frac{2 k T (a^3 h^3+3 a^2 h^2 k T+6 a h k^2 T^2+6 k^3 T^3 )}{c^2 h^3}&s=2$ So that's an exponential term multiplying a bunch of stuff. That bunch of stuff is a big number, because "a" is a big number and h is a tiny number in the denominator. I plug in the numbers and get that the stuff term is about 10^93 watts per square meter, and you have to multiply that by the 10^18 or so square meters on the surface of the sun. That's a very big number, but it's not even in the same sport as that e^-2200000000000 term. Multiplying those terms together doesn't even dent the e^-2200000000000 term. It's still zero for all practical purposes. Which is a lot of work to say that our initial intuition was correct. 1 TeV from blackbody processes in the sun? Forget it. Now blackbody processes aren't the only things going on in the sun. I don't think there are too many TeV-scale processes of other types, but stars can be weird things sometimes. I'd be curious to know if astrophysicists would know of other processes which might bump the TeV rate to something higher. [Personal note: I've been absent on ScienceBlogs since April, I think. Why? Writing my dissertation, defending, and summer interning. The upshot of all that is those things are done and I'm now Dr. Springer, and I have a potentially permanent position lined up next year. And now I might even have time to write some more!] Congratulations Dr Springer. By Acleron (not verified) on 27 Nov 2013 #permalink The obvious follow up question: What is the highest energy photon likely to be emitted by black body radiation by the sun this year? By MobiusKlein (not verified) on 28 Nov 2013 #permalink Congrats Dr Springer, and great post! 
Integrals, how lovely :) By wereatheist (not verified) on 28 Nov 2013 #permalink By Sue VanHattum (not verified) on 28 Nov 2013 #permalink Interesting deduction. However the mean energy of particles or photons at the very center of the sun is around 1000 eV. Even there TeV domain looks to be impossible! By Anandaram Mandyam (not verified) on 01 Dec 2013 #permalink Hi, very interesting and nice post. I'm quite sure there are >= 1 TeV photons accelerated by black hole. Anyway, my question is: why are you looking for 1 TeV photons exactly? Is there any measurement reporting more high-energy photons than expected? Another point is that I'm not really sure that Sun's atmosphere is transparent to 1 TeV photons, so maybe they are produced, but their energy gets "thermalized". I don't know for sure, just saying. By Riccardo Di Sipio (not verified) on 15 Dec 2013 #permalink
CommonCrawl
Chinese BosonSampling experiment: the gloves are off Two weeks ago, I blogged about the striking claim, by the group headed by Chaoyang Lu and Jianwei Pan at USTC in China, to have achieved quantum supremacy via BosonSampling with 50-70 detected photons. I also did a four-part interview on the subject with Jonathan Tennenbaum at Asia Times, and other interviews elsewhere. None of that stopped some people, who I guess didn't google, from writing to tell me how disappointed they were by my silence! The reality, though, is that a lot has happened since the original announcement, so it's way past time for an update. I. The Quest to Spoof Most importantly, other groups almost immediately went to work trying to refute the quantum supremacy claim, by finding some efficient classical algorithm to spoof the reported results. It's important to understand that this is exactly how the process is supposed to work: as I've often stressed, a quantum supremacy claim is credible only if it's open to the community to refute and if no one can. It's also important to understand that, for reasons we'll go into, there's a decent chance that people will succeed in simulating the new experiment classically, although they haven't yet. All parties to the discussion agree that the new experiment is, far and away, the closest any BosonSampling experiment has ever gotten to the quantum supremacy regime; the hard part is to figure out if it's already there. Part of me feels guilty that, as one of the reviewers on the Science paper—albeit, one stressed and harried by kids and covid—it's now clear that I didn't exercise the amount of diligence that I could have, in searching for ways to kill the new supremacy claim. But another part of me feels that, with quantum supremacy claims, much like with proposals for new cryptographic codes, vetting can't be the responsibility of one or two reviewers. Instead, provided the claim is serious—as this one obviously is—the only thing to do is to get the paper out, so that the entire community can then work to knock it down. Communication between authors and skeptics is also a hell of a lot faster when it doesn't need to go through a journal's editorial system. Not surprisingly, one skeptic of the new quantum supremacy claim is Gil Kalai, who (despite Google's result last year, which Gil still believes must be in error) rejects the entire possibility of quantum supremacy on quasi-metaphysical grounds. But other skeptics are current and former members of the Google team, including Sergio Boixo and John Martinis! And—pause to enjoy the irony—Gil has effectively teamed up with the Google folks on questioning the new claim. Another central figure in the vetting effort—one from whom I've learned much of what I know about the relevant issues over the last week—is Dutch quantum optics professor and frequent Shtetl-Optimized commenter Jelmer Renema. Without further ado, why might the new experiment, impressive though it was, be efficiently simulable classically? A central reason for concern is photon loss: as Chaoyang Lu has now explicitly confirmed (it was implicit in the paper), up to ~70% of the photons get lost on their way through the beamsplitter network, leaving only ~30% to be detected. 
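Just to make those loss numbers concrete, here's a toy calculation (mine, not from the USTC paper, and ignoring the Gaussian-state nature of the actual experiment by treating the input as n single photons): with uniform per-photon transmission, the number of photons surviving to the detectors is binomially distributed.

```python
# Toy illustration (not from the paper): with uniform per-photon transmission eta,
# the number of photons surviving to the detectors is Binomial(n, eta).
import math

def survival_pmf(n, eta, k):
    """Probability that exactly k of n injected photons are detected."""
    return math.comb(n, k) * eta**k * (1 - eta)**(n - k)

n, eta = 50, 0.3   # ~70% loss, as in the discussion above
print(f"expected detected photons: {n * eta:.1f}")
print(f"P(detect >= 20 of 50): {sum(survival_pmf(n, eta, k) for k in range(20, n + 1)):.3f}")
```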
At least with "Fock state" BosonSampling—i.e., the original kind, the kind with single-photon inputs that Alex Arkhipov and I proposed in 2011—it seems likely to me that such a loss rate would be fatal for quantum supremacy; see for example this 2019 paper by Renema, Shchesnovich, and Garcia-Patron. Incidentally, if anything's become clear over the last two weeks, it's that I, the co-inventor of BosonSampling, am no longer any sort of expert on the subject's literature!

Anyway, one source of uncertainty regarding the photon loss issue is that, as I said in my last post, the USTC experiment implemented a 2016 variant of BosonSampling called Gaussian BosonSampling (GBS)—and Jelmer tells me that the computational complexity of GBS in the presence of losses hasn't yet been analyzed in the relevant regime, though there's been work aiming in that direction. A second source of uncertainty is simply that the classical simulations work in a certain limit—namely, fixing the rate of noise and then letting the numbers of photons and modes go to infinity—but any real experiment has a fixed number of photons and modes (in USTC's case, they're ~50 and ~100 respectively). It wouldn't do to reject USTC's claim via a theoretical asymptotic argument that would equally well apply to any non-error-corrected quantum supremacy demonstration!

OK, but if an efficient classical simulation of lossy GBS experiments exists, then what is it? How does it work? It turns out that we have a plausible candidate for the answer to that, originating with a 2014 paper by Gil Kalai and Guy Kindler. Given a beamsplitter network, Kalai and Kindler considered an infinite hierarchy of better and better approximations to the BosonSampling distribution for that network. Roughly speaking, at the first level (k=1), one pretends that the photons are just classical distinguishable particles. At the second level (k=2), one correctly models quantum interference involving pairs of photons, but none of the higher-order interference. At the third level (k=3), one correctly models three-photon interference, and so on until k=n (where n is the total number of photons), when one has reproduced the original BosonSampling distribution. At least when k is small, the time needed to spoof outputs at the kth level of the hierarchy should grow like n^k.

As theoretical computer scientists, Kalai and Kindler didn't care whether their hierarchy produced any physically realistic kind of noise, but later work, by Shchesnovich, Renema, and others, showed that (as it happens) it does. In its original paper, the USTC team ruled out the possibility that the first, k=1 level of this hierarchy could explain its experimental results. More recently, in response to inquiries by Sergio, Gil, Jelmer, and others, Chaoyang tells me they've ruled out the possibility that the k=2 level can explain their results either. We're now eagerly awaiting the answer for larger values of k.

Let me add that I owe Gil Kalai the following public mea culpa. While his objections to QC have often struck me as unmotivated and weird, in the case at hand, Gil's 2014 work with Kindler is clearly helping drive the scientific discussion forward. In other words, at least with BosonSampling, it turns out that Gil put his finger precisely on a key issue. He did exactly what every QC skeptic should do, and what I've always implored the skeptics to do.

II. BosonSampling vs.
Random Circuit Sampling: A Tale of HOG and CHOG and LXEB There's a broader question: why should skeptics of a BosonSampling experiment even have to think about messy details like the rate of photon losses? Why shouldn't that be solely the experimenters' job? To understand what I mean, consider the situation with Random Circuit Sampling, the task Google demonstrated last year with 53 qubits. There, the Google team simply collected the output samples and fed them into a benchmark that they called "Linear Cross-Entropy" (LXEB), closely related to what Lijie Chen and I called "Heavy Output Generation" (HOG) in a 2017 paper. With suitable normalization, an ideal quantum computer would achieve an LXEB score of 2, while classical random guessing would achieve an LXEB score of 1. Crucially, according to a 2019 result by me and Sam Gunn, under a plausible (albeit strong) complexity assumption, no subexponential-time classical spoofing algorithm should be able to achieve an LXEB score that's even slightly higher than 1. In its experiment, Google reported an LXEB score of about 1.002, with a confidence interval much smaller than 0.002. Hence: quantum supremacy (subject to our computational assumption), with no further need to know anything about the sources of noise in Google's chip! (More explicitly, Boixo, Smelyansky, and Neven did a calculation in 2017 to show that the Kalai-Kindler type of spoofing strategy definitely isn't going to work against RCS and Linear XEB, with no computational assumption needed.) So then why couldn't the USTC team do something analogous with BosonSampling? Well, they tried to. They defined a measure that they called "HOG," although it's different from my and Lijie Chen's HOG, more similar to a cross-entropy. Following Jelmer, let me call their measure CHOG, where the C could stand for Chinese, Chaoyang's, or Changed. They calculated the CHOG for their experimental samples, and showed that it exceeds the CHOG that you'd get from the k=1 and k=2 levels of the Kalai-Kindler hierarchy, as well as from various other spoofing strategies, thereby ruling those out as classical explanations for their results. The trouble is this: unlike with Random Circuit Sampling and LXEB, with BosonSampling and CHOG, we know that there are fast classical algorithms that achieve better scores than the trivial algorithm, the algorithm that just picks samples at random. That follows from Kalai and Kindler's work, and it even more simply follows from a 2013 paper by me and Arkhipov, entitled "BosonSampling Is Far From Uniform." Worse yet, with BosonSampling, we currently have no analogue of my 2019 result with Sam Gunn: that is, a result that would tell us (under suitable complexity assumptions) the highest possible CHOG score that we expect any efficient classical algorithm to be able to get. And since we don't know exactly where that ceiling is, we can't tell the experimentalists exactly what target they need to surpass in order to claim quantum supremacy. Absent such definitive guidance from us, the experimentalists are left playing whac-a-mole against this possible classical spoofing strategy, and that one, and that one. This is an issue that I and others were aware of for years, although the new experiment has certainly underscored it. Had I understood just how serious the USTC group was about scaling up BosonSampling, and fast, I might've given the issue some more attention! III. Fock vs. 
Gaussian BosonSampling

Above, I mentioned another complication in understanding the USTC experiment: namely, their reliance on Gaussian BosonSampling (GBS) rather than Fock BosonSampling (FBS), sometimes also called Aaronson-Arkhipov BosonSampling (AABS). Since I gave this issue short shrift in my previous post, let me make up for it now.

In FBS, the initial state consists of either 0 or 1 photons in each input mode, like so: |1,…,1,0,…,0⟩. We then pass the photons through our beamsplitter network, and measure the number of photons in each output mode. The result is that the amplitude of each possible output configuration can be expressed as the permanent of some n×n matrix, where n is the total number of photons. It was interest in the permanent, which plays a central role in classical computational complexity, that led me and Arkhipov to study BosonSampling in the first place.

The trouble is, preparing initial states like |1,…,1,0,…,0⟩ turns out to be really hard. No one has yet built a source that reliably outputs one and only one photon at exactly a specified time. This led two experimental groups to propose an idea that, in a 2013 post on this blog, I named Scattershot BosonSampling (SBS). In SBS, you get to use the more readily available "Spontaneous Parametric Down-Conversion" (SPDC) photon sources, which output superpositions over different numbers of photons, of the form $$\sum_{n=0}^{\infty} \alpha_n |n \rangle |n \rangle, $$ where \(\alpha_n\) decreases exponentially with n. You then measure the left half of each entangled pair, hope to see exactly one photon, and are guaranteed that if you do, then there's also exactly one photon in the right half. Crucially, one can show that, if Fock BosonSampling is hard to simulate approximately using a classical computer, then the Scattershot kind must be as well.

OK, so what's Gaussian BosonSampling? It's simply the generalization of SBS where, instead of SPDC states, our input can be an arbitrary "Gaussian state": for those in the know, a state that's exponential in some quadratic polynomial in the creation operators. If there are m modes, then such a state requires ~m^2 independent parameters to specify. The quantum optics people have a much easier time creating these Gaussian states than they do creating single-photon Fock states.

While the amplitudes in FBS are given by permanents of matrices (and thus, the probabilities by the absolute squares of permanents), the probabilities in GBS are given by a more complicated matrix function called the Hafnian. Roughly speaking, while the permanent counts the number of perfect matchings in a bipartite graph, the Hafnian counts the number of perfect matchings in an arbitrary graph. The permanent and the Hafnian are both #P-complete. In the USTC paper, they talk about yet another matrix function called the "Torontonian," which was invented two years ago. I gather that the Torontonian is just the modification of the Hafnian for the situation where you only have "threshold detectors" (which decide whether one or more photons are present in a given mode), rather than "number-resolving detectors" (which count how many photons are present).

If Gaussian BosonSampling includes Scattershot BosonSampling as a special case, and if Scattershot BosonSampling is at least as hard to simulate classically as the original BosonSampling, then you might hope that GBS would also be at least as hard to simulate classically as the original BosonSampling. Alas, this doesn't follow. Why not?
Because for all we know, a random GBS instance might be a lot easier than a random SBS instance. Just because permanents can be expressed using Hafnians, doesn't mean that a random Hafnian is as hard as a random permanent. Nevertheless, I think it's very likely that the sort of analysis Arkhipov and I did back in 2011 could be mirrored in the Gaussian case. I.e., instead of starting with reasonable assumptions about the distribution and hardness of random permanents, and then concluding the classical hardness of approximate BosonSampling, one would start with reasonable assumptions about the distribution and hardness of random Hafnians (or "Torontonians"), and conclude the classical hardness of approximate GBS. But this is theoretical work that remains to be done!

IV. Application to Molecular Vibronic Spectra?

In 2014, Alan Aspuru-Guzik and collaborators put out a paper that made an amazing claim: namely that, contrary to what I and others had said, BosonSampling was not an intrinsically useless model of computation, good only for refuting QC skeptics like Gil Kalai! Instead, they said, a BosonSampling device (specifically, what would later be called a GBS device) could be directly applied to solve a practical problem in quantum chemistry. This is the computation of "molecular vibronic spectra," also known as "Franck-Condon profiles," whatever those are.

I never understood nearly enough about chemistry to evaluate this striking proposal, but I was always a bit skeptical of it, for the following reason. Nothing in the proposal seemed to take seriously that BosonSampling is a sampling task! A chemist would typically have some specific numbers that she wants to estimate, of which these "vibronic spectra" seemed to be an example. But while it's often convenient to estimate physical quantities via Monte Carlo sampling over simulated observations of the physical system you care about, that's not the only way to estimate physical quantities! And worryingly, in all the other examples we'd seen where BosonSampling could be used to estimate a number, the same number could also be estimated using one of several polynomial-time classical algorithms invented by Leonid Gurvits. So why should vibronic spectra be an exception?

After an email exchange with Alex Arkhipov, Juan Miguel Arrazola, Leonardo Novo, and Raul Garcia-Patron, I believe we finally got to the bottom of it, and the answer is: vibronic spectra are not an exception. In terms of BosonSampling, the vibronic spectra task is simply to estimate the probability histogram of some weighted sum like $$ w_1 s_1 + \cdots + w_m s_m, $$ where \(w_1,\ldots,w_m\) are fixed real numbers, and \((s_1,\ldots,s_m)\) is a possible outcome of the BosonSampling experiment, \(s_i\) representing the number of photons observed in mode i. Alas, while it takes some work, it turns out that Gurvits's classical algorithms can be adapted to estimate these histograms. Granted, running the actual BosonSampling experiment would provide slightly more detailed information—namely, some exact sampled values of $$ w_1 s_1 + \cdots + w_m s_m, $$ rather than merely additive approximations to the values—but since we'd still need to sort those sampled values into coarse "bins" in order to compute a histogram, it's not clear why that additional precision would ever be of chemical interest.
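Stepping back for a moment to the permanent and the Hafnian discussed above: to make the "perfect matchings" picture concrete, here's a toy brute-force calculation—purely illustrative, exponential-time, and usable only for tiny matrices, nothing like the sizes relevant to supremacy. The permanent of an n×n matrix sums over permutations (perfect matchings of a bipartite graph), while the Hafnian of a 2m×2m symmetric matrix sums over perfect matchings of a general graph.

```r
# Brute-force permanent: sum over all permutations sigma of prod_i A[i, sigma(i)].
permanent <- function(A) {
  n <- nrow(A)
  perms <- function(v) {
    if (length(v) <= 1) return(list(v))
    out <- list()
    for (i in seq_along(v)) {
      for (rest in perms(v[-i])) out[[length(out) + 1]] <- c(v[i], rest)
    }
    out
  }
  sum(sapply(perms(seq_len(n)), function(p) prod(A[cbind(seq_len(n), p)])))
}

# Brute-force Hafnian: sum over all perfect matchings of {1,...,2m} of the
# product of A[i, j] over the matched pairs (i, j).
hafnian <- function(A) {
  rec <- function(s) {
    if (length(s) == 0) return(1)
    i <- s[1]
    total <- 0
    for (j in s[-1]) total <- total + A[i, j] * rec(setdiff(s, c(i, j)))
    total
  }
  rec(seq_len(nrow(A)))
}

# Toy check: for the all-ones matrix, the permanent of the n x n case is n!,
# and the Hafnian of the 2m x 2m case is (2m-1)!!, the number of perfect matchings.
permanent(matrix(1, 3, 3))  # 6
hafnian(matrix(1, 4, 4))    # 3
```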
This is a pity, since if the vibronic spectra application had beaten what was doable classically, then it would've provided not merely a first practical use for BosonSampling, but also a lovely way to verify that a BosonSampling device was working as intended. V. Application to Finding Dense Subgraphs? A different potential application of Gaussian BosonSampling, first suggested by the Toronto-based startup Xanadu, is finding dense subgraphs in a graph. (Or at least, providing an initial seed to classical optimization methods that search for dense subgraphs.) This is an NP-hard problem, so to say that I was skeptical of the proposal would be a gross understatement. Nevertheless, it turns out that there is a striking observation by the Xanadu team at the core of their proposal: namely that, given a graph G and a positive even integer k, a GBS device can be used to sample a random subgraph of G of size k, with probability proportional to the square of the number of perfect matchings in that subgraph. Cool, right? And potentially even useful, especially if the number of perfect matchings could serve as a rough indicator of the subgraph's density! Alas, Xanadu's Juan Miguel Arrazola himself recently told me that there's a cubic-time classical algorithm for the same sampling task, so that the possible quantum speedup that one could get from GBS in this way is at most polynomial. The search for a useful application of BosonSampling continues! And that's all for now! I'm grateful to all the colleagues I talked to over the last couple weeks, including Alex Arkhipov, Juan Miguel Arrazola, Sergio Boixo, Raul Garcia-Patron, Leonid Gurvits, Gil Kalai, Chaoyang Lu, John Martinis, and Jelmer Renema, while obviously taking sole responsibility for any errors in the above. I look forward to a spirited discussion in the comments, and of course I'll post updates as I learn more! This entry was posted on Wednesday, December 16th, 2020 at 2:16 am and is filed under Complexity, Quantum. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site. 45 Responses to "Chinese BosonSampling experiment: the gloves are off" Comment #1 December 16th, 2020 at 10:05 am The typical steps of Quantum Supremacy seem to be: 1) celebrate that Gil Kalai's crackpot claims are finally refuted! 2) various confusing corrections and fine points that only Gil Kalai can follow A question about boson sampling: Years ago it seemed that the difficulty was firing many photons simultaneously, the experiment had to do an exponentially growing amount of generations to have a successful event. Has there been some major breakthrough in photonics? fred #1: In case this wasn't clear from the post—Gil will lose the Quantum Supremacy War, indeed he probably already lost it with Google's experiment. What we're trying to figure out is whether, even while losing the wider war, he'll make a better-than-expected showing in the Battle for BosonSampling. 😀 fred #2: Part III of the post talked about exactly that. Yes, Scattershot BosonSampling (which I blogged about back in 2014) was a major conceptual advance, which lets you use SPDC sources with no loss in classical computational hardness compared to single-photon sources. Gaussian BosonSampling, what the new experiment used, gives a further efficiency gain compared to SBS, but possibly with a loss in classical computational hardness—we're not sure yet! 
Jelmer Renema Says: Comment #5 December 16th, 2020 at 12:18 pm I don't think I was the one who came up with the abbreviation CHOG. My first suggestion was 'Feral HOG' but I confess the only reason for that was so that we could make jokes for all time to come about how boson sampling needs k = 30-50 Feral HOGs [1]. But seriously: I think it would be good if someone (and as Scott suggested, I think Chaoyang is the natural person to do it) came up with a different name than HOG altogether, since I've seen lots of people be confused by how dissimilar the two are. Maybe 'cross entropy difference'? [1] https://www.vox.com/future-perfect/2019/8/6/20756162/30-to-50-feral-hogs-meme-assault-weapons-guns-kids for the people who don't waste all their time on twitter. Jelmer #5: I love "Feral HOG"! You did call it "Chinese HOG" in emails to me; the contraction to "CHOG" was mine. In any case, yes, I agree that Chaoyang gets renaming rights if he wants them! Chaoyang Lu Says: Comment #7 December 16th, 2020 at 7:38 pm Hi Scott #6 Jelmer #5 Haha, I think CHOG is good, and here I would name the "C" after "C"hinese or "C"hen Ming-Cheng the co-first author of this paper and a brilliant young mind who proposed this analysis and did a lot of hard work behind it. John Preskill Says: "But another part of me feels that, with quantum supremacy claims, much like with proposals for new cryptographic codes, vetting can't be the responsibility of one or two reviewers." Alas, Xanadu's Juan Miguel Arrazola himself recently told me that there's a cubic-time classical algorithm for the same sampling task Is that algorithm described somewhere? I can see sampling with probability proportional to the number of perfect matchings being easy classically (just pick k/2 disjoint edges at random?), but i'm wondering how they get the square. Comment #10 December 16th, 2020 at 11:29 pm I'm glad that you admit that in this case Gil's 2014 work seems to be spot on. If you really mean that you want QC skeptics to keep doing their good work I think it would help if your tone towards QC skeptics changes a bit. Most of your posts come out as mocking them. Regarding the reviewers role: I feel Nature should choose reviewers more carefully. You have been a champion of boson sampling so don't you think that there's bound to be at least a subconscious bias? You were rooting for this to work out right? Maybe Science could have chosen someone more skeptical since they are bound to question the results more mercilessly. Or at least someone more agnostic to the outcome. I mean clearly Scott is one of the biggest cheerleaders for QC and quantum supremacy. >Alas, Xanadu's Juan Miguel Arrazola himself recently told me that there's a cubic-time >classical algorithm for the same sampling task Juan Miguel Arrazola and colleagues kindly informed us of their nice results [which was later put on arXiv.2010.15595], after we submitted our manuscript. We had communicated over this and concluded that because in our work, collision modes dominate where the actual number of photons is approximate twice the number of clicked detectors, and thus the new algorithm cannot be readily used to speed up simulating our current experiment. We added a note at the end of our arXiv version of our paper available https://arxiv.org/abs/2012.01625 which was unedited (>2000 words cut due to editorial request) and much longer than the Science version. 
Michał Oszmaniec Says: Dear Scott, It is nice to see that this time you took into account advances in the classical simulation of Boson Sampling and Gaussian Boson Sampling in recent years. Two comments about simulation papers that you refer to.

Renema, Shchesnovich, and Garcia-Patron https://arxiv.org/abs/1809.01953 – as far as I remember that nice work deals implicitly with the standard m>>n^2 regime as the authors replace averages over the unitary group by averages over the GUE ensemble. Hence, it is not clear if their bounds hold if n~m.

Kalai and Kindler paper https://arxiv.org/abs/1409.3093 – that work also studies permanents of the GUE ensemble. Therefore, it is not clear to me if it can be used if n~m. And this is the relevant regime here as the USTC paper had m=100 modes with n<=70 photons.

A clarification: In my previous post I wrongly coined GUE ensemble. The proper matrix ensemble that is relevant to both works is of course the ensemble of complex Gaussian matrices.

Comment #15 December 17th, 2020 at 10:29 am Hi Michal, Yes, that is an excellent point! Our results are indeed for #modes >> #photons, which is violated in the case of the experiment. The only one who has seriously studied the case #modes \(\approx \) #photons is Shchesnovich. I tried to find the reference but couldn't. Off the top of my head, he showed that for small k, in the high-collision regime there is a lower bound to the trace distance between the kth order approximation and the actual distribution, which arises from the fact that you begin to 'feel' the unitary constraint. No idea how that translates to the experiment though.

correction: #modes >> #photons^2

Rahul #11: When I sent in my review, I clearly explained that I was only qualified to comment as a theoretical scientist, and that they needed to find others to vet the experimental optics aspects of the paper. Alas, I fear that the editors largely disregarded that advice, and relied on me a lot more than they should have. Having said that, I 100% stand by the decision to publish the work in Science. Indeed, even the skeptics who I've spoken to about it—the people who are doing the most right now to try to undermine the supremacy claim—agree that publication was more than warranted. If they'd gotten people like Jelmer Renema, Valery Shchesnovich, Daniel Brod, Christine Silberhorn, Sergio Boixo, etc. to review the paper prior to publication, the difference is simply that some of the technical back-and-forth that you're seeing happen right now, could've instead happened behind the scenes. I don't know if that would've been better or worse.

Regarding Gil, and regarding my tone: in my defense, I've given Gil's ideas more attention and more airtime than almost anyone else in the "pro-QC" camp, and Gil has repeatedly written that he's flattered by that! Having said that, the fundamental problem is that Gil has staked out so many positions on QC over the years that we know to be ludicrously off-base, that he makes it really, really hard to separate the wheat from the chaff. Still, now that we've seen the direct relevance of Kalai-Kindler noise to an actual experiment—well, regardless of whether the supremacists or the anti-supremacists end up winning this particular skirmish (or whether it devolves into a complicated stalemate), I take public responsibility for not having engaged more carefully with the Kalai-Kindler paper before, and I'll ease up on the ribbing.
🙂 MatthewF Says: >>> Not surprisingly, one skeptic of the new quantum supremacy claim is Gil Kalai, >>> who (despite Google's result last year, which Gil still believes must be in error) Can you maybe comment on Gil Kalais (and ofc. Rinott, Shoman) approach in "Statistical Aspects of the Quantum Supremacy Demonstration"? (https://arxiv.org/abs/2008.05177) I am not deep enough in the matter and simply not good enough in the maths to understand this. If my basic understanding is correct, one cannot directly proof quantum advantage, as the classical part cannot practically be executed. So the argument must be scaled from "smaller" quantum circuits to the larger ones. For this scaling, a noise and fidelity model has to be found (via estimators) and the problem lies in garantuing, that this model is actually valid. Looking at Fig. 8 in the arxiv preprint, it doesn't look like the estimator results in the expected data. Looking around but I have not seen a discussion of the findings and would be happy to hear some arguments, why Gil Kalai should be wrong about this here. Can anyone shed some light on where I get it wrong? LK2 Says: BosonSampling kind of reminds me the classical (reversible) billard-computation model. Is there a more formal analogy, or not al all? On another note: is it fully rejected by the QC community the fact that there might be a limit in the number of qbits which can be operated coherently? I mean: this limit might exist due to unavoidable sources of noise (I'm a dark matter physicist and we see cosmic rays even 2km underground), but the question is..are these limits so far away that we can safely build an useful QC (which means running useful algorithms + the error-correction qbits) ? Is this question fully answered by a theorem or it is a sort of open question? Very informative post: thanks Scott. When I read your blog I wonder why I did not went into TCS…during my physics undergrad after reading a CS book I learned just by chance about P,NP and the rest of the zoo and I was so impressed that I was about to change field. It is a wonderful perspective on the physical world. Here is a comment in a different direction that I think is also important. This boson sampling experiment is often described as simply — as in the title of this post — "Chinese boson sampling" or similar. It's very good up to a point to credit Chinese scientists for first-rate research. But it's also obviously simplistic to credit "China", a nation of 1.4 billion people, with achieving quantum supremacy. No one calls Google Sycamore "American quantum supremacy" or "American quantum circuit sampling". It was reasonably common to credit "Google", which is better than "American", but still not all that great. "Sycamore" is a good name, and "Google Sycamore" is fair. The name that the Shanghai research used themselves is Jiǔzhāng, which literally means Nine Chapters. It is refers an ancient Chinese math book, The Nine Chapters on the Mathematical Art. I think that "Nine Chapters" is also a great name, at least as good as "Sycamore". I think it would be better then to call it something like the Chinese Nine Chapters boson sampling or quantum supremacy experiment. Ryan O'Donnell Says: +1 to Greg Kuperberg's comment. "Jiuzhang" is not hard to say. Greg and Ryan: I admit that I paused for a minute when writing "Chinese BosonSampling." 
But then I reflected that, for better or worse, neither I nor most people would've had the slightest hesitation in writing about an "Argentinian experiment" or "Australian experiment" or whatever. And the alternatives—e.g., "USTC experiment," "Hefei experiment," "Jiuzhang experiment," "Pan group experiment," "Chaoyang Lu's group's experiment"—all had the problem either that they singled out one member of the team, or that most of my readers, glancing at the title, wouldn't understand what thing I was referring to. For me at least, it distinctly helps to translate "Jiǔzhāng" to "Nine Chapters", even if (heretofore?) it is nonstandard. Some foreign names and terms benefit from translation. E.g., it seems fine that the Pearl River is called that in English, and not "Zhū Jiāng". On the flip side, it seems dubious to call porcelain "china", even if the name was meant to give credit. I guess I take the point that it is non-standard, so at least at first it could be "Chinese Nine Chapters BosonSampling". I agree with the others that your title seems a bit off. I think what's missing is simply the word "experiment". If the title were something like "Chinese Boson Sampling Experiment" or "Chinese Boson Sampling Results" it would be fine. But "Chinese Boson Sampling" sounds like you're talking about some specific Chinese kind of Boson Sampling that is essentially unique to China and different from any kind of Boson Sampling that might be done in Australia or Germany, for example. Anyway that's just my two cents. I always get a good laugh of any mention of Gil on the blog, outside of him who would you have as your top 3 arch nemesis' in academia? Gerard #24: Ok, edited! Scott: Any comments on this : https://www.insidehighered.com/admissions/article/2020/12/14/u-texas-will-stop-using-controversial-algorithm-evaluate-phd Maybe a blog post request? The Death and Life of an Admissions Algorithm U of Texas at Austin has stopped using a machine-learning system to evaluate applicants for its Ph.D. in computer science. Critics say the system exacerbates existing inequality in the field. Rahul #27: As you can imagine, there was plenty of discussion about this among the CS faculty, but I have less to say about it than you might think. Based on a few years' experience, I actually thought the machine-learning system was pretty impressively calibrated—sure, it's biased, but (the part people keep forgetting) the only relevant question is whether humans are even more biased. On the other hand, mostly the system just confirmed what anyone could see for themselves with two minutes' perusal of an application. So there might be a little more work, but we'll manage fine without it. Thanks Scott. Beyond this particular UT case, what's your general opinion on using AI for things like recruiting, prison parole, admissions etc. The question you raised whether humans may be even more biased, is it empirically testable? Till Says: Regarding the first quantum supremacy claim from Arute et al.: Physicists are really pulling the wool over non-experts' eyes if they claim quantum supremacy from solutions to a "computational task" that can't be posed before the "computing" device is built. One could not have posed the "computational task" (which includes exact specification of gate operations) before experimental work began, because of the gates were non-ideal and rotation angles were calibrated in-situ. Till #30: Nope, you're just flatly misinformed about that. 
The mathematical specification of the Linear XEB task includes the specific pattern of 1- and 2-qubit gates (obviously), but it does not include noise, decoherence, or any other imperfections in the device. Craig Gidney Says: Scott #30: I think they're referring to the unitary refitting used in Google's supremacy paper. There were limitations in how exactly the parameters of the two qubit gate that was used could be hit, but it was possible to run experiments to figure out pretty accurately which nearby two qubit gate was being performed. So the latter was used when computing fidelities. Basically, in an ideal experiment, the circuit to execute would be generated by a referee and then run by the quantum machine as well as the classical machine. Instead, the quantum machine was saying "here are the operations I can do the best right now for each pair of qubits" and then the referee was picking a circuit that used those operations. This means that if you wanted to run the circuits again at a later date, the quantum hardware would underperform since its calibrated gates would differ slightly. The thing that convinced me that this caveat wasn't such a big deal (I was worried about it at the time) is that it's not a fundamental obstacle to performing arbitrary computations. Because the angle deviations are small, and because the "ideal" angles weren't divisors of a full turn, the overhead of decomposing desired gates into the two qubit gates that happen to be calibrated best is the same as the overhead of decomposing into a common two qubit gate. Similarly, it would still be possible to create client-certified random numbers under this limitation. Comment #33 December 21st, 2020 at 8:38 am Craig Gidney #32: Ok, thanks for clarifying what was meant! While I agree with you that this issue is minor, it's definitely one that would be good to fix in future quantum supremacy experiments. Ali G Says: Comment #34 December 23rd, 2020 at 8:47 pm Ain't it a bit racialist to say "Chinese bosonSampling"? It sounds a bit like "the China virus". Does you agree? Ali G #34: See above; I already changed it to "Chinese BosonSampling experiment." Do people no longer read before commenting? The Argument Against Quantum Computers – A Very Short Introduction | Combinatorics and more Says: […] For more details see my post about the recent HQCA Boson Sampling experiment, and also Scott Aaronson's recent post. […] Scott #31: The gates themselves are systematically incorrect rotations. That is not noise and it is a mistake to conflate systematics and noise. The authors themselves wrote that these errors can be corrected with single-qubit rotations. But they didn't correct them, because doing so would have reduced the gate depth or some such. My statement stands: One could not have posed the "computational task" (which includes exact specification of gate operations) before experimental work began. Scott #31: Arute et al. write "For the full experiment, we generate quantum circuits using the two-qubit unitaries measured for each pair during simultaneous operation, rather than a standard gate for all pairs. The typical two-qubit gate is a full iSWAP with 1/6th of a full CZ. Using individually calibrated gates in no way limits the universality of the demonstration." The calibrations may not impact "the universality of the demonstration" (whatever that means), but they certainly impact the ability to perform predefined computational tasks. 
What their "quantum processor" "computes" does not exist independently of the experiment. How can anyone call it a "computational task"? Sergio Boixo Says: Till #38: The best way to run experiments consists of two phases: 1) calibration and 2) running the desired circuits for a particular experiment. Note that the circuits used in the calibration are different from the circuits in the experiment, and in our case they do not involve more than two qubits. With newer calibration methods we discover small systematic deviations between the "native gates" and some standard gate. Experiments work better if the circuit design includes native gates. One reason is that we are often interested in composite gates, which can be implemented equally well with native gates (see Appendix A of arXiv:2010.07965) The specific values of the native gates are known after calibration, and before running the circuits. In the specific case of the 2019 experiment using "Sycamore" standard gates instead of native gates halfs the fidelity, see S30 in the supplementary material. Also the native are universal, see Sec. VII F. Sergio #39: Thanks!! So, my summary would be that yes, there's a calibration phase, and the 2-qubit gates used depend on the outcome of that phase, but there's still a clear separation enforced between the calibration phase and the actual running of the QC. Of course, it would be great in future sampling-based quantum supremacy experiments to be able to specify all the gates strictly from the outset, with no need for a calibration phase. One step at a time, though! Sergio #39: Being an experimentalist myself, I think I understand approximately what was done for calibration in the paper and also that the gates are universal. But your explanation doesn't address my objection: The paper does not describe a "computational" task because the task, due to the calibration issue, is defined only for that specific experiment. Computation is supposed to be more general than that or is this a new model of computation where, before you give it a problem, the machine first tells you what (non-standard) unitaries it can perform? Scott #40: If QC were not riding a hype cycle, it might be reasonable to cut the experiments more slack. But less rigor now is just going lead to more disillusionment in the future. Till, in case you're interested, John Martinis sent me the following message: Thanks for the nice discussion. We did not cheat by recalibrating via the large RCS circuit: we only calibrated using one and two qubit gates. Scott, this brings up a rather interesting concept that might be interesting for your readers. The essence of Till's idea was actually done as a test in figure S42 of the supplement (see here), which is actually one of the most important and interesting checks of the paper. Here, the numerical simulation puts in varying amounts of phase shift in a single qubit gate in the middle of the circuit, showing that for a pi phase shift you get zero fidelity, showing explicitly that one error kills XEB. However, if you look closely at the figure, you see the cosine curve is shifted slightly to the right from optimal, showing that the gate's calibration was not quite perfect. If we would have shifted the calibration of the gate, the fidelity would have been a tiny bit higher. (But of course we did not do that in our analysis.) 
You can imagine by doing this to each gate, all these tiny improvements might add up to something bigger, but probably only a 10% improvement since the measured fidelity matched so well the computed one. This matching is one of the most important findings of the experiment. It would be interesting to do this analysis of Figure S42 for each qubit and each time step. (The data is archived so it can be done). If you found a constant phase shift over time, this would indicate an overall calibration error. If it depends on time, then there may be some kind of tails in the time-trace of waveform control. In general, I thought this data could be used to build some kind of crosstalk model to understand the control errors to the qubit, especially since the crosstalk is small and you could maybe solve the general problem with a perturbation analysis. This is something I wanted to do after the QS experiment, but never got around to it. If Till or someone else is interested, I would be happy to talk and potentially collaborate on a paper to understand this better. I think it could be an important tool for probing the inner workings of a real quantum computer system: what happens to the 1- and 2-qubit calibrations when you run a complex circuit? Scott, if you wish to post this I would be happy to discuss with interested readers. Comment #43 December 31st, 2020 at 12:03 pm Scott #42: Thanks for sharing. John's message explains how much calibration was done, which is about as expected (but I'm no expert on superconducting QC). I guess the authors view this tayloring of the problem to the specific machine as reasonable, but it conflicts with the ideal (since at least Turing) that computation is not tied to any one computing device. Till #43: I'm with you in wanting a better experiment where calibration is less of an issue! I guess knowing some of the people involved, knowing the years of toil it took to get this far makes me less willing to knock "quantum supremacy modulo some initial calibration." Predictions For 2021 | Gödel's Lost Letter and P=NP Says: Comment #45 January 5th, 2021 at 2:25 pm […] have not abated. They have just recently been summarized by Gil Kalai here, and also here by Scott Aaronson in regard to an independent claim by researchers from USTC in […]
The effect of temperature on physical activity: an aggregated timeseries analysis of smartphone users in five major Chinese cities

Janice Y. Ho1, William B. Goggins1, Phoenix K. H. Mo1 & Emily Y. Y. Chan1,2

Physical activity is an important factor in premature mortality reduction, non-communicable disease prevention, and well-being protection. Climate change will alter temperatures globally, with impacts already found on mortality and morbidity. While uncomfortable temperature is often perceived as a barrier to physical activity, the actual impact of temperature on physical activity has been less well studied, particularly in China. This study examined the associations between temperature and objectively measured physical activity among adult populations in five major Chinese cities. Aggregated anonymized step count data was obtained between December 2017-2018 for five major Chinese cities: Beijing, Shanghai, Chongqing, Shenzhen, and Hong Kong. The associations of temperature with daily aggregated mean step count were assessed using Generalized Additive Models (GAMs), adjusted for meteorological, air pollution, and time-related variables. Significant decreases in step counts during periods of high temperatures were found for cold or temperate climate cities (Beijing, Shanghai, and Chongqing), with maximum physical activity occurring between 16 and 19.3 °C. High temperatures were associated with decreases of 800-1500 daily steps compared to optimal temperatures. For cities in subtropical climates (Shenzhen and Hong Kong), non-significant declines were found with high temperatures. Overall, females and the elderly demonstrated lower optimal temperatures for physical activity and larger decreases of step count in warmer temperatures. As minor reductions in physical activity could consequentially affect health, an increased awareness of temperature's impact on physical activity is necessary. City-wide adaptations and physical activity interventions should seek ways to sustain physical activity levels in the face of shifting temperatures from climate change.

The public health burden of hot and cold temperatures has been documented by studies showing impacts on excess mortality [1,2,3,4,5] and hospital admissions [3, 6,7,8]. However, much less has been discussed about the impact of temperatures at the level of daily human activity. Physical activity is an important factor for premature mortality reduction, as insufficient physical activity is estimated to be responsible for 9% of premature deaths worldwide [10]. Furthermore, physical activity plays an essential role in preventing non-communicable disease and enhancing well-being [11,12,13]. Poor weather has often been described as a 'barrier to physical activity' in qualitative studies [14, 15], and research has shown that objectively-measured physical activity can be affected by temperatures [14, 16,17,18,19]. However, few studies have been conducted in Asia, particularly in China, although this relationship may vary across different climates and regional contexts.

With a population of 1.4 billion people as of 2019 [20], China currently has the largest population worldwide. Over the recent decades, the physical activity levels of the population have decreased, particularly in more urbanized areas [21]. A study found that work and household physical activity decreased by nearly half between 1991 and 2011, while active leisure and transport physical activity did not see a meaningful change in the same period [22].
According to World Health Organization (WHO) data, the estimated prevalence of physical inactivity among adults in China in 2016 was 14.11% (10.1-19.37) [23]. The prevalence was found to be higher among Chinese people aged 45 and older in a nationwide survey, with physical inactivity prevalence at 19.31% (95% CI: 18.28–20.38%) in 2015 [24]. The Tsinghua-Lancet Commission on healthy cities in China calls for integration of health into all policies, including the increased facilitation of physical activity [21].

With climate change, temperatures in China are predicted to increase by 2.3 °C to 3.3 °C from 2000 to 2050 [25], increasing the occurrence of extremely hot temperatures in the coming decades. Yet there is a lack of understanding about the relationship between temperature and physical activity in China. Previous studies had only assessed small samples of the population [26, 27], or observed physical activity ecologically in specific locations such as public parks [28]. Ma et al. 2018 found a negative correlation between mean temperature and walking distance among 210 Pokemon GO players in Hong Kong [26]. Wang et al. 2017 reported no significant impact of temperature among 40 Beijing adults when monitored with an accelerometer every 2 months for an entire year [27]. Zhao et al. 2018 observed Harbin park users in the spring and found a positive correlation of ambient temperature with both the number of people conducting activities and the intensity of activity (METs) [28]. A more comprehensive understanding of temperature's effect on physical activity is needed at the population level. This knowledge would support the development of policies to address these potential impacts and to promote physical activity in the cities, as called for in the Tsinghua-Lancet Commission.

Step counts are an important indicator of physical activity at the population level, as walking is a largely accessible, inexpensive, and regularly conducted physical activity in everyday life [29]. The objective measurement of step counts has enabled studies to assess day-to-day variations of physical activity and its association with temperature [16, 19, 30,31,32,33,34]. However, studies have found different associations between temperature and step counts in various cities. Higher temperatures were associated with increased daily step count among COPD patients in London, United Kingdom [30] and residents in Prince Edward Island, Canada [32], whilst reduced steps were reported among adults in Qatar [16]. No associations were found between temperature and step counts among elderly in Cologne, Germany [33] and adults in Perth, Australia [31]. In Japan, curvilinear associations between temperature and step count were found among elderly in Nakanojo, with step counts peaking at 17 °C [19]. However, in Hokkaido, another Japanese study found elderly step counts were negatively associated with temperature during the snowfall season, but positively associated during the non-snowfall season [34]. Multi-location comparisons could be useful to understand the varying associations between cities.

With the advancement of technology, step counts have been increasingly assessed using smartphone accelerometer applications, which have been highlighted for their convenience in population studies and validated for their accuracy [35, 36].
A systematic review found that the accuracy of smartphone physical activity measurements ranged from 73 to 100% regardless of phone placement (n = 10 studies), although lower accuracy was found for stair climbing (52-79% accuracy) [37]. Using smartphones enables utilization of the same objective physical activity measurement and study methodology across locations, facilitating multi-location comparisons [38,39,40]. This study examines the associations between mean temperature and daily step counts across five major Chinese cities.

Study setting and design

This is a prospective aggregated timeseries study. Five major Chinese cities were assessed including Beijing (located in the North), Shanghai (East), Chongqing (Southwest), Shenzhen (South), and Hong Kong SAR (South, Special Administrative Region). These are major cities in China in terms of population size and economic and political significance (see Table 1). The cities are also located in four divergent areas of the country, with varying climates and topographies.

Table 1 Summary characteristics among five major Chinese cities (as of 2018)

Physical activity data

Anonymous aggregated secondary data on physical activity was obtained from the mobile application WeChat's in-app function WeRun (微信运动) for the duration of the study period. WeChat is an all-encompassing multi-function social media and messaging platform in China, with over 1.04 billion active monthly users as of 2018 [49]. The in-app function WeRun is a voluntary addition that enables users to compare fitness levels with their community and reads from the step count data of the phone's health applications (iPhone or Android) or other data sources such as smartwatches, as allowed by the user. Both iPhone and Android phones have been validated against regularly accepted pedometers and accelerometers in field-based research [35, 50]. These studies have found comparable estimates in both laboratory and free-living environments, although the mobile phone-proxy estimates may be liable to underestimation due to inconstant phone carrying [35, 50].

The secondary physical activity data was obtained from users who had specifically enabled the fitness tracking function of WeRun, authorizing the collection of their daily step counts. All data was anonymized prior to retrieval and only obtained in aggregate form to protect personal privacy. Aggregated mean daily step counts were obtained for each city from anonymized users of the in-app function who were located in the city at night (10 pm). Aggregated mean daily step counts were also obtained stratified by gender and age group (18-64, 65+) along with the number of anonymized users included in each aggregate value.

Weather and pollutant data

Meteorological data were obtained from the China Meteorological Administration for the following stations for the mainland Chinese cities: Beijing (ID: 54511), Shanghai (ID: 58362), Chongqing (ID: 57516), and Shenzhen (ID: 59493). Data from Hong Kong was obtained from the Hong Kong Observatory. Daily mean temperature was used as the main exposure for this analysis, and also adapted into apparent temperature and percentile temperature of the study period.
Apparent temperature was calculated from temperature and relative humidity using the following formulas [51, 52]:

$$H=\left(\log_{10}(RH)-2\right)/0.4343+\left(17.62\ast {T}_{air}\right)/\left(243.12+{T}_{air}\right)$$

$${T}_{dewpt}=243.12\ast H/\left(17.62-H\right)$$

$${T}_{apparent}=-2.653+0.994\left({T}_{air}\right)+0.0153{\left({T}_{dewpt}\right)}^2$$

where RH = relative humidity, T_air = air temperature, T_dewpt = dew point temperature, and T_apparent = apparent temperature.

Other daily meteorological covariates obtained from the meteorological stations included: mean relative humidity, total rainfall, mean wind speed, mean atmospheric pressure, and total sunshine hours. A square root transformation was done for rainfall and windspeed, in order to reduce the effect of outliers. Extreme weather event information on typhoon days was incorporated as binary indicators from a WMO report on China and the Hong Kong Observatory [53,54,55]. The occurrence of super typhoon Mangkhut was included separately due to the severity of the storm, which made landfall in Shenzhen and Hong Kong on Sept 16, 2018.

Air pollution data was obtained from the China National Environmental Monitoring Center network (CNEMC) and the Hong Kong Environmental Protection Department. An air quality index was used instead of individual air pollutant variables, to reduce possible collinearity. In China, the Air Quality Index (AQI) is based on the concentration levels of six pollutants (SO2, NO2, PM2.5, PM10, CO, O3) and reported using a scale of 1-300+ [56]. All hourly AQI values were aggregated to the daily level and further log-transformed to adjust for the right skew. Missing values were imputed using a simple moving average for consecutively missing data of twelve hours or less. Longer consecutive missing data were left as missing. The imputation was completed with the R package 'imputeTS'. In Hong Kong, the Air Quality Health Index (AQHI) was used, based on the concentration levels of four pollutants (SO2, NO2, O3, and particulate matter) and reported using a scale of 1-10+ [57]. Hourly AQHI values were available for twelve general stations located throughout the city, which were averaged together to indicate the daily value for the entire city. For values '10+', a value of 12 was used in the daily aggregation. The Tap Mun monitoring station was not included, as its rural location is not reflective of the residence of the general population.

Time-related variables, such as month, day of week (DOW), and public holiday, were included as control variables in the analysis. Mainland China had extra workdays to compensate for extended holiday periods, which were adjusted for in the analysis as well [58]. A city-wide marathon was included as a special event in Hong Kong during the study period and adjusted for in the analysis [59]. The authors were unable to identify the occurrence of city-wide events in the other Chinese cities during the study period.

The associations of temperature were assessed on aggregated mean daily step count, adjusted by other meteorological conditions, air pollution index, and time-related variables. A stepdown analysis was conducted separately for each city using Generalized Additive Models (GAMs). Meteorological covariates with the highest p-value were removed in each model, until no variables with p-value over 0.1 remained. The Akaike information criterion (AIC) was also compared between models to ensure the model quality did not decrease.
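As a concrete illustration, the apparent-temperature formulas above translate directly into a few lines of R (the language used for the rest of the analysis). The function below is a sketch based only on the equations as printed; the variable names are placeholders chosen here, not the authors' code.

```r
# Apparent temperature from air temperature (deg C) and relative humidity (%),
# following the three equations given above.
apparent_temperature <- function(t_air, rh) {
  h       <- (log10(rh) - 2) / 0.4343 + (17.62 * t_air) / (243.12 + t_air)
  t_dewpt <- 243.12 * h / (17.62 - h)
  -2.653 + 0.994 * t_air + 0.0153 * t_dewpt^2
}

# Example: a 30 deg C day at 80% relative humidity gives roughly 37-38 deg C.
apparent_temperature(t_air = 30, rh = 80)
```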
Air pollution index and time-related variables were kept in the model as control variables. The full model had a formula as follows:

$$\begin{aligned}E\left( Daily\ mean\ step\ count\right)&= s(Mean\ temperature,k=4)+ s(Relative\ humidity,k=4)+ s(Precipitation,k=4)\\&+ s(Windspeed,k=4)+ s(Pressure,k=4)+ s(Sunshine,k=4)+s(AQI\ or\ AQHI,k=4)\\&+ factor(DOW)+ factor(Holiday)+ factor(Month)+ factor(Extra\ workdays)\\&+ factor(Typhoon)+ factor(Super\ typhoon)+ factor(Marathon)\end{aligned}$$

where s() indicates the smoothing function of continuous independent variables in the R package "mgcv". k indicates the basis dimension for the smooth, such that k-1 is the maximum degrees of freedom considered for the variable. factor() indicates the categorical independent variables. AQI or AQHI indicates the Air Quality Index (used in China) or the Air Quality Health Index (used in Hong Kong). DOW indicates the day of week. Meteorological variables were excluded if irrelevant to the city. The analysis was further stratified by gender and age group.

Sensitivity analyses assessed the 1) effect of apparent temperature, 2) effect of percentile temperature, 3) removal of the air pollution index, and 4) removal of outlier data caused by Typhoon Mangkhut in Shenzhen and Hong Kong. The statistical significance level was set at p ≤ 0.05. All analyses were conducted using R version 3.5.2 [60], using the package "mgcv" (Mixed GAM Computation Vehicle with Automatic Smoothness Estimation) [61]. Ethics approval was obtained from the Survey and Behavioural Research Ethics Committee of The Chinese University of Hong Kong (Date: August 13, 2018).

The study period was from Dec 6, 2017 to Dec 31, 2018. During the study period, average daily temperatures ranged from 12.7 °C (±SD 12.1) in Beijing to 23.5 °C (±SD 5.3) in Hong Kong. The average number of anonymized users included in the aggregated data during the study period was 11.1 million for Beijing, 9.6 million for Shanghai, 2.8 million for Chongqing, 4.9 million for Shenzhen, and 0.4 million for Hong Kong. Compared to census data, the study samples in all five cities had a gender distribution comparable to that of their respective general populations (see Table 2). However, the sample populations tended to be significantly younger than the general populations.

Table 2 Demographic comparison between sample population and city population of five Chinese cities (Unit 10,000 persons)

The aggregated step counts for each of the cities during the study period averaged 6846 steps for Beijing, 6703 for Shanghai, 7540 for Chongqing, 7209 for Shenzhen and 9040 for Hong Kong (see Table 3). The average step count was significantly higher among males than females (T-test, p < 0.001). Figure 1 demonstrates the trend of average daily step count during the study period for each city. The trends of average daily step count by gender and by age can be found in Supplemental materials, Figs. S1 and S2.

Table 3 Summary findings of five Chinese cities

Fig. 1 Trend of average daily step count in each city. Note: BJ = Beijing, SH = Shanghai, CQ = Chongqing, SZ = Shenzhen, HK = Hong Kong

Main models

The final models for Beijing, Chongqing, and Hong Kong underwent a stepdown process, in which atmospheric pressure and relative humidity were removed (see Table 4). For Shanghai and Shenzhen, no changes were required from the full models.
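To make the model specification above concrete, the following is a minimal sketch of how the full model for a single city could be specified with the mgcv package named in the Methods. The data frame and variable names are illustrative placeholders rather than the authors' code; the transformed covariates (square-root rainfall and windspeed, log AQI) and city-specific indicators such as the marathon term would be included or dropped as described.

```r
library(mgcv)

# 'dat' is a hypothetical one-city data frame of daily values; column names
# below are placeholders matching the covariates described in the Methods.
fit <- gam(mean_steps ~ s(mean_temp, k = 4) + s(rel_humidity, k = 4) +
             s(sqrt_rain, k = 4) + s(sqrt_wind, k = 4) + s(pressure, k = 4) +
             s(sunshine, k = 4) + s(log_aqi, k = 4) +
             factor(dow) + factor(holiday) + factor(month) +
             factor(extra_workday) + factor(typhoon) + factor(super_typhoon),
           data = dat)

summary(fit)   # smooth-term significance and deviance explained
AIC(fit)       # compared across candidate models during the stepdown

# Stepdown sketch: refit after dropping the smooth term with the largest
# p-value above 0.1, check that AIC does not worsen, and repeat.
```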
Ethics approval was obtained from the Survey and Behavioural Research Ethics Committee of The Chinese University of Hong Kong (Date: August 13, 2018).

The study period was from Dec 6, 2017 to Dec 31, 2018. During the study period, average daily temperatures ranged from 12.7 °C (SD 12.1) in Beijing to 23.5 °C (SD 5.3) in Hong Kong. The average number of anonymized users included in the aggregated data during the study period was 11.1 million for Beijing, 9.6 million for Shanghai, 2.8 million for Chongqing, 4.9 million for Shenzhen, and 0.4 million for Hong Kong. Compared to census data, the study samples in all five cities had gender distributions comparable to their respective general populations (see Table 2). However, the sample populations tended to be significantly younger than the general populations.

Table 2 Demographic comparison between sample population and city population of five Chinese cities (Unit: 10,000 persons)

The aggregated daily step counts during the study period averaged 6846 steps for Beijing, 6703 for Shanghai, 7540 for Chongqing, 7209 for Shenzhen, and 9040 for Hong Kong (see Table 3). The average step count was significantly higher among males than females (t-test, p < 0.001). Figure 1 shows the trend of average daily step count during the study period for each city. The trends of average daily step count by gender and by age can be found in Supplemental materials, Figs. S1 and S2.

Table 3 Summary findings of five Chinese cities

Trend of average daily step count in each city. Note: BJ = Beijing, SH = Shanghai, CQ = Chongqing, SZ = Shenzhen, HK = Hong Kong

Main models

The final models for Beijing, Chongqing, and Hong Kong underwent a stepdown process, in which atmospheric pressure and relative humidity were removed (see Table 4). For Shanghai and Shenzhen, no changes were required from the full models.

Table 4 Stepdown models of five Chinese cities

Overall, three of the five cities (Beijing, Shanghai, and Chongqing) showed significant inverse U-shaped associations between temperature and daily step count, with significant declines at high temperatures (see Table 5 and Fig. 2). During periods of high temperature, populations in Beijing, Shanghai, and Chongqing had significantly lower physical activity compared to optimal temperatures, while no significant associations were found in Shenzhen and Hong Kong. In periods of low temperature, populations in Beijing, Shanghai, and Shenzhen also had significantly lower step counts compared to optimal temperatures, although the decrease was smaller than at hot temperatures. The optimal temperature of peak step counts varied slightly between cities. In Beijing, the estimated optimal temperature was 19.3 °C, with a change of −386.0 steps (95% CI: −626.6, −145.5) for a 10 °C increase from the optimal temperature. In Shanghai, the optimal temperature was 17.9 °C, with a change of −432.7 steps (95% CI: −636.2, −229.1), and in Chongqing, the optimal temperature was 16.1 °C, with a decrease of 321.7 steps (95% CI: −526.6, −116.8) in average step count for a 10 °C increase from the optimal temperature. On days with extremely hot temperatures, step counts decreased by 820 steps at 32.6 °C in Shanghai and by 1494 steps at 36.5 °C in Chongqing, when compared to their respective optimal temperatures.

Table 5 Mean temperature associations on daily average step count, by city

Relationships between temperature and daily step count in five Chinese cities. Note: The model for each city was adjusted for relative humidity#, precipitation, windspeed, pressure#, sunshine, AQI/AQHI, month, day of week, public holiday, extra workdays, typhoon, super typhoon, and marathon (#some cities had these variables removed in the stepdown process). Black markings along the x-axis indicate the actual existing temperature data of each city; vertical red dotted lines indicate the identified optimal temperature; grey shading indicates the 95% confidence interval.

In Shenzhen, a curvilinear association was found, albeit non-significant at higher temperatures. At the highest temperature in the dataset (30.8 °C), there was a non-significant decrease of 204.8 steps (95% CI: −514.5, 104.8) compared to the optimal temperature (24.2 °C). On the other hand, a weak, non-significant negative linear temperature association was found for Hong Kong (change in steps at 10 °C above a pre-set reference temperature of 20 °C: −105.4; 95% CI: −268.5, 57.6).

For the other meteorological variables, higher relative humidity was negatively associated with step counts in Shanghai, Chongqing, and Shenzhen in a non-linear manner (see Supplemental materials, Fig. S3). High relative humidity in Beijing had a non-significant association with average step count. Rainfall and windspeed were negatively associated with daily step count in all five cities, while daily sunshine hours were positively associated with step count, with a particularly strong association observed in inland Chongqing. Where atmospheric pressure remained in the model, it was positively associated with step counts in Shenzhen and Shanghai. The air pollution index was significantly associated with physical activity levels in all cities except Beijing. Overall, the final models explained 73 to 88% of the variance in daily mean step counts (see Table 5 for model information).
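As an illustration of how the contrasts reported above (the change in steps for a 10 °C increase from the optimal temperature, with a 95% confidence interval) can be derived from a fitted GAM, a minimal sketch continuing the hypothetical objects from the model sketch, and using Beijing's optimal temperature as an example, might look like this:

# Predictions at the optimal temperature and at optimal + 10 degrees C, holding all
# other covariates at the values of an arbitrary reference day
new_data <- df[c(1, 1), ]
new_data$mean_temp <- c(19.3, 29.3)

Xp  <- predict(fit, newdata = new_data, type = "lpmatrix")
d   <- Xp[2, ] - Xp[1, ]                    # linear contrast between the two predictions
est <- sum(d * coef(fit))                   # estimated change in mean daily step count
se  <- drop(sqrt(t(d) %*% vcov(fit) %*% d)) # standard error of the contrast
ci  <- est + c(-1.96, 1.96) * se            # approximate 95% confidence interval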
When stratified by gender, a lower optimal temperature was found among females than males in all four cities with curvilinear associations (Beijing, Shanghai, Chongqing, and Shenzhen) (see Table 6 and Supplemental materials Fig. S4). A slightly larger decline in step counts was found among females in Beijing at 10 °C above the optimal temperature (28.7 °C; change in steps: −405.4; 95% CI: −641.1, −169.6). Conversely, in Shenzhen a slightly larger effect was found among females at temperatures 10 °C below the optimum (13.5 °C; change in steps: −338.1; 95% CI: −629.9, −46.4). In Hong Kong, the associations among both males and females remained non-significant.

Table 6 Stratification results of the temperature-physical activity associations in five Chinese cities

When stratified by age group, a lower optimal temperature was also found for the elderly over 65 in all cities with curvilinear associations (Beijing, Shanghai, Shenzhen, and Hong Kong) (see Table 6 and Supplemental materials Fig. S4). In Chongqing, the association among the elderly was no longer inverse U-shaped, but instead showed a steep, significant negative slope. Additionally, at warmer temperatures the elderly showed a markedly larger decrease in step counts than the adult age group (aged 18-64), with an additional reduction of approximately 70 steps in Beijing and Shanghai and approximately 1200 steps in Chongqing. Furthermore, in Shenzhen and Hong Kong, the association of decreased step counts at warm temperatures was significant among the elderly, while remaining non-significant among adults. At cold temperatures, there was no clear difference between the elderly and adults in most cities, except in Shenzhen, where the elderly showed a larger decrease in step counts, by approximately 130 additional steps.

Sensitivity analyses

Several sensitivity analyses were conducted, including 1) examining the effect of apparent temperature, 2) examining the effect of percentile temperature, 3) removing the air pollution index, and 4) removing outlier data caused by Typhoon Mangkhut in Shenzhen and Hong Kong on Sept 16, 2018. The results were largely consistent with the primary findings (see Table 7).

Table 7 Sensitivity analyses of the temperature-physical activity associations in five Chinese cities

For the apparent temperature models, the AIC was higher than for the original models in all cities aside from Hong Kong. A slightly higher optimal apparent temperature was found in Beijing, Shanghai, and Shenzhen. A slightly lower optimal apparent temperature was found in Chongqing, although the effect at +10 °C was no longer significant. In Hong Kong, the effect at +10 °C from 20 °C was a significant decrease of 83.7 steps (95% CI: −150.4, −17.1). The optimal percentile temperature was found at the 48th percentile in Chongqing, the 54th percentile in Shanghai, the 58th percentile in Shenzhen, and the 68th percentile in Beijing. Similar to the main model, no optimal percentile temperature was found for Hong Kong. The model AIC improved when using percentile temperature for all cities except Chongqing. Without the pollution index, the results remained consistent in Beijing, Shanghai, and Shenzhen, although the model AIC increased substantially relative to each city's original model. In Chongqing, the optimal temperature increased from 16 °C to 19.3 °C.
Additionally, a curvilinear association was found in Hong Kong, with an optimal temperature of 21.9 °C and a marginally significant decrease of 348.0 steps (95% CI: −697.8, 1.8) for a 10 °C increase from the optimal temperature. The results remained consistent when removing the typhoon outlier for Shenzhen and Hong Kong, and the model AIC improved relative to the original. When the two cities were hit by Typhoon Mangkhut on Sept 16, 2018, the aggregated daily step counts on that date dropped sharply to 3992 and 4682, respectively, compared with the cities' average step counts.

Inverse U-shaped associations between temperature and city-wide aggregated step counts were found in four of the five Chinese cities (Beijing, Shanghai, Chongqing, and Shenzhen), with significant decreases at high temperatures in three cities (Beijing, Shanghai, and Chongqing). Step counts peaked at optimal temperatures ranging from 16.0 °C in Chongqing and 17.9 °C in Shanghai to 19.3 °C in Beijing and 24.2 °C in Shenzhen. At warm temperatures, average decreases of 322 to 433 steps were found for those cities at a 10 °C increase from the optimal temperature, while temperatures in Shenzhen did not extend high enough to find a significant association. On days with extremely hot temperatures, the mean step counts of the city population decreased by as much as 800 to 1500 steps compared to the optimal temperature. The impact of temperature appeared to be greater in climates with wider temperature ranges, whereas cities in subtropical climates did not have significant declines in step counts on days with high temperatures. In Hong Kong, the association between temperature and step count was non-significant; however, a marginally significant curvilinear association with an optimal temperature of 21.9 °C emerged when the city-specific air pollution index (AQHI) was removed from the model, and a significant negative association was found at high apparent temperatures. Optimal percentile temperatures ranged from the 48th percentile in Chongqing to the 68th percentile in Beijing. Other results remained largely consistent in the sensitivity analyses.

Only a few temperature-physical activity studies have previously been conducted in China or in the Asian region. Two studies in Japan similarly found curvilinear associations between temperature and step counts, with step counts peaking between 17 °C and 20.7 °C [19, 63]. A study from Harbin, China, a city in the far north with a very cold climate, found a positive association during the spring months between temperature and both the intensity of activity and the number of active persons in a public park [28]. A previous accelerometer study of 40 Chinese participants in Beijing found no seasonal variation in physical activity and no hourly association between temperature and average physical activity [27]. However, that study appears to have considered only linear associations, using general linear models. In Hong Kong, a study of Pokémon Go users found a significant negative association between temperature and daily distance travelled in the summer [26], while the negative association in our study was non-significant. Among other published multi-location studies, a study of urban trails across the USA found that optimal temperatures increased in warmer US climate regions [18]. In the present study, locations with similar climates had similar associations; however, warmer locations did not necessarily show greater effects at high temperatures.
Chongqing and Shanghai (both climate Cfa under the Köppen-Geiger classification [47]) had similar optimal temperature peaks and clear decreases in physical activity at warm temperatures. The warmer cities of Shenzhen and Hong Kong (both climate Cwa) showed similar optimal temperature peaks in the low 20s °C, particularly in the Hong Kong model without air pollution. However, the decreases in step counts at higher temperatures were non-significant, as both cities had lower extreme temperatures (maximum: 30.8 °C and 31.2 °C, respectively) than the other cities. Surprisingly, Beijing (climate Dwa) had a relatively high optimal temperature (19.3 °C) and the highest optimal percentile temperature (68th percentile), despite having an overall colder climate. This study found that optimal temperatures for physical activity ranged between the 48th and 68th percentiles, with the highest percentile found in Beijing. In Beijing's climate classification Dwa, the 'a' indicates hot summers within an overall cold, snowy climate zone ('D'). This produces a wider temperature range than in the other cities, as seen in Table 3. With half of all days in Beijing having mean temperatures below 12 °C, the population may take advantage of warmer temperatures and related weather conditions (increased daylight hours, absence of icy surfaces, etc.) for more active leisure activities [64], which could explain the greater increase in physical activity at warmer temperatures and the higher optimal temperature compared with the other cities. Additionally, the inter-city variations in physical activity patterns seem to demonstrate population adaptation to local climates [65] and may have also been influenced by variations in infrastructural or spatial patterns of the urban environment, such as city density and urban sprawl [66]. As noted in a Beijing study on travel behaviour, the built environment can significantly affect people's allocation of time and pursuit of activities [67]. Other potential confounding factors, such as socio-economic status, education, and employment, may have also affected the analysis and the comparison across cities.

In the stratified analyses, lower optimal temperatures were found among females in all cities with curvilinear relationships. This is a new finding, as previous studies that stratified by gender did not assess differences in optimal temperature between genders [63, 64, 68, 69]. Only one study stratifying by gender found overall lower step counts among females compared to males [63], while other studies did not find clear differences in temperature-related physical activity between males and females [64, 68, 69]. Our study also found that the elderly over 65 had lower optimal temperatures and larger decreases in step count at warmer temperatures than the adult age group. These findings are consistent with several previous studies that stratified by age and found stronger temperature effects among those over 65, and particularly over 80 [63, 70, 71]. They also align with the physiological understanding of lower heat tolerance among older adults due to a decreased capacity to thermoregulate [72, 73].

This was the first multi-location comparative study of temperature and physical activity in China and in Asia. It demonstrated a decrease in daily physical activity at high temperatures using aggregated, objectively measured step counts from a large, anonymized sample.
The data collection method ensured that anonymized users were located in the respective cities in order to be included in the analysis. The non-linear statistical analysis allowed for flexible associations of physical activity with temperature and with all other meteorological variables. The analysis covered more than a year, spanning all seasons, and controlled for time-related variables, holidays, and special events (typhoons and marathons) where feasible.

However, data collection was limited to those who voluntarily downloaded the mobile application, and the sample may have skewed towards the health-conscious, able-bodied (no mobility problems), younger, and more active subset of the population. This can be seen in the relatively high average daily step count of each city and the younger age distribution compared with the general population, and it suggests that this study may underestimate the temperature effect, particularly among more vulnerable populations. The accelerometer could only capture ambulatory activity when the phone was carried on the person and could not account for cycling or aquatic activities. As this study could not control for whether the anonymized users kept their phones on them, the aggregation from a large data sample may underestimate actual physical activity levels. The aggregated data could not be adjusted at the individual participant level and would have included visitors or short-term residents who used the in-app function and were located in the city on any evening of the study period. No uncertainty boundaries were provided around the step count estimates. Other potential confounding factors, such as socio-demographic characteristics, accessibility of transportation options, or indoor/outdoor activity, may not have been addressed in this study. The effect of weather alerts and warnings on physical activity levels could not be adjusted for in this analysis.

Future research directions

Future studies should assess the temperature-physical activity relationship in more climatically and geographically diverse locations in China and other regions. A better understanding is needed of the role of urban planning and spatial patterns in shaping the relationship between temperature and physical activity. Extreme temperature events could be assessed in warmer subtropical locations like Shenzhen and Hong Kong to elucidate the effect of extreme temperatures. Furthermore, the singular effect of the super typhoon in these two cities also hints at the large impact that extreme weather events can have on population physical activity patterns. With climate change and an increased frequency of heat waves, typhoons, storms, and other climate-related hazards, there may be more days on which population activity is lowered by such extreme events. Studies could further examine and project the impact of extreme weather events on physical activity.

Temperature and physical activity demonstrated inverse U-shaped associations in several major cities in China, with physical activity peaking between 16 °C and 19.3 °C in temperate-climate cities. Coupled with rising temperatures due to climate change, these reductions in physical activity could subsequently lead to consequential health effects as temperatures shift upward to levels higher than the optimal temperature for physical activity.
Adaptations to temperature should be addressed in China's physical activity promotion guidelines, with healthcare providers empowered to provide appropriate physical activity recommendations and heat health prevention measures. Recommended city-wide interventions include increased access to indoor recreational facilities and urban design measures to alleviate the heat and to support sustainable physical activity levels in the face of climate change. The data that support the findings of this study are available from WeChat WeRun (https://www.wechat.com/) but restrictions apply to the availability of these data, which were used under agreement for the current study, and so are not publicly available. Barnett AG, Hajat S, Gasparrini A, Rocklov J. Cold and heat waves in the United States. Environ Res. 2012;112:218–24. Gasparrini A, Guo Y, Hashizume M, Lavigne E, Zanobetti A, Schwartz J, et al. Mortality risk attributable to high and low ambient temperature: a multicountry observational study. Lancet. 2015;386(9991):369–75. Kovats RS, Hajat S. Heat stress and public health: a critical review. Annu Rev Public Health. 2008;29:41–55. Chan EYY, Goggins WB, Kim JJ, Griffiths SM. A study of intracity variation of temperature-related mortality and socioeconomic status among the Chinese population in Hong Kong. J Epidemiol Community Health. 2012;66(4):322–7. Goggins WB, Chan EY, Yang C, Chong M. Associations between mortality and meteorological and pollutant variables during the cool season in two Asian cities. Environ Health. 2013;12(59):1–10. Chan EYY, Goggins WB, Yue JS, Lee P. Hospital admissions as a function of temperature, other weather phenomena and pollution levels in an urban setting in China. Bull World Health Organ. 2013;91(8):576–84. Chan EYY, Lam HCY, So SHW, Goggins WB, Ho JY, Liu S, et al. Association between ambient temperatures and mental disorder hospitalizations in a Subtropical City: a time-series study of Hong Kong special administrative region. Int J Environ Res Public Health. 2018;15(4):754. Ye X, Wolff R, Yu W, Vaneckova P, Pan X, Tong S. Ambient temperature and morbidity: a review of epidemiological evidence. Environ Health Perspect. 2012;120(1):19–28. Chan EYY, Goggins WB, Kim JJ, Griffiths S, Ma TK. Help-seeking behavior during elevated temperature in Chinese population. J Urban Health. 2011;88(4):637–50. Lee IM, Shiroma EJ, Lobelo F, Puska P, Blair SN, Katzmarzyk PT. Effect of physical inactivity on major non-communicable diseases worldwide: an analysis of burden of disease and life expectancy. Lancet. 2012;380(9838):219–29. Bull FC, Bauman AE. Physical inactivity: the "Cinderella" risk factor for noncommunicable disease prevention. J Health Commun. 2011;16(Suppl 2):13–26. Durstine JL, Gordon B, Wang Z, Luo X. Chronic disease and the link to physical activity. J Sport Health Sci. 2013;2(1):3–11. Physical Activity Guidelines Advisory Committee. 2018 Physical activity guidelines advisory Committee scientific report. Washington, DC: US Department of Health and Human Services; 2018. Chan CB, Ryan DA. Assessing the effects of weather conditions on physical activity participation using objective measures. Int J Environ Res Public Health. 2009;6(10):2639–54. Tucker P, Gilliland J. The effect of season and weather on physical activity: a systematic review. Public Health. 2007;121(12):909–22. Al-Mohannadi AS, Farooq A, Burnett A, Van Der Walt M, Al-Kuwari MG. Impact of climatic conditions on physical activity: a 2-year cohort study in the Arabian gulf region. J Phys Act Health. 
2016;13(9):929–37. de Montigny L, Ling R, Zacharias J. The effects of weather on walking rates in nine cities. Environ Behav. 2011;44(6):821–40. Ermagun A, Lindsey G, Loh TH. Urban trails and demand response to weather variations. Transp Res Part D: Transp Environ. 2018;63:404–20. Togo F, Watanabe E, Park H, Shephard RJ, Aoyagi Y. Meteorology and the physical activity of the elderly: the Nakanojo study. Int J Biometeorol. 2005;50(2):83–9. National Bureau of Statistics of China. Total population (year-end). Beijing: National Bureau of Statistics of China; 2019. Available from: https://data.stats.gov.cn/english/easyquery.htm?cn=C01 Yang J, Siri JG, Remais JV, Cheng Q, Zhang H, Chan KKY, et al. The Tsinghua–lancet commission on healthy Cities in China: unlocking the power of cities for a healthy China. Lancet. 2018;391(10135):2140–84. Zang J, Ng SW. Age, period and cohort effects on adult physical activity levels from 1991 to 2011 in China. Int J Behav Nutr Phys Act. 2016;13:40. World Health Organization. Prevalence of insufficient physical activity among adults aged 18+ years (age-standardized estimate) (%). Geneva: World Health Organization; 2022. Available from: https://www.who.int/data/gho/data/indicators/indicator-details/GHO/prevalence-of-insufficient-physical-activity-among-adults-aged-18-years-(age-standardized-estimate)-(-). Li X, Zhang W, Zhang W, Tao K, Ni W, Wang K, et al. Level of physical activity among middle-aged and older Chinese people: evidence from the China health and retirement longitudinal study. BMC Public Health. 2020;20(1):1682. National Development and Reform Commission P. China's national climate change programme China: national development and reform commission; 2007. Ma BD, Ng SL, Schwanen T, Zacharias J, Zhou M, Kawachi I, et al. Pokemon GO and physical activity in Asia: multilevel study. J Med Internet Res. 2018;20(6):e217. Wang G, Li B, Zhang X, Niu C, Li J, Li L, et al. No seasonal variation in physical activity of Han Chinese living in Beijing. Int J Behav Nutr Phys Act. 2017;14(1):48. Zhao X, Bian Q, Zhao D, Zhang B. 寒地城市公园春季休闲体力活动强度与植被群落微气候调节效应适应性研究 Research on the Adaptability between the leisure physical Activity Intensity and Micro-climate Regulation of Vegetation Community of Cold Region Parks in Spring (Chinese). 中国园林. 2018(2):42-8. Hallal PC, Andersen LB, Bull FC, Guthold R, Haskell W, Ekelund U. Global physical activity levels: surveillance progress, pitfalls, and prospects. Lancet. 2012;380(9838):247–57. Alahmari AD, Mackay AJ, Patel AR, Kowlessar BS, Singh R, Brill SE, et al. Influence of weather and atmospheric pollution on physical activity in patients with COPD. Respir Res. 2015;16:71. Badland HM, Christian H, Giles-Corti B, Knuiman M. Seasonality in physical activity: should this be a concern in all settings? Health Place. 2011;17(5):1084–9. Chan CB, Ryan DA, Tudor-Locke C. Relationship between objective measures of physical activity and weather: a longitudinal study. Int J Behav Nutr Phys Act. 2006;3:21. Giannouli E, Fillekes MP, Mellone S, Weibel R, Bock O, Zijlstra W. Predictors of real-life mobility in community-dwelling older adults: an exploration based on a comprehensive framework for analyzing mobility. Eur Rev Aging Phys Act. 2019;16:19. Ogawa S, Seko T, Ito T, Mori M. Differences in physical activity between seasons with and without snowfall among elderly individuals residing in areas that receive snowfall. J Phys Ther Sci. 2019;31:12–6. Hekler EB, Buman MP, Grieco L, Rosenberger M, Winter SJ, Haskell W, et al. 
Validation of physical activity tracking via android smartphones compared to ActiGraph accelerometer: laboratory-based and free-living validation studies. JMIR Mhealth Uhealth. 2015;3(2):e36. Presset B, Laurenczy B, Malatesta D, Barral J. Accuracy of a smartphone pedometer application according to different speeds and mobile phone locations in a laboratory context. J Exerc Sci Fit. 2018;16(2):43–8. Bort-Roig J, Gilson ND, Puig-Ribera A, Contreras RS, Trost SG. Measuring and influencing physical activity with smartphone technology: a systematic review. Sports Med. 2014;44(5):671–86. Aral S, Nicolaides C. Exercise contagion in a global social network. Nat Commun. 2017;8:14753. Vanky AP, Verma SK, Courtney TK, Santi P, Ratti C. Effect of weather on pedestrian trip count and duration: City-scale evaluations using mobile phone application data. Prev Med Rep. 2017;8:30–7. Ho JY, Zijlema WL, Triguero-Mas M, Donaire-Gonzalez D, Valentin A, Ballester J, et al. Does surrounding greenness moderate the relationship between apparent temperature and physical activity? Findings from the PHENOTYPE project. Environ Res. 2021;197:110992. National Bureau of Statistics of China. 中国统计年鉴 China Statistical yearbook 2019. Beijing: China Statistics Press; 2019. Available from: http://www.stats.gov.cn/tjsj/ndsj/2019/indexeh.htm. Beijing Municipal Bureau of Statistics, Survey Office of the National Bureau of Statistics in Beijing. 北京统计年鉴 Beijing Statistical Yearbook 2019. Beijing: China Statistics Press; 2019. Available from: http://nj.tjj.beijing.gov.cn/nj/main/2019-tjnj/zk/indexeh.htm. Shanghai Municipal Statistical Bureau, Survey Office of the National Bureau of Statistics in Shanghai. 上海统计年鉴 Shanghai Statistical Yearbook 2019. Shanghai: China Statistics Press; 2019. Available from: http://tjj.sh.gov.cn/tjnj/zgsh/tjnj2019en.html. Chongqing Municipal Statistical Bureau, Survey Office of the National Bureau of Statistics in Chongqing. 重庆统计年鉴 Chongqing Statistical Yearbook 2019. Chongqing: China Statistics Press; 2019. Available from: http://tjj.cq.gov.cn/zwgk_233/tjnj/2019/zk/indexch.htm. Statistics Bureau of Shenzhen Municipality, Survey Office of the National Bureau of Statistics in Shenzhen. 深圳统计年鉴 Shenzhen Statistical Yearbook 2019 总第29期. Shenzhen: China Statistics Press 2019. Census and Statistics Department. Hong Kong Statistics: Population Estimates. Hong Kong SAR: Census and Statistics Department, The Government of Hong Kong Special Administrative Region; 2019. [updated 15 March, 2021]. Available from: https://www.censtatd.gov.hk/en/page_8000.html. Kottek M, Grieser J, Beck C, Rudolf B, Rubel F. World map of the Köppen-Geiger climate classification updated. Meteorol Z. 2006;15(3):259–63. Exchange Rates UK. Hong Kong dollar to Chinese Yuan spot exchange rates for 2018: Exchange Rates UK; 2018. Available from: https://www.exchangerates.org.uk/HKD-CNY-spot-exchange-rates-history-2018.html Statista. Number of monthly active WeChat users from 2nd quarter 2011 to 3rd quarter 2021(in millions). Hamburg; 2021. Available from: https://www.statista.com/statistics/255778/number-of-active-wechat-messenger-accounts/. [updated Feb 8, 2022]. Amagasa S, Kamada M, Sasai H, Fukushima N, Kikuchi H, Lee IM, et al. How well iPhones measure steps in free-living conditions: cross-sectional validation study. JMIR Mhealth Uhealth. 2019;7(1):e10418. Ballester J, Robine JM, Herrmann FR, Rodo X. Long-term projections and acclimatization scenarios of temperature-related mortality in Europe. Nat Commun. 2011;2:358. Sensirion. 
Application Note Dew-point calculation. In: Sensirion. Switzerland: 2006. p. 1-3. China Meteorological Administration. Member Report of China. Chiang Mai: ESCAP/WMO Typhoon Committee, 13th Integrated Workshop; 2018. Hong Kong Observatory. Tropical cyclone warning signals. Hong Kong: Hong Kong Observatory; 2018. Available from: https://www.hko.gov.hk/en/wxinfo/climat/warndb/warndb1.shtml Meteorological Bureau of Shenzhen Municipality. 预警服务: 历史预警查询. Shenzhen: Meteorological Bureau of Shenzhen Municipality; 2018. Available from: http://weather.sz.gov.cn/qixiangfuwu/yujingfuwu/lishiyujingchaxun/. Ministry of Ecology and Environment of the PRC. 如何读懂空气质量预报. Ministry of Ecology and Environment of the PRC. Available from: http://www.mee.gov.cn/. Environmental Protection Department. About AQHI. Hong Kong SAR: Environmental Protection Department HKSAR. Available from: https://www.aqhi.gov.hk/en/what-is-aqhi/about-aqhi.html. The State Council PRC. 国务院办公厅关于2019年部分节假日安排的通知. 国务院办公厅; 2018. Available from: http://www.gov.cn/zhengce/content/2018-12/06/content_5346276.htm. Major Sports Events Committee. "M" mark events calendar. Hong Kong: Major Sports Events Committee (MSEC), The Government of the Hong Kong Special Administrative Region; 2018. Available from: http://www.mevents.org.hk/en/calendar_2018.php R Core Team. R. A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2018. Wood S. Generalized additive models: an introduction with R. 2nd ed: Chapman and Hall/CRC; 2017. Shenzhen Municipal Statistics Bureau. 2018年深圳市社会性别统计报告. In: 市统计局, editor. Shenzhen: 深圳市统计局; 2019. Hino K, Lee JS, Asami Y. Associations between seasonal meteorological conditions and the daily step count of adults in Yokohama, Japan: results of year-round pedometer measurements in a large population. Prev Med Rep. 2017;8:15–7. Aspvik NP, Viken H, Ingebrigtsen JE, Zisko N, Mehus I, Wisloff U, et al. Do weather changes influence physical activity level among older adults? - the generation 100 study. PLoS One. 2018;13(7):e0199463. Ma W, Wang L, Lin H, Liu T, Zhang Y, Rutherford S, et al. The temperature-mortality relationship in China: an analysis from 66 Chinese communities. Environ Res. 2015;137:72–7. LV J, Yang BD, Yang YJ, Zhang ZH, Chen F, Liu GJ. Spatial patterns of China's major cities and their evolution mechanisms during the past decades of reform and opening up. Procedia Eng. 2017;198:915–25. Wang D, Chai Y, Li F. Built environment diversities and activity–travel behaviour variations in Beijing, China. J Transp Geogr. 2011;19(6):1173–86. Bosdriesz JR, Witvliet MI, Visscher TL, Kunst AE. The influence of the macro-environment on physical activity: a multilevel analysis of 38 countries worldwide. Int J Behav Nutr Phys Act. 2012;9(110):1–13. Klenk J, Buchele G, Rapp K, Franke S, Peter R, Acti FESG. Walking on sunshine: effect of weather conditions on physical activity in older people. J Epidemiol Community Health. 2012;66(5):474–6. Obradovich N, Fowler JH. Climate change may alter human physical activity patterns. Nat Hum Behav. 2017;1(5):1–7. Witham MD, Donnan PT, Vadiveloo T, Sniehotta FF, Crombie IK, Feng Z, et al. Association of day length and weather conditions with physical activity levels in older community dwelling people. PLoS One. 2014;9(1):e85331. Balmain BN, Sabapathy S, Louis M, Morris NR. Aging and thermoregulatory control: the clinical implications of exercising under heat stress in older individuals. Biomed Res Int. 2018;2018:8306154. 
Kenny GP, Yardley J, Brown C, Sigal RJ, Jay O. Heat stress in older individuals and patients with common chronic diseases. CMAJ. 2010;182(10):1053–60.

The authors wish to thank the Tencent team for their data and technical support, with special thanks to Dr. Wujie Zheng and Ms. Diane Liu. The anonymized aggregated data were shared for research purposes only while the first author was an intern at Tencent. The authors would also like to thank Professor Ming Luo for his data support.

JYH conducted the research while an intern at Tencent and was funded by the Hong Kong PhD Fellowship Scheme from the Hong Kong Research Grants Council (PF15-18545) during the same period. JYH is partially supported by the Research Impact Fund (Ref-No: R4046-18) of the Hong Kong Research Grants Council. The sponsors had no role in the design, data collection, analysis, interpretation, or writing of the study.

The Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Hong Kong, China: Janice Y. Ho, William B. Goggins, Phoenix K. H. Mo & Emily Y. Y. Chan. Nuffield Department of Medicine, University of Oxford, Oxford, UK: Emily Y. Y. Chan.

JYH, WBG, PKHM and EYYC contributed to the conceptualization, design, interpretation, and writing. JYH acquired and analysed the data. All authors read and approved the final manuscript. Correspondence to Emily Y. Y. Chan.

Ethics approval on the secondary data was obtained from the Survey and Behavioural Research Ethics Committee of The Chinese University of Hong Kong (Date: August 13, 2018). The anonymized aggregated secondary data were obtained from WeChat users who had consented to participate in the fitness tracking channel WeRun (微信运动) within WeChat and had authorized the collection of their number of steps.

Supplementary information: 12966_2022_1285_MOESM1_ESM.docx

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Ho, J.Y., Goggins, W.B., Mo, P.K.H. et al. The effect of temperature on physical activity: an aggregated timeseries analysis of smartphone users in five major Chinese cities. Int J Behav Nutr Phys Act 19, 68 (2022). https://doi.org/10.1186/s12966-022-01285-1
4.3: Solve Mixture Applications with Systems of Equations

By the end of this section, you will be able to: solve mixture applications, solve interest applications, and solve applications of cost and revenue functions.

Before you get started, take this readiness quiz. Multiply: \(4.025(1,562)\). Write 8.2% as a decimal. Earl's dinner bill came to $32.50 and he wanted to leave an 18% tip. How much should the tip be?

Mixture applications involve combining two or more quantities. When we solved mixture applications with coins and tickets earlier, we started by creating a table so we could organize the information. For a coin example with nickels and dimes, the table had a row for each type of coin and columns for the number of coins, the value of each coin, and the total value. Using one variable meant that we had to relate the number of nickels and the number of dimes. We had to decide if we were going to let n be the number of nickels and then write the number of dimes in terms of n, or if we would let d be the number of dimes and write the number of nickels in terms of d. Now that we know how to solve systems of equations with two variables, we'll just let n be the number of nickels and d be the number of dimes. We'll write one equation based on the total value column, like we did before, and the other equation will come from the number column. For the first example, we'll do a ticket problem where the ticket prices are in whole dollars, so we won't need to use decimals just yet.

Translate to a system of equations and solve: A science center sold 1,363 tickets on a busy weekend. The receipts totaled $12,146. How many $12 adult tickets and how many $7 child tickets were sold?

Step 1. Read the problem. We will create a table to organize the information.

Step 2. Identify what we are looking for. We are looking for the number of adult tickets and the number of child tickets sold.

Step 3. Name what we are looking for. Let \(a= \text{the number of adult tickets.}\) \(c= \text{the number of child tickets}\) A table will help us organize the data. We have two types of tickets, adult and child. Write in a and c for the number of tickets. Write the total number of tickets sold at the bottom of the Number column. Altogether 1,363 were sold. Write the value of each type of ticket in the Value column. The value of each adult ticket is $12. The value of each child ticket is $7. The number times the value gives the total value, so the total value of adult tickets is \(a·12=12a\), and the total value of child tickets is \(c·7=7c\). Fill in the Total Value column. Altogether the total value of the tickets was $12,146.

Step 4. Translate into a system of equations. The Number column and the Total Value column give us the system of equations.

Step 5. Solve the system of equations. We will use the elimination method to solve this system. Multiply the first equation by \(−7\). Simplify and add, then solve for a. Substitute \(a=521\) into the first equation, then solve for c.

Step 6. Check the answer in the problem. 521 adult tickets at $12 per ticket makes $6,252. 842 child tickets at $7 per ticket makes $5,894. The total receipts are $12,146. \(\checkmark\)

Step 7. Answer the question. The science center sold 521 adult tickets and 842 child tickets.
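For reference, the system from the Number and Total Value columns, with the elimination steps written out, is:

\[\left\{ \begin{array} {l} a+c=1,363 \\ 12a+7c=12,146 \end{array} \right. \nonumber \]

Multiplying the first equation by \(−7\) and adding it to the second equation gives \(5a=2,605\), so \(a=521\) and then \(c=1,363-521=842\).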
The ticket office at the zoo sold 553 tickets one day. The receipts totaled $3,936. How many $9 adult tickets and how many $6 child tickets were sold? 206 adults, 347 children

The box office at a movie theater sold 147 tickets for the evening show, and receipts totaled $1,302. How many $11 adult and how many $8 child tickets were sold? 42 adults, 105 children

In the next example, we'll solve a coin problem. Now that we know how to work with systems of two variables, naming the variables in the 'number' column will be easy.

Juan has a pocketful of nickels and dimes. The total value of the coins is $8.10. The number of dimes is 9 less than twice the number of nickels. How many nickels and how many dimes does Juan have?

Step 1. Read the problem. We will create a table to organize the information.

Step 2. Identify what we are looking for. We are looking for the number of nickels and the number of dimes.

Step 3. Name what we are looking for. Let \(n= \text{the number of nickels.}\) \(d= \text{the number of dimes}\) We have two types of coins, nickels and dimes. Write n and d for the number of each type of coin. Fill in the Value column with the value of each type of coin. The value of each nickel is $0.05. The value of each dime is $0.10. The number times the value gives the total value, so the total value of the nickels is \(n(0.05)=0.05n\) and the total value of the dimes is \(d(0.10)=0.10d\). Altogether the total value of the coins is $8.10.

Step 4. Translate into a system of equations. The Total Value column gives one equation. We also know the number of dimes is 9 less than twice the number of nickels. Translate to get the second equation. Now we have the system to solve.

Step 5. Solve the system of equations. We will use the substitution method. Substitute \(d=2n−9\) into the first equation. Simplify and solve for n. To find the number of dimes, substitute \(n=36\) into the second equation.

Step 6. Check the answer in the problem. 63 dimes at \($0.10=$6.30\). 36 nickels at \($0.05=$1.80\). Total \(=$8.10\checkmark\)

Step 7. Answer the question. Juan has 36 nickels and 63 dimes.

Matilda has a handful of quarters and dimes, with a total value of $8.55. The number of quarters is 3 more than twice the number of dimes. How many dimes and how many quarters does she have? 13 dimes and 29 quarters

Priam has a collection of nickels and quarters, with a total value of $7.30. The number of nickels is six less than three times the number of quarters. How many nickels and how many quarters does he have? 19 quarters and 51 nickels

Some mixture applications involve combining foods or drinks. Example situations might include combining raisins and nuts to make a trail mix or using two types of coffee beans to make a blend.

Carson wants to make 20 pounds of trail mix using nuts and chocolate chips. His budget requires that the trail mix costs him $7.60 per pound. Nuts cost $9.00 per pound and chocolate chips cost $2.00 per pound. How many pounds of nuts and how many pounds of chocolate chips should he use?

Step 1. Read the problem. We will create a table to organize the information.

Step 2. Identify what we are looking for. We are looking for the number of pounds of nuts and the number of pounds of chocolate chips.

Step 3. Name what we are looking for. Let \(n= \text{the number of pounds of nuts.}\) \(c= \text{the number of pounds of chips}\) Carson will mix nuts and chocolate chips to get trail mix. Write in n and c for the number of pounds of nuts and chocolate chips. There will be 20 pounds of trail mix. Put the price per pound of each item in the Value column.
Fill in the last column using \(\text{Number}•\text{Value}=\text{Total Value}\).

Step 4. Translate into a system of equations. We get the equations from the Number and Total Value columns.

Step 5. Solve the system of equations. We will use elimination to solve the system. Multiply the first equation by \(−2\) to eliminate c. Simplify and add. Solve for n. To find the number of pounds of chocolate chips, substitute \(n=16\) into the first equation, then solve for c.

Step 6. Check the answer in the problem. \(\begin{array} {lll} 16+4 &= &20\checkmark \\ 9·16+2·4 &= &152\checkmark \end{array}\)

Step 7. Answer the question. Carson should mix 16 pounds of nuts with 4 pounds of chocolate chips to create the trail mix.

Greta wants to make 5 pounds of a nut mix using peanuts and cashews. Her budget requires the mixture to cost her $6 per pound. Peanuts are $4 per pound and cashews are $9 per pound. How many pounds of peanuts and how many pounds of cashews should she use? 3 pounds peanuts and 2 pounds cashews

Sammy has most of the ingredients he needs to make a large batch of chili. The only items he lacks are beans and ground beef. He needs a total of 20 pounds combined of beans and ground beef and has a budget of $3 per pound. The price of beans is $1 per pound and the price of ground beef is $5 per pound. How many pounds of beans and how many pounds of ground beef should he purchase? 10 pounds of beans, 10 pounds of ground beef

Another application of mixture problems relates to concentrated cleaning supplies, other chemicals, and mixed drinks. The concentration is given as a percent. For example, a 20% concentrated household cleanser means that 20% of the total amount is cleanser, and the rest is water. To make 35 ounces of a 20% concentration, you mix 7 ounces (20% of 35) of the cleanser with 28 ounces of water. For these kinds of mixture problems, we'll use "percent" instead of "value" for one of the columns in our table.

Example \(\PageIndex{10}\)

Sasheena is a lab assistant at her community college. She needs to make 200 milliliters of a 40% solution of sulfuric acid for a lab experiment. The lab has only 25% and 50% solutions in the storeroom. How much should she mix of the 25% and the 50% solutions to make the 40% solution?

Step 1. Read the problem. A figure may help us visualize the situation; then we will create a table to organize the information. Sasheena must mix some of the \(25%\) solution and some of the \(50%\) solution together to get \(200\space ml\) of the \(40%\) solution.

Step 2. Identify what we are looking for. We are looking for how much of each solution she needs.

Step 3. Name what we are looking for. Let \(x= \text{number of }ml\text{ of }25% \text{ solution.}\) \(y= \text{number of }ml\text{ of }50%\text{ solution.}\) A table will help us organize the data. She will mix x \(ml\) of \(25%\) with y \(ml\) of \(50%\) to get \(200 \space ml\) of \(40%\) solution. We write the percents as decimals in the chart. We multiply the number of units times the concentration to get the total amount of sulfuric acid in each solution.

Step 4. Translate into a system of equations. We get the system of equations from the Number column and the Amount column. Now we have the system.

Step 5. Solve the system of equations. We will solve the system by elimination. Multiply the first equation by \(−0.5\) to eliminate y. Simplify and add to solve for x. To solve for y, substitute \(x=80\) into the first equation.

Step 6. Check the answer in the problem. \(\begin{array} {lll} 80+120 &= &200\checkmark \\ 0.25(80)+0.50(120) &= &80\checkmark \\ {} &{} &\text{Yes!} \end{array} \)

Step 7. Answer the question. Sasheena should mix \(80 \space ml\) of the \(25%\) solution with \(120 \space ml\) of the \(50%\) solution to get the \(200\space ml\) of the \(40%\) solution.
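For reference, the system from the Number and Amount columns, with the elimination steps written out, is:

\[\left\{ \begin{array} {l} x+y=200 \\ 0.25x+0.50y=80 \end{array} \right. \nonumber \]

since \(0.40(200)=80\) milliliters of pure sulfuric acid are needed. Multiplying the first equation by \(−0.5\) and adding it to the second equation gives \(-0.25x=-20\), so \(x=80\) and \(y=200-80=120\).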
LeBron needs 150 milliliters of a 30% solution of sulfuric acid for a lab experiment but only has access to a 25% and a 50% solution. How much of the 25% and how much of the 50% solution should he mix to make the 30% solution? 120 ml of 25% solution and 30 ml of 50% solution

Anatole needs to make 250 milliliters of a 25% solution of hydrochloric acid for a lab experiment. The lab only has a 10% solution and a 40% solution in the storeroom. How much of the 10% and how much of the 40% solutions should he mix to make the 25% solution? 125 ml of 10% solution and 125 ml of 40% solution

The formula to model simple interest applications is \(I=Prt\). Interest, I, is the product of the principal, P, the rate, r, and the time, t. In our work here, we will calculate the interest earned in one year, so t will be 1. We modify the column titles in the mixture table to show the formula for interest, as you'll see in the next example.

Adnan has $40,000 to invest and hopes to earn \(7.1%\) interest per year. He will put some of the money into a stock fund that earns 8% per year and the rest into bonds that earn 3% per year. How much money should he put into each fund?

Step 1. Read the problem. A chart will help us organize the information.

Step 2. Identify what we are looking for. We are looking for the amount to invest in each fund.

Step 3. Name what we are looking for. Let \(s= \text{the amount invested in stocks.}\) \(b= \text{the amount invested in bonds.}\) Write the interest rate as a decimal for each fund. Multiply: Principal · Rate · Time.

Step 4. Translate into a system of equations. We get our system of equations from the Principal column and the Interest column.

Step 5. Solve the system of equations. We will solve the system by elimination. Multiply the top equation by \(−0.03\). Simplify and add to solve for s. To find b, substitute s = 32,800 into the first equation.

Step 6. Check the answer in the problem. We leave the check to you.

Step 7. Answer the question. Adnan should invest $32,800 in stock and $7,200 in bonds.

Did you notice that the Principal column represents the total amount of money invested while the Interest column represents only the interest earned? Likewise, the first equation in our system, \(s+b=40,000\), represents the total amount of money invested and the second equation, \(0.08s+0.03b=0.071(40,000)\), represents the interest earned.

Leon had $50,000 to invest and hopes to earn \(6.2%\) interest per year. He will put some of the money into a stock fund that earns 7% per year and the rest into a savings account that earns 2% per year. How much money should he put into each fund? $42,000 in the stock fund and $8,000 in the savings account

Julius invested $7000 into two stock investments. One stock paid 11% interest and the other stock paid 13% interest. He earned \(12.5%\) interest on the total investment. How much money did he put in each stock? $1750 at 11% and $5250 at 13%
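For the Adnan example above, the system and the elimination steps written out are:

\[\left\{ \begin{array} {l} s+b=40,000 \\ 0.08s+0.03b=2,840 \end{array} \right. \nonumber \]

where \(0.071(40,000)=2,840\) is the total interest Adnan hopes to earn. Multiplying the first equation by \(−0.03\) and adding it to the second equation gives \(0.05s=1,640\), so \(s=32,800\) and \(b=40,000-32,800=7,200\).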
The next example requires that we find the principal given the amount of interest earned.

Rosie owes $21,540 on her two student loans. The interest rate on her bank loan is \(10.5%\) and the interest rate on the federal loan is \(5.9%\). The total amount of interest she paid last year was \($1,669.68\). What was the principal for each loan?

Step 2. Identify what we are looking for. We are looking for the principal of each loan.

Step 3. Name what we are looking for. Let \(b= \text{the principal for the bank loan.}\) \(f= \text{the principal on the federal loan}\) The total loans are $21,540. Record the interest rates as decimals. Multiply using the formula I = Prt to get the Interest.

Step 4. Translate into a system of equations. The system of equations comes from the Principal column and the Interest column.

Step 5. Solve the system of equations. We will use substitution to solve. Solve the first equation for b. Substitute \(b=−f+21,540\) into the second equation. Simplify and solve for f. To find b, substitute f = 12,870 into the first equation.

Step 7. Answer the question. The principal of the federal loan was $12,870 and the principal for the bank loan was $8,670.

Laura owes $18,000 on her student loans. The interest rate on the bank loan is 2.5% and the interest rate on the federal loan is 6.9%. The total amount of interest she paid last year was $1,066. What was the principal for each loan? Bank $4,000; Federal $14,000

Jill's Sandwich Shoppe owes $65,200 on two business loans, one at 4.5% interest and the other at 7.2% interest. The total amount of interest owed last year was $3,582. What was the principal for each loan? $41,200 at 4.5%, $24,000 at 7.2%

Suppose a company makes and sells x units of a product. The cost to the company is the total cost to produce x units. This is the cost to manufacture each unit times x, the number of units manufactured, plus the fixed costs. The revenue is the money the company brings in as a result of selling x units. This is the selling price of each unit times the number of units sold. When the costs equal the revenue we say the business has reached the break-even point.

COST AND REVENUE FUNCTIONS

The cost function is the cost to manufacture each unit times x, the number of units manufactured, plus the fixed costs. \[C(x)=(\text{cost per unit})·x+\text{fixed costs}\nonumber \] The revenue function is the selling price of each unit times x, the number of units sold. \[R(x)=(\text{selling price per unit})·x\nonumber \] The break-even point is when the revenue equals the costs. \[C(x)=R(x)\nonumber\]

The manufacturer of a weight training bench spends $105 to build each bench and sells them for $245. The manufacturer also has fixed costs each month of $7,000. ⓐ Find the cost function C when x benches are manufactured. ⓑ Find the revenue function R when x benches are sold. ⓒ Show the break-even point by graphing both the Revenue and Cost functions on the same grid. ⓓ Find the break-even point. Interpret what the break-even point means.

ⓐ The manufacturer has $7,000 of fixed costs no matter how many weight training benches it produces. In addition to the fixed costs, the manufacturer also spends $105 to produce each bench. Suppose x benches are sold. \(\begin{array} {ll} {\text{Write the general Cost function formula.}} &{C(x)=(\text{cost per unit})·x+\text{fixed costs}} \\ {\text{Substitute in the cost values.}} &{C(x)=105x+7000} \\ \end{array}\)

ⓑ The manufacturer sells each weight training bench for $245. We get the total revenue by multiplying the revenue per unit times the number of units sold. \(\begin{array} {ll} {\text{Write the general Revenue function.}} &{R(x)=(\text{selling price per unit})·x} \\ {\text{Substitute in the revenue per unit.}} &{R(x)=245x} \\ \end{array}\)

ⓒ Essentially we have a system of linear equations. We will show the graph of the system as this helps make the idea of a break-even point more visual. \[\left\{ \begin{array} {l} C(x)=105x+7000 \\ R(x)=245x \end{array} \right. \quad \text{or} \quad \left\{ \begin{array} {l} y=105x+7000 \\ y=245x \end{array} \right. \nonumber \]
ⓓ To find the actual value, we remember the break-even point occurs when costs equal revenue. \(\begin{array} {ll} {\text{Write the break-even formula.}} &{\begin{array} {l} {C(x)=R(x)} \\ {105x+7000=245x} \end{array}} \\ {\text{Solve.}} &{\begin{array} {l} {7000=140x} \\ {50=x} \end{array}} \\ \end{array}\) When 50 benches are sold, the costs equal the revenue. When 50 benches are sold, the revenue and costs are both $12,250. Notice this corresponds to the ordered pair \((50,12250)\).

The manufacturer of a weight training bench spends $15 to build each bench and sells them for $32. The manufacturer also has fixed costs each month of $25,500. ⓐ \(C(x)=15x+25,500\) ⓑ \(R(x)=32x\) ⓒ ⓓ 1,500; when 1,500 benches are sold, the cost and revenue will both be $48,000

The manufacturer of a weight training bench spends $120 to build each bench and sells them for $170. The manufacturer also has fixed costs each month of $150,000. ⓐ \(C(x)=120x+150,000\) ⓑ \(R(x)=170x\) ⓓ \(3,000\); when 3,000 benches are sold, the revenue and costs are both $510,000

Access this online resource for additional instruction and practice with interest and mixtures. Interest and Mixtures

Cost function: The cost function is the cost to manufacture each unit times x, the number of units manufactured, plus the fixed costs. \(C(x)=(\text{cost per unit})·x+\text{fixed costs}\) Revenue: The revenue function is the selling price of each unit times x, the number of units sold. \(R(x)=(\text{selling price per unit})·x\) Break-even point: The break-even point is when the revenue equals the costs. \(C(x)=R(x)\)

In the following exercises, translate to a system of equations and solve.

Tickets to a Broadway show cost $35 for adults and $15 for children. The total receipts for 1650 tickets at one performance were $47,150. How many adult and how many child tickets were sold?

Tickets for the Cirque du Soleil show are $70 for adults and $50 for children. One evening performance had a total of 300 tickets sold and the receipts totaled $17,200. How many adult and how many child tickets were sold? 110 adult tickets, 190 child tickets

Tickets for an Amtrak train cost $10 for children and $22 for adults. Josie paid $1200 for a total of 72 tickets. How many child tickets and how many adult tickets did Josie buy?

Tickets for a Minnesota Twins baseball game are $69 for Main Level seats and $39 for Terrace Level seats. A group of sixteen friends went to the game and spent a total of $804 for the tickets. How many Main Level and how many Terrace Level tickets did they buy? 6 Main Level seats, 10 Terrace Level seats

Tickets for a dance recital cost $15 for adults and $7 for children. The dance company sold 253 tickets and the total receipts were $2771. How many adult tickets and how many child tickets were sold?

Tickets for the community fair cost $12 for adults and $5 for children. On the first day of the fair, 312 tickets were sold for a total of $2204. How many adult tickets and how many child tickets were sold? 92 adult tickets, 220 child tickets

Brandon has a cup of quarters and dimes with a total value of \($3.80\). The number of quarters is four less than twice the number of dimes. How many quarters and how many dimes does Brandon have?

Sherri saves nickels and dimes in a coin purse for her daughter. The total value of the coins in the purse is \($0.95\). The number of nickels is two less than five times the number of dimes. How many nickels and how many dimes are in the coin purse?
13 nickels, 3 dimes

Peter has been saving his loose change for several days. When he counted his quarters and dimes, he found they had a total value of \($13.10\). The number of quarters was fifteen more than three times the number of dimes. How many quarters and how many dimes did Peter have?

Lucinda had a pocketful of dimes and quarters with a value of \($6.20\). The number of dimes is eighteen more than three times the number of quarters. How many dimes and how many quarters does Lucinda have? 42 dimes, 8 quarters

A cashier has 30 bills, all of which are $10 or $20 bills. The total value of the money is $460. How many of each type of bill does the cashier have?

A cashier has 54 bills, all of which are $10 or $20 bills. The total value of the money is $910. How many of each type of bill does the cashier have? 17 $10 bills, 37 $20 bills

Marissa wants to blend candy selling for \($1.80\) per pound with candy costing \($1.20\) per pound to get a mixture that costs her \($1.40\) per pound to make. She wants to make 90 pounds of the candy blend. How many pounds of each type of candy should she use?

How many pounds of nuts selling for $6 per pound and raisins selling for $3 per pound should Kurt combine to obtain 120 pounds of trail mix that costs him $5 per pound? 80 pounds nuts and 40 pounds raisins

Hannah has to make twenty-five gallons of punch for a potluck. The punch is made of soda and fruit drink. The cost of the soda is \($1.79\) per gallon and the cost of the fruit drink is \($2.49\) per gallon. Hannah's budget requires that the punch cost \($2.21\) per gallon. How many gallons of soda and how many gallons of fruit drink does she need?

Joseph would like to make twelve pounds of a coffee blend at a cost of $6 per pound. He blends Ground Chicory at $5 a pound with Jamaican Blue Mountain at $9 per pound. How much of each type of coffee should he use? 9 pounds of Chicory coffee, 3 pounds of Jamaican Blue Mountain coffee

Julia and her husband own a coffee shop. They experimented with mixing a City Roast Columbian coffee that cost $7.80 per pound with French Roast Columbian coffee that cost $8.10 per pound to make a twenty-pound blend. Their blend should cost them $7.92 per pound. How much of each type of coffee should they buy?

Twelve-year-old Melody wants to sell bags of mixed candy at her lemonade stand. She will mix M&M's that cost $4.89 per bag and Reese's Pieces that cost $3.79 per bag to get a total of twenty-five bags of mixed candy. Melody wants the bags of mixed candy to cost her $4.23 a bag to make. How many bags of M&M's and how many bags of Reese's Pieces should she use? 10 bags of M&M's, 15 bags of Reese's Pieces

Jotham needs 70 liters of a 50% alcohol solution. He has a 30% and an 80% solution available. How many liters of the 30% and how many liters of the 80% solutions should he mix to make the 50% solution?

Joy is preparing 15 liters of a 25% saline solution. She only has 40% and 10% solutions in her lab. How many liters of the 40% and how many liters of the 10% should she mix to make the 25% solution? \(7.5\) liters of each solution

A scientist needs 65 liters of a 15% alcohol solution. She has available a 25% and a 12% solution. How many liters of the 25% and how many liters of the 12% solutions should she mix to make the 15% solution?

A scientist needs 120 milliliters of a 20% acid solution for an experiment. The lab has available a 25% and a 10% solution. How many milliliters of the 25% and how many milliliters of the 10% solutions should the scientist mix to make the 20% solution?
80 milliliters of the 25% solution and 40 milliliters of the 10% solution A 40% antifreeze solution is to be mixed with a 70% antifreeze solution to get 240 liters of a 50% solution. How many liters of the 40% and how many liters of the 70% solutions will be used? A 90% antifreeze solution is to be mixed with a 75% antifreeze solution to get 360 liters of an 85% solution. How many liters of the 90% and how many liters of the 75% solutions will be used? 240 liters of the 90% solution and 120 liters of the 75% solution Hattie had $3000 to invest and wants to earn 10.6% interest per year. She will put some of the money into an account that earns 12% per year and the rest into an account that earns 10% per year. How much money should she put into each account? Carol invested $2560 into two accounts. One account paid 8% interest and the other paid 6% interest. She earned 7.25% interest on the total investment. How much money did she put in each account? $1600 at 8%, $960 at 6% Sam invested $48,000, some at 6% interest and the rest at 10%. How much did he invest at each rate if he received $4000 in interest in one year? Arnold invested $64,000, some at \(5.5%\) interest and the rest at 9%. How much did he invest at each rate if he received $4500 in interest in one year? $28,000 at 9%, $36,000 at \(5.5%\) After four years in college, Josie owes $65,800 in student loans. The interest rate on the federal loans is \(4.5%\) and the rate on the private bank loans is 2%. The total interest she owes for one year was \($2878.50\). What is the amount of each loan? Mark wants to invest $10,000 to pay for his daughter's wedding next year. He will invest some of the money in a short-term CD that pays 12% interest and the rest in a money market savings account that pays 5% interest. How much should he invest at each rate if he wants to earn $1095 in interest in one year? $8500 CD, $1500 savings account A trust fund worth $25,000 is invested in two different portfolios. This year, one portfolio is expected to earn \(5.25%\) interest and the other is expected to earn 4%. Plans are for the total interest on the fund to be $1150 in one year. How much money should be invested at each rate? A business has two loans totaling $85,000. One loan has a rate of 6% and the other has a rate of 4.5%. This year, the business expects to pay $4,650 in interest on the two loans. How much is each loan? $55,000 on loan at 6% and $30,000 on loan at \(4.5%\) The manufacturer of an energy drink spends $1.20 to make each drink and sells them for $2. The manufacturer also has fixed costs each month of $8,000. ⓐ Find the cost function C when x energy drinks are manufactured. ⓑ Find the revenue function R when x drinks are sold. The manufacturer of a water bottle spends $5 to build each bottle and sells them for $10. The manufacturer also has fixed costs each month of $6500. ⓐ Find the cost function C when x bottles are manufactured. ⓑ Find the revenue function R when x bottles are sold. ⓒ Show the break-even point by graphing both the Revenue and Cost functions on the same grid. ⓓ Find the break-even point. Interpret what the break-even point means. ⓐ \(C(x)=5x+6500\) ⓓ 1,300; when 1,300 water bottles are sold, the cost and the revenue equal $13,000 Take a handful of two types of coins, and write a problem similar to Example relating the total number of coins and their total value. Set up a system of equations to describe your situation and then solve it.
In Example, we used elimination to solve the system of equations \(\left\{ \begin{array} {l} s+b=40,000 \\ 0.08s+0.03b=0.071(40,000). \end{array} \right. \) Could you have used substitution or elimination to solve this system? Why? Answers will vary. ⓐ After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section. ⓑ What does this checklist tell you about your mastery of this section? What steps will you take to improve? cost function The cost function is the cost to manufacture each unit times x, the number of units manufactured, plus the fixed costs; C(x) = (cost per unit)x + fixed costs. revenue The revenue is the selling price of each unit times x, the number of units sold; R(x) = (selling price per unit)x. break-even point The point at which the revenue equals the costs is the break-even point; C(x)=R(x). 4.2E: Exercises Lynn Marecek via OpenStax
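The cost, revenue, and break-even definitions above can be checked computationally. Below is a minimal sketch, assuming Python; the function names are illustrative, and the numbers are taken from the weight training bench exercise earlier in this section ($15 cost per unit, $25,500 fixed costs, $32 selling price).

```python
# Minimal sketch: cost, revenue, and break-even point for a linear model.
# Numbers are taken from the weight-training-bench exercise above
# (cost per unit $15, fixed costs $25,500, selling price $32).

def cost(x, cost_per_unit, fixed_costs):
    """C(x) = (cost per unit) * x + fixed costs."""
    return cost_per_unit * x + fixed_costs

def revenue(x, price_per_unit):
    """R(x) = (selling price per unit) * x."""
    return price_per_unit * x

def break_even(cost_per_unit, fixed_costs, price_per_unit):
    """Solve C(x) = R(x) for x; valid when price_per_unit > cost_per_unit."""
    x = fixed_costs / (price_per_unit - cost_per_unit)
    return x, revenue(x, price_per_unit)

units, value = break_even(cost_per_unit=15, fixed_costs=25_500, price_per_unit=32)
print(units, value)  # 1500.0 48000.0
```

The same helper applies to the other break-even exercises by swapping in their unit cost, fixed cost, and selling price.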
Tick-borne encephalitis (TBE) cases are not random: explaining trend, low- and high-frequency oscillations based on the Austrian TBE time series Franz Rubel (ORCID: orcid.org/0000-0002-0048-7379), Melanie Walter, Janna R. Vogelgesang & Katharina Brugger Why human tick-borne encephalitis (TBE) cases differ from year to year, in some years by more than 100%, has not yet been clarified. The cause of the increasing or decreasing trends is also controversial. Austria is the only country in Europe where a 40-year TBE time series and an official vaccine coverage time series are available to investigate these open questions. A series of generalized linear models (GLMs) has been developed to identify demographic and environmental factors associated with the trend and the oscillations of the TBE time series. Both the observed and the predicted TBE time series were subjected to spectral analysis. The resulting power spectra indicate which predictors are responsible for the trend, the high-frequency and the low-frequency oscillations, and with which explained variance they contribute to the TBE oscillations. The increasing trend can be associated with the demography of the increasing human population. The responsible GLM explains 12% of the variance of the TBE time series. The low-frequency oscillations (10 years) are associated with the decadal changes of the large-scale climate in Central Europe. These are well described by the so-called Scandinavian index. This 10-year oscillation cycle is reinforced by the socio-economic predictor net migration. Considering the net migration and the Scandinavian index increases the explained variance of the GLM to 44%. The high-frequency oscillations (2–3 years) are associated with fluctuations of the natural TBE transmission cycle between small mammals and ticks, which are driven by beech fructification. Considering also fructification 2 years prior explains 64% of the variance of the TBE time series. Additionally, annual sunshine duration as a predictor for human outdoor activity increases the explained variance to 70%. The GLMs presented here provide the basis for annual TBE forecasts, which were mainly determined by beech fructification. A total of 3 of the 5 years with full fructification, resulting in high TBE case numbers 2 years later, occurred after 2010. The effects of climate change are therefore not visible through a direct correlation of the TBE cases with rising temperatures, but indirectly via the increased frequency of mast seeding. The tick-borne encephalitis (TBE) virus is a flavivirus persisting in a natural transmission cycle between small mammals and ticks. Humans can be infected, but they are ecologically dead-end hosts [1]. TBE vectors in Central Europe are predominantly ticks of the genus Ixodes, especially Ixodes ricinus, the castor bean tick [2]. Since TBE can be a serious disease in humans [3], it is notifiable in almost all endemic TBE areas. Despite the availability of efficient vaccines [4, 5], TBE cases in Central Europe have risen sharply in recent decades [6]. In 2018, historical maximum values of 584 cases in Germany [7] and 377 cases in Switzerland [8] were registered. In Austria, 154 cases were reported, the highest number since 1994, although more than 80% of the population is vaccinated [9, 10]. Without vaccination probably more than 800 TBE cases per year would occur in Austria.
Looking at the long time series of TBE cases, some of which date back to the 1950s [11], the question arises of how the temporal variations of these TBE cases can be explained. It can be taken into account that climate and environmental variables, averaged over large areas such as Central Europe, explain biological relationships much better than those with high local accuracy, as discussed in the fundamental papers on patterns and scales in ecology from Levin [12] and Hallett et al. [13]. Additionally, it can be taken into account that different mechanisms act on different time scales. For example, long-term TBE trends, that have been observed over many decades, have been linked to factors such as demographic trends, changes in land use and associated wildlife density, or changes in human recreational behavior and related exposure [14]. Not least, climate change has been discussed as a possible driver [15, 16]. While in Sweden TBE incidence was significantly related to milder winters and higher spring and autumn temperatures [17], for the Baltic countries it was stated that climate change cannot explain the increase in TBE cases [18]. Here, it is assumed that climate change plays a only minor role in explaining the trend of Austrian TBE cases. Instead, the demographic development of the population is assumed to be the most probable cause for the rising TBE trend. This long-term trend in the Austrian TBE time series is superimposed by cyclical fluctuations. The duration period of these cyclical fluctuations was determined by Zeman [19] for 6 time series of TBE cases in Austria, the Czech Republic, the German federal states Bavaria and Baden-Wuerttemberg, Slovenia, and Switzerland. Calculating the power spectra from the detrended time series of Austrian TBE cases results in 2 dominant periods of the oscillations. The first has a period of 10 years (low-frequency oscillations), the second has a period of 2–3 years (high-frequency oscillations) [19]. It is well-known that the large atmospheric circulation variability is responsible for population and disease fluctuations [20]. Atmospheric circulation variability is also referred to as climate variability and is often described by so-called teleconnection indices. The best known of these climate variability or anomaly indices is the El Niño Southern Oscillation (ENSO), which occurs in areas around the tropical Pacific, especially in the southern hemisphere. ENSO triggered Malaria, Dengue, Rift Valley fever and other vector-borne disease outbreaks [21]. The ENSO impact on outbreaks reaches as far as the south of the USA, where a rodent-borne hantavirus outbreak was associated with the 1997–1998 El Niño [21]. The most studied climate variability of the northern hemisphere is the North Atlantic Oscillation (NAO). It has been linked to a variety of disease outbreaks in the USA and Western Europe [22]. For example, Hubálek [23] studied 14 viral, bacterial and protozoan notifiable human diseases in the Czech Republic and their association with NAO indices, but no correlation was found for the tick-borne diseases TBE and Lyme borreliosis. Palo [24] also found no correlation between NAO and the number of Swedish TBE cases. Another teleconnection index describing the large-scale atmospheric circulation variability is the Scandinavian index (SCAND). It is less known than ENSO and NAO, and there is currently only one study that correlates human disease data, the UK asthma mortality, with SCAND fluctuations [25]. 
Since SCAND describes the large-scale atmospheric circulation variability from Central Europe to Central Asia, it is hypothesized that it is suitable for describing the 10-year oscillations in TBE cases in Austria. The 2–3 year oscillations might be caused by the variations in beech fructification, which is responsible for the population dynamics of small mammals [26]. This also describes the oscillations in the population of I. ricinus whose larvae and nymphs feed mainly on yellow-necked mice Apodemus flavicollis and bank voles Myodes glareolus [27] and thus contribute to the natural TBE virus transmission cycle. Brugger et al. [28] demonstrated that with the beech fructification index 2 years prior, the annual average temperature of the previous year and the past winter temperature, the I. ricinus nymphal density can be described with great accuracy. However, some peaks in TBE time series cannot be explained by tick density. An example are the extraordinary high TBE numbers in 2006, which were observed in some European countries (not in Austria). They were explained by recreational behavior of humans, i.e. more outdoor activities in the extremely warm year 2006 [29]. Because there are no long-term studies, a simple hypothesis is pursued, according to which more sunshine hours should lead to more outdoor activities and thus to a higher exposure of the human population. Using the Austrian time series, we aimed at identifying the demographic and environmental factors associated with the trend and the oscillations by using generalized linear models (GLMs). Both the observed and the predicted TBE time series are to be subjected to a spectral analysis. The resulting power spectra should then indicate which predictors are responsible for the trend, the high-frequency and the low-frequency oscillations, and with which explained variance they contribute to the TBE oscillations. The model development presented here differs fundamentally from the usual approaches, according to which significant predictors are selected from a large number of possible predictors by stepwise modeling [30], whereby their contribution to the frequency spectrum is not taken into account. So far, only two GLMs have been developed to predict TBE time series. The first is a GLM to predict the numbers of Swedish TBE cases by using December precipitation and red fox (Vulpes vulpes) or mink (Mustela vison) abundance as predictors [31]. The second GLM confirmed the high correlation between red fox density and TBE cases [24], which underpin the causal link between beech fructification, small mammals, and their predators, red foxes and mink. Two ways of representing annual TBE time series are in use. On the one hand the absolute number of annual TBE cases is indicated, on the other hand the TBE incidence, i.e. the number of annual TBE cases per 100,000 inhabitants. Here, absolute numbers of annual TBE cases are used to allow the demographic parameters of the human population (Fig. 1) to be used as predictors. Thus, for example, the variance of the TBE time series explained by the population growth can be determined. Demographics of the Austrian human population. Left axis: population in million (black line), right axis: birth rate, mortality rate and net migration rate in 1,000 per year (red lines). Noteworthy is the net migration, which is exclusively responsible for the population growth. 
Period 1979–2018 Demographics of the human population Since the Austrian population has risen sharply in recent years, the demographic development must be taken into account. It is described by the birth rate, the mortality rate, and the net migration rate. Figure 1 shows the official demographic data [32]. According to this, the human population increased by more than 1.2 million in the period 1979–2018. This is mainly due to the net migration rate, i.e. the difference between annual immigration and emigration. Four major net migration (immigration) events occurred within the 40-year period 1979–2018. The 2 most outstanding immigration events were caused by the Yugoslavian Civil War in 1991 and the Syrian Civil War in 2015. Net migration peaks were also observed 1981 after the suppression of the anti-communist social movement Solidarnosc in Poland and during 2001–2005 after the labor market has been opened further [33]. The difference between the birth rate and the mortality rate, the reproduction rate, is on average just 3,000 people per year. This is one order of magnitude lower than the mean net migration rate of about 30,000 people per year. Here, the total population NTOT and the net migration NMIG were used as predictors and listed in Table 1. Table 1 Input data and model output for the period 1979–2018: total human population NTOT in 106, annual human net migration NMIG in 104 per year, reported tick-borne encephalitis (TBE) cases NTBE, vaccination coverage VC, four-year average of log-transformed Scandinavian indices SI, beech fructification index 2 years prior Fyear−2 and annual sunshine duration in hours SD Human TBE time series The official human TBE time series in Austria for the period 1979–2018 is analyzed. This 40-year time series was documented by the Department of Virology, Medical University of Vienna, acting as the national reference laboratory for TBE virus infections. Only hospitalized patients with a serologically confirmed recent infection with TBE virus were counted as cases and published together with the vaccination coverage of the Austrian population [34]. In Austria, TBE is a notifiable disease and thus accuracy of the records is very high. Using the official vaccination coverage a hypothetical time series without TBE vaccination was estimated. $$ N = \frac{1}{1 -{VC}} \; N_{{TBE}} $$ Here, NTBE are the annual TBE cases documented by the national reference laboratory for TBE virus infections, VC is the official vaccination coverage within the interval [0, 1], and N is the hypothetical TBE cases without vaccination. Values of NTBE, VC and N are listed in Table 1. In the following, only the hypothetical TBE cases N are used to investigate the natural trend and the oscillations in the Austrian TBE time series. Climate teleconnection To describe the decadal changes of the large-scale climate in Central Europe several teleconnection indices are available. Here, the Scandinavian index (SCAND) developed by Barnstone and Livezey [35] was used, which the authors originally called the Eurasia-1 pattern. With the help of SCAND an atmospheric circulation pattern, i.e. the spatial arrangement of northern hemisphere high- and low-pressure systems, is characterized by a single index value. Like ENSO and NAO, the SCAND is therefore well suited for investigating correlations of large-scale atmospheric circulation patterns with the cases of vector-borne diseases. 
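Before turning to the SCAND data themselves, the vaccination correction defined above can be illustrated with a minimal sketch, assuming Python with NumPy; the example figures are the 2018 values quoted in the Background (154 reported cases, roughly 81% coverage), not the full Table 1 series.

```python
import numpy as np

def hypothetical_cases(n_tbe, vc):
    """Hypothetical TBE cases without vaccination: N = N_TBE / (1 - VC).

    n_tbe : reported (hospitalized, serologically confirmed) cases
    vc    : vaccination coverage in the interval [0, 1)
    """
    n_tbe = np.asarray(n_tbe, dtype=float)
    vc = np.asarray(vc, dtype=float)
    return n_tbe / (1.0 - vc)

# Illustrative check with the 2018 figures quoted in the Background:
# 154 reported cases and about 81% coverage give roughly 800 hypothetical cases.
print(hypothetical_cases(154, 0.81))  # ~810.5
```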
A time series of the monthly SCAND is provided for the period 1950 to present on the Climate Prediction Center (CPC) website of the National Oceanic and Atmospheric Administration [36]. The SCAND describes an atmospheric circulation center over Scandinavia, with weaker centers of opposite sign over western Europe and eastern Russia/western Mongolia. Positive values are associated with below-average temperatures across western Europe and central Russia. It is also associated with above-average precipitation across central and southern Europe and below-average precipitation across Scandinavia (Fig. 2). For the TBE endemic areas in Austria and southern Germany, this means that high SCAND values represent cooler and rainier periods. In turn, low SCAND values describe above-average warm and dry periods. Maps showing correlation between the high July values of the Scandinavian index (SCAND) and the monthly climate anomalies during June, July, and August. Maps of temperature and precipitation anomalies (departures from mean in percent) were adapted from NOAA [36] To determine the optimal correlation between TBE and SCAND, so-called cross-correlation maps (CCMs) were used. With CCMs optimal time lags and accumulation periods of predictors can be determined [37]. As known from vector biology, the best correlation between arthropod vectors or disease cases caused by pathogens they transmit and environmental temperature is obtained when temperature data were averaged over the period of the life cycle of the vector. For example, the life cycle of Culex pipiens, the vector of West Nile virus, is about 2–3 weeks during the mosquito activity period. With CCMs, 18 days were determined as the optimal averaging period [38]. If one wants to describe the dynamics of the Bluetonge virus vector Culicoides obsoletus by temperature, the somewhat longer averaging period of 37 days was estimated according the typical length of the life cycle of biting midges [39]. For TBE and its main vector I. ricnus the averaging period of the climate predictor SCAND should therefore be 2–6 years [40], 3–6 years [41], or 4–6 years [42]. In fact, the optimal averaging period determined by the application of CCMs is 4 years. Ideally, predictors should be normally distributed, which frequently can be achieved with a log-transformation. Therefore, the Scandinavian index SI used here is derived from the log-transformed monthly values of SCAND [36], which were averaged over 4 years. The SI values are listed in Table 1. Beech fructification The natural transmission cycle of the TBE virus depends on the availability of suitable hosts for the main vector I. ricinus. Preferred hosts of I. ricinus larvae are among others small rodents [43]. As no observations of small rodents are available for the long time series investigated in this study, the fructification index of the European beech (Fagus sylvatica) was applied for indicating the rodent density. Beechnuts are a basic food source for small rodents resulting in population peaks one year after mast seeding [44, 45]. Higher host densities cause higher densities of larvae of the TBE vectors I. ricinus. Two year after mast seeding, higher densities of I. ricinus nymphs are observed [28], which may be responsible for peaks in human TBE time series. Since the mast seeding is continental-scale synchronized [46], only a single time series, the beech fructification index published by Konnert et al. [47], is used here. 
This index is defined as the annual seed production and is divided in the following four classes: (0) absent, i.e. no fructification, (1) scarce, i.e. sporadic occurrence of fructification, but not noticeable at first sight, (2) common, i.e. clearly visible fructification, and (3) abundant, i.e. full fructification, also known as mast seeding. The values of the beech fructification indices 2 years prior Fyear−2 are listed in Table 1. Annual sunshine duration It is hypothesized that human recreational behavior affects the number of TBE cases. For example, higher outdoor activities increase exposure and they should therefore increase the TBE cases. Since there is no established predictor for human outdoor activities, the annual sunshine duration in hours is used as such here. With large-scale considerations in focus, this should be representative for Central Europe. Of all meteorological services in Central Europe, only the German Weather Service offers such open data [48]. This is averaged over the entire region of Germany and should also be representative of the smaller neighboring countries such as Austria. Thus, in addition to the averaged log-transformed Scandinavian index SI, and the beech fructification index 2 years prior Fyear−2, the annual sunshine duration SD is the third large-scale predictor used for the analysis of TBE time series (Table 1). All statistical analysis and modeling was done with the Language and Environment for Statistical Computing R [49]. GLMs were used to describe relationships. In the course of this, an overdispersion was observed since the dispersion parameter was generally greater than 1. This overdispersion was taken into account by using negative binomial models implemented with the R package mass [50]. To assess the necessary conditions for the application of GLMs, the predictors used (total population, net migration, Scandinavian index, beech fructification index, and annual sunshine duration) were tested for collinearity. This test is commonly used to select from a large number of predictor variables those that are most strongly correlated with the target variable, here the TBE cases. In addition, the so chosen predictor variables should be only weakly correlated with each other. Otherwise, their number can be further reduced. Here, however, a different approach is pursued: a small number of biologically well interpretable predictors are given. The check for collinearities (Fig. S1) therefore has only a control function, all correlations between the individual predictors are well below the threshold of |R|=0.7 [51]. With the R package psych [52] additional model diagnoses were created. These include examining the model errors for randomness (residual vs. fitted plot, scale location plot) and normal distribution (normal Q–Q plot), both of which are prerequisites for the applicability of GLMs [53]. Cook's distance (residual vs. leverage plot) was used to test which TBE observations have the greatest influence on the regression. Outliers can be defined and eliminated if necessary [53], but this was not applied here. While the statistical methods described above are intended to ensure the reliability of the selected model, it is particularly interesting how well the annual TBE fluctuations are described by the chosen predictor variables. Therefore, the models were verified by the root-mean-square error (RMSE) and the explained variance (R2). 
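As a rough illustration of the workflow described above — fitting a negative binomial GLM and comparing the spectra of observed and predicted cases — the following sketch uses Python with statsmodels and SciPy; the original analysis was carried out in R with the MASS package, and the input arrays here are hypothetical placeholders rather than the values of Table 1.

```python
# Minimal sketch of the modelling steps described above, assuming Python with
# statsmodels and SciPy; the original analysis used R (MASS::glm.nb).
# The arrays below are hypothetical placeholders, not the values of Table 1.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.signal import periodogram

rng = np.random.default_rng(0)
years = np.arange(1979, 2019)
df = pd.DataFrame({
    "N":    rng.integers(100, 700, size=years.size),  # hypothetical TBE cases
    "NTOT": np.linspace(7.5, 8.8, years.size),        # population in 10^6
    "NMIG": rng.normal(3.0, 2.0, size=years.size),    # net migration in 10^4
    "SI":   rng.normal(0.0, 0.3, size=years.size),    # 4-yr mean log SCAND
    "F2":   rng.integers(0, 4, size=years.size),      # fructification 2 yr prior
})

# Negative binomial GLM; the fructification index enters as a factor
X = pd.get_dummies(df[["NTOT", "NMIG", "SI", "F2"]], columns=["F2"],
                   drop_first=True, dtype=float)
X = sm.add_constant(X)
fit = sm.GLM(df["N"], X, family=sm.families.NegativeBinomial()).fit()

# Verification: RMSE in units of TBE cases and explained variance R^2
resid = df["N"] - fit.fittedvalues
rmse = np.sqrt(np.mean(resid**2))
r2 = 1.0 - resid.var() / df["N"].var()

# Power spectra of the mean-removed observed and predicted series;
# dominant periods correspond to 1 / frequency at the spectral peaks.
freq_obs, pow_obs = periodogram(df["N"] - df["N"].mean())
freq_mod, pow_mod = periodogram(fit.fittedvalues - fit.fittedvalues.mean())
print(round(rmse, 1), round(r2, 2))
```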
The advantage of these verification measures is that with the RMSE the error is specified in units of the target variable, i.e. the TBE cases, and R2 is well known. If different GLMs are developed, the best model can be chosen with the help of the Akaike information criterion (AIC). The AIC estimates the quality of each model, relative to each of the other models. For better interpretability, however, the adjusted R2 (R\(^{2}_{{adj}}\)) is given here. In general, the use of additional predictors in GLMs leads to higher R2 values, even if they do not make a significant contribution to the model. With R\(^{2}_{{adj}}\) this is considered, which also determines model performance, relative to each of the other models [53]. A key objective of this study is to describe the causes of the trend as well as the low-frequency and the high-frequency oscillations of the TBE cases. For this purpose, the power spectra of both the observed and the predicted TBE cases are calculated as described e.g. by [54]. The predictors for the GLMs should be selected so that the power spectra of the predictions match those of the observations as closely as possible. In 4 steps GLMs (negative binomial regression models) were developed, which demonstrate the influence of the selected predictors on the model performance as well as on the power spectrum of the predicted TBE time series. Thus, a final model was stepwise developed, which explains more than two-thirds of the variance in the observations. The first GLM uses only one predictive variable, the total population NTOT. Figure 3(GLM1) shows the TBE cases without vaccination N (grey bars) with the predicted TBE cases NGLM1 (red line) representing a good approximation of the linear trend calculated from the observations N (black line). A rank-order correlation coefficient after Spearman of R=0.29 between the total population and the observed TBE cases was estimated (Fig. S1). The corresponding power spectrum for the observations shows 2 maxima. The first one is located at a period of 3 years (high-frequency oscillations), the second one is located at a period of 10 years (low-frequency oscillations). The power spectrum of the model shows no maximum but only red noise, as expected for the trend. Observed (grey bars) and predicted (red lines) Austrian tick-borne encephalitis series (left) and corresponding power spectra (right). GLM1: model using exclusively the human population NTOT as predictor variable resulting in a good approximation of the linear trend depicted by the black line. GLM2: model extended by the predictors net migration rate NMIG and Scandinavian index SI to explain low-frequency oscillations. GLM3: model extended by the beech fructification index 2 years prior Fyear−2 to explain also high-frequency oscillations. GLM4: best performance model extended by the annual sunshine duration SD. For each model the verification measures root-mean-square error (RMSE) and explained variance R2 (with R\(^{2}_{{adj}}\) in brackets) are given. Period 1979–2018 The second GLM was extended to explain the low-frequency oscillations in addition to the trend. Additional predictors used were the transformed Scandinavian index SI and the annual net migration NMIG. Both contribute to the 10-year TBE oscillation. The rank-order correlation between the TBE cases and the SI was R=0.52 (Fig. S1). In Austria, periods of high SI are related to relatively cool summers with above-average precipitation (Fig. 2). 
Another highly significant contribution to explain the TBE cases is provided by the Austrian net migration rate, which can be considered as an socio-economic predictor. The net migration NMIG is negatively correlated with the numbers of the TBE cases (R=-0.14). This suggests that new arrivals are less exposed to TBE virus infections, although they are responsible for the long-term population growth and thus also for the long-term increase in TBE cases. Since there is no study on this topic so far, it is hypothesized that the majority of immigrants from abroad initially settle in big cities where they are less exposed to TBE foci. The model therefore reduces the overestimated TBE cases during random net migration events. Figure 3(GLM2) shows the extended GLM verified with an error of RMSE=122 TBE cases and an explained variance of R2=0.44 (R\(^{2}_{{adj}}\)=0.48). The power spectrum clearly shows that the 10-year oscillation of the observed TBE time series is well associated with the 2 predictors SI and NMIG. The third GLM considers the beech fructification index 2 years prior Fyear−2 as an additional predictor for the high-frequency oscillations. The fact that the mast seeding 2 years prior has a high influence on the density of the main TBE vectors I. ricinus has already been shown by Brugger et al. [28]. Here it is demonstrated that this also applies to the TBE cases (R=0.38). Figure 3(GLM3) shows the model extended by the predictor beech fructification index Fyear−2. The power spectrum clearly shows that the fructification index contributes to the explanation of the high-frequency oscillations, although the power is slightly too low compared to the spectrum of the observed TBE cases. The period of the low-frequency oscillations, on the other hand, fits those of the observed TBE time series very well. The explained variance increases to R2=0.64 (R\(^{2}_{{adj}}\)=0.66), resulting in a further reduction of the error of RMSE=98 cases. Considering the uncertainties in the observed TBE cases and the fact that time series of disease cases are generally difficult to explain, this result can be classified as very good. The parameter estimates and the significance levels of the predictors of this GLM are summarized in Table 2, where the discrete values of the fructification index are modeled as a factor. The factor Fyear−2=0 is set as default and the factors Fyear−2=1, 2 or 3 are considered by different parameter estimates. Thus, the GLM requires only the four predictors NTOT,NMIG, SI and Fyear−2, all of which contribute to the model very significantly (p <0.001). Table 2 Summary of GLM3, a negative binomial model for tick-borne encephalitis cases without vaccination (hypothetical cases) The fourth GLM was extended by a predictor for the outdoor activity of humans. So far, no influence of human recreational behavior on the annual TBE time series has been considered, which should lead to a further improvement of the model. A climatic parameter that should have a plausible influence on an increased outdoor activity of humans is annual sunshine duration SD in hours. It is directly (without a lag time) correlated with the TBE cases (R=0.27). Thus, the correlation between SD and N is similar to the correlation between Fyear−2 und N. Since there is no appreciable collinearity between SD and Fyear−2 (Fig. S1), the consideration of SD results in an increased model performance. Figure 3(GLM4) shows the results of the GLM with the additional predictor SD. 
It explains the variations of the TBE cases even better, namely with R2=0.70 (R\(^{2}_{{adj}}\)=0.70), which leads to a further reduction of the error of RMSE=89 cases. The parameter estimates and the significance levels of the predictors of the final model are summarized in Table 3. Again, all predictors contribute significantly to model performance, with most p-values being very significant. Of course, the relative contribution of the fructification index decreases at the expense of sunshine duration, as both predictor variables are responsible for high-frequency oscillations. Statistical features for the final GLM as described in "Statistical modelling" section are provided in Fig. S2, the AIC values for the stepwise developed models in Table S1. The climate of the TBE endemic areas in Austria and neighboring countries is still characterized by the boreal coniferous climate of the northern hemisphere in the 1960s. According to the well-known Köppen-Geiger climate classification it is known as Dfb climate, a boreal climate with rain at all seasons and warm summers [55]. Until today, this boreal coniferous climate has almost completely retreated from the Alps and has been replaced by a warm temperate Cfb climate [56]. It is called, according to the prevailing tree species in natural forests, beech climate. In order to take this into account, spruce monocultures threatened by climate change are being gradually replaced by deciduous or mixed forests across the region of the greater Alps. These more species-rich forests also provide better living conditions for the most important TBE virus vector I. ricinus, resulting in higher I. ricinus densities [57, 58]. Beech fructification is a predictor of the intensity of the natural TBE virus transmission cycle between small mammals and ticks, with a high beech fructification index increasing the population density of small mammals and of I. ricinus larvae one year thereafter. One more year later, significantly higher densities of questing I. ricinus nymphs are responsible for the more frequent transmission of the TBE virus to humans [28]. However, effects on TBE cases due to these changes in forestry were only visible in the last decade, where higher frequencies of years with full fructification of beech (mast seeding) are responsible for several peaks in the TBE time series. A total of 3 of the 5 years with full fructification occurred after 2010 (Table 1). The effects of climate change are therefore not visible through a direct correlation of the TBE cases with rising temperatures, but indirectly via the increased frequency of mast seeding (Fyear−2). In combination with the rapidly increasing human population (N TOT, N MIG) and a slight decline in vaccination coverage (VC), this explains the major effects of rising numbers of Austrian TBE cases observed after 1995 under real conditions with vaccination (see NTBE in Table 1). Additionally, 10-year oscillations are associated with the large-scale distribution of atmospheric high and low pressure systems (SI) resulting in the third model version GLM3, which explains 64% of the variation of Austrian TBE cases. In Austria, periods of high SI are related to relatively cool summers with above-average precipitation (Fig. 2). This may influence the density of the TBE vector I. ricinus, which does not like hot summers and extreme drought. Remarkably, oscillations with periods of 10 and 3–4 years were also observed for the TBE vector Ixodes persulcatus in Russia. 
Years with high tick densities follow frequently those with the peak population density of small mammals [41]. However, no direct correlation between human TBE cases and the TBE vectors I. ricinus and I. persulcatus has yet been published. Similar ecological connections are also known from the oak forests of eastern North America [44]. With the additional predictor sunshine duration SD in GLM4 the explained variance continues to increase. The hypothesis was that the exposure of the population increases with an increasing number of hours of sunshine. This hypothesis seems confirmed, as the explained variance in GLM4 increased to 70%. But that must not hide the fact that further studies on the contribution of human behavior to the cases of TBE are needed. It should be noted that GLM4 cannot be used for TBE forecasts because neither social behavior itself (e.g. during the unexpected SARS-CoV-2 pandemic 2020) nor SD can be estimated for the next 1–2 years. There is also a fraction of unexplained variance of 30%, which needs further research. In particular, rare extreme events are difficult to detect by statistics, because low case numbers result in low significance. For example, Dautel et al. [59] have shown for Germany that extreme low temperatures in January and February 2012, in combination with the lack of a protective snow cover, led to decreasing numbers of I. ricinus nymphs as well as very low numbers of human TBE cases in the same year (also recognizable in the Austrian TBE time series). The inclusion of this and similar field studies could help to improve future predictions. Another aspect that has not been considered so far concerns the gender and age distribution of TBE cases within the population. The age distribution of the TBE cases shows a maximum at 55 years [3], with generally more men being infected with the TBE virus [60]. It has not yet been investigated whether the increasing number of TBE cases are related to the aging society. It should also be noted that alternative predictors for the explanation of TBE cases are mentioned in the literature. In Slovenia, for example, a correlation was found between TBE cases and roe deer density 3 years ago [61]. A high roe deer (Capreolus capreolus) density is interpreted as a high host density for the TBE vector I. ricinus and consequently responsible for high TBE cases. For Austria, the results of Knap and Avs̆ic̆-Z̆upanc [61] could be confirmed, but the explained variance of the GLM4 decreased with the use of roe deer density instead of SI from 70% to 58%. This is because the correlation between the roe deer density and the TBE cases is largely due to the concurrent trends. The estimation of unknown wildlife densities from hunting index generally leads to some uncertainties. In addition, Austrian hunting data [62] from the statistical database STATcube provided by Statistics Austria [63] are not available in near real-time, as the hunting year differs from the calendar year. It covers the period from April 1 to March 31 of the following year. Wildlife data are therefore generally less well-suited as predictors than climate data, especially with regard to a possible forecast of the next year's TBE cases. The TBE models presented here confirm the work of Hallett et al. [13] that large-scale indices can predict ecological processes very well, probably better than local weather and climate parameters. 
As the beech fructification indices of the years 2017 and 2018 are responsible for the TBE cases in 2019 and 2020, GLM3 can also be applied to forecast the TBE cases of the next 2 years. This is possible because for the forecast mainly the high-frequency oscillations caused by Fyear−2 are interesting. The predictors relevant to the trend and the low-frequency oscillations, on the other hand, can be extrapolated by simple methods such as persistence or linear interpolation. GLM3 is also applicable to the neighboring countries such as Germany or Switzerland using the same predictors, since the Scandinavian index is representative for all of Central Europe and also the beech fructification is large-scale synchronized. This has be examined in a follow-up study that was carried out during the review process of the paper presented here [64]. The verification with independent TBE cases from 2019 has demonstrated the good performance of the forecasts. Finally, it should be noted that the findings presented here can subsequently be used to create process models of the type susceptible-infected-recovered (SIR). These models represent the highest stage of development in epidemiological modelling as, unlike statistical models, they map the dynamics of population health based on the underlying processes of disease transmission. A first SIR model on the dynamics of the Austrian human TBE cases was presented by Rubel [65]. TBE: GLM: Generalized linear model ENSO: El Niño Southern Oscillation NAO: SCAND: Scandinavian index Transformed Scandinavian index VC: Vaccination coverage Cross- correlation map RMSE: Root-mean-square error AIC: Akaike information criterion SIR model: Susceptible-infected-recovered model Kahl O, Pogodina VV, Poponnikova T, Süss J, Zlobin VI. A short history of TBE In: Dobler G, Erber W, Bröker M, Schmitt H-J, editors. The TBE Book, 2. edn. Chap. 1. Singapore: Global Health Press: 2019. p. 11–18. Lindquist L, Vapalahti O. Tick-borne encephalitis. Lancet. 2008; 371:1861–71. Kaiser R. The clinical and epidemiological profile of tick-borne encephalitis in southern Germany 1994–98. Brain. 1999; 122:2067–78. Heinz FX, Holzmann H, Essl A, Kundi M. Field effectiveness of vaccination against tick-borne encephalitis. Vaccine. 2007; 25:7559–67. Heinz FX, Stiasny K, Holzmann H, Grgic-Vitek M, Kriz B, Essl A, Kundi M. Vaccination and tick-borne encephalitis, Central Europe. Emerg Infect Dis. 2013; 19:69–76. Süss J. Tick-borne encephalitis 2010: Epidemiology, risk areas, and virus strains in Europe and Asia - An overview. Ticks Tick Borne Dis. 2011; 2:2–15. Robert Koch-Institut. SurvStat@RKI 2.0. 2019. https://survstat.rki.de (visited on February 7, 2019). Bundesamt für Gesundheit. Frühsommer-Meningoenzephalitis (FSME): Ausweitung der Risikogebiete (in German). BAG-Bulletin. 2019; 6:12–4. Kunz C. TBE vaccination and the Austrian experience. Vaccine. 2003; 21(S1):50–5. Kunze U, Böhm G. Frühsommer-Meningo-Enzephalitis (FSME) und FSME-Schutzimpfung in Österreich: Update 2014. Wien Med Wochenschr. 2015; 165:290–5. Dobler G, Erber W, Bröker M, Schmitt H-J. The TBE Book, 2nd edn.Singapore: Global Health Press; 2019. Levin SA. The problem of pattern and scale in ecology. Ecology. 1992; 73:1943–67. Hallett TB, Coulson T, Pilkington JG, Clutton-Brock TH, Pemberton JM, Grenfell BT. Why large-scale climate indices seem to predict ecological processes better than local weather. Nature. 2004; 430:71–5. Zeman P, Benes C. 
Spatial distribution of a population at risk: An important factor for understanding the recent rise in tick-borne diseases (Lyme borreliosis and tick-borne encephalitis in the Czech Republic). Ticks Tick Borne Dis. 2013; 4:522–30. Daniel M, Danielová V, Fialová A, Malý M, Kŕiz̆ B, Nuttall PA. Increased relative risk of tick-borne encephalitis in warmer weather. Front Cell Infect Microbiol. 2007; 8:90. Gray JS, Dautel H, Estrada-Peña A, Kahl O, Lindgren E. Effects of climate change on ticks and tick-borne diseases in Europe. Interdiscip Perspect Inf Dis. 2009; id=593232. Lindgren E, Gustafson R. Tick-borne encephalitis in Sweden and climate change. The Lancet. 2001; 358:16–8. Sumilo D, Asokliene L, Bormane A, Vasilenko V, Golovljova I, Randolph SE. Climate change cannot explain the upsurge of tick-borne encephalitis in the Baltics. PLoS ONE. 2007; 2:500. Zeman P. Cyclic patterns in the central European tick-borne encephalitis incidence series. Epidemiol Infect. 2017; 145:358–67. Stenseth NC, Mysterud A, Ottersen G, Hurrell JW, Chan K-S, Lima M. Ecological effects of climate fluctuations. Science. 2002; 297:92–6. Kovats RS, Bouma MJ, Hajat S, Worrall E, Haines A. El Niño and health. The Lancet. 2007; 362:1481–9. Morand S, Owers KA, Waret-Szkuta A, McIntyre KM, Baylis M. Climate variability and outbreaks of infectious diseases in Europe. Sci Rep. 2013; 3:1774. Hubálek Z. North Atlantic weather oscillation and human infectious diseases in the Czech Republic, 1951–2003. Europ J Epidemiol. 2005; 20:263–70. Palo RT. Tick-borne encephalitis transmission risk: Its dependence on host population dynamics and climate effects. Vector-borne Zoon Dis. 2014; 14:346–52. Majeed H, Moore GWK. Influence of the Scandinavian climate pattern on the UK asthma mortality: a time series and geospatial study. BMJ Open. 2018; 8:020822. Clement J, Maes P, van Ypersele de Strihou C, van der Groen G, Barrios JM, Verstraeten WW, van Ranst M. Beechnuts and outbreaks of Nephropathia epidemica (NE): of mast, mice and men. Nephrol Dial Transplant. 2010; 25:1740–6. Galfsky D, Król N, Pfeffer M, Obiegala A. Long-term trends of tick-borne pathogens in regard to small mammal and tick populations from Saxony, Germany. Parasit Vectors. 2019; 12(1). https://doi.org/10.1186/s13071-019-3382-2. Brugger K, Walter M, Chitimia-Dobler L, Dobler G, Rubel F. Forecasting next season's Ixodes ricinus nymphal density: the example of southern Germany 2018. Exp Appl Acarol. 2018; 75:281–8. Randolph SE, Asokliene L, Avsic-Zupanc T, Bormane A, Burri C, Gern L, Golovljova I, Hubalek Z, Knap N, Kondrusik M, Kupca A, Pejcoch M, Vasilenko V, Z̆ygutiene M. Variable spikes in tick-borne encephalitis incidence in 2006 independent of variable tick abundance but related to weather. Parasit Vectors. 2008; 1:44. Whittingham MJ, Stephens PA, Bradbury RB, Freckleton RP. Why do we still use stepwise modelling in ecology and behaviour?J Anim Ecol. 2006; 75:1182–9. Haemig PD, De Luna SS, Grafström A, Lithner S, Lundkvist A, Waldenström J, Kindberg J, Stedt J, Olsén B. Forecasting risk of tick-borne encephalitis (TBE): Using data from wildlife and climate to predict next year's number of human victims. Scand J Infect Dis. 2011; 43:366–72. Statistics Austria. Bevölkerungsstand und -veränderung (in German). Vienna; 2018. https://www.statistik.at (visited on November 23, 2018). Bischof G, Rupnow D. Migration in Austria. Innsbruck: Innsbruck University Press; 2017. Heinz FX, Stiasny K, Holzmann H, Kundi M, Sixl W, Wenk W, Kainz W, Essl A, Kunz C. 
Emergence of tick-borne encephalitis in new endemic areas in Austria: 42 years of surveillance. Euro Surveill. 2015; 20(13):21077. Barnstone AG, Livezey RE. Classification, seasonality and persistence of low-frequency atmospheric circulation patterns. Mon Wea Rev. 1987; 115:1083–126. NOAA. Northern Hemisphere Teleconnection Patterns, Scandinavia (SCAND). Maryland, USA: Climate Prediction Center of the National Oceanic and Atmospheric Administration (NOAA); 2019. http://www.cpc.ncep.noaa.gov/data/teledoc/telecontents.shtml. Brugger K, Walter M, Chitimia-Dobler L, Dobler G, Rubel F. Seasonal cycles of the TBE and Lyme borreliosis vector Ixodes ricinus modelled with time-lagged and interval-averaged predictors. Exp Appl Acarol. 2017; 73:439–50. Lebl K, Brugger K, Rubel F. Predicting Culex pipiens/restuans population dynamics by interval lagged weather data. Paraisit Vectors. 2013; 6:129. Brugger K, Rubel F. Bluetongue disease risk assessment based on observed and projected Culicoides obsoletus spp. vector densities. PLoS ONE. 2013; 8(4):60330. Gray JS. The development and seasonal activity of the tick Ixodes ricinus: a vector of Lyme borreliosis. Rev Med Vet Entomol. 1991; 79:323–33. Balashov YS. Demography and population models of ticks of the genus Ixodes with long-term life cycles. Entomol Rev. 2012; 92:323–33. Kahl O, Petney TN. Biologie und Ökologie des wichtigsten FSME-Virus-Überträgers in Mitteleuropa, der Zecke Ixodes ricinus (in German) In: Rubel F, Schiffner-Rohe J, editors. FSME in Deutschland: Stand der Wissenschaft. Baden-Baden: Deutscher Wissenschaftsverlag: 2019. p. 23–38. Gray JS, Kahl O, Lane RS, Levind ML, Tsaoe JI. Diapause in ticks of the medically important Ixodes ricinus species complex. Ticks Tick-Borne Dis. 2016; 7:992–1003. https://doi.org/10.1016/j.ttbdis.2016.05.006. Ostfeld RS, Jones CG, Wolff JO. Of mice and mast: Ecological connections in eastern deciduous forests. BioScience. 1996; 46:323–30. https://doi.org/10.2307/1312946. Clement J, Vercauteren J, Verstraeten WW, Ducoffre G, Barrios JM, Vandamme A-M, Maes P, Van Ranst M. Relating increasing hantavirus incidences to the changing climate: the mast connection. Int J Health Geogr. 2009; 8:1. Nussbaumer A, Waldner P, Etzold S, Gessler A, Benham S, Thomsen IM, Jørgensen BB, Timmermann V, Verstraeten A, Sioen G, Rautio P, Ukonmaanaho L, Skudnik M, Apuhtin V, Braun S, Wauer A. Patterns of mast fruiting of common beech, sessile and common oak, Norway spruce and Scots pine in Central and Northern Europe. Forest Ecol Manage. 2016; 363:237–51. Konnert M, Schneck D, Zollner A. Blühen und Fruktifizieren unserer Waldbäume in den letzten 60 Jahren. LWF Wissen. 2016; 74:37–45. Deutscher Wetterdienst. Open Data Provided by the Climate Data Center of the German Weather Service. Offenbach am Main. 2019. https://opendata.dwd.de/climate_environment/CDC. Last access 11 June 2019. R Development Core Team. R: A Language and Environment for Statistical Computing, Version 3.3.2. Vienna, Austria: R Foundation for Statistical Computing; 2016. ISBN 3-900051-07-0, http://www.R-project.org. Ripley B, Venables B, Bates DM, Hornik K, Gebhardt A, Firth D. Mass: Support Functions and Datasets for Venables and Ripley's MASS (Modern Applied Statistics with S, 4th Edition, 2002). 2019. R package version 7.3-51.4. Dormann CF, Elith J, Bacher S, Buchmann C, Carl G, Carré G, Marquéz JRG, Gruber B, Lafourcade B, Leitäo PJ, Münkemüller T, McClean C, Osborne PE, Reineking B, Schröder B, Skidmore AK, Zurell D, Lautenbach S. 
Collinearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography. 2013; 36:27–46. Revelle W. Psych: Procedures for Psychological, Psychometric, and Personality Research. Evanston, Illinois: Northwestern University; 2018. https://CRAN.R-project.org/package=psych. R package version 1.8.12. Zuur AF, Ieno EN, Walker NJ, Saveliev AA, Smith GM. Mixed Effects Models and Extensions in Ecology with R. New York: Springer; 2009. Stull RB. An Introduction to Boundary Layer Meteorology. Dordrecht: Kluwer Academic Publishers; 1991. Kottek M, Grieser J, Beck C, Rudolf B, Rubel F. World map of the Köppen-Geiger climate classification updated. Meteorol Z. 2006; 15:259–63. Rubel F, Brugger K, Haslinger K, Auer I. The climate of the European Alps: Shift of very high resolution Köppen-Geiger climate zones 1800–2100. Meteorol Z. 2017; 26:115–25. Boehnke D, Brugger K, Pfäffle M, Sebastian P, Norra S, Petney T, Oehme R, Littwin N, Lebl K, Raith J, Walter M, Gebhardt R, Rubel F. Estimating Ixodes ricinus densities on the landscape scale. Int J Health Geogr. 2015; 14:23. Brugger K, Boehnke D, Petney T, Dobler G, Pfeffer M, Silaghi C, Schaub GA, Pinior B, Dautel H, Kahl O, Pfister K, Süss J, Rubel F. A density map of the tick-borne encephalitis and Lyme borreliosis vector Ixodes ricinus (Acari: Ixodidae) for Germany. J Med Entomol. 2016; 53:1292–302. Dautel H, Kämmer D, Kahl O. How an extreme weather spell in winter can influence vector tick abundance and tick-borne disease incidence In: Braks MAH, van Wieren SE, Takken W, Sprong H, editors. Ecology and Prevention of Lyme Borreliosis. Ecology and Control of Vector-borne Diseases, vol. 4. Wageningen: Wageningen Academic Publishers: 2016. p. 335–49. Kiffner C, Zucchini W, Schomaker P, Vor T, Hagedorn P, Niedrig M, Rühe F. Determinants of tick-borne encephalitis in counties of southern Germany, 2001-2008. Int J Health Geogr. 2010; 9:42. Knap N, Avs̆ic̆-Z̆upanc T. Correlation of TBE incidence with red deer and roe deer abundance in Slovenia. PLoS ONE. 2013; 8(6):66380. Reimoser S, Reimoser F. Habitat quality & hunting bag: culling densities of different wildlife species in Austria since 1955. Part 1: Roe deer (Capreolus capreolus) (in German). Weidwerk. 2005; 6/2005:14–5. Statistics Austria. Statistical Database STATcube. 2020. https://www.statistik.at/web_en/publications_services/statcube/index.html. Last access 04 Feb 2020. Rubel F, Brugger K. Tick-borne encephalitis incidence forecasts for Austria, Germany, and Switzerland. Ticks Tick Borne Dis. 2020; 11:101437. https://doi.org/10.1016/j.ttbdis.2020.101437. Rubel F. Erklärende Modelle zur Dynamik der FSME-Erkrankungen In: Rubel F, Schiffner-Rohe J, editors. FSME in Deutschland: Stand der Wissenschaft. Baden-Baden: Deutscher Wissenschafts-Verlag: 2019. p. 243–60. The authors are very grateful to Olaf Kahl for critical reading the manuscript and his valuable hints. No funding was obtained for this study. Unit for Veterinary Public Health and Epidemiology, University of Veterinary Medicine Vienna, Austria, Veterinaerplatz 1, Vienna, 1210, Austria Franz Rubel, Melanie Walter, Janna R. Vogelgesang & Katharina Brugger Franz Rubel Melanie Walter Janna R. Vogelgesang Katharina Brugger FR, MW, JRV, and KB contributed to the study conception and design. Material preparation, data collection and analysis were performed by FR and KB. The first draft of the manuscript was written by FR and all authors commented on previous versions of the manuscript. 
All authors read and approved the final manuscript. Correspondence to Franz Rubel. FR received speaker honoraria from Pfizer Deutschland GmbH. KB received unrestricted research grants from Pfizer Deutschland GmbH. All authors received travel grants and author honoraria from Pfizer Deutschland GmbH. Additional file S1. Frequency distributions (red bars) of the Austrian TBE incidence without vaccination N and the predictors used in the generalized linear models (GLMs): total human population NTOT, net migration NMIG, transformed scandinavian index SI, beech fructification index 2 years prior Fyear−2 and, annual sunshine duration in hours SD. the following rank-order correlations with N have been determined: r=0.29 (NTOT), r=-0.14 (NMIG), r=0.52 (SI), r=0.38 (Fyear−2), and r=0.27 (SD). maximum collinearity of r=0.59 (NTOT vs. NMIG). Additional file S2. Statistical features of GLM4. Additional file S3. Akaike information criterion (AIC) and explained variance (R\(^{2}_{{adj}}\)) values for stepwise developed models GLM1–GLM4. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Rubel, F., Walter, M., Vogelgesang, J.R. et al. Tick-borne encephalitis (TBE) cases are not random: explaining trend, low- and high-frequency oscillations based on the Austrian TBE time series. BMC Infect Dis 20, 448 (2020). https://doi.org/10.1186/s12879-020-05156-7 Vector-borne disease Mast seeding Ixodes ricinus Parasitological diseases
Decoherence effects in the quantum qubit flip game using Markovian approximation Piotr Gawron, Dariusz Kurzyk & Łukasz Pawela Quantum Information Processing volume 13, pages 665–682 (2014) We are considering a quantum version of the penny flip game, whose implementation is influenced by the environment that causes decoherence of the system. In order to model the decoherence, we assume a Markovian approximation of open quantum system dynamics. We focus our attention on the phase damping, amplitude damping and amplitude raising channels. Our results show that the Pauli strategy is no longer a Nash equilibrium under decoherence. We attempt to optimize the players' control pulses in the aforementioned setup to allow them to achieve a higher probability of winning the game compared with the Pauli strategy. Quantum information experiments can be described as a sequence of three operations: state preparation, evolution and measurement [1]. In most cases, one cannot assume that experiments are conducted perfectly; therefore, imperfections have to be taken into account while modeling them. In this work, we are interested in how the knowledge about imperfect evolution of a quantum system can be exploited by players engaged in a quantum game. We assume that one of the players possesses the knowledge about imperfections in the system, while the other is ignorant of their existence. We ask the question of how much the player's knowledge about those imperfections can be exploited by him/her for his/her advantage. We consider an implementation of the quantum version of the penny flip game, which is influenced by the environment that causes decoherence of the system. In order to model the decoherence, we assume a Markovian approximation of open quantum system dynamics. This assumption is valid, for example, in the case of a two-level atom coupled to the vacuum, undergoing spontaneous emission (amplitude damping). The coherent part of the atom's evolution is described by a one-qubit Hamiltonian. Spontaneous emission causes an atom in the excited state to drop down into the ground state, emitting a photon in the process. Similarly, a phase damping channel can be considered. This channel causes a continuous decay of coherence without energy dissipation in a quantum system [2]. The paper is organized as follows: in the two following subsections, we discuss related work and present our motivation to undertake this task. In Sect. 2, we recall the penny flip game and its quantum version; in Sect. 3, we present the noise model; in Sect. 4, we discuss the strategies applied in the presence of noise; and finally in Sect. 5, we draw conclusions from the obtained results. Imperfect realizations of quantum games have been discussed in the literature since the beginning of the century. Johnson [3] discusses a three-player quantum game played with a corrupted source of entangled qubits. The author implicitly assumes that the initial state of the game had passed through a bit-flip noisy channel before the game began.
The corruption of quantum states in schemes implementing quantum games was studied by various authors, e.g., in [4], the authors study the general treatment of decoherence in two-player, two-strategy quantum games; in [5], the authors perform an analysis of the two-player prisoners' dilemma game; in [6], the multiplayer quantum minority game with decoherence is studied; in [7, 8], the authors analyze the influence of the local noisy channels on quantum Magic Squares games, while the quantum Monty Hall problem under decoherence is studied first in [9] and subsequently in [10]. In [11], the authors study the influence of the interaction of qubits forming a spin chain on the qubit flip game. An analysis of trembling hand-perfect equilibria in quantum games was done in [12]. Prisoners' dilemma in the presence of collective dephasing modeled by using the Markovian approximation of open quantum systems dynamics is studied in [13]. Unfortunately, the model applied in this work assumes that decoherence acts only after the initial state has been prepared and ceases to act before unitary strategies are applied. Another interesting approach to quantum games is the study of relativistic quantum games [14, 15]. This setup has also been studied in the presence of noise [16]. In the quantum game theoretic literature, decoherence is typically applied to a quantum game in the following way:
1. The entangled state is prepared,
2. It is transferred through a noisy channel,
3. Players' strategies are applied,
4. The resulting state is transferred once again through a noisy channel,
5. The state is disentangled,
6. Quantum local measurements are performed, and the outcomes of the game are calculated.
In some cases, where it is appropriate, steps 4 and 5 are omitted. The problem with the above procedure is that it separates unitary evolution from the decoherent evolution. In Miszczak et al. [11], it was proposed to observe the behavior of the quantum version of the penny flip game under more physically realistic assumptions where decoherence due to coupling with the environment and unitary evolution happen simultaneously. In that paper, the authors study an implementation of the qubit flip game on quantum spin chains. First, a design, expressed in the form of a quantum control problem, of the game on the trivial, one-qubit spin chain is proposed. Then the environment in the form of an additional qubit is added, and the spin-spin coupling is adjusted, so one of the players, under some assumptions, cannot detect that the system is implemented on two qubits rather than on one qubit. In the paper, it is shown that if one of the players possesses the knowledge about the spin coupling, he or she can exploit it to increase his or her winning probability. Game as a quantum experiment In this work, our goal is to follow the work done in [11] and to discuss the quantum penny flip game as a physical experiment consisting of preparation, evolution and measurement of the system. For the purpose of this paper, we assume that preparation and measurement, contrary to the noisy evolution of the system, are perfect. We investigate the influence of the noise on the players' odds and how the noisiness of the system can be exploited by them. The noise model we use is described by the Lindblad master equation, and the dynamics of the system is expressed in the language of quantum systems control.
Penny flip game In order to provide classical background for our problem, let us consider a classical two-player game, consisting in flipping over a coin by the players in three consecutive rounds. As usual, the players are called Alice and Bob. In each round, Alice and Bob performs one of two operations on the coin: flips it over or retains it unchanged. At the beginning of the game, the coin is turned heads up. During the course of the game the coin is hidden and the players do not know the opponents actions. If after the last round, the coin tails up, then Alice wins, otherwise the winner is Bob. The game consists of three rounds: Alice performs her action in the first and the third round, while Bob performs his in the second round of the game. Therefore, the set of allowed strategies consists of eight sequences \((N,N,N), (N,N,F),\) \( \ldots , (F,F,F)\), where \(N\) corresponds to the non-flipping strategy and \(F\) to the flipping strategy. Bob's pay-off table for this game is presented in Table 1. Looking at the pay-off tables, it can be seen that utility function of players in the game is balanced; thus, the penny flip game is a zero-sum game. Table 1 Bob's pay-off table for the penny flip game A detailed analysis of this game and its asymmetrical quantization can be found in [17]. In this work it was shown that there is no winning strategy for any player in the penny flip game. It was also shown, that if Alice was allowed to extend her set of strategies to quantum strategies she could always win. In Miszczak et al. [11] it was shown that when both players have access to quantum strategies the game becomes fair and it has the Nash equilibrium. Qubit flip game The quantum version of the qubit flip game was studied for the first time by Meyer [18]. In our study, we wish to follow the work done in the aforementioned paper [11]. Hence, we consider a quantum version of the penny flip game. In this case, we treat a qubit as a quantum coin. As in the classical case the game is divided into three rounds. Starting with Alice, in each round, one player performs a unitary operation on the quantum coin. The rules of the game are constrained by its physical implementation. In order to obtain an arbitrary one-qubit unitary operation it is sufficient to use a control Hamiltonian built using only two traceless Pauli operators [19]. Therefore, we assume that in each round each of the players can choose three control parameters \(\alpha _1,\alpha _2,\alpha _3\) in order to realize his/hers strategy. The resulting unitary gate is given by the equation: $$\begin{aligned} U(\alpha _1,\alpha _2,\alpha _3)=\hbox {e}^{-\mathrm{i}\alpha _3\sigma _z \Delta t} \hbox {e}^{-\mathrm{i}\alpha _2\sigma _y \Delta t} \hbox {e}^{-\mathrm{i}\alpha _1\sigma _z \Delta t}, \end{aligned}$$ where \(\Delta t\) is an arbitrarily chosen constant time interval. 
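As a quick numerical illustration of Eq. (1), the sketch below builds the gate U(α1, α2, α3) from matrix exponentials of the Pauli operators. The time step and the example control values are arbitrary choices made for illustration only; they are not values prescribed by the paper.

```python
import numpy as np
from scipy.linalg import expm

# Pauli operators appearing in the control Hamiltonian
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def gate(a1, a2, a3, dt=1.0):
    """U(a1, a2, a3) = exp(-i a3 sz dt) exp(-i a2 sy dt) exp(-i a1 sz dt), cf. Eq. (1)."""
    return (expm(-1j * a3 * sigma_z * dt)
            @ expm(-1j * a2 * sigma_y * dt)
            @ expm(-1j * a1 * sigma_z * dt))

# Example: a pure y-rotation by pi/2 flips the computational basis states
U = gate(0.0, np.pi / 2, 0.0)
print(np.round(U, 3))   # equals -i*sigma_y, i.e. |0> -> |1> and |1> -> -|0>
```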
Therefore, the system defined above forms a single qubit system driven by time-dependent Hamiltonian \(H(t)\), which is a piecewise constant and can be expressed in the following form $$\begin{aligned} H(t)= {\left\{ \begin{array}{ll} \alpha _1^{A_1}\sigma _z &{} \text { for } 0\le t < \Delta t,\\ \alpha _2^{A_1}\sigma _y &{} \text { for } \Delta t\le t < 2\Delta t,\\ \alpha _3^{A_1}\sigma _z &{} \text { for } 2\Delta t\le t < 3\Delta t,\\ \alpha _1^{B}\sigma _z &{} \text { for } 3\Delta t\le t < 4\Delta t,\\ \alpha _2^{B}\sigma _y &{} \text { for } 4\Delta t\le t < 5\Delta t,\\ \alpha _3^{B}\sigma _z &{} \text { for } 5\Delta t\le t < 6\Delta t,\\ \alpha _1^{A_2}\sigma _z &{} \text { for } 6\Delta t\le t < 7\Delta t,\\ \alpha _2^{A_2}\sigma _y &{} \text { for } 7\Delta t\le t < 8\Delta t,\\ \alpha _3^{A_2}\sigma _z &{} \text { for } 8\Delta t\le t \le 9\Delta t. \end{array}\right. } \end{aligned}$$ Control parameters in the Hamiltonian \(H(t)\) will be referred to vector \(\mathrm {\alpha }=(\alpha _1^{A_1}, \alpha _2^{A_1}, \alpha _3^{A_1}, \alpha _1^{B}, \alpha _2^{B}, \alpha _3^{B}, \alpha _1^{A_2}, \alpha _2^{A_2}, \alpha _3^{A_2})\), where \(\alpha _i^{A_1},\alpha _i^{A_2}\) are determined by Alice and \(\alpha _i^{B}\) are selected by Bob. Suppose that players are allowed to play the game by manipulating the control parameters in the Hamiltonian \(H(t)\) representing the coherent part of the dynamics, but they are not aware of the action of the environment on the system. Hence, the time evolution of the system is non-unitary and is described by a master equation, which can be written generally in the Lindblad form as $$\begin{aligned} \frac{\mathrm{d}\rho }{\mathrm{d}t}=-\mathrm{i}[H(t),\rho ] + \sum _j \gamma _j(L_j\rho L_j^\dagger - \frac{1}{2}\{L_j^\dagger L_j,\rho \}), \end{aligned}$$ where \(H(t)\) is the system Hamiltonian, \(L_j\) are the Lindblad operators, representing the environment influence on the system [2] and \(\rho \) is the state of the system. For the purpose of this paper we chose three classes of decoherence: amplitude damping, amplitude raising and phase damping which correspond to noisy operators \(\sigma _{-}=| 0 \rangle \langle 1 |\), \(\sigma _{+}=| 1 \rangle \langle 0 |\) and \(\sigma _z\), respectively. Let us suppose that initially the quantum coin is in the state \(| 0 \rangle \langle 0 |\). Next, in each round, Alice and Bob perform their sequences of controls on the qubit, where each control pulse is applied according to Eq. (3). After applying all of the nine pulses, we measure the expected value of the \(\sigma _z\) operator. If \(\mathrm{tr}(\sigma _z\rho (T))=-1\) Alice wins, if \(\mathrm{tr}(\sigma _z\rho (T))=1\) Bob wins. Here, \(\rho (T)\) denotes the state of the system at time \(T=9\Delta t\). Alternatively we can say that the final step of the procedures consists in performing orthogonal measurement \(\{O_\mathrm{tails}\rightarrow | 1 \rangle \langle 1 |,O_\mathrm{heads}\rightarrow | 0 \rangle \langle 0 |\}\) on state \(\rho (T)\). The probability of measuring \(O_\mathrm{tails}\) and \(O_\mathrm{heads}\) determines pay-off functions for Alice and Bob, respectively. These probabilities can be obtained from relations \(p(\mathrm{tails})=\langle 1 |\rho (T)| 1 \rangle \) and \(p(\mathrm{heads})=\langle 0 |\rho (T)| 0 \rangle \). Nash equilibrium In this game, pure strategies cannot be in Nash equilibrium [18]. Hence, the players choose mixed strategies, which are better than the pure ones. 
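Before turning to specific strategies, it may help to see the dynamics of Eqs. (2)–(3) and the pay-off rule in executable form. The following is a minimal, self-contained sketch that integrates the master equation with a fixed-step Runge–Kutta scheme for a single Lindblad operator; the chosen control values, γ, Δt and the number of integration steps are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_minus = np.array([[0, 1], [0, 0]], dtype=complex)   # |0><1|, amplitude damping

def lindblad_rhs(rho, H, L, gamma):
    """Right-hand side of the master equation (3) with a single Lindblad operator L."""
    Ld = L.conj().T
    return (-1j * (H @ rho - rho @ H)
            + gamma * (L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)))

def play_game(alphas, L, gamma, dt=1.0, substeps=200):
    """Apply the nine piecewise-constant pulses of Eq. (2) to the coin state |0><0|."""
    rho = np.array([[1, 0], [0, 0]], dtype=complex)
    axes = [sigma_z, sigma_y, sigma_z] * 3          # z, y, z pattern for A1, B, A2
    h = dt / substeps
    for a, axis in zip(alphas, axes):
        H = a * axis
        for _ in range(substeps):                   # classical 4th-order Runge-Kutta
            k1 = lindblad_rhs(rho, H, L, gamma)
            k2 = lindblad_rhs(rho + 0.5 * h * k1, H, L, gamma)
            k3 = lindblad_rhs(rho + 0.5 * h * k2, H, L, gamma)
            k4 = lindblad_rhs(rho + h * k3, H, L, gamma)
            rho = rho + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

# Illustrative controls (not from Table 2): Alice flips the coin in round one,
# all remaining pulses are idle.
alphas = [0, np.pi / 2, 0,  0, 0, 0,  0, 0, 0]
rho_T = play_game(alphas, sigma_minus, gamma=0.1)
payoff = np.real(np.trace(sigma_z @ rho_T))
print(payoff)   # -1 would mean Alice wins with certainty; damping pushes it upward
```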
We assume that Alice and Bob use the Pauli strategy, which is mixed and gives Nash equilibrium [11]; therefore, this strategy is a reasonable choice for the players. According to the Pauli strategy, each player chooses one of the four unitary operations \(\{{1\!\!1}, \mathrm{i}\sigma _{x}, \mathrm{i}\sigma _{y}, \mathrm{i}\sigma _{z}\}\) with equal probability. Thus, to obtain the Pauli strategy, each player chooses a sequence of control parameters \((\alpha _1^\square , \alpha _2^\square , \alpha _3^\square )\) listed in Table 2. The symbol \(\square \) can be substituted by \(A_1,B,A_2\). It means that in each round, one player performs a unitary operation chosen randomly with a uniform probability distribution from the set \(\{ {1\!\!1}, \mathrm{i}\sigma _x, \mathrm{i}\sigma _y, \mathrm{i}\sigma _z \}\). Table 2 Control parameters for realizing the Pauli strategy Influence of decoherence on the game In this section, we perform an analytical investigation which shows the influence of decoherence on the game result. In accordance with the Lindblad master equation, the environment influence on the system is represented by Lindblad operators \(L_j\), while the rate of decoherence is described by parameters \(\gamma _j\). In our game, players use the Pauli strategy; hence, the quantum system evolves depending on the Hamiltonians expressed as \(H(t)=\alpha _i^\square \sigma _y\) or \(H(t)=\alpha _i^\square \sigma _z\). To simplify the discussion, we consider Hamiltonians represented by diagonal matrices. In our case, \(H=\alpha _i^\square \sigma _z\) is diagonal, but Hamiltonian \(\alpha _i^\square \sigma _y\) requires diagonalization. Therefore, we will consider solutions of Lindblad equations for the Hamiltonians given by \(H_z = \alpha _i^\square \sigma _z\) and \(H_y = \alpha _i^\square U^\dagger \sigma _y U=\alpha _i^\square \left( \begin{array}{ll} -1 &{} 0\\ 0 &{} 1 \end{array} \right) \), where \(U=\left( \begin{array}{ll} -\frac{\sqrt{2}}{2} &{} -\frac{\sqrt{2}}{2} \\ \mathrm{i}\frac{\sqrt{2}}{2} &{} -\mathrm{i}\frac{\sqrt{2}}{2} \end{array} \right) \) is unitary matrix, whose columns are the eigenvectors of \(\sigma _y\). Thus, we consider the solutions of the Lindblad equation for the Hamiltonian of the form $$\begin{aligned} H=\beta _1| 0 \rangle \langle 0 | + \beta _2| 1 \rangle \langle 1 |. \end{aligned}$$ Amplitude damping and amplitude raising First we consider the amplitude damping decoherence, which corresponds to the Lindblad operator \(\sigma _{-}\). Thus, the master Eq. (3) is expressed as $$\begin{aligned} \frac{\mathrm{d}\rho }{\mathrm{d}t}=-\mathrm{i}[H,\rho (t)] + \gamma (\sigma _{-}\rho (t)\sigma _{+}-\frac{1}{2}\sigma _{+}\sigma _{-}\rho (t)-\frac{1}{2}\ \rho (t)\sigma _{+}\sigma _{-}), \end{aligned}$$ where \(\sigma _{+}=\sigma _{-}^\dagger =| 1 \rangle \langle 0 |\). The equation can be rewritten in the following form $$\begin{aligned} \frac{\mathrm{d}\rho }{\mathrm{d}t}=A\rho (t)+\rho (t) A^\dagger + \gamma \sigma _{-}\rho (t)\sigma _{+}, \end{aligned}$$ where \(A=-\mathrm{i}H(t)-\frac{1}{2}\gamma \sigma _{+}\sigma _{-}\). In solving this equation it is helpful to make a change of variables \(\rho (t)=\hbox {e}^{At}\hat{\rho }(t)\hbox {e}^{A^\dagger t}\). Hence, we obtain $$\begin{aligned} \frac{\mathrm{d}\hat{\rho }}{\mathrm{d}t}=\gamma B(t)\hat{\rho }(t) B^{\dagger }(t), \end{aligned}$$ where \(B(t)=\hbox {e}^{-At}\sigma _{-}\hbox {e}^{At}=\hbox {e}^{-\mathrm{i}(\beta _2-\beta _1)t-\frac{\gamma }{2}t}\sigma _{-}\). 
It follows that $$\begin{aligned} \frac{\mathrm{d}\hat{\rho }}{\mathrm{d}t}=\gamma \hbox {e}^{-\gamma t} \sigma _{-}\hat{\rho }(t) \sigma _{+}. \end{aligned}$$ Due to the fact that \(\sigma _{-}\sigma _{-}=\sigma _{+}\sigma _{+}=0\) and \(\sigma _{-}\frac{\mathrm{d}\hat{\rho }}{\mathrm{d}t}\sigma _{+}=0\) it is possible to write \(\hat{\rho }(t)\) as $$\begin{aligned} \hat{\rho }(t)=\hat{\rho }(0) -\hbox {e}^{-\gamma t}\sigma _{-}\hat{\rho }(0)\sigma _{+}. \end{aligned}$$ Coming back to the original variables we get the expression $$\begin{aligned} \rho (t)=\hbox {e}^{At}\rho (0)\hbox {e}^{A^\dagger t}-\hbox {e}^{-\gamma t}\sigma _{-}\rho (0)\sigma _{+}. \end{aligned}$$ In order to study the asymptotic effects of decoherence on the results of the game, we consider the following limit $$\begin{aligned} \lim _{\gamma \rightarrow \infty } \hbox {e}^{At}\rho (0)\hbox {e}^{A^\dagger t}-\hbox {e}^{-\gamma t}\sigma _{+}\rho (0)\sigma _{-} = | 0 \rangle \langle 0 |\rho (0)| 0 \rangle \langle 0 |. \end{aligned}$$ Let \(\rho (0)=| 0 \rangle \langle 0 |\); thus, the above limit is equal to \(| 0 \rangle \langle 0 |\). This result shows that for high values of \(\gamma \), chances of winning the game by Bob increase to 1 as \(\gamma \) increases. Figure 1 shows an example of the evolution of a quantum system with amplitude damping decoherence for two values of the parameter \(\gamma \). Figure 1a, b show the player's control pulses. In this case they are the ones implementing the Pauli strategy. Figure 1c, d show the time evolution of the state expressed as the expectation values of the observables \(\sigma _x\), \(\sigma _y\) and \(\sigma _z\) for both cases. Finally, Fig. 1e, f show the evolution of the qubit's state in the Bloch sphere. This shows how a little amount of noise influences the evolution of the system and changes the probability of winning the game. Example of the time evolution of a quantum system with the amplitude damping decoherence for a sequence of control parameters \(\alpha \) and fixed \(\gamma =0.1\) (left side), \(\gamma =0.7\) (right side). a Control parameters \(\alpha =(-\frac{\pi }{4},-\frac{\pi }{2},\frac{\pi }{4},0,-\frac{\pi }{2},0,-\frac{\pi }{4},-\frac{\pi }{2},\frac{\pi }{4})\). b Control parameters \(\alpha =(0,-\frac{\pi }{2},0,-\frac{\pi }{4},-\frac{\pi }{2},\frac{\pi }{4},-\frac{\pi }{4},0,-\frac{\pi }{4})\). c Mean values of \(\sigma _x,\sigma _y\) and \(\sigma _z\). d Mean values of \(\sigma _x,\sigma _y\) and \(\sigma _z\). e Time evolution of a quantum coin. f Time evolution of a quantum coin The noisy operator \(\sigma _{+}\) is related to amplitude raising decoherence, and the solution of the master equation has the following form $$\begin{aligned} \rho (t)=\hbox {e}^{At}\rho (0)\hbox {e}^{A^\dagger t}-\hbox {e}^{-\gamma t}\sigma _{+}\rho (0)\sigma _{-}, \end{aligned}$$ where \(A=-\mathrm{i}H(t) -\frac{1}{2}\gamma \sigma _{-}\sigma _{+}\). It is easy to check that as \(\gamma \rightarrow \infty \) the state \(| 1 \rangle \langle 1 |\) is the solution of the above equation, in which case Alice wins. Phase damping Now, we consider the impact of the phase damping decoherence on the outcome of the game. In this case, the Lindblad operator is given by \(\sigma _z\). 
Hence, the Lindblad equation has the following form $$\begin{aligned} \frac{\mathrm{d}\rho }{\mathrm{d}t}&= -\mathrm{i}[H,\rho (t)] + \gamma (\sigma _z\rho (t)\sigma _z - \frac{1}{2}\sigma _z\sigma _z\rho (t) -\frac{1}{2}\rho (t)\sigma _z\sigma _z)\nonumber \\&= -\mathrm{i}[H,\rho (t)] + \gamma (\sigma _z\rho (t)\sigma _z - \rho (t)). \end{aligned}$$ Next, we make a change of variables \(\hat{\rho }(t)=\hbox {e}^{\mathrm{i}Ht}\rho (t)\hbox {e}^{-\mathrm{i}Ht}\), which is helpful for solving the equation. We obtain $$\begin{aligned} \frac{\mathrm{d}\hat{\rho }}{\mathrm{d}t}&= \frac{\mathrm{d}\hbox {e}^{\mathrm{i}Ht}}{\mathrm{d}t}\rho (t)\hbox {e}^{-\mathrm{i}Ht}+ \hbox {e}^{\mathrm{i}Ht}\frac{\mathrm{d}\rho }{\mathrm{d}t}\hbox {e}^{-\mathrm{i}Ht}+ \hbox {e}^{\mathrm{i}Ht}\rho (t)\frac{\mathrm{d}\hbox {e}^{-\mathrm{i}Ht}}{\mathrm{d}t}\nonumber \\&= \mathrm{i}[H,\hat{\rho }(t)] - \mathrm{i}\hbox {e}^{\mathrm{i}Ht}[H,\rho (t)]\hbox {e}^{-\mathrm{i}Ht} + \gamma \left( \hbox {e}^{\mathrm{i}Ht}\sigma _z\rho (t)\sigma _z\hbox {e}^{-\mathrm{i}Ht} - \hat{\rho }(t)\right) \nonumber \\&= \gamma (\sigma _z\hat{\rho }(t)\sigma _z - \hat{\rho }(t)), \end{aligned}$$ where the last equality follows because the diagonal Hamiltonian \(H\) commutes with \(\hbox {e}^{\pm \mathrm{i}Ht}\) and with \(\sigma _z\). It follows that the solution of the above equation is given by $$\begin{aligned} \hat{\rho }(t)&= | 0 \rangle \langle 0 |\rho (0)| 0 \rangle \langle 0 | + | 1 \rangle \langle 1 |\rho (0)| 1 \rangle \langle 1 | + \nonumber \\&+\, \mathrm{e}^{-2\gamma t} (| 0 \rangle \langle 0 |\rho (0)| 1 \rangle \langle 1 |+| 1 \rangle \langle 1 |\rho (0)| 0 \rangle \langle 0 |). \end{aligned}$$ Coming back to the original variables we obtain $$\begin{aligned} \rho (t)&= | 0 \rangle \langle 0 |\rho (0)| 0 \rangle \langle 0 | + | 1 \rangle \langle 1 |\rho (0)| 1 \rangle \langle 1 | + \nonumber \\&+\, \mathrm{e}^{-2\gamma t}\mathrm{e}^{-\mathrm{i}H t}(| 0 \rangle \langle 0 |\rho (0)| 1 \rangle \langle 1 |+| 1 \rangle \langle 1 |\rho (0)| 0 \rangle \langle 0 |) \mathrm{e}^{\mathrm{i}H t}. \end{aligned}$$ Consider the following limit $$\begin{aligned} \lim _{\gamma \rightarrow \infty } \rho (t)= | 0 \rangle \langle 0 |\rho (0)| 0 \rangle \langle 0 | + | 1 \rangle \langle 1 |\rho (0)| 1 \rangle \langle 1 |. \end{aligned}$$ The above result is a diagonal matrix dependent on the initial state. For high values of \(\gamma \), the initial state \(\rho (0)\) has a significant impact on the game. If \(\rho (0)=| 0 \rangle \langle 0 |\) then \(\lim _{\gamma \rightarrow \infty } \rho (t)=| 0 \rangle \langle 0 |\). This kind of decoherence is favorable to Bob. Similarly, if \(\rho (0) = | 1 \rangle \langle 1 |\), then Alice wins. The evolution of a quantum system with the phase damping decoherence and fixed Hamiltonian is shown in Fig. 2. Figure 2a, b show the player's control pulses. In this case they are the ones implementing the Pauli strategy. Figure 2c, d show the time evolution of the state expressed as the expectation values of the observables \(\sigma _x\), \(\sigma _y\) and \(\sigma _z\) for both cases. Finally, Fig.
2e,f show the evolution of the qubit's state in the Bloch sphere. In this case, we can see that a low amount of phase damping noise does not have a significant impact on the outcome of the game. On the other hand, for higher values of \(\gamma \) we can see mainly the effect of the decoherence rather than the effect of player's actions, i.e., the state evolves almost directly toward the maximally mixed state. Example of the time evolution of a quantum system with the phase damping decoherence for fixed \(\gamma =0.5\) (left side), \(\gamma =5\) (right side) and a sequence of control parameters \(\alpha \). a Control parameters \(\alpha =(-\frac{\pi }{4},-\frac{\pi }{2},\frac{\pi }{4},-\frac{\pi }{4},-\frac{\pi }{2},\frac{\pi }{4},-\frac{\pi }{4},-\frac{\pi }{2},\frac{\pi }{4})\). b Control parameters \(\alpha =(0,-\frac{\pi }{2},0,0,-\frac{\pi }{2},0,0,-\frac{\pi }{2},0)\). c Mean values of \(\sigma _x,\sigma _y\) and \(\sigma _z\). d Mean values of \(\sigma _x,\sigma _y\) and \(\sigma _z\). e Time evolution of a quantum coin. f Time evolution of a quantum coin Optimal strategy for the players Due to the noisy evolution of the underlying qubit, the strategy given by Table 2 is no longer a Nash equilibrium. We study the possibility of optimizing one player's strategy, while the other one uses the Pauli strategy. It turns out that this optimization is not always possible. If the rate of decoherence is high enough, then the players' strategies have little impact on the game outcome. In the low noise scenario, it is possible to optimize the strategy of both players. In each round, one player performs a series of unitary operations, which are chosen randomly from a uniform distribution. Therefore, the strategy of a player can be seen as a random unitary channel. In this section \(\Phi _{A_1},\Phi _{A_2}\) denote mixed unitary channels used by Alice who implements the Pauli strategy. Similarly, \(\Phi _B\) denotes channels used by Bob. Optimization method In order to find optimal strategies for the players, we assume the Hamiltonian in (3) to have the form $$\begin{aligned} H = H(\varepsilon (t)), \end{aligned}$$ where \(\varepsilon (t)\) are the control pulses. As the optimization target, we introduce the cost functional $$\begin{aligned} J(\varepsilon )=\mathrm{tr}\{ F_0(\rho (T)) \}, \end{aligned}$$ where \(F_0(\rho (T))\) is a functional that is bounded from below and differentiable with respect to \(\rho (T)\). A sequence of control pulses that minimizes the functional (19) is said to be optimal. In our case we assume that $$\begin{aligned} \mathrm{tr}\{ F_0(\rho (T)) \} = \frac{1}{2} || \rho (T) - \rho _\mathrm{T} ||_\mathrm{F}^2, \end{aligned}$$ where \(\rho _\mathrm{T}\) is the target density matrix of the system. In order to solve this optimization problem, we need to find an analytical formula for the derivative of the cost functional (19) with respect to control pulses \(\varepsilon (t)\). 
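As an aside, the optimization target itself can already be explored numerically without the analytical gradient: the sketch below minimizes the cost (19)–(20) over the control amplitudes with SciPy's BFGS routine and finite-difference gradients. The simple Euler integrator, the choice of γ, the fixed pulses assigned to Alice and the target state for Bob are all illustrative assumptions made for this example, not the authors' implementation; the analytical, adjoint-state gradient is derived next.

```python
import numpy as np
from scipy.optimize import minimize

sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_minus = np.array([[0, 1], [0, 0]], dtype=complex)

def propagate(alphas, L, gamma, dt=1.0, substeps=100):
    """Euler integration of the Lindblad equation (3) over the nine pulses of Eq. (2)."""
    rho = np.array([[1, 0], [0, 0]], dtype=complex)
    Ld = L.conj().T
    h = dt / substeps
    for a, axis in zip(alphas, [sigma_z, sigma_y, sigma_z] * 3):
        H = a * axis
        for _ in range(substeps):
            drho = (-1j * (H @ rho - rho @ H)
                    + gamma * (L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)))
            rho = rho + h * drho
    return rho

def cost(alphas, rho_target, L, gamma):
    """J = 0.5 * ||rho(T) - rho_target||_F^2, cf. Eqs. (19)-(20)."""
    diff = propagate(alphas, L, gamma) - rho_target
    return 0.5 * np.real(np.trace(diff.conj().T @ diff))

# Example: optimize Bob's three pulses toward his preferred outcome |0><0|,
# with Alice's six pulses held fixed; gamma and the fixed pulses are made up here.
rho_bob = np.array([[1, 0], [0, 0]], dtype=complex)
alice = np.array([0.3, 1.1, -0.4, 0.0, 0.7, 0.2])   # [A1 (3 values), A2 (3 values)]

def bob_cost(x):
    full = np.concatenate([alice[:3], x, alice[3:]])   # order A1, B, A2 as in Eq. (2)
    return cost(full, rho_bob, sigma_minus, gamma=0.1)

res = minimize(bob_cost, x0=np.zeros(3), method="BFGS")   # finite-difference gradient
print(res.x, bob_cost(res.x))
```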
Using the Pontryagin principle [20], it is possible to show that we need to solve the following equations to obtain the analytical formula for the derivative $$\begin{aligned} \frac{\mathrm{d}\rho (t)}{\mathrm{d}t}&= -\mathrm{i}[H(\varepsilon (t)) ,\rho (t)] - \mathrm{i}L_\mathrm{D} [\rho (t)],\; t\in [0, T],\end{aligned}$$ $$\begin{aligned} \frac{\mathrm{d}\lambda (t)}{\mathrm{d}t}&= -\mathrm{i}[H(\varepsilon (t)) ,\lambda (t)] - \mathrm{i}L_\mathrm{D}^\dagger [\lambda (t)],\; t\in [0, T],\end{aligned}$$ $$\begin{aligned} L_\mathrm{D}[A]&= \mathrm{i}\sum _j \gamma _j(L_j A L_j^\dagger - \frac{1}{2}\{L_j^\dagger L_j,A\}),\end{aligned}$$ $$\begin{aligned} \rho (0)&= \rho _\mathrm{s}, \end{aligned}$$ $$\begin{aligned} \lambda (T)&= F'_0(\rho (T)), \end{aligned}$$ where \(\rho _\mathrm{s}\) denotes the initial density matrix, \(\lambda (t)\) is called the adjoint state and $$\begin{aligned} F'_0(\rho (T)) = \rho (T) - \rho _\mathrm{T}. \end{aligned}$$ The derivation of these equations can be found in [21]. In order to optimize the control pulses using a gradient method, we convert the problem from an infinite dimensional (continuous time) to a finite dimensional (discrete time) one. For this purpose, we discretize the time interval \([0, T]\) into \(M\) equal sized subintervals \(\Delta t_k\). Thus, the problem becomes that of finding \(\varepsilon =[\varepsilon _1,\ldots ,\varepsilon _M]^\mathrm{T}\) such that $$\begin{aligned} J(\varepsilon ) = \inf _{\zeta \in \mathbb {R}^M}J(\zeta ). \end{aligned}$$ The gradient of the cost functional is $$\begin{aligned} G = \left[ \frac{\partial J}{\partial \varepsilon _1}, \ldots , \frac{\partial J}{\partial \varepsilon _M} \right] ^\mathrm{T}. \end{aligned}$$ It can be shown [21] that elements of vector (28) are given by $$\begin{aligned} \frac{\partial J}{\partial \varepsilon _k} = \mathrm{tr}\left\{ -\mathrm{i}\lambda _k \left[ \frac{\partial H(\varepsilon _k)}{\partial \varepsilon _k}, \rho _k \right] \right\} \Delta t_k, \end{aligned}$$ where \(\rho _k\) and \(\lambda _k\) are solutions of the Lindblad equation and the adjoint system corresponding to time subinterval \(\Delta t_k\), respectively. To minimize the gradient given in Eq. (28) we use the BFGS algorithm [22]. Optimization setup Our goal is to find control strategies for players, which maximize their respective chances of winning the game. We study three noise channels: the amplitude damping, the phase damping and the amplitude raising channel. They are given by the Lindblad operators \(\sigma _-\), \(\sigma _z\) and \(\sigma _+ = \sigma _-^\dagger \), respectively. In all cases, we assume that one of the players uses the Pauli strategy, while for the other player we try to optimize a control strategy that maximizes that player's probability of winning. However, in our setup it is convenient to use the value of the observable \(\sigma _z\) rather than probabilities. Value 0 means that each player has a probability of \(\frac{1}{2}\) of winning the game. Values closer to 1 mean higher probability of winning for Bob, while values closer to -1 mean higher probability of winning for Alice. Optimization results The results for the phase damping channel are shown in Fig. 3. As it can be seen, in this case, both players are able to optimize their strategies, and so Alice can optimize her strategy for low values of \(\gamma \) to obtain the probability of winning grater than \(\frac{1}{2}\). The region where this occurs is shown in the inset. 
For high noise values, she is able to achieve the probability of winning equal to \(\frac{1}{2}\). In the case of high values of \(\gamma \), the best strategy for Alice is to drive the state as close as possible to the maximally mixed state on her first move. This state can not be changed neither by Bob's actions, nor by the phase damping channel. On the other hand, optimization of Bob's strategy shows that he is able to achieve high probabilities of winning for relatively low values of \(\gamma \). This is consistent with the limit shown in Eq. (17) as our initial state is \(\rho =| 0 \rangle \langle 0 |\). Figure 4 presents optimal game strategies for both players. For Alice we chose \(\gamma =1.172\) which corresponds to her maximal probability of winning the game. In the case of Bob's strategies we arbitrarily choose the value \(\gamma =1.610\). In these cases the evolution of the qubit is much more complex. This is due to the fact that the players are not restricted to the Pauli strategy. Mean value of the pay-off for the phase damping channel with and without optimization of the player's strategies. The inset shows the region where Alice is able to increase her probability of winning to exceed \(\frac{1}{2}\) Game results for the phase damping channel. Optimal Alice's strategy when \(\gamma = 1.172\) (left side), and optimal Bob's strategy when \(\gamma = 1.610\) (right side). a Optimal controls for Alice, b Optimal controls for Bob, c Mean values of \(\sigma _x,\sigma _y\) and \(\sigma _z\), d Mean values of \(\sigma _x,\sigma _y\) and \(\sigma _z\), e Time evolution of a quantum coin, f Time evolution of a quantum coin Amplitude damping Next, we present the results obtained for the amplitude damping channel. They are shown in Fig. 5. Unfortunately, for Alice, for high values of \(\gamma \) Bob always wins. This is due to the fact that in this case the state quickly decays to state \(| 0 \rangle \langle 0 |\). Additionally, Bob is also able to optimize his strategies. He is able to achieve probability of winning equal to 1 for relatively low values of \(\gamma \). However, for low values of \(\gamma \), the interaction allows Alice to achieve higher than \(\frac{1}{2}\) probability of winning. The region where this happens is magnified in the inset. Interestingly, for very low values of \(\gamma \), Alice can increase her probability of winning. This is due to the fact that low noise values are sufficient to distort Bob's attempts to perform the Pauli strategy. On the other hand, they are not high enough to drive the system toward state \(| 0 \rangle \langle 0 |\). Optimal game results for both players are shown in Fig. 6. For both players, we chose \(\gamma =0.621\) which corresponds to Alice's maximal probability of winning the game. As can be seen, in this case, the evolutions of the observables \(\sigma _x\), \(\sigma _y\) and \(\sigma _z\) show rapid oscillations. This behavior is turned on by applying control pulses associated with the \(\sigma _y\) Hamiltonian. Mean value of the pay-off for the amplitude damping channel with and without optimization of the player's strategies. The inset shows the region where Alice is able to increase her probability of winning to exceed \(\frac{1}{2}\) Game results obtained for the amplitude damping channel with \(\gamma \) equal to \(0.621\). Optimal Alice's strategy (left side), and optimal Bob's strategy (right side). 
a Optimal controls for Alice, b Optimal controls for Bob, c Mean values of \(\sigma _x,\sigma _y\) and \(\sigma _z\), d Mean values of \(\sigma _x,\sigma _y\) and \(\sigma _z\), e Time evolution of a quantum coin, f Time evolution of a quantum coin Amplitude raising Finally, we present optimization results for the amplitude raising channel. The optimization results, shown in Fig. 7, indicate that Alice can achieve a probability of winning equal to 1 for lower values of \(\gamma \) compared with the unoptimized case. In this case, Bob cannot do any better than in the unoptimized case due to a limited number of available control pulses. Mean value of the pay-off for the amplitude raising channel with and without optimization of the player's strategies We studied the quantum version of the coin flip game under decoherence. To model the interaction with the external environment, we used the Markovian approximation in the form of the Lindblad equation. Because the Pauli strategy is a known Nash equilibrium of the game, it was natural to investigate this strategy in the presence of noise. Our results show that in the presence of noise, the Pauli strategy is no longer a Nash equilibrium. One of the players, Bob in our case, is always favoured by amplitude and phase damping noise. If we had considered a game with another initial state, i.e., \(\rho _0=| 1 \rangle \langle 1 |\), Alice would have been favoured instead. Our next step was to check if the players were able to do better than the Pauli strategy. For this, we used the BFGS gradient method to optimize the players' strategies. Our results show that Alice, as well as Bob, are able to increase their respective winning probabilities. Alice can achieve this for all three studied cases, while Bob can only do this for the phase damping and amplitude damping channels. Heinosaari, T., Ziman, M.: The Mathematical Language of Quantum Theory: From Uncertainty to Entanglement. Cambridge University Press, Cambridge (2011) Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information. Cambridge University Press, Cambridge (2000) Johnson, N.F.: Playing a quantum game with a corrupted source. Phys. Rev. A 63(2), 20302 (2001) Flitney, A.P., Derek, A.: Quantum games with decoherence. J. Phys. A Math. Gen. 38(2), 449 (2005) Chen, J.-L., Kwek, L., Oh, C.: Noisy quantum game. Phys. Rev. A 65(5), 052320 (2002) Flitney, A.P., Hollenberg, L.C.L.: Multiplayer quantum minority game with decoherence. Quantum Inf. Comput. 7(1), 111–126 (2007) Gawron, P., Miszczak, J.A., Sładkowski, J.: Noise effects in quantum magic squares game. Int. J. Quantum Inf. 6(1), 667–673 (2008) Pawela, Ł., Gawron, P., Puchała, Z., Sładkowski, J.: Enhancing pseudo-telepathy in the magic square game. PLoS ONE 8(6), e64694 (2013) Gawron, P.: Noisy quantum monty hall game. Fluct. Noise Lett. 9(1), 9–18 (2010) Khan, S., Ramzan, M., Khan, M.K.: Quantum monty hall problem under decoherence. Commun. Theor. Phys. 54(1), 47 (2010) Miszczak, J.A., Gawron, P., Puchała, Z.: Qubit flip game on a Heisenberg spin chain. Quantum Inf. Process. 11(6), 1571–1583 (2012) Pakuła, I.: Analysis of trembling hand perfect equilibria in quantum games. Fluct. Noise Lett.
8(01), 23–30 (2008) Nawaz, A.: Prisoners' dilemma in the presence of collective dephasing. J. Phys. A Math. Theor. 45(19), 195304 (2012) Salman, K., Khan, M.K.: Relativistic quantum games in noninertial frames. J. Phys. A Math. Theor. 44(35), 355302 (2011) Goudarzi, H., Beyrami, S.: Effect of uniform acceleration on multiplayer quantum game. J. Phys. A Math. Theor. 45(22), 225301 (2012) Salman, K., Khan, M.K.: Noisy relativistic quantum games in noninertial frames. Quantum Inf. Process. 12(2), 1351–1363 (2013) Piotrowski, E.W., Sładkowski, J.: An invitation to quantum game theory. Int. J. Theor. Phys. 42(5), 1089–1099 (2003) Meyer, D.A.: Quantum strategies. Phys. Rev. Lett. 82(5), 1052 (1999) d'Alessandro, D.: Introduction to Quantum Control and Dynamics. CRC Press, Boca Raton (2007) Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.: The Mathematical Theory of Optimal Processes. Interscience Publishers, New York (1962) Jirari, H., Pötz, W.: Optimal coherent control of dissipative \(N\)-level systems. Phys. Rev. A 72(1), 013409 (2005) Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T.: Numerical Recipes in FORTRAN 77, volume 1 of Fortran Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, Cambridge (1992) The work was supported by the Polish Ministry of Science and Higher Education Grants: P. Gawron under the project number IP2011 014071, D. Kurzyk under the project number N N514 513340, Ł. Pawela under the project number N N516 481840. Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Bałtycka 5, 44-100 Gliwice, Poland Piotr Gawron, Dariusz Kurzyk & Łukasz Pawela Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland Dariusz Kurzyk Piotr Gawron Łukasz Pawela Correspondence to Dariusz Kurzyk. Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. Gawron, P., Kurzyk, D. & Pawela, Ł. Decoherence effects in the quantum qubit flip game using Markovian approximation. Quantum Inf Process 13, 665–682 (2014). https://doi.org/10.1007/s11128-013-0681-y Issue Date: March 2014 Lindblad master equation Decoherence effects Quantum games Open quantum systems
Laundering CNV data for candidate process prioritization in brain disorders Maria A. Zelenova1,2, Yuri B. Yurov1,2, Svetlana G. Vorsanova1,2 & Ivan Y. Iourov ORCID: orcid.org/0000-0002-4134-83671,2 Molecular Cytogenetics volume 12, Article number: 54 (2019) Cite this article Prioritization of genomic data has become a useful tool for uncovering the phenotypic effect of genetic variations (e.g. copy number variations or CNV) and disease mechanisms. Due to the complexity, brain disorders represent a major focus of genomic research aimed at revealing pathologic significance of genomic changes leading to brain dysfunction. Here, we propose a "CNV data laundering" algorithm based on filtering and prioritizing of genomic pathways retrieved from available databases for uncovering altered molecular pathways in brain disorders. The algorithm comprises seven consecutive steps of processing individual CNV data sets. First, the data are compared to in-house and web databases to discriminate recurrent non-pathogenic variants. Second, the CNV pool is confined to the genes predominantly expressed in the brain. Third, intergenic interactions are used for filtering causative CNV. Fourth, a network of interconnected elements specific for an individual genome variation set is created. Fifth, ontologic data (pathways/functions) are attributed to clusters of network elements. Sixth, the pathways are prioritized according to the significance of elements affected by CNV. Seventh, prioritized pathways are clustered according to the ontologies. The algorithm was applied to 191 CNV data sets obtained from children with brain disorders (intellectual disability and autism spectrum disorders) by SNP array molecular karyotyping. "CNV data laundering" has identified 13 pathway clusters (39 processes/475 genes) implicated in the phenotypic manifestations. Elucidating altered molecular pathways in brain disorders, the algorithm may be used for uncovering disease mechanisms and genotype-phenotype correlations. These opportunities are strongly required for developing therapeutic strategies in devastating neuropsychiatric diseases. Brain disorders frequently result from genomic variations altering a variety of molecular and cellular pathways [1]. Due to a significant overlap between genetic variations associated with phenotypic spectrum of various disorders, psychiatric genetic research may be focused on interactomes (networks of interacting genes/proteins) influencing certain pathways. This is further supported by the findings indicating an increase of total burden of rare, inherited or de novo copy number variations (CNVs) to be associated with psychiatric disorders [2], suggesting that different malfunctioning genes might be involved in the same biological process, disruption of which causes the disease. Protein-protein interaction (PPI) networks (molecular pathways) seem to be a more reliable drug target than gene mutations or CNV per se. Indeed, molecular pathways to intellectual disability (ID), autism spectrum disorders (ASD) and schizophrenia are repeatedly reported to be based on specific PPIs [1, 3]. The convergent pathways include, but are not limited to, those regulating neurogenesis, neuronal migration, synaptic functions, transcription, translation, cell cycle and programmed cell death [1, 4]. The majority of the networks altered in brain diseases regulate either processes crucial for neural development and functioning or those influencing cell cycle and communication. 
Particularly, RhoGTPase pathway is involved in nervous system development, dendritic spines formation and neuronal differentiation [5]; Ras/RAP pathway is responsible for long term potentiation of AMPA receptors (Ras) and long term depression (Rap) [6]. Cell cycle pathway may be altered to produce genome instability leading to cancer or neurodegenerative diseases [7]. More precisely, ERK/PI3K signaling pathway influences the more general pathway regulating the cell cycle and cell differentiation and is altered in neurodevelopmental diseases [8]. Wnt signaling pathway takes part in neuronal migration, dendrite and synapse formation, as well as axon guidance [9]. However, the alterations to these pathways are rarely addressed in the CNV context, probably due to the lack of appropriate bioinformatic algorithms [10, 11]. Here, we propose an algorithm for "laundering" CNV data based on a previously described bioinformatic technique for CNV prioritization [12]. The algorithm may be applicable for identifying causative (candidate) processes for brain disorders in diagnostic and basic research. We propose a CNV prioritization algorithm — "data laundering" — suitable both for diagnostic and basic research. The algorithm is based on an idea that brain diseases result from genomic alterations affecting directly the brain [13, 14] and, consequently, predominant expression of a gene in the central nervous system increases the probability of its contribution to a neurobehavioral phenotype [12]. We designate the algorithm as "laundering" because of the resemblance to machine-washing (each step processes the data from the previous stage to be filtered several times using different criteria). Figure 1 schematically outlines the procedure. Data laundering algorithm for CNV prioritization First, a pool of CNVs is obtained by molecular karyotyping. At this stage, CNVs are checked for recurrence by in-house and web databases. In-house databases of genomic variants obtained by similar microarray types are applied to spot recurrent aberrations. It is to note that the indexation of CNV in a Database of Genomic Variants or any other database dedicated to non-pathogenic genome variations is not a criterion for the exclusion at following stages. Further, the localization and ontology of CNV genes (e.g. using UCSC genome browser, NCBI gene, OMIM, PubMed etc.) are obtained. At this stage, genes lacking appropriate ontology, CNV encompassing introns, recurrent/non-pathogenic CNV are excluded from further analysis. It is worth mentioning that here, CNV are defined as copy number DNA gains/losses < 500 kbp. Secondly, the genes are in silico analyzed in terms of the expression in the central nervous system. As brain pathology is suggested to be mainly associated with neurobehavioral phenotypes, it is recommended to proceed to the next step with a pool of genes highly expressed in the brain. Third step is referred to as retrieving gene-gene interactions. Considering the differences in databases, it is suggested to use several resources (e.g. NCBI gene, BioGRID, STRING). Here, we have merged data from NCBI gene, BioGRID and STRING. During the fourth step, the gene list is evaluated for uncovering interactions and interaction enriched gene clusters (sets of interacting genes). Further, only large groups of interacting genes are analyzed, leaving aside small clusters of interacting elements. 
This criterion is based on a hypothesis that highly interacting genes (proteins) are more likely to be involved in the same processes or influence a disease with similar symptoms [15]. Fifth, the pathway lists are obtained for the set of interacting genes. During database selection, one should consider such parameters as the nature and curability of pathway data. Here, Gene Ontology (GO), KEGG, Reactome, NCBI Biosystems were used. Sixth, to process the pathway lists, we introduce a parameter (prioritization criterion) to determine significantly enriched pathways. To calculate the parameter, a total number of genes for each pathway are obtained. Pathways, in which less than 25 genes are affected by CNVs, are excluded. The remaining pathways are ranked using the index of pathway prioritization (IPP): $$ {I}_{PP}=\frac{\sum {N}_{CNV\ genes}}{\sum {N}_{pathway\ genes}} $$ where IPP — index of pathway prioritization; NCNV genes — number of CNV genes in a pathway found in molecularly karyotyped cohort; Npathway genes — total number of pathway genes. If the IPP is higher than average (i.e. evaluated by three sigma rule), the pathway is prioritized. Seventh, ontologies attributed to the elements of prioritized pathways are considered; pathways are clustered according to the involvement in shared networks (cascades of processes) [16]. Thus, the algorithm provides a set of enriched processes (clusters of pathways) in a disease or in an individual patient. Using the algorithm and Affymetrix CytoScan HD microarray, we analyzed 191 genomes (DNA isolated from peripheral blood) of children with ID, ASD and congenital abnormalities without gross chromosomal and genomic rearrangements (i.e only the CNVs less than 500 kbp in size were included). The raw results of the algorithm processing are shown in Fig. 2. Intermediate results before pathway clustering We obtained a set of 741 genes affected by pathogenic or likely pathogenic CNVs. "Expression filtering" allowed to select 307 genes highly expressed in the brain. Cross-checking interactions of these genes (step 3) was used for building an interactome (step 4) encompassing 3156 genes. These genes were involved in 302, 978, 3380, and 2350 pathways, according to KEGG, REACTOME, Biosystems, and GO, respectively. For each pathway, we calculated IPP, which allowed us to obtain enriched pathways for each database: KEGG — 1, REACTOME — 11, Biosystems — 0, GO — 27. Pathway clustering was performed according to pathway ontologies. The application of CNV prioritization or "data laundering" algorithm yielded 39 genomic networks (pathways) forming 13 clusters of processes, involving 475 genes. These pathway clusters were as follows: neurodegenerative diseases, proteasome, signaling by ERBB4, transcription regulation, regulation of TP53, signaling by NOTCH, senescence, mitosis, DNA repair, vesicles functioning, actin functioning, macromolecular interactions, B cells functioning (Table 1). Table 1 Pathways organized by clusters According to the value of Ipp, the most significant pathways clusters were "proteasome", "neurodegenerative diseases", "regulation of TP53", "vesicles functioning", "signaling by NOTCH", "actin functioning". Proteasome cluster was the most enriched one. Alterations to the proteasome complexes decrease proteolytic activity leading to the accumulation of damaged or structurally abnormal proteins. Similar protein accumulation may underlie neurodegenerative, cardiovascular and autoimmune diseases [17]. 
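As an illustration of the prioritization step (step six) described above, the IPP ranking and the three-sigma cutoff can be sketched in a few lines. The pathway table below is entirely synthetic (a random background plus one artificially enriched pathway) and is meant only to show the mechanics of the filter and the cutoff, not to reproduce the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical pathway table: total gene count per pathway and the number of those
# genes hit by CNVs in the cohort (all numbers are synthetic, for illustration only).
n = 200
total = rng.integers(30, 400, size=n)
cnv = rng.binomial(total, 0.12)                      # assumed background hit rate
df = pd.DataFrame({"pathway": [f"PW{i}" for i in range(n)],
                   "total_genes": total, "cnv_genes": cnv})
df.loc[0, ["total_genes", "cnv_genes"]] = [80, 45]   # one artificially enriched pathway

# Step six: discard pathways with fewer than 25 affected genes, compute
# IPP = (CNV genes in pathway) / (all genes in pathway), and prioritize pathways
# whose IPP lies above the average by the three-sigma rule.
df = df[df["cnv_genes"] >= 25].copy()
df["IPP"] = df["cnv_genes"] / df["total_genes"]
threshold = df["IPP"].mean() + 3 * df["IPP"].std()
prioritized = df[df["IPP"] > threshold].sort_values("IPP", ascending=False)
print(prioritized[["pathway", "cnv_genes", "total_genes", "IPP"]])
```

With these synthetic numbers only the engineered pathway exceeds the cutoff, which is the intended behavior of the filter: pathways whose fraction of CNV-affected genes stands far above the background are flagged for the clustering step that follows.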
Neurodegenerative diseases cluster was enriched in genes associated with several devastative diseases and implicated in a variety of molecular/cellular processes. More precisely, CDK5 is involved in synaptic plasticity and neuronal migration; DCTN1 takes part in the formation of mitotic spindle and axons; FUS regulates gene expression and maintains the integrity of the genome; GRN regulates cell growth, and OPTN participates in membrane transport [18]. The p53-pathway consists of genes that respond to a wide range of stress signals. Stress responses include apoptosis, cellular senescence and cell cycle arrest. In addition, p53-regulated genes may produce proteins that transmit stress signals to neighboring cells and. These genes are involved in DNA reparation, regulation of p53 and binding to signaling pathways [19]. Disruption of synaptic vesicles is associated with developmental disorders. The fusion of synaptic vesicles with a presynaptic plasma membrane, followed by the release of a neurotransmitter, is essential for the neural transmission [20]. The proteins belonging to the SNARE complex (Synaptic-soluble N-ethylmaleimide-sensitive factor attachment receptor) participate in the majority of membrane-vesicles fusion events. A number of diseases are associated with mutations in the genes of this complex; for example, homozygous mutations of SNAP29 leading to impaired endocytic recycling and cell motility has been associated with CEDNIK syndrome (cerebral dysgenesis, nervous system disorders, ichthyosis and palmar-plantar keratodermia) [21]. Additionally, a decrease of SNAP25 was found in the hippocampus of patients with schizophrenia. A single nucleotide polymorphism (SNP) in SNAP25 was associated with hyperactivity in ASD. In high-functioning autism, increased syntaxin 1A expression was observed. Various studies showed that the reduced expression in the anterior part of the cingulate gyrus was observed in patients with ASD [22]. Notch signaling plays a significant role in embryonic development and dendritic development. In mammals, deletions of the Notch signal modulator (Numb) disrupted the maturation of neurons in the developing cerebellum, and violated axon branching in sensory ganglia [23]. The dysfunction of the signaling pathways that reorganize synaptic actin is associated with a variety of brain development abnormalities, including ASD, schizophrenia and ID. Indeed, genes such as SHANK3, GIT1, DISC1, SRGAP3, OPHN1, LIMK1, NRG1, CYFIP1, SYNGAP1, KALRN, NCKAP1 and CNKSR2 regulate upward signaling that stimulate the dynamics of the actin cytoskeleton in dendritic spines [24]. Currently, individual genome analysis obtains big data which are to be processed for basic, diagnostic and therapeutic purposes. Molecular karyotyping detects CNVs, which may output candidate gene and pathway lists. To discover genetic basis of an individual's phenotype or pathways to a disorder, it appears necessary to answer two questions: What are the pathways disrupted by CNV genes? Do these pathways merge into a single global cluster reflecting a specific cellular/molecular process? Application of bioinformatic strategies similar to the data laundering algorithm is able to answer these questions. It is necessary to stress that data analysis requires tools considering multiple factors and a theoretic background. Pathway clustering represents a promising bioinformatic tool, enabling the development of therapeutic strategies based on a molecular mechanism [25]. 
Similarly, our algorithm may be used both for an individual patient and for a disease as a whole. Furthermore, the "data laundering" method is based on freely available web tools. The algorithm has been applied to a cohort of 191 children with ID, ASD and congenital abnormalities, yielding 13 pathway clusters potentially associated with brain disorders: neurodegenerative diseases, proteasome, signaling by ERBB4, transcription regulation, regulation of TP53, signaling by NOTCH, senescence, mitosis, DNA repair, vesicles functioning, actin functioning, macromolecular interactions, B cells functioning. Thus, the data laundering algorithm using CNV data allows us to obtain clusters of candidate (disease-associated) processes. This algorithm is important for further basic and diagnostic research. Moreover, the application of the algorithm to molecular diagnostics of genomic pathology makes it possible to expand our knowledge about disease mechanisms in individual cases. Our findings may have importance for the development of therapeutic strategies and relevant psychological intervention for genetically determined ID and ASD cases caused by CNVs [26,27,28]. Molecular pathways are key elements of etiological concepts in brain disorders, significantly contributing to our understanding of neurological and psychiatric diseases. To determine disease mechanisms, one has to uncover the molecular and cellular pathways in addition to determining a gene or chromosome abnormality underlying the condition. In other words, the main task for such studies is to find disrupted biological processes, which should be properly reflected in the common disease description [29,30,31,32]. The application of our algorithm can lead to successful identification of molecular and cellular mechanisms of brain diseases and to the development of personalized therapeutic strategies. The datasets used and analyzed during the current study are available at http://dekanat.bsu.edu.ru/f.php/1/disser/case/filedisser/filedisser/998_dissertaciya_zelenova.pdf Abbreviations: ADHD: Attention deficit hyperactivity disorder; GTP: Guanosine triphosphate; PPI: Protein-protein interaction; RNA: Ribonucleic acid; SNP: Single nucleotide polymorphism. Parikshak NN, Gandal MJ, Geschwind DH. Systems biology and gene networks in neurodevelopmental and neurodegenerative disorders. Nat Rev Genet. 2015;16(8):441–58. Takumi T, Tamada K. CNV biology in neurodevelopmental disorders. Curr Opin Neurobiol. 2018;48:183–92. Willsey AJ, Morris MT, Wang S, et al. The psychiatric cell map initiative: a convergent systems biological approach to illuminating key molecular pathways in neuropsychiatric disorders. Cell. 2018;174(3):505–20. Vorsanova SG, Yurov YB, Iourov IY. Neurogenomic pathway of autism spectrum disorders: linking germline and somatic mutations to genetic-environmental interactions. Curr Bioinform. 2017;12(1):19–26. Huang GH, Sun ZL, Li HJ, Feng DF. Rho GTPase-activating proteins: regulators of rho GTPase activity in neuronal development and CNS diseases. Mol Cell Neurosci. 2017;80:18–31. Zhang L, Zhang P, Wang G, Zhang H, Zhang Y, Yu Y, et al. Ras and Rap signal bidirectional synaptic plasticity via distinct subcellular microdomains. Neuron. 2018;98(4):783–800.e4. Iourov IY, Vorsanova SG, Liehr T, Kolotii AD, Yurov YB. Increased chromosome instability dramatically disrupts neural genome integrity and mediates cerebellar degeneration in the ataxia-telangiectasia brain. Hum Mol Genet. 2009;18(14):2656–69. Levitt P, Campbell DB. The genetic and neurobiologic compass points toward common signaling dysfunctions in autism spectrum disorders. J Clin Invest. 2009;119(4):747–54.
Cristino AS, Williams SM, Hawi Z, et al. Neurodevelopmental and neuropsychiatric disorders represent an interconnected molecular system. Mol Psychiatry. 2014;19(3):294–301. Iourov IY, Vorsanova SG, Zelenova MA, Korostelev SA, Yurov YB. Genomic copy number variation affecting genes involved in the cell cycle pathway: implications for somatic mosaicism. Int J Genomics. 2015;2015:757680. Dharanipragada P, Vogeti S, Parekh N. iCopyDAV: integrated platform for copy number variations-detection, annotation and visualization. PLoS One. 2018;13(4):e0195334. Iourov IY, Vorsanova SG, Yurov YB. In silico molecular cytogenetics: a bioinformatic approach to prioritization of candidate genes and copy number variations for basic and clinical genome research. Mol Cytogenet. 2014;7:98. Kingsbury MA, Yung YC, Peterson SE, Westra JW, Chun J. Aneuploidy in the normal and diseased brain. Cell Mol Life Sci. 2006;63(22):2626–41. Iourov IY, Vorsanova SG, Yurov YB. Chromosomal variation in mammalian neuronal cells: known facts and attractive hypotheses. Int Rev Cytol. 2006;249:143–91. Huttlin EL, Bruckner RJ, Paulo JA, et al. Architecture of the human interactome defines protein communities and disease networks. Nature. 2017;545(7655):505–9. Yurov YB, Vorsanova SG, Iourov IY. Network-based classification of molecular cytogenetic data. Curr Bioinform. 2017;12(1):27–33. Schmidt M, Finley D. Regulation of proteasome activity in health and disease. Biochim Biophys Acta. 2014;1843(1):13–25. Morello G, Guarnaccia M, Spampinato AG, La Cognata V, D'Agata V, Cavallaro S. Copy number variations in amyotrophic lateral sclerosis: piecing the mosaic tiles together through a systems biology approach. Mol Neurobiol. 2018;55(2):1299–322. Fischer M. Census and evaluation of p53 target genes. Oncogene. 2017;36(28):3943–56. Chen J, Yu S, Fu Y, Li X. Synaptic proteins and receptors defects in autism spectrum disorders. Front Cell Neurosci. 2014;8:276. Rapaport D, Lugassy Y, Sprecher E, Horowitz M. Loss of SNAP29 impairs endocytic recycling and cell motility. PLoS One. 2010;5:e9759. Ramakrishnan NA, Drescher MJ, Drescher DG. The SNARE complex in neuronal and sensory cells. Mol Cell Neurosci. 2012;50(1):58–69. Yoon K, Gaiano N. Notch signaling in the mammalian central nervous system: insights from mouse mutants. Nat Neurosci. 2005;8(6):709–15. Yan Z, Kim E, Datta D. Synaptic actin dysregulation, a convergent mechanism of mental disorders? J Neurosci. 2016;36(45):11411–7. Iourov IY, Vorsanova SG, Voinova VY, Yurov YB. 3p22.1p21.31 microdeletion identifies CCK as Asperger syndrome candidate gene and shows the way for therapeutic strategies in chromosome imbalances. Mol Cytogenet. 2015;8:82. Iourov IY, Vorsanova SG, Zelenova MA, Vasin KS, Kurinnaia OS, Korostelev SA, Yurov YB. Structural variations of the genome in autistic spectrum disorders with intellectual disability. Zh Nevrol Psikhiatr Im S S Korsakova. 2016;116(7):50–4. Benger M, Kinali M, Mazarakis ND. Autism spectrum disorder: prospects for treatment using gene therapy. Mol Autism. 2018;9:39. Iourov IY, Zelenova MA, Vorsanova SG, Voinova VV, Yurov YB. 4q21.2q21.3 duplication: molecular and neuropsychological aspects. Curr Genomics. 2018;19(3):173–8. Heng HH, Horne SD, Chaudhry S, Regan SM, Liu G, Abdallah BY, Ye CJ. A Postgenomic perspective on molecular Cytogenetics. Curr Genomics. 2018;19(3):227–39. Iourov IY. Cytogenomic bioinformatics: practical issues. Curr Bioinform. 2019;14(5):372–3. Iourov IY, Vorsanova SG, Yurov YB. Pathway-based classification of genetic diseases. 
Mol Cytogenet. 2019;12:4. Mi Z, Guo B, Yin Z, Li J, Zheng Z. Disease classification via gene network integrating modules and pathways. R Soc Open Sci. 2019;6(7):190214. The authors thank Alexey A. Mikryukov from VNIIOFI (Moscow, Russia) for help in computational analysis. Our study has been partially supported by RFBR and CITMA according to the research project No. 18–515-34005. Prof. SG Vorsanova is supported by the Government Assignment of the Russian Ministry of Health, Assignment no. AAAA-A18–118051590122-7. Prof. IY Iourov is supported by the Government Assignment of the Russian Ministry of Science and Higher Education, Assignment no. AAAA-A19–119040490101-6. Mental Health Research Center, Moscow, 115522, Russia: Maria A. Zelenova, Yuri B. Yurov, Svetlana G. Vorsanova & Ivan Y. Iourov. Academician Yu.E. Veltishchev Research Clinical Institute of Pediatrics, N.I. Pirogov Russian National Research Medical University, Ministry of Health of the Russian Federation, Moscow, 125635, Russia. MAZ wrote the manuscript and performed bioinformatic analyses; YBY provided data and made significant theoretical input; SGV provided data and made significant theoretical input; IYI wrote the manuscript, performed bioinformatic analyses and made significant theoretical input. All authors read and approved the final manuscript. Correspondence to Ivan Y. Iourov. Informed consent was obtained from all individual participants included in the study. Zelenova, M.A., Yurov, Y.B., Vorsanova, S.G. et al. Laundering CNV data for candidate process prioritization in brain disorders. Mol Cytogenet 12, 54 (2019). doi:10.1186/s13039-019-0468-7
Part-alignment procedure on coordinate measuring machine (CMM) for dimensional and geometrical measurements Part-alignment is a compulsory process that needs to be performed for CMM measurements. This process is performed right after the qualification process of the CMM probing system. Only after part-alignment is performed can dimensional and geometrical measurements by a CMM be carried out. Wahyudin Syam, Aug 27, 2022 The main purpose of the part-alignment process in CMM measurement is to construct the coordinate transformation chain from the machine coordinate system (MCS) of a CMM to the work-piece coordinate system (WCS) on a measured part. After the transformation chain of coordinate systems from the MCS to the WCS is established, the coordinate reference for CMM measurements can be moved to the WCS. The main advantage of using the WCS for measurement is that measurement errors due to part placement and fixturing errors (even at micrometre scale) can be avoided. The reason is that, in reality, no matter how precisely we place our part on a CMM table for measurement (even with the most expensive fixturing system), at micro- or sub-micrometre scale the part moves and deviates from its intended position due to, for example, too large a fixturing force. That small movement will be significant from an MCS point of view, but not that significant from a WCS point of view. Figure 1 below shows the illustration of the part-alignment process on a CMM. In figure 1, the transformation chain of coordinate systems from the MCS to the WCS is established with the part-alignment process. In other words, part-alignment can be viewed as the way a CMM knows the precise and actual position and orientation of a workpiece to be measured that is placed on the CMM measurement table. Figure 1: Part-alignment process to construct or establish a transformation coordinate system from MCS to WCS of a CMM. With the establishment of the WCS, the CMM can know the actual position and orientation of a part to be measured. Note that to understand the part-alignment process in CMM measurements, knowledge about the homogeneous matrix and its operations, and about coordinate transformation systems, has to be acquired first. (Note: All 3D illustrations were created using the CATIA 3D modelling software.) Let's go into the details! The process to reconstruct a work-piece coordinate system (WCS) As briefly mentioned before, from the perspective of the CMM, the purpose of part-alignment is that the CMM knows the actual position and orientation of a measured workpiece. That is, part-alignment is a way for the CMM to be able to "see" the workpiece. From the perspective of precision, the purpose of part-alignment is to move the reference coordinate system for measurement from the MCS to the WCS. By doing this reference roto-translation to the WCS, fixturing-related errors (including part placement) can be eliminated, as the actual position and orientation (which include the placement deviation) of the workpiece after fixturing can be known. It is important to note that the centre point of the WCS reference system should be placed on the measured workpiece (usually at a corner for prismatic parts) and not elsewhere. A common mistake found in daily CMM operation is to place the WCS on, for example, the fixtures.
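As a small illustration of this transformation chain (a minimal Python/NumPy sketch; the WCS origin, axis directions and the probed point are invented numbers, not values from this post), once part-alignment has established where the WCS sits in machine coordinates, a single homogeneous roto-translation matrix converts every probed point from the MCS into the WCS:

```python
import numpy as np

# Hypothetical WCS expressed in the machine coordinate system (MCS):
# its centre point and two of its unit direction vectors.
origin = np.array([120.0, 85.0, 40.0])   # WCS centre point measured in MCS (mm)
n1 = np.array([1.0, 0.0, 0.0])           # unit vector 1
n2 = np.array([0.0, 1.0, 0.0])           # unit vector 2
n3 = np.cross(n1, n2)                    # unit vector 3 = N1 x N2

# Homogeneous 4x4 roto-translation mapping MCS coordinates into WCS coordinates.
R = np.vstack([n1, n2, n3])              # rotation part: rows are the WCS axes
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = -R @ origin

p_mcs = np.array([130.0, 90.0, 42.5, 1.0])   # a probed point, homogeneous MCS coordinates
p_wcs = T @ p_mcs
print(p_wcs[:3])                             # the same point relative to the work-piece
```

Because all subsequent measurements are expressed in this work-piece frame, a small placement error of the part only changes the matrix, not the measured dimensions.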
To define the position (location) and orientation of a measured workpiece placed on a CMM measurement table, there are two main parameters that need to be quantified during a part-alignment process:
The spatial coordinate of the centre point of the WCS (to know the location/position of the part)
The unit vectors in three directions at the centre point (to know the orientation of the part)
To obtain the centre point and unit vectors of a WCS during a part-alignment process, we need to quantify five parameters, which are:
$X$-coordinate of the centre point of the WCS
$Y$-coordinate of the centre point of the WCS
$Z$-coordinate of the centre point of the WCS
Unit vector 1 $(N_{1})$
Unit vector 2 $(N_{2})$
Note that the third unit vector (unit vector 3) can be obtained by taking the cross product of unit vector 1 and unit vector 2 (which have been determined/quantified from the part-alignment process). Hence, all three unit vectors needed to know the orientation of a part in 3D can be obtained. The cross product to get unit vector 3 is:
$$ N_{3} = N_{1} \times N_{2} $$
Figure 2 below shows the process of establishing a WCS from a part-alignment process for CMM measurement. In figure 2, a block workpiece is shown, which is a common case in CMM measurement as well as a useful reference to explain how a part-alignment process is performed. Figure 2: The process to establish or quantify a WCS for tactile-CMM measurement. (a) determining the centre point of the WCS and the three unit vectors and (b) the quantified WCS. To determine the $X,Y,Z-$ coordinates for the centre location of the WCS, the centre location commonly can be:
The intersection between three planes (the most common)
The centre point of a circle
The centre point of a sphere
The intersection between two axes
The intersection between an axis and a plane
To determine unit vector 1 $(N_{1})$ and unit vector 2 $(N_{2})$ for the orientation of the WCS, these two vectors commonly can be (or a combination of):
The normal vector of two planes
The direction vector of a cylinder
The direction vector of the edge of a block
Figure 2 above shows how to determine a WCS from the intersection of three planes and the normal vectors of two of the three planes. This is the most common method to determine a WCS. In figure 2 above, the WCS is placed on one of the top corners of the block. To realise this WCS, points on surfaces or planes 1, 2 and 3 are taken or probed. The number of points to be taken on each plane should be at least three. As a rule of thumb, a minimum of four points should be probed on each plane so that an averaging effect during the least-squares fitting can be obtained. The least-squares fitting is the process of associating a plane geometry to the probed points of the planes. The results of the fitting process on the probed points are a point on the plane and a unit vector that has its origin at that point (see figure 2a-right above). Hence, when the point and the unit normal vector (originating from the point) on each plane have been determined, the WCS can be determined by calculating the intersection point of the three (fitted) planes, as shown in figure 2b above. This intersection point represents the location of the workpiece with respect to the MCS of the CMM in use. Then, two normal vectors from two of the three planes are used as the unit direction vectors of the WCS centred on the intersection point. The third unit direction vector is obtained by a mathematical calculation, namely the cross product of the two unit direction vectors; a small numerical sketch of this whole procedure is given below.
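The following sketch (Python/NumPy; the probed points are fabricated for illustration, and the fitting is a plain least-squares/SVD association rather than any particular CMM vendor's algorithm) shows the three-plane procedure end to end: fit a plane to each set of probed points, intersect the three fitted planes to obtain the WCS centre point, and take the cross product for the third unit direction vector.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through probed points: returns (point_on_plane, unit_normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # smallest-variance direction = plane normal
    normal = vt[-1]                            # sign of this normal is arbitrary here
    return centroid, normal / np.linalg.norm(normal)

def wcs_from_three_planes(pts1, pts2, pts3):
    """WCS centre point and unit direction vectors from points probed on three planes."""
    c1, n1 = fit_plane(pts1)
    c2, n2 = fit_plane(pts2)
    c3, n3 = fit_plane(pts3)
    # Centre point: intersection of the three fitted planes.
    A = np.vstack([n1, n2, n3])
    d = np.array([n1 @ c1, n2 @ c2, n3 @ c3])
    origin = np.linalg.solve(A, d)
    # Orientation: normal of plane 1, plane 2's normal orthogonalised against it,
    # and the cross product N3 = N1 x N2 for the third direction.
    e1 = n1
    e2 = n2 - (n2 @ e1) * e1
    e2 = e2 / np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return origin, np.vstack([e1, e2, e3])

# Fabricated probed points (at least four per plane, as recommended above), in MCS coordinates.
top   = [(0, 0, 10.00), (50, 0, 10.01), (50, 30, 9.99), (0, 30, 10.00)]
front = [(0, 0.00, 0), (50, 0.01, 0), (50, 0.00, 10), (0, 0.01, 10)]
side  = [(0.00, 0, 0), (0.01, 30, 0), (0.00, 30, 10), (0.01, 0, 10)]
origin, axes = wcs_from_three_planes(top, front, side)
print(origin)   # WCS centre point (a corner of the block) expressed in the MCS
print(axes)     # rows: the three WCS unit direction vectors
```

In practice the sign of each fitted normal would also be chosen consistently (for example, pointing away from the material), which this sketch leaves aside.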
How to correctly determine the work-piece coordinate system (WCS)? Examples of correct and incorrect WCS determination are shown in figure 3 below, using a block workpiece or part. A correct WCS determination is when all 5 (+1) elements of the WCS, that is the WCS centre point $X,Y,Z-$ coordinates and the three unit normal vectors $( N_{1}, N_{2}, N_{3} = N_{1} \times N_{2} )$, have been fully defined. In figure 3-top row, it can be observed that all elements of the WCS have been correctly defined. Whether the WCS is "correctly defined" is checked by evaluating a change of position and orientation of the part. When the part position and orientation are changed, the WCS can represent these changes. That is, the WCS position and orientation also change following the part movement and rotation. However, in figure 3 bottom-row, for an incorrect WCS determination, when the WCS is not correctly defined, that is when not all of the 5 (+1) elements of the WCS are fully defined, the WCS cannot represent a change of the position and orientation of the part. That is, the WCS will look the same even though the part position and orientation change, as shown in figure 3-bottom row. Figure 3: Example of a correct WCS determination (top) and an incorrect WCS determination (bottom). The determination of the work-piece coordinate system (WCS) of parts with complex shape Figure 4 below shows a practical and real example of how to correctly determine a WCS and select the required features on a part with complex shape or geometry. In figure 4, since the part has a complex geometry, there are no three different planes with different unit normal vector directions that could be selected in order to establish the WCS on the part. Hence, for this complex part, the procedure to establish the centre location and unit vector directions of the WCS is as follows (following figure 4): 1. The centre point of the WCS (the $X,Y,Z-$ coordinates) is selected as the intersection point between the cylinder axis and plane 2 (see figure 4 below). Hence, plane 2 and the cylinder have to be probed to collect points to numerically reconstruct or fit the plane 2 and cylinder geometry. 2. The two unit normal vectors are determined as the unit directional vector of the cylinder axis (see figure 4) and the normal vector of plane 1 shown in figure 4. 3. The third unit normal vector is determined as the cross product of the two unit normal vectors that have been defined before, that is $N_{3} = N_{1} \times N_{2}$. From the procedure above, all required elements or features to correctly establish the WCS can be obtained. Figure 4: Example of a correct WCS determination for a part with complex shape. The part-alignment process in CMM measurement is instrumental, because this alignment process is needed so that a CMM can locate or "see" the position and orientation of a measured part placed on the CMM's measurement table. Besides knowing where the part on the CMM is, the part-alignment process also removes error sources originating from placement and fixturing-related errors. The reduction of these errors improves the accuracy of CMM measurements.
The main goal of the part-alignment process is to establish a WCS on the measured part. In this post, the procedure to determine a correct WCS is presented in detail. In addition, real practical examples on parts with simple and complex geometry are presented so that readers can gain a practical understanding of the part-alignment process on a CMM.
Brain connectivity extended and expanded Obrad Kasum, Edin Dolicanin, Aleksandar Perovic & Aleksandar Jovanovic EPJ Nonlinear Biomedical Physics volume 3, Article number: 4 (2015) The article is focused on brain connectivity extensions and expansions, with the introductory elements in this section. In Causality measures and brain connectivity models, the necessary basic properties demanded in the problem are summarized, followed by a short introduction to Granger causality, Geweke developments, the PDC and DTF measures, and short reflections on the computation and comparison of measures. Analyzing model semantic stability, certain criteria are mandatory, formulated in preservation/coherence properties. In the sequel, a shorter addition to our earlier critical presentation of brain connectivity measures, together with their computation and comparison, is given, with special attention to Partial Directed Coherence (PDC) and the Directed Transfer Function (DTF), complementing earlier exposed errors in the treatment by the highly renowned authors and promoters of these broadly applied connectivity measures. Somewhat more general complementary methods are introduced in brain connectivity modeling in order to reach faithful and more realistic models of brain connectivity; this approach is applicable to the extraction of common information in multiple signals, when those are masked by, or embedded in, noise and are elusive for the connectivity measures in current use; the methods applied are Partial Linear Dependence and the method of recognition of (small) features in images contaminated with noise. Results are illustrated with earlier published experiments of renowned authors, together with experimental material illustrating method extension and expansion in time. Critical findings, mainly addressing connectivity model stability, together with the positive effects of method extension with weak connectivity, are summarized. Granger's method (some extended applications in [1-9]), [10-12], has been in the focus of extensive research in neuroscience, expanded in various developments. We list some of the standardized connectivity measure terminology [13-20], mentioning Granger – Geweke counterpart measure couples, contrary to the Baccala - Sameshima [18] concept of causality measure counterpart. These renowned leading authors refer to "proper frequency domain counterparts to Granger causality". Our correction is based on Geweke's fundamental relation between temporal and frequency domain causality measures. We shortly focus our attention on the earlier analyzed measure comparisons, adding important argumentation. We introduce the methods of Partial (Linear) Dependence – PLD and (image) small object recognition in order to deal with weak brain connectivity – connectivity elusive or undetectable by the connectivity measures or heavily masked by noise, hopefully extending standard methods. These methods are applicable both to frequency distributions, spectral-like objects, and to frequency-time distributions, e.g. spectrograms, which expand the time-point view to a dynamics-in-time view. This article is partly an extension of our work [4], from which we reproduce fragments necessary for developments and discussion here. In some circumstances brain processes might exhibit behavior similar to stochastic systems or fluid dynamics.
No matter how fruitful such analogies and similarities might be, we had better keep some reserve before subjecting the brain to either statisticians or plumbers. We should not forget that the brain is a highly complex information processing system, with rich information flow between a large number of co-processing points, which is our basic initial hypothesis, better: axiom-1. Then, obviously, the dynamics of connectivity patterns has an essential role, which includes connectivity patterns and their time switching as well. With broader application domains which include neurology research and practice, expanding the sophistication of the involved models which already operate with connectivity arguments in the most sensitive segments, strongly influencing expert decision making, the demand for careful critical reinvestigation of theory and application has become continuously necessary. Establishing neurological disorders, psychological evaluations and highly confident polygraphy are all of crucial significance for the subjects involved. Finally, at a recent conference we witnessed an expert's elaboration of evaluations concerning the level of a patient's brain damage after a stroke, with consciousness lost according to the contemporary criteria, with bad prognosis and consequential termination reasoning and planning. We know that we do not know the circumstances well enough to produce categorical conclusions in such matters. First, we observe that in contemporary connectivity modeling some procedures, computations and estimates need increased care in order to lead towards correct conclusions. It is shown that the neighborhood of zero is of accented importance in such evaluations and that unification of values with a difference below zero thresholds is necessary as the first step in computation and comparison evaluation. Harmonization of thresholds corresponding to measures involved in comparisons is an open issue requiring mathematically reasonable solutions. Computational stability is a general demand everywhere. Varying fundamental parameters in small neighborhoods of elsewhere published and established values, we perform a detailed analysis of the semantic stability of the deduced published exemplary connectivity models. When we face computational instability, if we deal with models of the real world, it immediately generates semantic instability and often a singularity. In this context, if connectivity graphs essentially change when computational differences of arguments occur within computational zero, then this is immediately reflected semantically as proportionally unstable maps of brain connectivity structures. This is not acceptable in any interpretation of experimental data and questions the applicability of the brain connectivity models. We extract some characteristic examples involving wrong logic and those based on reasoning with insufficient care and precision. Analysis of the operators involved - used in the measure computation and comparison by renowned authors - proved that the spectral maximum selected as the representative invariant for both DTF and PDC measures before their comparison is not justified and might lead to not well founded or invalid conclusions. Suggestions for the improvement of the used simplification operators in the computation of measures and for complementary comparisons are given.
Second, there is an attitude with respect to connectivity, present at large, where the thicker connectivity arrows are proportionally more important than the thinner ones, with discrimination made following the corresponding signal intensity-energy level. If we remember the above stated axiom, we have to become more sensitive towards the connectivity concept in general and rephrase the importance criterion, or rather erase it completely. In information processes, a weaker energy process can be much more important than those at higher energies. Also, short or ultra-short messages might precede hierarchically the longer transports. Consequently, when building a brain connectivity model, we have to take into consideration all discernible data related to connectivity. Even more, we should accept an extra hypothesis, axiom-2: there are processes related to connectivity which are indiscernible or hardly discernible from noise, with unknown importance for the brain functioning models established by connectivity measures in current use. Connectivity as currently performed and understood omits the essential temporal dimension. The conclusive connectivity graphs, the aims in connectivity estimations, are to be replaced or rather expanded in the time dimension, since brain dynamic changes can be essential at the millisecond scale. This is for us axiom-3, which has to be respected. Even for short events, these graphs change or massively change in time, and it is necessary to integrate their time dynamics into the model for it to make sense, strongly analogous to the relation between an individual spectrum and a time-spectrum, the spectrogram. The former makes sense only when there is no intrinsic dynamics, thus, in highly stationary processes only. This does not apply to e.g. music: one cannot be aware of a melody, nor detect it, applying a single spectrum. Finally, let us note that connectivity graphs present in recent research reports are usually structures with a highly limited number of nodes, essentially far under the currently achieved resolution of signal acquisition sensors (EEG, MEG, mixes). With the earlier listed simplifications we are offered highly reduced graphs as brain connectivity models, which leads to an oversimplified understanding of brain functioning. Certainly, in a very short time, what we depict with a 6 node or restricted 20 node graph today, with stronger currents as essential, thus with up to 20 directed graph links, when expanded with real weaker connectivity within the currently achieved resolution, with added connectivity time dynamics in the range of millisecond resolution, will be represented with hundreds of nodes and a higher number of more realistic links, probably easily reaching $2^{16}$ or $2^{20}$ links. Such a combinatorial explosion is realistic. The forthcoming large size models would have to be generated, inspected, analyzed, compared, classified and monitored automatically, offering synthesis in higher abstraction synthetic invariants to experts. Who is modelling Internet functional or dysfunctional connectivity with up to 20 links? The brain is much more complex than the Internet. We are approaching the end of the connectivity modelling golden age, the end of simple functionality explanations; we had better be prepared for the change and work on it.
We present connectivity measure terminology (expanded in the Appendix), then our enhancements applicable in weak brain connectivity, where the standard connectivity measures remain undecidable, followed by examples with published measure comparisons with extended scrutiny and examples of applications of methods focused on weak brain connectivity, ending with conclusions integrated into a Proposition. Signals and software used in experiments shown here are available at our web: http://www.gisss.math.rs/. Causality measures, brain connectivity models Initial points After detailed inspection of arguments involved in analysis and comparison of certain mostly used connectivity measures in the current literature, we propose inclusion of the following points when building connectivity models: I1. connectivity estimation separated from other properties of interest, e.g. "connectivity strength"; I2. beside directed connectivity, separate treatment of bare connectivity – with no direction indication, when direction is more difficult to determine, and as a correctness test in the graphs deduced; I3. more precise calculations and aggregations; I4. scrutiny of involved operators; I5. appropriate changes in the calculation and comparison procedures resulting in the more precise modeling of the connectivity structures and properties; I6. special attention to the thresholds involved and related numeric zero which is basic for all other conclusions; I7. stability analysis; stability of computations and model in the neighborhood of zero; stability wrt. all involved parameters. I8. model stability in time; I9. differential connectivity: inspection of deduced connectivity models by comparison to the structures deduced by other faithful connectivity measures and methods; I10. harmonization of basic parameters of involved measures; I11. proper definition of the reduction level (rounding filtered or "negligible" contents); I12. connectivity graphs time expansion; I13. alternative or additional approaches in model integration; Granger causality, Geweke developments, PDC, DTF All details on the method are available in the cited and other literature. All definitions and elements are briefly given in Appendix. When we have three variables x(t), w(t) and y(t), if the value of x(t+1) can be determined better using past values of all the three, rather than using only x and w, then it is said that the variable y Granger causes x, or G-causes x. Here w is a parametric variable or a set of variables. In the bivariate case, G-causality is expressed using linear autoregressive mode $$ \begin{array}{l}x(t)={\displaystyle \sum_{j=1}^p}{a}_{11}(j)x\left(t-j\right)+{\displaystyle \sum_{j=1}^p}{a}_{12}(j)y\left(t-j\right)+{E}_1(t)\\ {}y(t)={\displaystyle \sum_{j=1}^p}{a}_{21}(j)x\left(t-j\right)+{\displaystyle \sum_{j=1}^p}{a}_{22}(j)y\left(t-j\right)+{E}_2(t),\end{array} $$ where p is the order of linear model and E i are the prediction errors. The model consists of the linear recursive and the stochastic component. Thus, if coefficients of y in the first equation of (1) are not all zero, we say that y G-causes x; similarly for variable y. The multivariate formulation was exploited more by Granger followers, Geweke [13,14] and others (e.g. 
[15]), rewriting (1) as
$$ \boldsymbol{x}(t)={\displaystyle \sum_{j=1}^p}\boldsymbol{A}(j)\boldsymbol{x}\left(t-j\right)+\boldsymbol{E}(t), $$
where x(t) = (x 1(t), …, x n (t)) is a vector of variables, A(j), j = 1, …, p are coefficient matrices defining variable contributions at step t − j, and E(t) are prediction errors. The condition on this model is that the covariances of variables are stationary, which is not always easy to assess. With other contributions, Geweke introduced the spectral form of connectivity measures \( {I}_{j\to i}^2 \) (λ), a G-causality measure from channel j to i at frequency λ (now G- should be doubled), as well as a set of other suitable measures, which were all popular among the followers. He introduced conditional causality; we mention here his linear causality F y → x of y to x; in the frequency domain he introduced the measure of linear causality at a given frequency, f y → x (λ), which was followed by other similar or very similar concepts, among which the directed transfer function DTF ij (λ) and the partial directed coherence PDC ij (λ), measuring connectivity from channel j to i at frequency λ, gained major attention and application. After numerous analyses and comparisons of these two measures, e.g. [17,18], later in [21] the authors of PDC defined information PDC and DTF, aiming at measuring the information flow between nodes j and i at frequency λ, the measures iPDC ij (λ), iDTF ij (λ). They state a theorem in [21] with nine equivalent conditions characterizing the absence of connectivity between two nodes j and i, of which we reproduce conditions 4-6:
$$ \begin{array}{l} 0)\ \text{nodes } j \text{ and } i \text{ are not connected};\\ a)\ i\mathrm{PDC}_{ij}(\lambda)=0,\ \forall \lambda \in [-\pi, \pi);\\ b)\ i\mathrm{DTF}_{ij}(\lambda)=0,\ \forall \lambda \in [-\pi, \pi);\\ c)\ f_{y\to x}(\lambda)=0,\ \forall \lambda \in [-\pi, \pi). \end{array} $$
The theorem is stated for the two-variable case; for the general case, the authors announced a forthcoming publication. Otherwise, we note that all important conclusions in their earlier papers, especially [18], are reaffirmed again in [21]. Computation and comparison of measures Certain normalizations are often necessary before measure comparisons, when we estimate their difference at a point or on a subset of a common domain [4]; e.g. for normalized measures, for compatibility estimation we could define
$$ \begin{array}{l}mc\left(\mu, \nu, \xi \right)=\left|\mu \left(\xi \right)-\nu \left(\xi \right)\right|,\\ {}m{c}^{*}\left(\mu, \nu, D\right)={\displaystyle \underset{D}{\int }}\left|\mu \left(\xi \right)-\nu \left(\xi \right)\right|d\xi .\end{array} $$
The measure comparison provides a similarity degree - a metric in a suitable space of measures. After analysis of published measure comparisons we noticed the presence of certain operators which we expose here. Measuring similarity of measures (involving other operators [4]): quite generally (observing the needs of contemporary practice) we define similarity for measures μ and ν by a scheme
$$ Sim\left(\mu, \nu, i,j,D\right)=P\big(sim\left({N}_1\left(\mu, i,j,D\right),{N}_2\left(\nu, i,j,D\right)\right)\big), $$
where N k are normalization operators, sim a basic similarity, P an external operator (e.g. posterior grading), i, j is the graph link from j to i, and D the parameter-set for N k ; e.g. the mc and mc* measures are special cases of (5).
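For orientation, a minimal numerical sketch of the time-domain quantities just introduced (Python/NumPy; plain least-squares fitting of model (1), with an invented model order, seed and coupling coefficients - not the estimators used in [13-18]): Geweke's linear causality F y → x is approximated as the log ratio of residual variances of the restricted and the full autoregression.

```python
import numpy as np

def ar_residual_variance(target, regressors, p):
    """OLS fit of target(t) on p past values of each regressor; returns the residual variance."""
    n = len(target)
    rows, ys = [], []
    for t in range(p, n):
        rows.append([r[t - j] for r in regressors for j in range(1, p + 1)])
        ys.append(target[t])
    X, Y = np.asarray(rows), np.asarray(ys)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return (Y - X @ beta).var()

def geweke_f(x, y, p=5):
    """Time-domain causality F_{y->x}: log ratio of restricted and full residual variances."""
    var_restricted = ar_residual_variance(x, [x], p)    # past of x only
    var_full = ar_residual_variance(x, [x, y], p)       # past of x and past of y
    return np.log(var_restricted / var_full)

# Toy data: y drives x with a one-sample delay; all coefficients are invented.
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
x = 0.8 * np.roll(y, 1) + 0.3 * rng.standard_normal(2000)
print(geweke_f(x, y))   # clearly positive: y G-causes x
print(geweke_f(y, x))   # close to zero: x does not G-cause y
```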
Choosing the operators properly would contribute to estimation quality; the opposite will generate erroneous reasoning. Computational stability implying semantic stability, depending on involved operators is demanding, before deriving any conclusions. Comparisons in three predicates: basic connectivity between two nodes, directed connectivity, connectivity-grading, should be desagregated- done separately. Previous considerations should include time dynamics which is omitted here while staying closer to the existing practice in the treatment of standard brain connectivity. Preservation/coherence properties Measures satisfy preservation properties, e.g. monotony, cardinal monotony, translation invariance, some additivity, approximations. The following semantic stability criteria STC are mandatory. P1. substructure invariance, i.e. restriction of a measure to a substructure should not change its range; thus, measure values on the intersection of substructures remains coherent. P2. Structural stability; measure computation and comparisons/similarity estimates should be invariant to some degree of fluctuations of the involved operators (here P, sim, N k ). These conditions should secure measure stability in extended, repeated and similar experiments. P3. measure comparison should be stable in all involved parameters. P4. continuity in models - similarity measures: small must remain small and similar has to remain similar in measure comparisons. The small difference of argument implies bounded difference of the result; this applies to predicate connected, with small shift of argument. Structural properties of measures are determined on small objects - in the zero neighborhoods, whence the measure zero ideal is of key importance, which is the reason to list separately the zero-axioms, ZAx, for either a single considered measure or for a set of compared measures: ZAx: Z0. substructure partitioning invariance (measure restriction to a substructure remains coherent); Z1. fluctuations of operators involved in measure computation and comparison need be tolerant (continuity); Z2. in similar circumstances numeric zero (significance threshold) should be stable quantity (to allow comparability of results); Z3. comparison of a set of measures needs prior unification (argumentation necessary) of their zero thresholds (for otherwise, what is zero for one is not zero for other measures; consequently, the measure values which are identical for one measure, are discerned as different by other measures; that must cause problems); Z4. in similar circumstances grading should be stable quantity; Z5. measure values which are different by ≤ numeric-zero should remain identical in any posterior computation/grading, if applied (this is in accordance with the prior congruence on the ideal of zero measure sets); Z6. values in any posterior computation/grading (if applied) should differ by no less than numeric-zero and grades should be of unified diameter; in this way values in posterior grading range are harmonics of numeric-zero; Z7. final grading as a (small) finite projection of normalized range [0, 1] needs some conceptual harmonization with the standard additivity of measures; this step should involve fuzzification; Z8. grading should be acceptable by various aspects present in the interpretation of related experimental practice (that means that the picture obtained using a projection/grading of [0, 1] range should not semantically be distant from the original picture – based on the [0, 1] range with, e.g. 
a sort of continuous grading); Z9. Connectivity graphs time expansion; it is necessary to introduce time dynamics in these observations. Mathematical principles must be respected for consistency preservation; computations in repeated and similar experiments have to be comparable and stable. Measure computation and comparisons are complex, consisting of steps, some of which usually do not commute, demanding care and justification. We will present two methods applicable to the weak brain connectivity, the case when a set of signals share a common information, which is either hardly detectable or even undetectable by direct observation or the connectivity measures in current use. The first is based on rather pragmatic property, partial linear dependence, PLD: for the set of functions (signals or vectors) S = {f i : i ∈ I} with the common domain, the set of restrictions to a subdomain is linearly dependent (wrt. some usual scalar product), while the complementary restrictions are linearly independent. PLD could be used to extract the common information easier. Then we might be looking for the maximal PLD sets, corresponding to different common information. If we generalize this slightly we come to PD in case when we have dependence which is nonlinear. The second approach exploits the methods originally introduced to eliminate clutter in radar images and to enhance small objects in images and is applicable to both signals and images. Since spectrograms are a sort of images, we can apply those methods to the spectrograms, spectrogram composites, or somewhat more general objects of the similar kind. Both methods are applicable to the noise contaminated signals. PLD - PD If applied to a given frequency, a reduced size set of frequencies, or a known frequency band, this method can supply good answers with not really complex calculations. Similar can be the case for the frequency distributions (spectra, composites, connectivity measures, spectrograms) - parts containing frequencies with poor signal to noise ratio, especially when multiple spectra or spectrograms are available. Alternatively, if we start with independent sequences, feeding all of them with relatively small magnitude process, we should be able to establish the threshold level from the lower side, i.e. when the shared information becomes perceptible. By a MS – a modulation system we designate the usual meanings of signal modulation, i.e. coding or fusion of information process with some (set of) base function (carrier). In technical practice, an MS can be of any usual sort as AM, FM, PCM, BFH, some of their meaningful combinations, or generalized technically and mathematically. Thus, $$ MS:\left(F,S\right)\to H, $$ where F is a subset of B - a system of base functions, while S = {g 1, g 2, …, g n } is a set of information contents, H the fusion output, all components of F, S, H are time functions, in practice - finite sequences. In simplest case F, S, H are all singletons. Obviously, a brain connectivity path might accommodate broader activities, inclusive lower frequency and high frequency information patterns. For two functions f 1 and f 2 we say that they are independent wrt. causality measure μ, if $$ \mu \left({f}_1,{f}_2\right)=0. $$ In practice, for experimental f 1, f 2, that would be $$ \mu \left({f}_1,{f}_2\right)<\varepsilon\ \mathrm{f}\mathrm{o}\mathrm{r}\ \mathrm{all}\ \varepsilon >0, $$ down to the numeric (noise or statistical) threshold. 
If we inject/modulate a sequence G of information sequences, into a couple of μ - independent f 1 and f 2, resulting in f 1 ' and f 2 ' and if $$ \left\Vert G\right\Vert < \min \left(\left\Vert {f}_1\right\Vert, \left\Vert {f}_2\right\Vert \right) $$ for a suitable norm, where the norm of G could be e.g. $$ \left\Vert G\right\Vert = \sup \left\{\left\Vert g\right\Vert :g\in S\right\} $$ we can well have $$ \mu \left({f}_1^{\hbox{'}},{f}_2\hbox{'}\right)\approx 0, $$ while f 1' and f 2' share a vector of information sequences G. The case becomes more complicated if f 1' and f 2' are modulated using different MS's, or have different delays involved in modulation of a set G, or if we deal with spectral features with non-constant frequencies, or if modulation processes involve some headers - protocols. Let us pretend that the PD is PLD, thus simplifying expressions. Similarly, we introduce the local P(L)D, as the (linear) dependence in a subset of a set of all coordinates, in our examples in the frequency distribution form (e.g. power spectra or composite spectra). A set G of time functions/sequences is locally (linearly) dependent at frequency λ (∈SR – a frequency domain [0,Nf] (Niquest frequency) for sampling rate SR) $$ \pi \left(G,\lambda \right)=\Pi \left\{F\left(g,\lambda \right):g\in G\right\}>0, $$ where F(g, λ) is e.g. the λ-th coordinate of Fourier (power) spectrum of g. Then for Fourier spectrograms of G in time interval T, define $$ {\pi}_s\left(\mathrm{G},\lambda, T\right)=\Pi \left\{S\left(g,\lambda, T\right):g\in G\right\}>0, $$ where S(g, λ, T) is time integral of F(g, λ) in the epoch (scrolling time interval) T, i.e. the integral of the time trace of the spectral line at λ: $$ S\left(g,\lambda, T\right)={\displaystyle \underset{T}{\int }}F\left(d,\lambda \right)dt. $$ The condition (9) expresses that all spectra of elements of G have a non zero λ coordinate (or its time trace integral). For a fragment Λ of the spectral domain SR, select $$ \pi \left(G,\Lambda \right)={\displaystyle \underset{\Lambda}{\int }}\pi \left(G,\lambda \right)d\lambda, $$ the restriction to Λ of the product of all power spectra of elements of G, and π s (G,Λ,T) integrating π s (G, λ, T) over Λ likewise: $$ {\pi}_s\left(G,\Lambda \right)={\displaystyle \underset{\Lambda}{\int }}{\pi}_s\left(G,\lambda, T\right)d\lambda . $$ Besides we define simple quotient measures for power spectra, energy density indices for λ, $$ ED\left(g,\lambda \right)=\frac{F\left(g,\lambda \right)}{{\displaystyle {\int}_{SR}}F\left(g,\lambda \right)d\lambda }, $$ the energy at the frequency λ relative total spectral energy and the similar index for spectral neighborhood of λ, i.e. for Λ subset of SR (the spectral resolution - the set of all frequencies in the spectrum). Thus, with $$ ED\left(g,\lambda, \Lambda \right)=\frac{F\left(g,\lambda \right)}{{\displaystyle {\int}_{\Lambda}}F\left(g,\lambda \right)d\lambda }, $$ where Λ is a subset of SR, some neighborhood of λ. Clearly, the more prominent (globally) the spectral line at λ, the higher the first index; the more prominent spectral line at λ locally (within Λ), the higher the second index. 
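As a rough illustration of how these composite indices behave in practice, the following sketch (Python/NumPy; the sampling rate, the 13 Hz toy component and all amplitudes are invented for the demonstration) computes the coordinate-wise product π(G, λ) of normalized power spectra for a set of channels and a local energy-density index over a band: a component shared by every channel survives the product, while channel-independent noise is flushed towards zero.

```python
import numpy as np

def power_spectra(signals):
    """Power spectra of equally sampled channels (rows of `signals`)."""
    return np.abs(np.fft.rfft(signals, axis=1)) ** 2

def composite_spectrum(spectra):
    """pi(G, lambda): coordinate-wise product of the (normalized) channel spectra."""
    norm = spectra / spectra.sum(axis=1, keepdims=True)   # avoid one strong channel dominating
    return norm.prod(axis=0)

def local_energy_density(spectrum, band):
    """ED restricted to a band (slice of bins): each bin relative to the band's total energy."""
    seg = spectrum[band]
    return seg / seg.sum()

# Toy example: a weak 13 Hz component, invisible in any single raw trace, shared by all channels.
fs, n, channels = 256, 4096, 6
t = np.arange(n) / fs
rng = np.random.default_rng(1)
sigs = rng.standard_normal((channels, n)) + 0.1 * np.sin(2 * np.pi * 13 * t)
pi = composite_spectrum(power_spectra(sigs))
freqs = np.fft.rfftfreq(n, 1 / fs)
print(freqs[np.argmax(pi)])                    # the shared 13 Hz line dominates the composite
band = slice(np.searchsorted(freqs, 10), np.searchsorted(freqs, 16))
print(local_energy_density(pi, band).max())    # local index in a neighborhood of 13 Hz
```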
Similarly, for products over g ∈ G, we define ΠED(G, λ) and ΠED(G, λ, Λ):
$$ \varPi ED\left(G,\lambda, \Lambda \right)=\frac{\pi \left(G,\lambda \right)}{\pi \left(G,\Lambda \right)}, $$
$$ \varPi ED\left(G,\lambda \right)=\frac{\pi \left(G,\lambda \right)}{\pi \left(G,\mathrm{S}\mathrm{R}\right)}, $$
and for spectrogram-like composites
$$ \varPi EDS\left(G,\lambda, T\right)=\frac{\pi_s\left(G,\lambda, T\right)}{\pi_s\left(G,\Lambda, T\right)}, $$
with ΠEDS(G, T) = ΠEDS(G, SR, T). In practice, all the above integrals become finite sums; we might wish to use different indices on different occasions, which is why we distinguish them all. Obviously, we can have situations with the global index negligible while the local index is perceptible. Obviously, we can easily extend the above definitions to include some modulation fluctuations, or we could rephrase the concepts in order to better fit specific needs. In our practice that means that we can search for sets, subsets of E, the signals - electrode network measurements - which contain the same frequency component, and select the optimal subsets in P(E). Clearly, the larger the set of signals sharing common information, the easier its extraction should be. However, with E at 300 nowadays, growing larger in time, the search for suitable, or larger, G's through P(E), having already $2^{298}$ elements, is quite a task; without specializing guidance it will miss all optimums. With some previous knowledge of the involved functionality, or starting with rather small sets – as seeds – and expanding them, we might be in a position to learn how to enlarge initial seeds. After preprocessing of connectivity for selectivity for a certain application, we can use (linearly) dependent channels to enhance the periodic component present in all members of a set G, which might be near or even below the noise threshold in all inspected signals, as illustrated with the experiments below. The advantage is in the following property: the local processes contain independent components behaving quite randomly, the noise behaves randomly, and random components will be flushed to zero by the above criteria, in either composite spectra or composite spectrograms. Even without knowing at which frequencies interesting periodic patterns might be expected, the above method provides a high resolution spectral and spectrogram scanning. If there are artifacts which are characteristic for certain frequency bands, in the case when the searched information is out of these bands with sufficient frequency separation, we might be able to localize and extract even the features embedded in the noise. Thus, e.g. in the composite spectra and spectrograms, the first index ED(g, λ) might easily converge to the numeric zero, while the second index ED(g, λ, Λ), for certain spectral neighborhoods Λ of λ, can locally amplify the hidden information, exposing it to perception. The same is even more obvious for spectrograms and the product (ΠED, ΠEDS) indices, where we might exploit further properties of spectrogram (composite) features. When G is modulated by certain MS's, we can still separate the carriers if known or well estimated, even within the same procedure, as above. The above procedure could be extended involving specific sorts of comb-like filters, unions of narrow band filters, which could enhance weak spectral components. Application of image processing methods A variety of problems in image processing is related to the contour – object detection, extraction and recognition.
This we encountered in cytology preparations, variety of optical images, radar images, spectrogram features [4], mixing and concatenating processing methods. Small object recognition We have developed procedures for small object recognition and filtering by size based on the intensity discrimination (intensity of considered pixels) and by specific sort of image spectroscopy. Here we shortly present an alternative method for the efficient recognition of smaller, dot-like objects, with diameter < 10 pixels. Method applies to both matrices and vectors, thus to both images and signals/spectrograms. Spectral features which are stable and narrow in frequency are examples of such sort of vectors. The method is an improved Tomasi, Shi, Kanade procedure (see e.g. [22]) for the extraction of characteristic features from a bitmap. It is robust and proved efficient, possessing all highly desirable properties. As an input we use a simple monochrome (0 = white, 255 = black) bitmap (matrix) A of a fixed format, (here in 400 × 400 pixel resolution). The components of A, signal amplitude values, or e.g. spectrogram intensities are denoted by A(x, y), where x indicates the corresponding row and y indicates the column. Spatial x -wise and y -wise differences I x and I y are defined as follows: $$ {I}_x=\frac{\partial A\left(x,y\right)}{\partial x},\kern1em {I}_y=\frac{\partial A\left(x,y\right)}{\partial y}. $$ The matrix G of sums of spatial square differences is given with $$ G={\displaystyle \sum_{x={p}_x-{\omega}_x}^{p_x+{\omega}_x}}{\displaystyle \sum_{y={p}_y-{\omega}_y}^{p_y+{\omega}_y}}\left[\begin{array}{cc}\hfill {I_x}^2\hfill & \hfill {I}_x{I}_y\hfill \\ {}\hfill {I}_x{I}_y\hfill & \hfill {I_y}^2\hfill \end{array}\right], $$ where ω x = ω y is the width of integration window (with optimal values between 2 and 4), p x and p y are such that the formula (20) is defined. Rewriting G more compactly as $$ G=\left[\begin{array}{cc}\hfill a\hfill & \hfill b\hfill \\ {}\hfill c\hfill & \hfill d\hfill \end{array}\right], $$ computing the eigenvalues $$ {\lambda}_{1,2}=\frac{a+d}{2}\pm \frac{\sqrt{{\left(a-d\right)}^2+4bc}}{2}. $$ $$ \lambda \left(x,y\right)= \min \left({\uplambda}_1\left(\mathrm{x},\mathrm{y}\right),{\uplambda}_2\left(\mathrm{x},\mathrm{y}\right)\right) $$ for inner pixels; for given lower threshold T min and parameter A max (here 255) set $$ {\lambda}_{\max }= \max \left\{\uplambda \left(x,y\right)\Big|\ \left(x,y\right)\ \mathrm{is}\ \mathrm{an}\ \mathrm{inner}\ \mathrm{pixel}\right\} $$ and define the extraction matrix by $$ E\left(x,y\right)=\left\{\begin{array}{ll}\frac{A_{\max }}{\lambda_{\max }}\lambda \left(x,y\right),\hfill & \frac{A_{\max }}{\lambda_{\max }}\lambda \left(x,y\right)>{T}_{\min}\hfill \\ {}0,\hfill & \frac{A_{\max }}{\lambda_{\max }}\lambda \left(x,y\right)<{T}_{\min}\hfill \end{array}\right.. $$ If two images or spectrograms are available (two consecutive shots or two significantly linearly independent channels) we obtain a solution in even harder case for automatic extraction. Let B and C be two images where every pixel is contaminated with noise which has a normal Gaussian distribution, in which stationary signal is injected, objects at coordinates (x 1, y 1), … (x 10, y 10), all with intensity e.g. m (within [0, 255] interval) and fluctuation parameter p; we generate the new binary image A in e.g. 
two steps (or by some other efficient method): $$ \begin{array}{l}A\left(x,y\right)=\mathrm{abs}\left(B\left(x,y\right)-C\left(x,y\right)\right)\\ {}\mathrm{If}\ A\left(x,y\right)<p\kern0.5em \mathrm{then}\ A\left(x,y\right)=255\kern0.5em \mathrm{else}\kern0.5em A\left(x,y\right)=0;\end{array} $$ This simple discrimination reduces random noise significantly and exhibits signals together with residual noise. Performing procedure (19) thru (26), we generate filtered image with extracted signals. The method is adaptable, using two parameter optimization (minimax): minimal integral surface of detected objects, then maximization of the number of small objects. Small object recognition using Kalman filter banks Alternative method for the detection/extraction of small features is based on a bank of Kalman filters. After the construction of the initial sequence of images Z k , the bank of one-dimensional simplified Kalman filters (see e.g. [23]) is defined using the iterative procedure as follows: $$ \begin{array}{l}{K}_k\left(x,y\right)=\frac{P_{k-1}\left(x,y\right)+Q}{P_{k-1}\left(x,y\right)+Q+R}\\ {}{\widehat{X}}_k\left(x,y\right)={\widehat{X}}_{k-1}\left(x,y\right)+{K}_k\left(x,y\right)\left({Z}_k\left(x,y\right)-{\widehat{X}}_{k-1}\left(x,y\right)\right)\\ {}{P}_k\left(x,y\right)=\left(1-{K}_k\left(x,y\right)\right)\left({P}_{k-1}\left(x,y\right)+Q\right).\end{array} $$ Initially: \( {P}_0\left(x,y\right)={\widehat{X}}_0\left(x,y\right)=0,Q=1,R=100 \), where Q is the covariance of the noise in the target signal, R is the covariance of noise of the measurement. We put (depending on problem dynamics): the output filtered image in k th iteration is the matrix \( {\widehat{X}}_k \), the last of which is input in the procedure described by equations (19) to (26), finally generating the image with extracted objects. This method shows that the signal level could remain unknown if we approximately know statistical parameters of noise and statistics of measured signal to some extent. In our basic case we know that signal mean is somewhere between 0 and 255 and that it is contaminated with noise with unknown sigma. Connectivity measure evaluations In this section we focus to the final result of the connectivity measure computations – the brain connectivity directed graphs, as the main model representing brain connectivity patterns. Due to various technological and methodological limitations, contemporary mapping of brain activity using electroencephalography and magneto encephalography operates with a few hundreds of brain signals, thus, close to mega links. No doubt, this resolution will be continuously increasing, down to a few millimeters per electrode and better, all in 3D, increasing proportionally the cardinality of connectivity graphs, as discussed earlier. In a graph we define orbits of individual nodes: the k-th orbit of a node a will consist of nodes whose distance via a directed path from node a is k (separate for both in/out paths). It is assumed that the connectivity graphs exhibit direct connections of processes which are directed. This was ambition of all scientists who proposed the connectivity measures in brain analysis; this is expectation of all scientists interpreting their experiments with computation of the connectivity measures. We just add that there might be scenarios where bare connectivity is decidable, while directed is hard to resolve. We rather briefly analyze some important published examples of connectivity measure computations and measure comparisons. 
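Before turning to those published examples, here is a compact sketch of the two extraction tools just described (Python/NumPy; the window width, thresholds, noise level and dot positions are illustrative choices, not the values used in our experiments): the minimum-eigenvalue map of the structure tensor from (19)-(26), and the pixel-wise bank of one-dimensional Kalman filters from the iterative procedure above.

```python
import numpy as np

def min_eigenvalue_map(A, w=3):
    """Smaller eigenvalue of the structure tensor G, computed for every inner pixel."""
    A = A.astype(float)
    Ix = np.gradient(A, axis=0)               # x-wise (row) spatial differences
    Iy = np.gradient(A, axis=1)               # y-wise (column) spatial differences
    lam = np.zeros_like(A)
    rows, cols = A.shape
    for x in range(w, rows - w):
        for y in range(w, cols - w):
            win = (slice(x - w, x + w + 1), slice(y - w, y + w + 1))
            a = np.sum(Ix[win] ** 2)
            b = np.sum(Ix[win] * Iy[win])
            d = np.sum(Iy[win] ** 2)
            disc = np.sqrt((a - d) ** 2 + 4 * b * b)
            lam[x, y] = (a + d - disc) / 2     # the smaller of the two eigenvalues
    return lam

def extract(A, T_min=40, A_max=255, w=3):
    """Extraction matrix E(x, y): rescaled eigenvalue map, thresholded at T_min."""
    lam = min_eigenvalue_map(A, w)
    E = A_max / lam.max() * lam
    E[E < T_min] = 0
    return E

def kalman_bank(frames, Q=1.0, R=100.0):
    """Pixel-wise bank of one-dimensional Kalman filters over a sequence of frames."""
    X = np.zeros_like(frames[0], dtype=float)
    P = np.zeros_like(X)
    for Z in frames:
        K = (P + Q) / (P + Q + R)
        X = X + K * (Z - X)
        P = (1 - K) * (P + Q)
    return X

# Toy use: dot-like objects of a few pixels in a sequence of noisy frames.
rng = np.random.default_rng(2)
clean = np.zeros((120, 120))
for cx, cy in [(30, 40), (70, 90), (100, 20)]:
    clean[cx - 1:cx + 2, cy - 1:cy + 2] = 120           # roughly 3x3-pixel dots
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(8)]
smoothed = kalman_bank(frames)                           # noise suppressed over 8 frames
features = extract(smoothed)
print(np.argwhere(features > 0)[:5])                     # kept pixels cluster around the dots
```

With these extraction tools sketched, we return to the connectivity measures and their published comparisons.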
These measures are commonly used to determine brain connectivity patterns. We will exhibit erroneous or misleading conclusions in modeling of brain connectivity which is of crucial importance for experimental scientists in this area. In [4] we presented critical observations concerning key examples from [18,24], which were originally used in the argumentation in comparison of two major brain connectivity measures, PDC and DTF, showing essential superiority of PDC. We will reconsider our example 6 from [4], where corresponding calculations and comparisons were exposed to our detailed investigation. The reason for this reinspection is in the following. Our methodology used in this example was hypothetical to a small extent: investigating parametric stability of connectivity models following from argumentation of Baccala – Sameshima, we borrowed statistical significance values for PDC from other published results of the same and other authors, while omitting their asymptotic estimation published in [25], because the main theorem there was without proof and a small fraction of the text was unclear, probably with mistake, which in the absence of the proof was not easy to overcome. Since that time, we noticed that numerous renowned researchers use the results from [25] asserting that the PDC statistical significance is solved with results there published. We start with our original example from [4], appending arguments concerning this asymptotic estimation of PDC from [25]. It was multiply shown in [18,24] for the simulated models that PDC exactly determines the structural connectivity graphs of directly connected processes, while DTF is rather imprecise, mixing the direct connections with transitive influences, thus reducing use of DTF to the first orbits. The results for PDC on examples with synthetic models are impressive. Conclusions in [18,24] on the two measure comparisons using neurological data are quite the same: PDC exactly describes the direct structural connectivity, while DTF has undetermined degree of imprecision in description of direct structural connectivity, however, profoundly respecting D. Adams Axiom, with the antecedent regularly fulfilled: $$ \left(\forall x\right)\left(\forall y\right)\left( Thresh\left(x,y\right)\Rightarrow Conn\left(x,y\right)\right). $$ Example 1. (partly shortened example 6 from [24]). Analyzing structural stability, in order to emphasize importance of all steps in measure computations and comparison procedure, we discuss in more details a crucial example of PDC/DTF computation and comparison, using real neurological data, by Sameshima – Baccala, which was used in quality estimation of PDC over DTF. Analysis of an experiment focused on two shortly separated time slices: [8,10] s and [13,15] s, with frequency range [0,48] Hz, detailed in [18] (exhibiting structural connectivity changes which supported our demand to expand connectivity models in time dimension). Recordings were made synchronously at CA1, CA3, A3, A10, A17, and DG electrodes (having a common substructure with another of their experiments). We reproduce some of their findings/diagrams in order to be able to present our analysis. The first time slice representation with mutual interactions of recorded signals- structures for both PDC and DTF is given on Figure 1 (the same way of presentation is rather frequent in the related literature), depicting classical coherence with solid lines; shaded spectra correspond to respectively PDC and DTF calculations. 
PDC calculated connectivity matrix on the left, shaded spectra, DTF connectivity matrix on the right, shaded spectra; DC presented with solid lines (reproduced from [4], originating in [18]). The authors chose here common 0.20 zero-threshold (which is very high: 20% of the normalized range, or four to five times greater than for PDC in their previous examples). They introduced some simplification: not comparing the measures at each respective frequency, these matrices in Figure 1 were used to determine the spectral maximum for calculated PDC and DTF values for all frequencies, for each individual link, which is presented in the Table 1 left side. The matrix in the Table 1 left was used subsequently to integrate the connectivity diagrams for both PDC and DTF, for the first time slice of the experiment, shown as first two graphs in Figure 2. The similar kind of spectral distribution matrices as in Figure 1 was published for second time slice (interval [13,15] s), from which the table of maximums for both PDC and DTF was derived similarly, which is presented in Table 1 right. This matrix of maximums was subsequently used to generate connectivity diagrams for the second time slice, shown as third and fourth graph in Figure 2. Graphs in Figure 2 depict together connectivity patterns and degree (power intensity at maximum) of each connectivity link by arrows in five different degrees (blank for zero and dashed, thin, thicker, thick), in the normalized [0,1] range partitioned into five values, each 0.2 in diameter. Thus, with zero ≤ 0.2, spectral maxima were extracted for each calculated signal pair, projecting - grading the obtained value into the corresponding connectivity degree for each of PDC, DTF, finally considering their difference in connectivity degree to draw the conclusions on PDC/DTF performance (analysis of connectivity diagrams differences) – Figure 2. Table 1 Left side table corresponds to the first time slice of the experiment – related to Figure 1 ; each matrix coordinate has on top the PDC spectral maximum from the shadow spectrum at corresponding coordinates in Figure 1 left, below the spectral maximum for DTF - similarly obtained from Figure 1 right; the connectivity links are sorted column vise, i.e. in the first column are A10 links towards the areas defined as row names Connectivity diagrams, the first two relating activity of involved brain structures which are obtained from the matrices in Figure 1 and Table 1 left: PDC-first diagram, DTF- second diagram. Connectivity diagrams, for the second time slice, for PDC- the third diagram, and DTF – rightmost diagram. Note that the diagram changes first-third for PDC and second-forth for DTF depict brain dynamics in time (3 s later) in the described experiment. Clearly, increased time resolution will improve our understanding of processes in the brain during experiment, thus replacing single diagrams with their time changes, i.e. time sequences of diagrams (redrawn from [18]). In terms of comparisons/similarity of measures as in [18], we can reconstruct here applied procedure (similarly in numerous other studies, which is partly implicit), as the following sequence of steps (*): (*) 1. set common zero = 0.2; 2. apply N 1 operator: provides PDC power maximum, for all frequencies, for a given pair of input nodes; 3. apply N 2 operator: provides DTF power maximum for a given pair of input nodes, for all frequencies; 4. 
apply P operator (the same operator P) for both PDC and for DTF were applied as projections (the five value grading, corresponding to connectivity degree, after calculation of spectral maxima); 5. Difference of the graded maxima is exhibited as visualized difference –a pair of connectivity graphs depicting all pairs of signals in the respective time slices, 2 s each. First, in concordance with the structural stability conditions which we stated above (on the intersection of two substructures measure is common; in repeated measurements (here, experiments) measure fluctuations must remain tolerable, i.e. obtained values coherent), we will show how rather slight variations of zero threshold, borrowed from the similar experiments cited above and presented in the cited articles, influence connectivity estimates in the same example. Thus, ranging zero threshold thru {0.2, 0.1, 0.06} (thus, including values from other experiments), we obtain three different connectivity difference patterns, for each time slice of this experiment. Rather than comparing all connectivity degrees, we restrict our comparisons to a single quality: the existence of connectivity only – shown in the graphs in Figures 3 and 4 as the differences at zero level which is essential. Complete connectivity difference diagrams (including connectivity degrees as usual) are easily regenerated according to the related grading: five grades if zero = 0.2; ten grades if zero = 0.1; 33 grades when zero = 0.06. The diagrams of difference in connectivity for the first time slice, shown in Table 1, left (related first two diagrams in Figure 2), for common zero threshold equal to: 0.20, 0.10 and 0.06 (left to right) respectively; thus, the first diagram is complementing the first two graphs in Figure 2 with respect to connectivity only: if we take a union of this graph links and the links in the first -PDC graph in Figure 2 the result is the second graph -DTF; grading is not shown for the simplicity. Solid lines show: DTF connected, while PDC disconnected. Thus, in all cases, connectivity graph for PDC is a substructure of a corresponding graph for DTF. Note: with a data from Table 1. left, taking 0.06. instead of 0.20 zero threshold, for the first time slice PDC has 10 more connectivity links, while DTF obtains 8 new links. The diagrams of difference in connectivity for the second time slice of the experiment as shown in Table 1, right (related to third and fourth graph in Figure 2), when common zero threshold is equal to: 0.20, 0.10 and 0.06 respectively (e.g. the first diagram is complementing graphs three and four in Figure 2 with respect to connectivity only, grading not shown for simplicity). Solid lines: DTF connected, while PDC disconnected; dashed lines – opposite. In the first two cases connectivity graph for PDC is not a substructure of a corresponding graph for DTF; note (Table 1, right) the increase of connectivity links for the second time slice, resulting from reduced threshold, which is similar to the first time slice. We notice immediately that small changes in zero-threshold have substantial consequences in the changes of connectivity structures and their differences. Stability analysis is mandatory whenever we have serious synthesis, i.e. when we organize and map experimental data into higher level structures with semantic significance. The brain connectivity graphs are of high importance and their stability is mandatory. 
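To make the comparison pipeline concrete, the following is a minimal sketch of procedure (*) together with the zero-threshold variation just discussed. It is not the authors' original code; the input matrices, the grading rule used as a stand-in for the P operator, and all names below are illustrative assumptions, with random values taking the place of the spectral maxima of Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the Table 1 matrices of spectral maxima
# (entry [i, j] is the maximum over all frequencies of the measure for the link j -> i).
pdc_max = rng.uniform(0.0, 0.4, size=(6, 6))
dtf_max = np.clip(pdc_max + rng.uniform(0.0, 0.25, size=(6, 6)), 0.0, 1.0)

def connected(max_matrix, zero):
    """A link is declared connected if its spectral maximum exceeds the zero threshold."""
    conn = max_matrix > zero
    np.fill_diagonal(conn, False)          # self-links are not considered
    return conn

def grade(max_matrix, zero):
    """One plausible reading of the P operator: project values onto bins of width `zero`."""
    return np.ceil(np.clip(max_matrix, 0.0, 1.0) / zero).astype(int)

def difference_links(pdc_max, dtf_max, zero):
    """Links connected according to DTF but disconnected according to PDC
    (the solid lines in the difference diagrams)."""
    return connected(dtf_max, zero) & ~connected(pdc_max, zero)

# Steps of (*) with the zero threshold ranging over the three values discussed above.
for zero in (0.20, 0.10, 0.06):
    diff = difference_links(pdc_max, dtf_max, zero)
    print(f"zero = {zero:.2f}: {diff.sum()} links where DTF is connected and PDC is not")
```

Even on such synthetic input the count of differing links changes with the chosen zero, which is the sensitivity the figures above document on the published data.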
Second, in order to reduce or overcome some of listed problems, we shall make/suggest some changes in the measure comparison sequence, while maintaining the original procedure as much as possible: (**) 1. Varying common zero as done in Figures 3 and 4 ; 2. apply N 1 operator: provides PDC power maximum, for all frequencies, for a given pair of inputs; 3. apply N 2 operator: provides DTF power maximum, for all frequencies, for a given pair of inputs; 4. perform zero- ideal congruence for PDC; i.e. identify the corresponding values from previous step whose difference ≤ zero; 5. perform zero- ideal congruence for DTF; 6. perform zero-ideal congruence for PDC and DTF corresponding values; 7. generate the graph of connectivity difference; 8. apply P operator (the same projector operator P) for both PDC and for DTF, on their respectivegraphs (optional). Clearly, in (**) we have two updates of the original procedure (*): zero-threshold: common as in (*), varying over values which were present in the above mentioned similar, related experiments. zero unification - performed prior to grading, consequently, avoiding that the small (difference) becomes bigger or big, just because ranges of measures are replaced (simplified) by coarser than original smooth [0,1]-range; In Table 2 we calculated coordinate vise differences (obtained from Table 1) for corresponding N 1,2 normalized values for PDC and DTF for both time slices of the experiment. The calculated differences in Table 2 are used in the corrected comparison sequence (* *) in the zero – unification step, in order to generate more appropriate diagrams of PDC/DTF connectivity difference, which are presented in Figure 5 (for the first time slice of the experiment, for the three different zero-threshold values) and Figure 6 (for the second time slice of the experiment). Table 2 The difference of PDC power maximum and DTF power maximum coordinate vise The first time slice corrected connectivity comparison (i.e. using corrected procedure (**) instead of (*)). Unification of measures (based on differences in Table 2) prior to grading leads to the simplification of connectivity difference graphs – they are substructures of graphs in Figure 3. From left to right: the difference in connectivity (corresponding to original diagrams in Figure 2) for zero threshold equal to: 0.20, 0.10 and 0.06 respectively. Solid lines: DTF connected while PDC disconnected. The second time slice corrected connectivity comparison (i.e. using corrected procedure (**) instead of (*)). Unification of measures prior to grading leads to the graphs of connectivity difference, which are substructures of graphs in Figure 4. From left to right, the difference in connectivity (corresponding to original diagrams in Figure 2) for common zero threshold equal to: 0.20, 0.10 and 0.06 respectively. Solid lines: DTF connected while PDC disconnected. After the above basic convergence of the two measure comparison, we should not omit the following divergence, basically maintaining the same sort of procedure, just introducing the slight variations in the same kind of argumentation. Third, as mentioned above we did not essentially depart from measure computations and comparison deduced by Sameshima and collaborators in the cited papers. However, we have to notice that the Z3 is violated in the above analysis and resulting graphs, in the following sense: zero thresholds (with the large difference) are unified to the max of the two without proper argumentation. 
Strictly: the measures have to be independently computed for each node, generating corresponding connectivity graphs. These computations have to be performed independently for each measure, using the corresponding significance level for the zero threshold, without any common zero harmonization. Finally, the agreement of the two measures is presented with the two graphs, to obtain the combined connectivity difference = measure comparison graph. If we strictly follow the procedure from [18] corrected to (***) and the arguments related to statistical significance, with values 0.2 for PDC taken from [18] as above, and known value for normalized DTF (0.0045), then we would get results differing much more. In this case the procedure corresponding to (*) is corrected as follows (***) 1. Set zero separately for each of {PDC, DTF}; 4. perform zero- ideal congruence for PDC; 6. generate the graph of connectivity difference. For instance, just for the matrices in Table 1, we have to conclude, for the connectivity only with degree of connectivity omitted, that there are numerous other links differing the resulting graphs. The strict graphs of differential connectivity only for Table 1 respecting (***), we present in Figure 7. Comparing these with the first graphs in Figures 3 and 4, respectively, we note gigantic discrepancy. The same should be performed for other thresholds in these examples. Even, when the measures have identical value, but between the two thresholds one measure will indicate connectivity, the other will deny it; example: in the first matrix $$ 0<\mathrm{D}\mathrm{T}\mathrm{F}- threshold\ DT{F}_{DG\to A10} = numerically\ {\mathrm{PDC}}_{DG\to A10}=\mathrm{P}\mathrm{D}\mathrm{C}-\mathrm{threshold}; $$ Left and right: difference of connectivity graphs for DTF and PDC, respectively for the data from Table 1 left and right; connectivity for both DTF and PDC are determined separately for their respective independent significance thresholds, PDC threshold - by authors of [18] in the comparison study. Then with the resulting connectivity graphs for each measure, the graph of connectivity difference is generated. Continuous arrows mark DTF confirmation of connectivity, while PDC is disconnected. Compare these two graphs to the corresponding graphs on the left side in Figure 3 and 4. In both cases, PDC connectivity graph is a substructure of the corresponding graph for DTF. Hence, even when the analyzed systems are tuned so that the compared measures measure all the links approximately identically, when we have largely departing zero thresholds for the involved measures, we can obtain easily arbitrarily large number of links which are zero for one and non-zero for the measure with lower threshold. Moreover, when the measure values are in the opposite order, i.e. when the one with lower threshold is smaller than the other with the larger threshold, but both measure values being between the thresholds, the measure with smaller value of threshold will indicate connectivity, while the other with the larger value will deny it. Example: in the second matrix (in Table 1.) $$ 0< threshold\ DT{F}_{DG\to A3}< numerically\kern0.5em {\mathrm{PDC}}_{DG\to A3}= PDC- threshold, $$ which is completely paradoxal. Obviously, arbitrary choice of zero threshold can distort mathematics and generate paradoxes. Obviously, harmonizing the thresholds (reducing their difference) will reduce or eliminate listed problems. And obviously this cannot be done at will. 
This is why careful prior investigation of connectivity data related to Z3 and the method is mandatory, or otherwise we remain in the alchemist morasses. Our fundamental concepts, models and comprehension cannot depend on the sample rate. We will conclude this stability investigation with the following observations. Let us shortly comment the threshold values for PDC published in [25] as obtained from asymptotic studies. The statistical significance for PDC for VAR processes of infinite order, with the estimates for appropriate approximations under certain conditions are established. Theoretical part is without proofs, but there are examples which should illustrate the theoretical achievements. The authors offer under hypothesized conditions the threshold distributions for respectively 20, 200 and 2000 samples, which are bounded respectively by 0.2, 0.15 and 0.01 (for 0.15: the non-constant distribution, with 80% of frequency range below 0.1). The conditions are rather general, so that the above threshold bounds are quite generally applicable, with the last case regularly prevailing. Consequently, regularly used samplings easily provide basis for the application of the last distribution threshold. The first two values corresponding to 20 and 200 data samples have been included in the above elaboration, as the first two threshold values, the first as provided by authors, the second borrowed from similar experiments. With the experiment frequency domain, for 2 s time we have not less than 200 samples. With contemporary usual sampling for the studied intervals one should have not less 2000 samples, which corresponds to the final listed threshold, 0.01. When this threshold is applied, we can say that the thresholds for both measures are roughly harmonized, one for PDC being (still) double of threshold for normalized DTF, which we stated to be the basic condition prior to measure comparison. However, in this case, if we look at the data provided by authors, the connectivity difference graph is identic for both time slices to the graph in Figure 8, applying either original procedure with the PDC threshold adjusted to 0.01 or its corrections. Thus the (**) and (***) corrections are losing importance. Namely, every link gets unified and Adams's axiom applies. Conclusively, in this case, for the data provided with this illustrative experiment, the two measures have no difference in connectivity. Unfortunately, in this highly realistic case as affirmed in [25], all nodes are connected. All that (strongly) implies that PDC advantages essentially or completely vanish with the reasonably realistic increase of order – number of samples followed with measure thresholds harmonization. In this way the original analysis and comparison performed, together with fundamental conclusions, in [17,18] and other publications of these top rated authors, eroded to a complete annihilation and method destabilization, though exclusively based on the data and theory provided by authors, thus strengthening the previous findings published in [4]. Concluding this investigation of measure comparison, we note that our choice of thresholds for PDC in stability study [4], using the borrowed values from similar published experiments, proved to be completely consistent with the values finally delivered in [25], which strongly supported our procedure. (For other challenges with DTF see [26-30]). 
Connectivity difference graph for PDC – DTF with roughly harmonized zero thresholds, for both experiments, for all procedure variants (*) to (***). Adams's axiom applies. Clearly, either the argumentation shown here was unknowable to the authors of [18] or they purposely selected the huge bias in the compared measures thresholds, in order to optimize the targeting conclusions. Experiments with PLD and small object recognition Here we briefly present potential of the methods, starting with PLD application examples, following with application of small object recognition methods. Some of the work was developed in [5]. Example 2. In the neuro acoustic experiments, the first shown is the simple example where PLD ~ 0 for two spectrograms. The signals containing calibration, external stimulation at respectively 1 kHz and 3 kHz, the two of each, are shown in Figure 9, left side, together with corresponding signal power spectra, from two signals, with direct reading of stimulation tunes intensities, on the right side in Figure 9. Clearly, the power spectra of signals, for the signals of the same tune should be linearly dependent in the stimulated frequency, while the power spectra of signals containing different stimulation tunes would remain independent in the large neighborhoods of the stimulated tunes. Thus, we get here The EEG signals (right column) with 1 kHz and 3 kHz (left column) calibrations, on the left part; right, the power spectra of signals on the left side. π({sig1, sig2}, 1 kHz) > 0, π({sig3, sig4}, 3 kHz) > 0, while π({sig1/2, sig3}, λ) = 0 and π({sig1/2,sig4}, λ) = 0, for large interval Λ of λ's, and similar results for the πs index. This is well shown in Figure 10, right side, top-down, presenting power spectrograms of sig1 and sig3, with 1 kHz and 3 kHz stimulations respectively. On the left side of Figure 10, the composite - product spectrogram is shown (over the whole domain), exhibiting amplified very low frequency - VLF structure, while 1 and 3 kHz structures are mutually annihilated, as linearly independent; here, for Λ = SR\[0, 52 Hz], π(G, Λ) ~ 0 if G contains signals with different stimulation (e.g. {sig1, sig3}, {sig1, sig4}, the same with sig2) else, it is large. The indices EDS(g, λ, T), EDS(g, Λ, T), ΠEDS(G, λ, T), and ΠEDS(G, Λ, T) will resolve very well this situation (in the values {0, large number of λ's}), as well. On the right side, power spectrograms with 1 kHz and 3 kHz stimulations. On the left side, the composite -product of power spectrograms from the right side with both prominent stimulation spectrogram formations annihilated, while the VLF proportionally amplified. Example 3. PLD has been applied in the experiments with imagined – inner tunes. We briefly comment some of that work, where we filtered - combing the imagined tunes. We filtered - combed signals with imagined tunes together with signals with stimulated tunes, with nice results as well. In Figure 11, left side, we have 8 EEG signals with the imagined tune C2 which were recorded after the externally played C2 - the tuning, ended. On the right we have the system with elements corresponding to their power spectra and a number of PLD components and indices, both local and global. After the tuning – stimulation playing tune C2 ended, the 8 channel EEG recording started with the same tune imagined – inner tune C2; EEG signals are to the left, the system structure in large window, to the right. 
The Figure 12, in the enlarged windows shows major energy distribution in the LF part of spectrum of one channel; relative magnitude of the 50 Hz line has the value 54, while the frequency 528 Hz has the value of 2.7 and is globally and locally indiscernible and embedded in the spectral environment, with a pro mile fraction of spectral energy. In Figure 13 magnified neighborhood of C2 frequency shows no discernible spectral line, while the composite spectra magnify (globally) artifacts, the 50 Hz and its harmonics. Similar holds for spectrograms of individual signals. Power spectra of these signals exhibit some artifacts (50 Hz multiples and some other isolated HF), while the traces of the imagined – inner tune are indistinguishable from the noise level. Switching to PLD indices, multiple spectral dot products reduce overall spectral randomness – the coordinates with random fluctuations are mutually linearly independent, so their products converge to zero, while the coordinates with the linearly dependent values are enhanced relatively to the noise threshold and become locally discernible or prominent; even when their integral contribution to the overall composite spectral power is very small. Experiments in Figures 14, 15, 16, 17 and 18, in the reduced domain SR to Λ = [200, 2000 Hz], we have the major artifact spectral frequency at 250Hz with value 184 (energy**3), while the inner tone C2 shows 15 units in the composite spectrogram. The power spectrograms time*frequency*intensity matrix S is basically exponentiated by 3: S**3- corresponding to the first 3 EEG channel power spectra coordinate wise product in time, πs({sig1, sig2, sig3}, SR, T) over the 5 s time interval T will be multiply enlarged compared to initial spectrograms. However the ΠED({sig1, sig2, sig3}, C2, [500, 545 Hz]) and its time integration ΠEDS over the selected interval T and Λ = [500, 545 Hz] are becoming dominating high (relative T, Λ), allowing clear C2 recognition in the expected frequency interval (arrow marked composite spectrum max on the right); at the same time the composite energy ratio of restriction to Λ (local) and global – composite spectrum energy over SR is nearly zero, i.e. negligible. At the same time, taking the whole set of signals G = E erases C2: ΠED(G, C2, [500, 545 Hz]) ~0, similarly ΠEDS show no trace of C2, confirming it is not present in all elements of G. Rather similar situation we have in other experiments shown on the following figures, supporting the PLD effectiveness in case when there is a common small or invisible information in multiple signal spectra, and more so, for spectrograms. The sets of signals which do not share "common information" are erased. Similar conclusions should work in more general cases. Left: enlarged the initial power spectrum of channel 1, with the dominant power spectrum line corresponding to 50 Hz artefact, with the value 54; note that the only prominent lines are in the lower part of the spectrum, the second dominating – the 150 Hz line. Right: marked position of 528 Hz; note that the neighborhood of C2 in the complete spectrum has no discernible line, with the value 2.7 at 528 Hz. Left enlarged window: in the spectrum part, with Λ = [200, 2000 Hz], the value 1.8 at 528 Hz, is undistinguishable in its spectral environment. Right, the 3-spectra composite, in the same Λ, with the value of 184 (cubed) units in the 250 Hz power harmonic. 
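The enhancement mechanism described above (coordinate-wise products of power spectra suppressing independently fluctuating coordinates while relatively amplifying shared, linearly dependent ones) can be illustrated with a small synthetic sketch. The exact PLD indices (π, ΠED, ΠEDS) are defined earlier in the paper and are not reproduced here; the sampling rate, the very weak shared 528 Hz component and all parameter values below are assumptions chosen only to mimic the situation of the C2 experiments.

```python
import numpy as np

fs = 8000                                  # assumed sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)                # 5 s of synthetic data
rng = np.random.default_rng(1)

def channel(shared_freq, amp=0.025):
    """EEG-like channel: strong independent noise plus a very weak shared tone."""
    phase = rng.uniform(0, 2 * np.pi)
    return rng.normal(size=t.size) + amp * np.sin(2 * np.pi * shared_freq * t + phase)

signals = [channel(528.0) for _ in range(3)]              # three channels sharing a weak 528 Hz line

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectra = [np.abs(np.fft.rfft(s)) ** 2 for s in signals]  # individual power spectra
composite = np.prod(spectra, axis=0)                      # coordinate-wise product (3-composite)

band = (freqs >= 500) & (freqs <= 545)                    # restricted domain Lambda = [500, 545] Hz
k = np.argmin(np.abs(freqs - 528.0))                      # index of the 528 Hz bin

# Relative prominence of the shared coordinate against the local spectral environment:
print("single spectrum :", spectra[0][k] / np.median(spectra[0][band]))
print("3-composite     :", composite[k] / np.median(composite[band]))
```

In the single spectrum the shared coordinate stays close to the surrounding noise level, while in the composite its relative prominence grows by roughly the product of the per-channel ratios, which is the effect exploited by the PLD indices.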
Left, composite power spectrum in the reduced domain, Λ = [200, 2000 Hz], with somewhat reduced spectral complexity and quite enhanced spectral line at 528 Hz, with the value 15.2 at the shown part of the signal. Right: with still reduced Λ = [500, 545 Hz], the 528 Hz frequency (marked) becomes locally dominant (in time). In the similar experiment, the major spectral frequency at 6.1Hz exhibits magnitude 8.71 in the original signal power spectrum of one channel (shown left), while the inner C2 tone corresponding frequency at 522.78Hz, marked line in the right spectrum, has the value of 0.17 embedded in its spectral frequency neighborhood "noise", with the magnitude ratio over 50, consuming less than one pro mile of spectral energy. Left image: marked spectral frequency in the 3-composite spectrogram at the 2.08Hz with magnitude of 8510 in the cubed units; position of the C2 is marked with arrow. Right image: shows a composite part with initial dominant lines – artifacts of 50Hz, followed by arrow marked position of C2 inner tone with the (cubed) value of 0.0128. Relative ratio of the composite values of the dominant line to the composite C2 is 664843; the ratio of the composite C2 line to the spectral energy is less than 1/10**7. However, C2 becomes locally discernible with rather high local PLD indices (the same Λ – omitting the harmonics of 50 Hz). Large window: another locally well discernible C2, at 525 Hz, top view, with Λ = [500, 545 Hz], over a time interval T of 3 s. ΠEDS(G, Λ, T) is very high, where G has 4 signals, while ΠEDS over whole domain (SR) is very large and the relative value ΠEDS(G,Λ,T)/ ΠEDS(G,SR,T) is negligible as in previous example. Another example with search for inner C2. In both images, on the left side enlarged are composite power distributions, with LF very magnified and the cross measurements active, showing the value of max at VLF and the 522.8 Hz indistinguishable transversal profile, along the time axes. In the top right diagrams, we see the Λ = SR/Initial VLF, showing some structures arising from 0 level, while the temporal feature corresponding to C2 is still negligible at the cross line, within the 0 floor, lower arrow in the right-right top structure. The C2 temporally lasting features emerged in the reduced Λ = [500, 545 Hz], as seen in both images in lower right windows (arrowed features), the 3-composite spectrogram structures. The proportions of PLD indices are similar to previous cases. Example 4. In the following synthetic example we have introduced several dots (useful signals) with the amplitude a = 120 and we have contaminated the image with random and the cloudlike noise. The left hand side image in Figure 19 shows bitmap with the random contamination of signal – dots. The right hand side image of the same figure shows the resulting bitmap after the application of the procedure for the noise reduction: initially setting A max = 255, T min = 124, the extraction procedure yields image shown in Figure 19 right. Somewhat different situation we have in Figure 20. An application of the method of small feature extraction with signal embedded in the Gaussian noise, with one source and the two independent sources, are shown on Figures 21 and 22 (respectively), verifies the problem approach with the method of small object recognition. Dot like structure is embedded in the noise (left); signal separation from noise (right). 
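Example 4 can be mimicked with a short sketch. The extraction procedure itself (its formulae appear earlier in the paper) is not reproduced here; the code below uses a simple amplitude-band threshold followed by a connected-component size filter as a stand-in, with the quoted A max = 255 and T min = 124 and an assumed size bound of roughly 10 pixels in diameter.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

# Synthetic bitmap in the spirit of Example 4: small bright dots of amplitude ~120
# (the useful signal) on top of random background noise, clipped to [0, 255].
img = np.clip(rng.normal(60.0, 20.0, size=(200, 200)), 0, 255)
for y, x in [(30, 40), (100, 120), (160, 60)]:
    img[y:y + 3, x:x + 3] += 120.0
img = np.clip(img, 0, 255)

A_max, T_min = 255, 124            # amplitude band quoted in Example 4
min_size, max_size = 4, 80         # assumed size limits for a "small object" (~10 px diameter)

mask = (img >= T_min) & (img <= A_max)                 # keep pixels inside the amplitude band
labels, n = ndimage.label(mask)                        # connected components
sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
keep = 1 + np.flatnonzero((sizes >= min_size) & (sizes <= max_size))
extracted = np.isin(labels, keep)                      # binary image of the extracted small objects

print("objects extracted:", ndimage.label(extracted)[1])
```

As in the examples above, the size filter is what prevents isolated noise pixels exceeding the lower threshold from being reported as objects; noise aggregations of object-like size and intensity, as in the nonhomogeneous-noise case, remain indistinguishable from the signal by this simple stand-in as well.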
The left hand side image shows similar example to the Figure 19 contaminated with nonhomogeneous noise containing aggregations and granular elements similar in size and intensity to the signal. The right hand side image shows results of the reduction of noise: some new dots belonging to noise cannot be distinguished from the signal – top and low right. Note that the amplitude of the target signal is lower than the chosen lower threshold. Left: a Gaussian noise image with injected small objects below threshold; middle: partial reduction with residual noise; right: after application of the above method to the noisy image on the left, the noise is completely reduced, yielding fully automatic small object recognition. Two Gaussian noise images with the injected small objects below threshold, following with the signals well extracted and the noise completely reduced – rightmost image. Example 5. An application of Kalman filters in small object extraction. In the experiment shown, the initial sequence of images Z k of the size 200 × 200 pixels is generated as follows. First, in each image we have introduced the noise by Z k (x, y) = randn(0, 90); Here " randn " generates random numbers in the interval [0, 255] using Gaussian distribution with μ = 0 and σ = 90. Then, in each image 10 objects are injected (useful signal) at the same positions, each of them of the size around 10 pixels, with random (Gaussian) fluctuation in intensity around mean value (here 120). After the construction of the initial sequence of images, the bank of 200 × 200 = 40000 one-dimensional simplified Kalman filters is defined using the iterative procedure as defined above. The result of Kalman filtering is passed as an input to the initial procedure for small object recognition, finally resulting in the image with extracted objects which is shown in Figure 23. We can notice that the minor small objects reshaping is present in the result, with the whole pattern preservation. Further improvements and corrections are possible. The method of small object recognition originally developed for marine radar object tracking, works with vectors too. It is applicable for automatic extraction of signals which are embedded in noise and are imperceptible, in spectra and spectrograms as well, like PLD, especially in case when we can provide at least two sources which are sufficiently linearly independent. The performance constraint to small object – those within 10 pix in diameter is quite generally easily met with spectra, spectrograms, composites and frequency distributions like e.g. connectivity measure and its time trace. Left: injected signal; middle: the image on the left injected in the Gaussian noise - contamination; right: the result of processing after 44th iteration and application of the method described by formulae 19-26, providing complete object extraction. Consistently and consequently with the initial points and preservation properties and the presented examples and argumentation, the following more or less compact conclusion on connectivity measure computation, comparison, comprising methods for the weak connectivity, incorporating remarks from [4] in order to group together the complete set is formulated as Separation of different properties. We proposed here to detach and separately investigate connectivity from connectivity degree. We propose further to distinguish between directed connectivity and non-directed connectivity. 
There are different situations in which it could be possible to establish the last without solving directed connectivity, especially in the case of weak connectivity. Differentiated properties could be investigated with different methods. Partial linear dependence and method of small object recognition. They can determine graph substructures with the shared information, which is their contribution in case when the connectivity measures do not resolve the issue for whatever reason. They can be used to generate time expansion of connectivity structures, exposing model's time dynamics. The PLD indices might be of special interest if there is any prior knowledge on where the masked information, as a frequency pulse or trigonometric polynomial components might be. Covering noise. Both enhancing methods might resolve the common information in case when it is indiscernibly embedded in noise or e.g. spectral neighborhood. Time delay. Both methods might uncover substructures with common information with tracking in time, resolving possibly present time delays. Comparison/computational sequence. The corrected comparisons of DTF and PDC, for connectivity only, as performed above with published data, ultimately show very reduced differences of two measures for zero-threshold received from the analyzed and similar published experiments (most importantly above the zero-threshold, exhibiting connected structures versus those which are not), thus confirming that if analyzed with computation and comparison procedure corrections proposed here, the connectivity structures are much less different than it was demonstrated in [17,18], as presented in the graphs to the left, Figures 3 and 4, versus Figures 5 and 6 (left graphs) with original common zero threshold. However, strictly performed original comparison method opens room for large connectivity difference, and large connectivity model oscillations (Figures 3, 4, 5, 6, 7 and 8). This is even more accented with the asymptotic study by authors, where 0.01 is very reasonable PDC threshold. This zero threshold eliminates the difference of measures in the offered examples with the author's comparison methodology, against original aims and involves other undesirable properties. The instability of PDC – DTF comparisons is most significantly due to the possibly different values of statistical significance of PDC, ultimately corresponding to different sampling. Aggregation prior to comparison; functionally related frequencies. The above analysis was performed, maintaining strictly the reasons and methodology performed by authors of the original analysis [17,18]. Here we have to stress that performed as it is and with our interventions in the original evaluations as well, PDC and DTF measure comparison was not performed directly on the results of these measures computations, thus, comparing directly DTF ij (λ) and PDC ij (λ), (for relevant λ ' s) - the results of measurements at each couple of nodes for each frequency in the frequency domain, but, instead, as in the cited articles with the original comparisons, the measurement of differences of these two connectivity measures was performed on their synthetic representations - their prior "normalizations" - aggregations, obtained as the $$ \max \left\{\mu \left(i,j,\lambda \right)\ \Big|\ \lambda \in rng(Sp)\right\} $$ (where rng(Sp) is the effective spectral - frequency range) and, in the original, on their further coarser projections. Essential departure from connectivity comparison analysis. 
In this way, in comparison of these measures the authors had substantial departure from original connectivity measurement computations for PDC and DTF which cannot be accepted without detailed further argumentation. Comparison of connectivity in unrelated frequencies. If the parts of frequency domain are related to different processes which are (completely) unrelated, for example, if one spectral band is responsible for movement detection in BCI, while the other is manifested in the deep sleep, then, depending on the application, either can be taken as representative, but most often we will not take such individual maxima of both as representing quantity in the functional connectivity analysis; however, on the restrictions such procedure can be completely reasonable. If we look closely at the corresponding coordinates in the distribution matrices in Figure 1, for the compared measures we will find examples of frequency maxima distant in the frequency domain or even in the opposite sides of frequency distributions (e.g. (5,1) – first column fifth row; then (4,2), (5,2) or (6,5)). Essential abandoning of the frequency domain in PDC, DTF measure computation and comparison. Obviously, we are approaching the question: when we have advantage of frequency measures over the temporal domain measures. In the comparison of DTF and PDC via their aggregations as explained above, much simpler insight is obtained; supplementary argumentation is necessary: for the aggregation choice, stability estimation complemented with comparison of DTF and PDC counterparts, for which we would propose their G-inverses DTF and PDC, as we introduced them. Zero thresholds and connectedness. Maintaining original or corrected computational sequence in measure comparison, note that DTF computation and PDC computation are performed independently. Consequently, each of these two measures computations should apply the corresponding appropriate zero threshold, thus determining zero-DTF and zero-PDC independently for each of the computations, with independent connectedness conclusion for each measure for each pair of nodes. Then, the connectivity graphs would be statistically correct. However, if the two zeroes differ substantially, that might cause paradoxal results in comparisons. Possible zero threshold harmonization - unification would be highly desirable, as in e.g. [18] and other cited articles, but it must be properly derived. Our fundamental concepts, models and comprehension cannot depend on the sample rate. Aggregations over frequency domain. Note that frequency distributions exhibited in Figure 1 for PDC and DTF are somewhere identical, somewhere similar/proportional and somewhere hardly related at all - as the consequence of different nature of these two measures (which is established by other numerous elaborations). The same is true for the spectral parts above zero threshold. Consequently, if the comparison of the two measures connectivity graphs was performed at individual frequencies or narrow frequency bands, the resulting graphs of differences in connectivity would be more fateful; they would be similar to those presented at certain frequencies, but would differ much more on the whole frequency domains. Obviously, connectivity at certain frequency or provably related frequencies is sufficient connectivity criterion; such criterion is valid to establish that compared measures behave consistently or diverge. Brain dynamics and connectivity measures. 
Spectral time distributions – Spectrogram like instead of spectral distributions are necessary to depict brain dynamics. In the cited articles, dynamic spectral behavior is nowhere mentioned in measure comparison considerations, but it is modestly present in some examples of brain connectivity modeling – illustrating PDC applicability to the analysis focused on specific event - details in [17,18]. Trend change: in [27] authors recently started using matrices of spectrogram distributions instead of matrix distributions as in Figure 1. This is gaining popularity. Spectral stability analysis. Comparison of PDC and DTF as in here analyzed articles, shows no concerns related to frequency distribution stability /spectral dynamics and comparison results. It is clear that comparisons based on individual frequency distributions are essentially insufficient, except in proved stationary spectra, and that local time history of frequency distributions – spectrograms, need to be used instead. Brain is not a static machine with a single step instruction execution. Characterization theorem for (dis)connectedness [21]. Here we have simply sensitive play of quantifiers. By contraposition of the statement of the characterization theorem, involving information PDC and DTF as cited above, we obtain equivalence of the following conditions o) the nodes j, i are connected; a') ∃ λ(λ ∈ [−π, π] ∧ iPDC ij (λ) ≠ 0); b') ∃ λ(λ ∈ [−π, π] ∧ iDTF ij (λ) ≠ 0); c') ∃ λ(λ ∈ [−π, π] ∧ f j → i (λ) ≠ 0); and similarly with other conditions in the list. Observe conditions a') and b'). Note that λ is independently existentially quantified above. That would suggest that iPDC and iDTF simultaneously confirm the existence of connectivity from j to i. However they might do it in totally unrelated frequencies, which could make that equivalence meaningless, similarly as discussed in 9. This is all wrong. The equivalence of a') and b') clearly contradicts the nonequivalence of PDC and DTF, which is extensively verified in the cited very detailed analysis of Sameshima and collaborators, since these are the special cases of iPDC and iDTF. However, the statement of the theorem is true for the two var case only, when the orbits are reduced to 1st orbits only. In this case cumulative influence reduces to the direct influence, with no transitive nodes. This generates a limit for the theorem generalizations. Zero threshold in iPDC and iDTF. Authors in [21] do not mention zero thresholds at all. As shown multiply above, in practice it has to be determined. Again, as in detail discussed above, note that the same problems are equally present here. E.g. computationally we could easily have $$ 0<i{\mathrm{DTF}}_{ij}\left(\lambda \right)=i{\mathrm{PDC}}_{ij}\left(\lambda \right)=0. $$ Nobody will like that. Recent DTF based connectivity graphs with simplified orbits. In the recent publications and conference reports of research teams using DTF as connectivity measure [27-29], presenting even rather complex brain connectivity graphs involving rather numerous nodes, majority of graphs contain practically only 1st orbits, which is the case when deficiencies of DTF are significantly masked since cascade connectivity is hidden, graphs are not faithful, departing seriously from reality. Both DTF and PDC measures are not applicable in real time applications like Brain Computer Interfaces – BCI, where the will generated patterns in brain signals are recognized and classified by a number of direct methods. 
Some of the methods related to weak connectivity are applicable in real time. DTF-based connectivity diagrams in which the zero is chosen arbitrarily high, or much higher than the established zero threshold, and in which connectivity is restricted to a single narrow band, intentionally reduce the number of genuinely connected links by a large amount, thereby distorting the very facts that DTF establishes. The same holds for synthetic spectrogram connectivity matrices. If the methodology of [18] and [21] were applied, one could not deduce fewer than 10 times more connected nodes in the "memory" task and the "cognitive" task using DTF, which is strongly inconsistent with the presented connectivity diagrams – the factual proofs which the DTF authors derived from the supplied matrices. In all experiments DTF converges towards Adams's Axiom. DTF has caused a number of serious problems since its invention. Its authors have continuously tried to solve these problems by inventing newer modifications of DTF, adding additional measures, or applying arbitrary restrictions to their connectivity measure in order to reach connectivity diagrams that look more faithful; they have hardly succeeded in these intentions. Clearly, without careful mathematical consideration and argumentation, the connectivity graphs in the articles cited here and in many other publications are of doubtful faithfulness and need supplementary corroboration. Connectivity measures are different enough that the question of their logical coherence is appropriate; this is elaborated through measure comparisons. Here the published comparison of the DTF and PDC measures is discussed in some detail as an illustrative example, giving enough material for this issue to be investigated more carefully. As verified on a number of nontrivial synthetic systems, connectivity conclusions by DTF are not well founded, while PDC has good capacity for precise structural description, confirming the superiority of PDC over DTF. Quite often PDC ≤ DTF, but this does not hold generally; hence PDC is not a general refinement of DTF and the two measures are essentially different, especially when compared frequency-pointwise. When applied to real neurological data with the original methodology, the methods appear highly semantically unstable, generating large structural oscillations of the model under possible PDC threshold variation. Quite generally, on the published data, when thresholds are roughly harmonized the PDC-DTF connectivity differences vanish, opposite to the conclusions published in [18]. Comparison after frequency aggregations leads to wrong conclusions on functional connectivity, unless appropriate modulators are involved. Zero-threshold harmonization when comparing measures is a difficult and challenging issue which ought to be solved properly, prior to measure computations and comparisons in general. Two methods, Partial Linear Dependence (PLD) and the method of small object recognition, are added for enhanced treatment of the connectivity problem, comprising time expansions, with examples of their contribution in cases when the shared information is masked or embedded in noise. The number of innovative alternative approaches is growing; aiming to overcome certain difficulties, they are successfully applied in demanding applications, e.g. [31,32]. Kroger JK, Elliott L, Wong TN, Lakey J, Dang H, George J. Detecting mental commands in high frequency EEG: Faster brain-machine interfaces. In: Proc.
of the 2006 Biomedical Engineering Society Annual Meeting, Chicago, 2006. Watkins C, Kroger J, Kwong N, Elliott L, George J. Exploring high-frequency EEG as a faster medium of brain-machine communication. In: Proceedings Institute of Biological Engineering 2006 Annual Meeting, Tucson. Jovanović A, Perović A. Brain computer interfaces - some technical remarks. Int J Bioelectromagn. 2007;9(3):191–203. http://www.ijbem.org/volume9/number3/090311.pdf. Jovanovic A, Perovic A, Borovcanin M. Brain connectivity measures: computations and comparisons. EPJ Nonlinear Biomed Phys. 2013;1:2. www.epjnonlinearbiomedphys.com/content/1/1/2. Perović A, Dordević Z, Paškota M, Takači A, Jovanović A. Automatic recognition of features in spectrograms based on some image analysis methods. Acta Polytechnica Hungarica. 2013;10:2. Jovanovic A, Kasum O, Peric N, Perovic A. Enhancing microscopic imaging for better object and structural detection, insight and classification. In: Mendez-Vilas A, editor. Microscopy: advances in scientific research and education, FORMATEX Microscopy series N6, vol. 2. 2014. Liu L, Arfanakis K, Ioannides A. Visual field influences functional connectivity pattern in a face affect recognition task. Int J Bioelectromagnetism. 2007;9:4. Aoyama A, Honda S, Takeda T. Magnetoencephalographic study of auditory feature analysis associated with visually based prediction. Int J Bioelectromagn. 2009;11(3):144–8. http://www.ijbem.org/volume11/number3/1103008.pdf. Grierson M. Composing with brainwaves: Minimal trial P300b recognition as an indication of subjective preference for the control of a musical instrument. Proceedings of the ICMC, Belfast. 2008. Granger CWJ. Investigating causal relations by econometric models and cross-spectral methods. Econometrica. 1969;37:424. Granger CWJ. Testing for causality: a personal viewpoint. J Econ Dyn Contr. 1980;2:329. Granger CWJ, Morris MJ. Time series modelling and interpretation. J R Stat Soc Ser A. 1976;139:246. Geweke J. Measurement of linear dependence and feedback between multiple time series. J Am Stat Assoc. 1982;77:304. Geweke J. Measures of conditional linear dependence and feedback between time series. J Am Stat Assoc. 1984;79:907. Kaminski M, Blinowska K. A new method of the description of the information flow in the brain structures. Biol Cybern. 1991;65:203. Kaminski M, Ding M, Truccolo W, Bressler S. Evaluating causal relations in neural systems: Granger causality, directed transfer function and statistical assessment of significance. Biol Cybern. 2001;85:145. Sameshima K, Baccala LA. Using partial directed coherence to describe a neuronal assembly interactions. J Neurosci Meth. 1999;94:93. Baccala L, Sameshima K. Partial directed coherence: a new concept in neural structure determination. Biol Cybern. 2001;84:463. Chen Y, Bressler SL, Ding M. Frequency decomposition of conditional Granger causality and application to multivariate neural field potential data. J Neurosci Methods. 2006;150:228. Schelter B, Winterhalder M, Eichler M, Peifer M, Hellwig B, Guschlbauer B, et al. Testing for directed influences among neural signals using partial directed coherence. J Neurosci Meth. 2005;152:210. Takahashi DS, Baccala LA, Sameshima K. Information theoretic interpretation of frequency domain connectivity measures. Biol Cybern. 2010;103:463–9. 28. Shi J, Tomasi C. Good Features to Track, preprint, www.ai.mit.edu/courses/6.891/handouts/shi94good.pdf. Welch G, Bishop G. An Introduction to the Kalman Filter. 
Chapel Hill: University of North Carolina at Chapel Hill; 2004. TR 95-014, April 5. Baccala L, Sameshima K. Overcoming the limitations of correlation analysis for many simultaneously processed neural structures. Chapter 3. In: Nicolelis MAL, editor. Progress in brain research, vol 130. Elsevier Science; 2001. p. 1–15. Takahashi DY, Baccalá LA, Sameshima K. Partial directed coherence asymptotics for VAR processes of infinite order. Int J Bioelectromagn. 2008;10(1):31–6. http://www.ijbem.org/volume10/number1/100105.pdf. Blinowska K. Review of the methods of determination of directed connectivity from multichannel data. Med Biol Eng Comput. 2011;49:521. doi: 10.1007/s11517-011-0739-x. Blinowska K, Kus R, Kaminski M, Janiszewska J. Transmission of brain activity during cognitive task. Brain Topogr. 2010;23:205. doi: 10.1007/s10548-010-0137-y. Brzezicka A, Kaminski M, Kaminski J, Blinowska K. Information transfer during a transitive reasoning task. Brain Topogr. 2011;24:1. doi: 10.1007/s10548-010-0158-6. Kus R, Blinowska K, Kaminski M, Basinska-Starzycka A. Transmission of information during continuous attention test. Acta Neurobiol Exp. 2008;68:103. Blinowska K. Methods for localization of time-frequency specific activity and estimation of information transfer in brain. Int J Bioelectromagn. 2008;10(1):2–16. www.ijbem.org. Dhamala M, Rangarajan G, Ding M. Estimating Granger causality from Fourier and wavelet transforms of time series data. Phys Rev Lett. 2008;100(018701):1. Singh H, Li Q, Hines E, Stocks N. Classification and feature extraction strategies for multi channel multi trial BCI data. Int J Bioelectromagn. 2007;9(4):233. www.ijbem.org.
The work on this paper was supported by Serbian Ministry of Education projects III41013, ON174009 and TR36001.
GIS-Group for Intelligent Systems, Faculty of Mathematics, University of Belgrade, Studentski trg 16, 11000 Belgrade, Serbia: Obrad Kasum, Edin Dolicanin, Aleksandar Perovic & Aleksandar Jovanovic. State University of Novi Pazar, Novi Pazar, Serbia. Correspondence to Aleksandar Jovanovic.
All authors have jointly worked on and developed the theoretical parts and implementations with equal contributions. All authors read and approved the final manuscript. OK is a senior member at GIS and implementation director; ED is assistant professor and coordinator of the second branch of GIS; AP is associate professor and director of development; AJ is associate professor and coordinator of GIS. All authors are involved in mathematical modeling, AI and theoretical and applied logic.
Here we list some terminology, mostly introduced by Geweke and followers, with definitions of the most used and prominent brain connectivity measures. Geweke [13] defined the spectral form of G-causality, which follows from (2) by Fourier transform: $$ \boldsymbol{A}\left(\lambda \right)\boldsymbol{x}\left(\lambda \right)=\boldsymbol{E}\left(\lambda \right), $$ $$ \boldsymbol{A}\left(\lambda \right)=-{\displaystyle \sum_{j=0}^p}\boldsymbol{A}(j){e}^{-2 i\pi \lambda j}, $$ with A(0) = −I, giving for x $$ \boldsymbol{x}\left(\lambda \right)={\boldsymbol{A}}^{-1}\left(\lambda \right)\boldsymbol{E}\left(\lambda \right)=\boldsymbol{H}\left(\lambda \right)\boldsymbol{E}\left(\lambda \right), $$ where H is the transfer matrix of the system.
In the bivariate case or with two blocks of variables, G-causality measure from channel j to i at frequency λ, Geweke [13] defined by $$ {I}_{j\to i}^2=\left|{H}_{ij}\left(\lambda \right)\right|{}^2=\left|{a}_{ij}\left(\lambda \right)\right|{}^2\left|\boldsymbol{A}\left(\lambda \right)\right|{}^{-2}. $$ He introduced conditional causality and a number of measures; his linear causality of y to x is defined as $$ {F}_{y\to x}= \ln \left(\left|{\Sigma}_1\right|/\left|{\Sigma}_2\right|\right), $$ where Σ1 = var(ε 1), Σ2 = var(E 1(t)), with similar expressions for vector variables. In frequency domain he introduced the measure of linear causality at a given frequency. Stated for two variables or two blocks of variables: $$ {f}_{y\to x}\left(\lambda \right)= \ln \left(\left|{S}_{xx}\left(\lambda \right)\right|{\left|{H}_{xx}\left(\lambda \right){\Sigma}_2\left(\lambda \right){H}_{xx}^{*}\left(\lambda \right)\right|}^{-1}\right). $$ Here, \( {H}_{xx}^{*}\left(\lambda \right) \) is the Hermitian transpose of H xx (λ), | ⋅ | is matrix determinant and S xx (λ) is the upper left block of the spectral density matrix S(λ) written as $$ S\left(\lambda \right)=\left[\begin{array}{cc}\hfill {S}_{xx}\left(\lambda \right)\hfill & \hfill {S}_{yx}^{*}\left(\lambda \right)\hfill \\ {}\hfill {S}_{yx}\left(\lambda \right)\hfill & \hfill {S}_{yy}\left(\lambda \right)\hfill \end{array}\right]=H\left(\lambda \right){\Sigma}_2\left(\lambda \right){H}^{*}\left(\lambda \right),\ H\left(\lambda \right)=\left[\begin{array}{cc}\hfill {H}_{xx}\left(\lambda \right)\hfill & \hfill {H}_{xy}\left(\lambda \right)\hfill \\ {}\hfill {H}_{yx}\left(\lambda \right)\hfill & \hfill {H}_{yy}\left(\lambda \right)\hfill \end{array}\right]. $$ We mention improvement of Geweke causality measures, proposed in [19], using matrix partitioning, providing corrections of Geweke conditional measures not suffering of deficits of the original Geweke measures – occasional negative values and peaks believed to be artifacts. Later, also in frequency domain, Kaminski and Blinowska [15] introduced an adaptation of Granger - Geweke causality measure to m variables, which they called Directed Transfer Function (DTF), with $$ {\mathrm{DTF}}_{ij}\left(\lambda \right)=\left|{H}_{ij}\left(\lambda \right)\right|{\left({\displaystyle \sum_{k=1}^n}{\left|{H}_{ik}\right|}^2\right)}^{-\frac{1}{2}}, $$ measuring causality from j to i at frequency λ; before, they use the same expression as in (32) for the non-normalized DTF definition. Authors of DTF multiply claimed that DTF is superior over Granger's measure in causality application to the brain connectivity problems, but with accumulation of experience with DTF, Kaminski et al [16] propose use of additional connectivity measure DC together with DTF, in order to reach direct connectivity between nodes i and j in frequency domain; DC is defined by $$ {\mathrm{DC}}_{ij}\left(\lambda \right)={\sigma}_{jj}{H}_{ij}\left(\lambda \right){\left({\displaystyle \sum_{k=1}^{\mathrm{n}}}{\upsigma}_{kk}^2{\left|{H}_{ik}\left(\lambda \right)\right|}^2\right)}^{-1/2}, $$ where σ kl (k, l = 1, …, n) are components of the covariance matrix Σ2. DC measure was earlier considered by Sameshima and Baccala, e.g. [17], and other authors. 
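For readers who want to reproduce the quantities defined above, the following sketch builds A(λ) and the transfer matrix H(λ) of a VAR(p) model from its coefficient matrices and evaluates the normalized DTF given above; PDC, defined in the next paragraph, is obtained analogously by normalizing the columns of A(λ) instead of the rows of H(λ). The two-node coefficients are arbitrary illustrative values, not taken from any of the cited experiments.

```python
import numpy as np

def A_of_lambda(A_coefs, lam):
    """A(lambda) = -sum_{j=0}^p A(j) e^{-2 pi i lambda j} with A(0) = -I,
    for VAR coefficient matrices A(1), ..., A(p)."""
    n = A_coefs[0].shape[0]
    A_lam = np.eye(n, dtype=complex)                      # the -A(0) = I term
    for k, Ak in enumerate(A_coefs, start=1):
        A_lam -= Ak * np.exp(-2j * np.pi * lam * k)
    return A_lam

def dtf(H):
    """Normalized DTF_ij(lambda) = |H_ij| / sqrt(sum_k |H_ik|^2) (rows of H normalized)."""
    mag = np.abs(H)
    return mag / np.sqrt((mag ** 2).sum(axis=1, keepdims=True))

def pdc(A_lam):
    """|PDC_ij(lambda)| = |A_ij| / sqrt(sum_k |A_kj|^2) (columns of A normalized)."""
    mag = np.abs(A_lam)
    return mag / np.sqrt((mag ** 2).sum(axis=0, keepdims=True))

# Illustrative 2-node VAR(1) in which node 1 drives node 2 but not conversely.
A1 = np.array([[0.5, 0.0],
               [0.4, 0.3]])
lam = 0.1                                                 # normalized frequency
A_lam = A_of_lambda([A1], lam)
H = np.linalg.inv(A_lam)                                  # transfer matrix H(lambda) = A(lambda)^(-1)
print("DTF:\n", np.round(dtf(H), 3))
print("PDC:\n", np.round(pdc(A_lam), 3))
```

In this simple unidirectional example both measures report the 1 → 2 influence and a vanishing 2 → 1 entry; the differences between them discussed in the main text only appear once indirect, cascaded influences are present.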
Baccala and Sameshima [18] introduced a normalized measure called Partial Directed Coherence (PDC), measuring direct influence of channel j to channel i at frequency λ, with $$ {\mathrm{PDC}}_{ij}\left(\lambda \right)={\pi}_{ij}\left(\lambda \right)={A}_{ij}\left(\lambda \right){\left({\boldsymbol{a}}_j^{*}\left(\lambda \right){\boldsymbol{a}}_j\left(\lambda \right)\right)}^{-1/2}, $$ where a j is the j -th column in A (λ) and \( {\boldsymbol{a}}_j^{*} \) is Hermitian transpose of a j . Among certain further generalizations, we mention here iPDC ij (λ), the information PDC, intended to measure information flow between nodes j and i (in the sense of Information theory) by Sameshima and collaborators [21], which is obtained from (37) with the expansion by a factor $$ i{\mathrm{PDC}}_{ij}\left(\lambda \right)={\overline{A}}_{ij}\left(\lambda \right){\sigma}_{ii}^{-1/2}{\left({\overline{a}}_j^{*}\left(\lambda \right){\boldsymbol{\Sigma}}_{\boldsymbol{w}}^{-1}{\overline{a}}_j\left(\lambda \right)\right)}^{-1/2}, $$ where \( {\boldsymbol{\Sigma}}_{\mathbf{w}}=\mathbb{E}\left(\boldsymbol{w}(n){\boldsymbol{w}}^T(n)\right) \) is a positive definite covariance matrix of the so called zero mean wide stationary process w(n). With the same intention for DTF, they define information DTF as following, $$ i{\mathrm{DTF}}_{ij}\left(\lambda \right)={\overline{H}}_{ij}\left(\lambda \right){\rho}_{jj}^{1/2}{\left({\overline{h}}_j^{*}\left(\lambda \right){\boldsymbol{\Sigma}}_{\boldsymbol{w}}^{-1}{\overline{h}}_j\left(\lambda \right)\right)}^{-1/2}, $$ where ρ jj is the variance of the so called partialized innovation process ζ j (n) defined by \( {\zeta}_j(n)={w}_j(n)-\mathbb{E}\left({w}_j(n)/\left\{{w}_l(n)\ :l\ne j\right\}\right) \); obviously, as generalizations they have respectively PDC and DTF as their special cases. Granger inverse. In [18] authors claim that PDC is the proper counterpart of Granger causality measure in frequency domain. We observe first the fundamental agreement of the two approaches given by Geweke: $$ {F}_{\boldsymbol{y}\to \boldsymbol{x}}={\left(2\pi \right)}^{-1}{\displaystyle \underset{-\infty }{\overset{+\infty }{\int }}}{f}_{\boldsymbol{y}\to \boldsymbol{x}}\left(\lambda \right)d\lambda . $$ We will use it as a definition of the counterpart measure when only one of the measures is defined. For a couple of measures F x,y and f x,y , with parameter vector x, y we say that they are G-counterparts or G-inverse (G for Geweke) over domain D ⊆ ℝ (ℝ is the set of real numbers), if they satisfy $$ {F}_{xy}=c{\displaystyle \underset{D}{\int }}{f}_{x,y}\left(\lambda \right)d\lambda =c{\displaystyle \underset{\mathrm{\mathbb{R}}}{\int }}{\chi}_D\left(\lambda \right){f}_{x,y}\left(\lambda \right)d\lambda, $$ where χ D is the characteristic function of the set D, which is slightly more general than (41), omitting D when D = ℝ. Thus, if one of the counterparts is given, the other can be calculated using (42). Specially, substituting DTF ij and PDC ij for f in (41)/(42), we obtain their proper (time domain) counterparts, the G-inverses, which we designate by DTF ij and PDC ij . This would complement the claim of Baccala – Sameshima that PDC is the proper inverse of Granger causality measure, with the proper solution for proper inverse. The G-inverse can be defined more generally, i.e. relative more general aggregation operator than integral form present in (42), thus realizing G-inverse relative arbitrary aggregation operator. Kasum, O., Dolicanin, E., Perovic, A. et al. 
Brain connectivity extended and expanded. EPJ Nonlinear Biomed Phys 3, 4 (2015). https://doi.org/10.1140/epjnbp/s40366-015-0019-z
Keywords: Brain connectivity measure; Measure comparison; Weak brain connectivity; Partial (Linear) Dependence
Diagnosis of diabetes insipidus observed in Swiss Duroc boars Alexander Grahofer1, Natalie Wiedemar2, Corinne Gurtner3, Cord Drögemüller2 & Heiko Nathues1 Diabetes insipidus (DI) is a rare disease in humans and animals, which is caused by the lack of production, malfunction or dysfunction of the distal nephron to the antidiuretic effect of the antidiuretic hormone (ADH). Diagnosis requires a thorough medical history, clinical examination and further laboratory confirmation. This case report describes the appearance of DI in five Duroc boars in Switzerland. Two purebred intact Duroc boars at the age of 8 months and 1.5 years, respectively, with a history of polyuric and polydipsic symptoms had been referred to the Swine Clinic in Berne. Based on the case history, the results of clinical examination and the analysis of blood and urine, a tentative diagnosis of DI was concluded. Finally, the diagnosis was confirmed by findings from a modified water deprivation test, macroscopic examinations and histopathology. Following the diagnosis, three genes known to be involved in inherited DI in humans were analyzed in order to explore a possible genetic background of the affected boars. The etiology of DI in pigs is supposed to be the same as in humans, although this disease has never been described in pigs before. Thus, although occurring only on rare occasions, DI should be considered as a differential diagnosis in pigs with polyuria and polydipsia. It seems that a modified water deprivation test may be a helpful tool for confirming a diagnosis in pigs. Since hereditary forms of DI have been described in humans, the occurrence of DI in pigs should be considered in breeding programs although we were not able to identify a disease associated mutation. Primary disorders of water balance, such as diabetes insipidus (DI) or psychogenic polydipsia, belong to the 'polyuric and polydipsic complex'. DI is characterized by polyuria, a markedly decreased urine specific gravity and compensatory polydipsia without other findings. Combining a thorough clinical examination with specific laboratory testing for particular causes (e.g., diabetes mellitus, hyperthyroidism, pyelonephritis, chronic renal failure), the diagnosis can be established and several other differential diagnoses can be ruled out [1, 2]. In order to verify a presumptive diagnosis, a water deprivation test, an antidiuretic hormone (ADH) stimulation test as well as measurement of endogenous ADH have been described [1–4]. DI is caused by an inadequate secretion, release or activity of ADH [5–8]. The ADH, also known as vasopressin, is a neurohypophyseal peptide hormone and its most important function, maintained through interaction with the V2-receptor in the kidney, is to increase water reabsorption [9]. Any distinction of the several forms of DI cannot solely relay on clinical examination, because symptoms are rather unspecific. A central DI (CDI) occurs due the inadequate secretion of ADH, whereas nephrogenic DI (NDI) is characterized by an insufficient or absent response of the distal nephron to the antidiuretic effect of vasopressin [6–8, 10]. The gestational DI (GDI) is caused by the enzyme 'cysteine aminopeptidase', which is produced in the placenta and degrades ADH [6–8]. DI has rarely been reported in pets and farm animals [10, 11]. In humans the disease occurs with a prevalence of 1:25,000 [8]. From these cases less than 10 % can be attributed to a hereditary background. 
In this context, three different genes are often analyzed in order to identify the form of DI [14]: the arginine vasopressin gene (AVP), the arginine vasopressin receptor 2 gene (AVPR2) and the vasopressin-sensitive water channel gene (AQP2). In most of the human CDI cases the disease is acquired and can have various underlying etiologies, such as tumors, trauma, infections or central nervous system malformations [12, 13]. There are also familial forms caused by mutations in AVP. Most of these mutations are inherited in a dominant manner [14] though recessive cases have been reported [15]. The time of onset of first clinical symptoms in familial CDI shows huge variation and it usually occurs after one year of age [14]. The acquired forms of NDI are often secondary diseases following other metabolic disorders (e.g. diabetes mellitus), urinary tract diseases or drug abuse [12]. Familial NDI can either be caused by mutations in the AVPR2 or AQP2 gene [10, 14]. In both circumstances, symptoms usually manifest themselves in the first weeks after birth [14]. Two particular AVPR2 mutations [14] are responsible for about 90 % of familial NDI cases [10], they cause an X-linked recessive form of NDI and therefore mainly affect male individuals [14] with a frequency of 4–8 per 1 million man born alive [16]. Furthermore, there are autosomal dominant and recessive variants of NDI [17]. Mutations in AQP2 are the least frequent cause of familial DI. In most of the cases it is inherited in an autosomal recessive way [10]. This case report describes the findings in Duroc boars with DI. The clinical symptoms were partly in accordance with case reports in other animal species. To the authors' knowledge, this is the first report of the DI in pigs and specifically in Duroc boars. Two purebred Duroc boars were referred to the Swine Clinic Berne, Vetsuisse-Faculty, for further investigation of polyuric and polydipsic symptoms combined with reduced growth despite maintenance of appetite. Owners of the Duroc boars were informed about the possible examinations in written form and they agreed upon Terms of Services that include the intention to publish descriptions of clinical cases in reports. Both animals had a significant drop of semen quality and were excluded from the semen collection process in a boar stud. The first symptoms appeared around a month before referral to the clinic and no other relevant medical history was reported. No treatment was administered by the herd attending veterinarian or animal care taker prior to presentation of the case in the clinic. The boars were kept in individual pens interspersed with straw. In the boar stud, the animals were fed with commercial feed and were provided with fresh water from the public supplier. Water was freely available through a nipple drinker system. The vaccination program included an immunization against Erysipelothrix rhusiopathiae twice and porcine parvovirus once a year. A yearly deworming of all boars was not conducted, because of negative results from regular examinations of faeces. According to the EU-regulations the boar stud was also tested for Classical swine fever, Brucella spp. and Foot-and-mouth disease virus and in addition for Porcine reproductive and respiratory syndrome virus, Porcine herpesvirus 1 and African swine fever virus. During the interview with the herd attending veterinarian and the thorough clinical history three very similar cases were further identified, where some years ago Duroc boars had shown identical signs. 
In these three cases the herd attending veterinarian had performed on-farm necropsies. Furthermore, samples from each boar had been sent to a laboratory for diagnostic purpose including the histopathology and bacteriological investigation of the urinary tract system. Also a physical-chemical urinalysis was conducted. At necropsy and the histopathological examination no distinctive gross lesions were found that could explain the polyuric and polydipsic disorders. Apart from this, pathogens not specific for urinary tract infection were found with low quantity in culture, likely due to environmental contamination. All of the urine samples from these three pigs showed a highly decreased specific gravity without other abnormalities. Case #1 Case #1 was a 1.5 year old intact Duroc boar with a body weight of 172 kg. Clinical examination revealed the boar was alert and in a moderate body condition. The boar showed a dull, ruffled bristled coat and the integument had multiple superficial skin wounds. The boar had several lesions with an exudative inflammatory process on the ears as well as lateral to the left carpal joint and lateral to the right tarsus. The rectal body temperature was 38.6 °C. The heart rate was 116 beats per minute and the heart sound was slightly muffled. The respiratory rate was 16 breaths per minute, with the animal showing a costoabdominal, abdominal breathing type and a moderate expiratory as well as a slight inspiratory respiratory noise. Based on the examination of the mucous membrane there was currently no evidence that the animal had a circulatory insufficiency. The neurological examination revealed no pathological findings. During abdominal ultrasonography, the cranial part of the urinary bladder could not be identified, because the vesica urinaria extended below the ribs. The bladder was completely distended with echogenic urine. No sediment was observed in the urine. The thickness, the regularity of the bladder wall and mucosal relief, all dependent on the bladder's volume, i.e. filling, were evaluated and assessed as being physiological. During the clinical examination no urination could be observed. The genitals were evaluated visually, by digital palpation and by ultrasound examination. The two testes were located in and freely moveable within the scrotum. The scrotum had several skin abrasions and a hard thickening of the skin around 10×4cm on the left lateral side. There was a physiological asymmetry of the testes with both being around 1.5 fists big, but one being slightly bigger than the other one. The tissue was soft and elastic and no pain reaction could be recognized by palpation. No abnormality was found during the ultrasound scan. Results from the concurrent blood examination and the urine analysis are listed in Tables 1 and 2. Table 1 Blood parameters of case #1 Table 2 Urine parameters of case #1 Macroscopic examination and histopathology were performed and are described in the section 'gross examination and histopathological findings'. Case #2 was an 8 months old intact Duroc boar with a body weight of 133 kg. The clinical examination revealed the boar was alert and in a moderate body condition. The bristles and integument were in a good condition, although there were multiple marble sized, firm nodules on both ears and decubitus ulcerations on both carpi and the left tarsus. The rectal body temperature was 37.8 °C. The heart rate was 100 beats per minute and the heart sound was slightly muffled. 
The respiratory rate was 20 breaths per minute, with the animal showing a costoabdominal, abdominal breathing type and a moderate expiratory respiratory noise. Based on the examination of the mucous membrane there was currently no evidence that the animal had a circulatory insufficiency. The animal showed a slightly arched back and tripling in hindquarters. The neurological examination revealed no pathological findings, except of proprioceptive deficits in hindquarters. Moreover, the panniculus reflex was slightly decreased from pelvis until thorax and thereafter moderately increased. During abdominal ultrasonography, the cranial part of the urinary bladder could not be identified, because the vesica urinaria extended into the ribcage. The bladder was completely distended with echogenic urine and no sediment was observed in the urine. The thickness, regularity of the bladder wall and mucosal relief, all dependent on bladder's volume, were evaluated and assessed as being physiological. During the clinical examination urination could be observed nearly every half hour. The genitals were evaluated visually, by digital palpation and by ultrasound examination. Both testes were located and freely moveable within the scrotum. There was a physiological asymmetry of the two fist-sized testes. The tissue was soft and elastic and no pain reaction could be recognized by palpation. No abnormality was found during the ultrasound examination. The caudal part of the paired bulbourethral gland was manually palpated. The consistency and the size of the gland revealed no pathological findings. A blood sample was taken and a complete blood count was performed. Furthermore, the concentration of Thyroxin (T4) was determined in order to exclude hyperthyroidism. The results are listed in Table 3. For further diagnostic examinations and as a support for the final diagnosis an 'abrupt water deprivation test' was conducted. The body weight was measured and a urine sample was taken and examined immediately prior to the trial (Table 4). Then the animal was completely deprived of water and food for 6 h. During the test, the general condition was monitored every 30 min. After the water deprivation test a 6.7 % loss of body weight was assessed and the urine specific gravity was marginally increased from 1.001 to 1.008. Table 3 Blood parameter of case #2 including Thyroxin (T4) Table 4 Comparing urine parameter before and after the water deprivation test Based on the results of the abrupt water deprivation test it was decided to try a therapy with 3-5 drops of Desmopressinacetat (Minirin® solution for intranasal application) into the conjunctival sac every 8 h for 5 days. During the treatment period a clinical examination and measurement of the urine specific gravity was conducted daily. Just a slightly increase of the urine specific gravity was observed (Fig. 1). Five days after the last treatment with Desmopressinacetat, a blood sample was taken and analyzed for Copeptin with an immunoassay for humans. The concentration of Copeptin was lower than 0.8 pmol/l in the serum. Line diagram displaying the development of the urine specific gravity over treatment time. Data were obtained during desmopressin administration with a Duroc boar suffering from polydipsia and polyuria For further diagnostics a necropsy was performed and completed by histopathological examinations. The results are described in the section below. Gross examination The boar showed multiple skin abrasions on various parts of the body. 
The soles of all claws showed fissures. Bilateral the cranial lung lobes were affected by a chronic suppurative bronchopneumonia and about 20 % of the lung tissue was affected. The pericardium contained 0.1 l (L) of clear, serous fluid. The urinary bladder contained approximately 9 L of clear yellowish urine (Fig. 2). The mucosal lining of the urinary bladder was intact and the lumen of the urethra was free from obstruction. The testes were softer than normal on palpation. Macroscopic examination of a Duroc boar suffering from polydipsia and polyuria: enlarged urinary bladder Like the first animal, the boar showed multiple skin abrasions and the claws showed fissures. The boar also suffered from a bilateral chronic suppurative bronchopneumonia which affected about 20 % of the lung tissue. The pericard contained about 1 L of serous and clear fluid. The boar had multiple renal cysts in both kidneys of up to 0.5 cm in diameter. The urinary bladder was filled with 3 L of clear yellowish urine and the mucosa was without changes. The lumen of the urethra was free. Both testes were softer than normal on palpation. Histological findings for both boars The macroscopically affected parts of the lung showed a severe infiltration with degenerated neutrophils into the lumen of bronchi and bronchioli and extending into the alveoli. Additionally, there was exudation of fibrin admixed with necrotic debris and proliferation of fibroblasts. In both pigs the tubuli seminiferi were mostly devoid of mature spermatids and contained a reduced amount of spermatogonia. Additionally, there was also a multifocal reduced amount of sertoli cells. The interstitium of the testes contained few foci of lymphocytes with a mild amount of edema. In the gray and white matter of the spinal cord of case #1, there were multiple swollen and hypereosinophilic axons (spheroids). Additionally, in the white substance along the whole length of the spinal cord there were few glial nodules. In the interstitium of the kidneys of boar 1, there were few foci composed of lymphocytes and a lesser amount of macrophages. Some renal tubuli of case #1 contained a small amount of eosinophilic proteinaceous material. Few tubuli of case #2 showed mild multifocal degeneration of epithelial cells. The corpus of the urinary bladder showed no histological changes. Multifocal in the hypophysis there were small amounts of mineralized colloid. No special findings were present in the hypothalamus. Bacteriological investigation A bacteriological examination of the affected lung from case #1 yielded a moderate to high concentration of Trueperella pyogenes and Pasteurella multocida subsp. multocida. With the aim of examining a possible genetic background of the disease, the pedigrees of the five affected boars were analyzed. All cases can be traced back to a total of 101 common ancestors, the closest of them 3, to 6 generations away from the affected animals (Fig. 3). According to the knowledge about the genetic causes of DI in humans, the annotated exons of the three functional candidate genes AVP, AVPR2 and AQP2 were sequenced in material obtained from the affected boars and from healthy animals serving as controls, which have been collected in our laboratory in the course of other ongoing studies. The DNA was isolated either from EDTA-blood using the Nucleon Bacc2 kit (GE Healthcare) or from ear punch biopsies using QIAGEN's DNeasy spin kit according to the manufacturers' instructions. 
Exon-spanning primers (Table 5) were designed using Primer3 software [17] after masking of repetitive sequences with RepeatMasker [18]. PCR reactions were carried out in 10 μl volumes with 5 pmol primer, 5 μl Amlitaq Gold 360 Master Mix (LifeTechnologies), 1 μl 360 GC Enhancer (LifeTechnologies) and ~20 ng of genomic DNA. PCR-products were amplified using GeneAmp 9700 thermocycler (LifeTechnologies), the amplification conditions were 10 min at 95 °C, 32 cycles of 30 s denaturation at 95 °C, 30 s annealing at 60 °C and 1 min elongation at 72 °C, followed by a 7 min hold at 72 °C. To remove redundant primers and nucleotides PCR-products were purified with 1 unit exonuclease I (Roche) and 0.5 units of shrimp alkaline phosphatase (New England BioLabs) in a 30 min incubation step at 37 °C, followed by a 15 min inactivation step at 80 °C. Subsequently the PCR-products were sequenced with the BigDye Terminator v.3.1 cycle sequencing kit (LifeTechnologies) at the following conditions: a hold of 96 °C, followed by 25 cycles of 10 s at 96 °C, 5 s at 50 °C and 2 min at 60 °C. Sequencing-products were resolved on an ABI 3730 capillary sequencer (LifeTechnologies) and the obtained sequence data was analyzed with Sequencher 5.1 software. The sequences were compared to the pig reference sequence and deviations from the reference (variants) were searched for in the affected animals (Table 6). As the mode of inheritance is not defined, both homozygous and heterozygous variants were considered. Subsequently, variants, which were present in the healthy controls, were excluded. In AVP three exonic single nucleotide polymorphisms (SNPs) were found in the cases, but all of them were also present in homozygous state in control animals. In AVPR2 two exonic SNPs were found in four of the cases but all of them were present in healthy control animals as well. In AQP2 no variants were found in the cases. Pedigree of five Duroc boars affected by polydipsia and polyuria. All the cases (shown in black) can be traced back to a particular number of common ancestors. The closest common ancestor is shown in the figure (labelled with an arrow). It's a boar which is 3, 4, 4, 5 and 6 generations away respectively Table 5 Primer used for sequencing of DNA from Duroc boars affected by polydipsia and polyuria and from negative controls Table 6 Sequence variants detected in two candidate genes from Duroc boars suffering from polydipsia and polyuria Clinical signs, findings of the physical examination and results of further diagnostic methods confirmed the diagnosis of DI in two Duroc boars originating from a semen collection centre. There are several forms of DI described, but in the present study it was not possible to classify the type of DI accurately, as there are some limitations in diagnostics for pigs. With a clinical history characterized by polyuria and polydipsia in pigs several differential diagnoses have to be kept in mind and each one has to be excluded from the list of differential diagnosis. Therefore, clinical examination and further tests are essential and need to be performed sequentially. During the physical examination the urinary bladder of both boars were extremely full and distended, but dehydration, which is often reported in DI [19], could not be observed. It is noteworthy that in pigs there is no adequate method to measure the hydration status and therefore experimental studies in DI use diuretic medication [20] or fluid restricted pigs [21] for evaluation. 
As a next step, analysis of the urine was performed and revealed hyposthenuria with a specific gravity of 1.001 in one boar and 1.002 in the other boar. The specific gravity in pigs, which is the lowest among animals, is usually 1.020 on average and can range between 1.010 and 1.050 [22, 23]. No further abnormalities such as glucosuria or inflammatory signs were detected. Several blood compounds are able to provoke dysfunction of the renal system; hence, a complete blood count was conducted and showed a slight chronic inflammation, which might have been caused by the pneumonia. As a further diagnostic approach a modified water deprivation test was performed. The test is designed to determine whether ADH is released in response to dehydration and whether the kidneys are able to respond normally to the hormone [1–3]. The aim of the test is to achieve maximal ADH secretion and thereby more concentrated urine. This commonly occurs after a 3 to 5 % loss of body weight due to water loss, which makes it necessary to measure the body weight several times. Additionally, emptying of the bladder with a catheter is required [1]. Due to the characteristic anatomy of the urethra of male pigs, transurethral catheterization of the bladder is impossible [24]. The endpoint of the water deprivation test is a loss of body weight greater than 5 % or a specific gravity of the urine higher than 1.030 [1]. A more accurate measurement is the assessment of the urine's osmolality [1]. Unfortunately, this was not feasible throughout the whole water deprivation test because of logistical and financial reasons. The osmolality of urine can also be approximated by multiplying the last two digits of the urine specific gravity by 36 [25]. There was a single measurement of urine osmolality for case #2, which was performed during the trial and revealed a result of 32.5 mOsm/Kg. Comparing the measured value with the calculated urine osmolality of 36 mOsm/Kg, there was a good correlation, which can also be observed in other species, e.g. in dogs. The pig is a good model for renal research. The ratio between urine and serum osmolality in healthy pigs and healthy humans is 3.3 [21]. In the literature, there are numerous different equations described to calculate serum osmolarity in humans. One group reported an equation (Eq. 1) derived from results of an experiment in pigs, where concentrations are expressed in mEq/L [26]. Equation 1: $$\text{Serum osmolarity} = 1.8177 \times [\text{Na}]\,(\text{mEq/L}) + [\text{Urea}]\,(\text{mEq/L}) + [\text{Glucose}]\,(\text{mEq/L}) + 26.05$$ A baseline of 294.9 with a SD of ± 1.8 mOsm/Kg in 10–40 kg female Yorkshire-Duroc crossbred pigs was reported [27]. Furthermore, in fattening pigs 284.74 ± 5.73 mOsm/Kg [26] was described. In order to get an indication of serum osmolarity in boars, three additional Duroc boars aged 8 months to 1.1 years from the semen collection centre were tested. A reference range from 321 to 326 mOsm/Kg was measured. The calculated value of serum osmolarity in case #2 was 290.9 mOsm/Kg and, thus, within the above-mentioned reference values taken from the literature. In contrast, the calculated value was significantly lower than those measured in the control boars.
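As a quick illustration of the two estimates just described, the minimal Python sketch below implements the specific-gravity rule of thumb and Eq. 1. The function names and the example input values are ours and purely illustrative; they are not taken from the reported cases.

```python
def urine_osmolality_from_sg(specific_gravity: float) -> float:
    """Approximate urine osmolality (mOsm/kg) from specific gravity.

    Rule of thumb quoted above: multiply the last two digits of the
    specific gravity (e.g. 1.008 -> 8) by 36.
    """
    last_two_digits = round((specific_gravity - 1.000) * 1000)
    return last_two_digits * 36


def serum_osmolarity_eq1(na_meq_l: float, urea_meq_l: float, glucose_meq_l: float) -> float:
    """Estimate serum osmolarity with Eq. 1 (all concentrations in mEq/L)."""
    return 1.8177 * na_meq_l + urea_meq_l + glucose_meq_l + 26.05


if __name__ == "__main__":
    # Illustrative inputs only, not measurements from the reported cases.
    print(urine_osmolality_from_sg(1.001))                  # 36, matching the value quoted for case #2
    print(round(serum_osmolarity_eq1(140.0, 5.0, 5.0), 1))  # e.g. 290.5
```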
However, the result generally must be interpreted with caution, because there is a significant bias by food and water intake [28], which can have an impact on the level of osmolarity. If we compare the ratio of serum osmolarity with urine osmolality it is almost three times higher than the average value of healthy pigs. After the water restriction test a therapeutic attempt was tried, as it is commonly performed with small animals and also with horses. The animals are treated with desmopressin acetate (DDAVP), a synthetic analogue of ADH. There are several routes of administration available, but most often the conjunctival route is chosen. Therefore, one to two drops are applied into the conjunctival sac of both eyes every 12 to 24 h [3, 29]. The eye drop method is a non-invasive, practical and effective way of hormone administration [30]. An oral application of desmopressin is possible, but the bioavailability is lower compared to the afore mentioned method [19, 31]. The duration of the effect of DDAVP varies from eight to 24 h [3]. Since no specific dosage regime was available for pigs, a treatment of the boar 2-3 times per day with 3–5 drops in the conjunctival sac of one eye was proposed. Only a slight decrease of the urine specific gravity was observed. In small animals this result would lead to diagnosis of primary NDI. However, the porcine vasopressin contains a lysine residue in position 8 (lysine-vasopressin), which makes the pig quite different from other mammalian species, where vasopressin contains an arginine residue in position 8 (arginine-vasopressin) [9]. Therefore the porcine V2 receptor has less sensitivity to desmopressin than human V2 receptors. In the literature a two hundred times lower affinity of desmopressin on porcine V2 receptor is described [32]. Even in high doses desmopressin did not induce any hematological response [33]. The authors clinical interpretation is that desmopressin cannot be recommended for diagnostics or treatment in pigs due to the implications mentioned above. Unfortunately, we performed both diagnostic approaches, therapeutic attempt and modified water deprivation test, only in one boar, because it was only possible to keep one boar in our facilities, regarding to Swiss legislation. To prove the efficiency for both tests in a larger number of pigs with DI, warrants further investigation. Another diagnostic tool for the confirmation of the disease's aetiology is the measurement of endogenous ADH in plasma, where in case of diseases osmotic and cardiovascular homeostasis are disturbed [4]. However, the reliability of assessments of plasma ADH levels is poor because the hormone is unstable, largely attached to platelets, and rapidly cleared from plasma [34, 35]. Therefore, the level of a precursor of ADH, Copeptin, which is stable for days, is usually measured in plasma samples [35]. Test kits, specifically designed for pigs, are available for the purpose of clinical research, but they are rarely used and in the presented case the authors could not find a laboratory offering the determination of Copeptin in swine plasma. The genetic analysis revealed that all the candidate variants in the three DI candidate genes were obviously not associated with the disease and therefore no genetic explanation of the phenotype was found by sequencing of these three genes. Pedigree analysis showed a large number of common ancestors among the affected cases indicating inbreeding. 
But these shared ancestors are some generations ago which decreases the likelihood of a common inherited simple recessive mutation. Nevertheless, a dominantly inherited mutation with a late onset of clinical signs could also be the cause of the disease, as for example human CDI is caused by dominant mutations of AVP [14]. A recessive inheritance analogue to AQP2 mutations in humans [10] with inbreeding loops further behind in the pedigree is possible as well as an X-linked disease like NDI caused by AVPR2 mutations in humans [14]. Even though, both types of mutations usually manifest in the first weeks after birth in humans. As the phenotype in the boars was recognized later in life one can also hypothesize that a dominant mutation in one of these three genes which is located in the upstream, intronic, or downstream regions affecting the expression level is causing the disease. The used approach is only appropriate to detect variants in the coding region and therefore other more comprehensive methods like genome-wide-association mapping in combination with sequencing of the genome of one case could be useful to map the responsible locus in the swine genome and finally to find the causative mutation. This case report provides a description of a diagnostic approach to confirm DI in pigs. The report also addresses the limitations and pitfalls while diagnosing DI due to the limited availability of different tests and methods for pigs. To the author's knowledge, this report is the first describing DI in Duroc boars. Severe polyuria and polydipsia as major clinical signs indicated an inclusion of Diabetes insipidus in the differential diagnosis. Failure to differentiate polyuric syndromes from other conditions may lead to an incorrect or inconclusive diagnosis of DI. Importantly, a response to ADH administration cannot be used as a diagnostic approach in pigs, because the chemical structure of the product commonly used in humans, cats and dogs is not stimulating the V2 receptors in swine. The observed relationship of the affected animals suggest a possible genetic cause. Although coding mutations in three DI candidate genes can be excluded a genetic background of the disease could not be ruled out and should be carefully investigated in future using more comprehensive methods. Nichols R. Polyuria and polydipsia. Diagnostic approach and problems associated with patient evaluation. Vet Clin North Am Small Anim Pract. 2001;31:833–44. Grünbaum EG, Moritz A. The diagnosis of nephrogenic diabetes insipidus in the dog. Tierarztl Prax. 1991;19:539–44. Nichols R, Hohenhaus AE. Use of the vasopressin analogue desmopressin for polyuria and bleeding disorders. J Am Vet Med Assoc. 1994;205:168–73. Robertson GL. The use of vasopressin assays in physiology and pathophysiology. Semin Nephrol. 1994;14:368–83. Braun U, Feller B, Gerber A, Ossent P. Diabetes insipidus bei einem Braunviehrind mit Hydrocephalus internus. Schweiz Arch Tierheilkd. 2008;150:409–12. Fenske W, Allolio B. Clinical review: Current state and future perspectives in the diagnosis of diabetes insipidus: a clinical review. J Clin Endocrinol Metab. 2012;97:3426–37. Shapiro M, Weiss JP: Diabetes & Metabolism Diabetes Insipidus : A Review. J Diabetes Metab 2012:1–11. Di Iorgi N, Napoli F, Allegri AEM, Olivieri I, Bertelli E, Gallizia A, et al. Diabetes insipidus--diagnosis and management. Horm Res pædiatrics. 2012;77:69–84. Petersen MB. 
The effect of vasopressin and related compounds at V1a and V2 receptors in animal models relevant to human disease. Basic Clin Pharmacol Toxicol. 2006;99:96–103. Wesche D, Deen PMT, Knoers NVAM. Congenital nephrogenic diabetes insipidus: the current state of affairs. Pediatr Nephrol. 2012;27:2183–204. Burnie AG, Dunn JK. A case of central diabetes insipidus in the cat: diagnosis and treatment. J Small Anim Pract. 1982;23:237–41. Baylis PH, Cheetham T. Diabetes insipidus. Arch Dis Child. 1998;79:84–9. Article CAS PubMed Central PubMed Google Scholar Al-Agha AE, Thomsett MJ, Ratcliffe JF, Cotterill AM, Batch JA. Acquired central diabetes insipidus in children: a 12-year Brisbane experience. J Paediatr Child Health. 2001;37:172–5. Fujiwara TM, Bichet DG. Molecular biology of hereditary diabetes insipidus. J Am Soc Nephrol. 2005;16:2836–46. Willcutts MD. Autosomal recessive familial neurohypophyseal diabetes insipidus with continued secretion of mutant weakly active vasopressin. Hum Mol Genet. 1999;8:1303–7. Bichet DG. Vasopressin receptor mutations in nephrogenic diabetes insipidus. Semin Nephrol. 2008;28:245–51. Homepage Primer3 [http://bioinfo.ut.ee/primer3-0.4.0] Homepage Repeat Masker Server [http://www.repeatmasker.org] Aroch I, Mazaki-Tovi M, Shemesh O, Sarfaty H, Segev G. Central diabetes insipidus in five cats: clinical presentation, diagnosis and oral desmopressin therapy. J Feline Med Surg. 2005;7:333–9. Fahlman A, Dromsky DM. Dehydration effects on the risk of severe decompression sickness in a swine model. Aviat Sp Env Med. 2006;77:102–6. Hess JR, MacDonald VW, Winslow RM. Dehydration and shock: an animal model of hemorrhage and resuscitation of battlefield injury. Biomater Artif Cells Immobilization Biotechnol. 1992;20:499–502. Kahn CM, editor. The Merck veterinary manual. 10th ed. Merck: Whitehouse Station; 2010. Drolet R. Urinary system. In: Zimmermann JJ, Karriker LA, Ramirez A, Schwartz KJ, Steveson GW, editors. Dis Swine. 10th ed. Chichester: Wiley-Blackwell; 2012. p. 363–79. Holliman CJ, Kenfield K, Nutter E, Saffle JR, Warden GD. Technique for acute suprapubic catheterization of urinary bladder in the pig. Am J Vet Res. 1982;43:1056–7. Hendriks HJ, de Bruijne JJ, van den Brom WE. The clinical refractometer: a useful tool for the determination of specific gravity and osmolality in canine urine. Tijdschr Diergeneeskd. 1978;103:1065–8. Căpriţă R, Căpriţă A. Experimentally-derived formula for computing serum osmolarity in pigs. Sci Pap Anim Sci Biotechnol. 2009;42:537–42. Houpt TR, Anderson CR. Spontaneous drinking: is it stimulated by hypertonicity or hypovolemia? Am J Physiol. 1990;258:143–8. Houpt TR, Yang H. Water deprivation, plasma osmolality, blood volume, and thirst in young pigs. Physiol Behav. 1995;57:49–54. Harb MF, Nelson RW, Feldman EC, Scott-Moncrieff JC, Griffey SM. Central diabetes insipidus in dogs: 20 cases (1986-1995). J Am Vet Med Assoc. 1996;209:1884–8. Kranenburg LC, Thelen MHM, Westermann CM, de Graaf-Roelfsema E, van der Kolk JH. Use of desmopressin eye drops in the treatment of equine congenital central diabetes insipidus. Vet Rec. 2010;167:790–1. Critchley H, Davis SS, Farraj NF, Illum L. Nasal absorption of desmopressin in rats and sheep. Effect of a bioadhesive microsphere delivery system. J Pharm Pharmacol. 1994;46:651–6. Ufer E, Postina R, Gorbulev V, Fahrenholz F. An extracellular residue determines the agonist specificity of V2 vasopressin receptors. FEBS Lett. 1995;362:19–23. 
Bowie EJ, Solberg LA, Fass DN, Johnson CM, Knutson GJ, Stewart ML, et al. Transplantation of normal bone marrow into a pig with severe von Willebrand's disease. J Clin Invest. 1986;78:26–30. Scollan KF, Bulmer BJ, Sisson DD. Validation of a commercially available enzyme immunoassay for measurement of plasma antidiuretic hormone concentration in healthy dogs and assessment of plasma antidiuretic hormone concentration in dogs with congestive heart failure. Am J Vet Res. 2013;74:1206–11. Morgenthaler NG, Struck J, Alonso C, Bergmann A. Assay for the measurement of copeptin, a stable peptide derived from the precursor of vasopressin. Clin Chem. 2006;119:112–9. Andersson H, Wallgren M, Rydhmer L, Lundström K, Andersson K, Forsberg M. Photoperiodic effects on pubertal maturation of spermatogenesis, pituitary responsiveness to exogenous GnRH, and expression of boar taint in crossbred boars. Anim Reprod Sci. 1998;54:121–37. The authors gratefully acknowledge Dr. Thierry Francey, Small Animal Clinic, Vetsuisse Faculty University of Berne, for his continuous support during this study. The authors also thank the veterinarians from the boar stud for the provision of data. Clinic for Swine, Department of Clinical Veterinary Medicine, Vetsuisse Faculty, University of Berne, Bremgartenstrasse 109a, CH-3012, Bern, Switzerland Alexander Grahofer & Heiko Nathues Institute of Genetics, Department of Clinical Research and Veterinary Public Health, Vetsuisse Faculty, University of Berne, Bremgartenstrasse 109a, CH-3012, Bern, Switzerland Natalie Wiedemar & Cord Drögemüller Institute of Animal Pathology, Department of Infectious Diseases and Pathobiology, Vetsuisse Faculty, University of Berne, Länggassstrasse 122, CH-3012, Bern, Switzerland Corinne Gurtner Alexander Grahofer Natalie Wiedemar Cord Drögemüller Heiko Nathues Correspondence to Heiko Nathues. There are no competing interests of any of the authors that could inappropriately influence or bias the content of the paper. AG performed the clinical examination, developed the diagnosis, designed the treatment of the boars, summarized the results of the cases and drafted the manuscript. NW planned and performed the genetic analysis and drafted parts of the manuscript. CG conducted the macroscopic examination and the histopathology. HN and CD supervised and coordinated the project. All authors contributed to the development and the revisions of the manuscript and approved the final version. Grahofer, A., Wiedemar, N., Gurtner, C. et al. Diagnosis of diabetes insipidus observed in Swiss Duroc boars. BMC Vet Res 12, 22 (2016). https://doi.org/10.1186/s12917-016-0645-4 Hyposthenuria Antidiuretic hormone Water deprivation test
Ameliorative effect of selenium yeast supplementation on the physio-pathological impacts of chronic exposure to glyphosate and/or malathion in Oreochromis niloticus Marwa A. Hassan ORCID: orcid.org/0000-0001-8835-10481, Samaa T. Hozien2, Mona M. Abdel Wahab2 & Ahmed M. Hassan1 Pesticide exposure is thought to be a major contributor to the deterioration of the health of living organisms, as evidenced by its impact on both cultured fish species and human health. Commercial fish diets are typically deficient in selenium (Se); hence, supplementation may be necessary to meet requirements during stress. Therefore, this study was conducted to investigate the protective role of selenium yeast (SY) supplementation for 60 days against the deleterious effects of glyphosate and/or malathion chronic toxicity at sublethal concentrations in Oreochromis niloticus. Two hundred and ten fish were divided into seven groups (n = 30/group) as follows: G1 (negative control); G2 (2 mg L− 1 glyphosate); G3 (0.5 mg L− 1 malathion); G4 (glyphosate 1.6 mg L− 1 and malathion 0.3 mg L− 1); G5 (glyphosate 2 mg L− 1 and SY 3.3 mg kg− 1); G6 (malathion 0.5 mg L− 1 and SY 3.3 mg kg− 1); and G7 (glyphosate 1.6 mg L− 1; malathion 0.3 mg L− 1 and SY 3.3 mg kg− 1). Results revealed significant alterations in growth performance parameters including feed intake (FI), body weight (BW), body weight gain (BWG), specific growth rate (SGR), feed conversion ratio (FCR), and protein efficiency ratio (PER). G4 had the highest documented cumulative mortalities (40%), followed by G3 (30%). Additionally, the greatest impact was documented in G4, followed by G3 and then G2, in the form of severe anemia with significant thrombocytopenia; leukocytosis; hypoproteinemia; increased alanine aminotransferase (ALT) and aspartate aminotransferase (AST), urea, and creatinine, as well as malondialdehyde (MDA), superoxide dismutase (SOD) and glutathione peroxidase (GPx). Considering the previously mentioned parameters, selenium yeast (Saccharomyces cerevisiae) (3.3 mg kg− 1 available selenium) mitigated the negative impact of both agrochemicals, whether applied singly or in combination, in addition to its antioxidative action. In conclusion, our study found that organophosphorus agrochemicals, single or combined, had negative impacts on Oreochromis niloticus in terms of growth performance and serum biochemical and hematological changes, and induced oxidative damage in liver and kidney tissues. Supplementation of SY at the rate of 3.3 mg kg− 1 diet (2.36 mg kg− 1 selenomethionine and 0.94 mg organic selenium) ameliorated the fish performance and health status adversely affected by organophosphorus agrochemical intoxication. Egyptian aquaculture production has increased significantly in the last decade, and Egypt is currently the leading producer in Africa. Egypt is the world's ninth-largest aquaculture supplier, producing over 1.6 million tonnes in 2019, with Nile tilapia accounting for the overwhelming majority (66%) [1]. The most farmed fish is Nile tilapia, and Egypt is the world's second-largest producer of farmed tilapia after China, with a total value of over 900,000 USD [2]. Grey mullet and carp are frequently farmed, often in combination with tilapia, and together these species represent more than 95% of Egyptian aquaculture production [2]. Egypt is one of the countries with restricted water resources, with limited volume and quality of water available for fish farming [3].
The aquaculture business, regardless of size, is not permitted to utilize irrigation or Nile water and must alternatively depend on water from agricultural drainage canals and groundwater [4]. The Nile Delta area is surrounded by semi-intensive fish culture using both brackish and freshwater, which is Egypt's most significant farming technology accounting for 86% of aquaculture production [2]. The agricultural drainage water polluted with various agricultural chemicals including organophosphates is negatively influencing the quality of farmed fish [5]. Pesticide and herbicide pollutants, particularly run-off from agricultural areas, are a major global concern because of the acute and chronic toxicity to aquatic organisms [6]. Since the ban on organochlorines (OC) due to their continuing harmful effects, organophosphates (OP) have been chosen as the most preferred insecticides in today's world to make pest-free crops more productive [7]. OP are considered global environmental hazardous substances due to their sustained use. Significant levels of total organophosphorus pesticide residues were detected in aquaculture water (73.57 ± 62.97 ppb), sediment (103.03 ± 16.05 ppb), and fish muscle samples (Claris gariepinus 49.1 ± 17.8 ppb, Tilapia zilli 48.3 ± 18.9 ppb and Oreochromis niloticus (45.6 ± 28.7 ppb) [8]. On March 20, 2015, the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) categorized two organophosphate insecticides (malathion and diazinon) and one herbicide (glyphosate) as "probably carcinogenic to humans" (Group 2A). However, these two pesticides and glyphosate are widely used in Egypt [9]. Glyphosate traces have been identified in surface waterways in various sites (8.7 ug L − 1 [10], 86 ug L − 1 [11], and 430 ug L − 1 [12], malathion has been estimated by Derbalah and Shaheen [13] in water samples at different fish farms sites and found their concentration ranged from 0.37 to 4.12 μg L− 1. Exposure to pesticides, either chronic or acute, could have deleterious impacts on fish performance, physiology, biochemistry, population stability, and the entire ecosystem [14, 15]. Consequently, fish consumption can be identified as a key component of human exposure to these pollutants, indicating their potential risk to human health due to bioaccumulation in farmed fish [16, 17]. Frequent sub-lethal pesticide exposure affects the fish's growth performance, survival rate, hepato-somatic index, and immunity [18]. N-(phosphoromethyl) glycine, also known as glyphosate, is a weed control herbicide derived from phosphonic acid and glycine that is widely used in agriculture [19]. It is a significant organophosphate (OP), a water-soluble herbicide with a broad spectrum of activity, used to eradicate grass and other unwanted broad-leaf weeds that compete with crops grown worldwide [20]. Glyphosate induces hepatic and renal impairment in Oreochromis niloticus, and mortality was directly related to exposure dosage; it could be considered highly toxic to Nile tilapias, hence its use near a fish farm or in nearby aquatic environments should be prohibited [21]. Malathion is introduced into the environment at sub-lethal levels, causing serious intimidation as well as severe metabolic disturbances in fish, resulting in a decline and impairment in growth rate and physiological condition [22]. Exposure to malathion at sub-lethal concentrations induced biochemical and hematological alterations in Oreochromis niloticus and led to oxidative damage [23]. 
Selenium (Se) is an important trace element for human and animal species' successful functionality; however, unlike many other trace elements, it has a limited quantitative range of concentrations between deficiency and physiological conditions and toxic concentrations [24, 25]. A significant amount of research in different animal species has shown the importance of selenium in animals including fish [26] and laying hens for Se-enriched egg production [27]. Selenium can be found in the environment in its inorganic elemental state (Se0) as selenides (Se2−), selenates (SeO42−), or selenites (SeO32−) [28], and organic forms as selenomethionine (SeMet) and seleno-cystein (SeCys), [25]. The transformations of selenium depend on various factors such as pH, amount of free oxygen, redox potential, and humidity. Anaerobic conditions, and an acidic environment support the formation of selenium molecules in lower oxidation states, while the higher oxidation states of this element are dominant under aerobic conditions and at alkaline pH [29]. Bioaccumulation of toxic substances triggers redox reactions generating free radicals, especially ROS, which cause physiological alterations in fish tissues [30]. Selenoproteins (SePs) play vital biological roles [31], within cells as components of enzymes including glutathione peroxidase (GPx), deiodinase iodothyronine, and thioredoxin reductase (TRxR), which protect cells from the toxic and harmful effects of free radicals, especially ROS. They also participate in the oxidation of hydrogen peroxide and lipid hydroperoxides as an antioxidant factor [32]. Additionally, Se has been shown to improve performance, counteract reactive oxygen species, and protect the structure and function of proteins, DNA, and chromosomes from oxidative damage [33]. GPx group SePs are predominant in all three domains of life (archaea, bacteria, and eukarya) [34]. The bacteria, protozoa, fungi, and terrestrial plants contain SeCys56-containing GPx sequence homology. GPx carries out a variety of biological roles in cells, including the regulation of hydrogen peroxide (H2O2), hydroperoxide detoxification, and the maintenance of cellular redox homeostasis [25]. Furthermore, the presence of Se in the active site of GPx affects both its catalytic activity and spatial conformation [23]. Excess selenium can be detrimental to the body; however, reliable measurement of dangerous amounts of selenium is challenging due to the element's presence in numerous chemical forms [25]. Se toxicity is affected by the Se compound, mode of administration, species of animals, exposure period, idiosyncrasy, physiological condition, and association with other metals, among other factors [35]. Both organic and inorganic forms of selenium can have a detrimental effect on the organism [36]. The dose-dependent selenium toxicity is coupled with competitive inhibition of selenium and Sulphur, resulting in the initiation of Sulphur metabolism (transformation) [37]. Selenium may replace Sulphur in amino acids (cysteine and methionine), while its inorganic form substitutes sulphur during mercapturic acid formation and the interaction of selenites with thiol groups [38]. Therefore, deformed, malfunctioning enzymes and protein molecules could be observed, causing disruptions in the biochemical activity of the cell [39]. High selenium concentrations in the body induce severe hepatic damage, reduced triiodothyronine [T3] levels, and the loss of natural killer cells [40]. 
Resistance to selenium toxicity is determined by, among other variables, the speed of excretion, and selenium excretion is determined by the rate of methylation of selenium, as discovered in fish [41]. The minimal requirement for Se in livestock is 0.05–0.10 mg/kg dry forage, whereas the toxic Se dosage in animal feed is 2–5 mg/kg dry forage [42]. Selenium methylation detoxifies selenium by forming methyl selenides; nevertheless, an overabundance of selenium in the form of selenocysteine reduces selenium methylation [43]. However, the full decrease of Se to elemental selenium, as accomplished by some bacteria, and the synthesis of heavy metal selenides such as Ag2Se or Hg2Se result in a non-catalytic, non-toxic form of selenium [44]. Some selenium compounds' catalytic prooxidant activity appears to be responsible for their toxicity when it exceeds plant and animal methylation processes and antioxidant defenses. Excess selenium can indeed be catabolized into hydrogen selenide and released into the breath, or it can be catabolized into trimethyl-selenium ion and released into the urine [45]. Organic Se supplementation rather than inorganic form had greater absorption, retention rate in fish [46], antioxidant activity, and lower toxicities [47], resulting in less environmental pollution [48]. Besides, Se′s biological function is related to its incorporation into the structure of proteins important for metabolism via SeCys [32]. SeCys can be found in animal tissues and Se-containing proteins, whereas SeMet can be found in yeast, algae, bacteria, and plants [25], and it replaces sulfur in the thiol group (−SH) [24]. Recently, hydroxy-selenomethionine (OH-SeMet) had been synthesized to increase Se bioavailability [49]. The idea behind commercializing Se-Yeast is that it has the potential to supply Se in a more natural dietary form due to its effectiveness and safety [32]. Additionally, it is believed that seleno-compounds in Se-Yeast (SY) are highly bioavailable [50]. Yeast cells can bind organic and inorganic selenium, then bio-accumulates via membrane assembly receptors (extracellularly) and ion transport across the cytoplasmic membrane (intracellularly), then detoxified via oxidation, reduction, methylation, and selenoprotein synthesis processes, allowing yeasts to survive in high selenium concentration culture conditions, implying that selenium yeasts are likely the best absorbers of this element [39]. The use of refined yeast (S. cerevisiae) products high in Se is a viable and relatively inexpensive option for Se supplementation [51]. Therefore, the main objective of the study was to evaluate the ameliorative effect of selenium yeast supplementation on the detrimental effects of glyphosate and or malathion on growth performance, hematology, biochemistry, and oxidative stress in Oreochromis niloticus after a single or combined chronic exposure. Chemicals and fish diet preparation Glyphosate (48% purity) and malathion (57% purity) were purchased from Egypt Kim International Agrochemicals and prepared with distilled water to make a stock solution. The half-life was determined in triplicates (100 L aquarium) at a concentration of 2 mg L− 1 glyphosate and 0.5 mg L− 1 malathion under the physicochemical parameters of water (22 ± 1 °C and pH 8 ± 0.1) which were to be used for experimental fish exposure to the two chemicals, for chronic toxicity assessment. 
Samples were collected at 24 h intervals for 72 h and concentrations were assessed by high-performance liquid chromatography (HPLC) using an Agilent Series 1200 quaternary gradient pump, Series 1200 autosampler, Series 1200 UV and fluorescence detector, and HPLC 2D ChemStation software (Hewlett-Packard, Les Ulis, France). The analytical column (stationary phase) was a reversed-phase C18 (250 × 4.6 mm, 5 μm) Teknorama (Spain). The sample results were processed by probit analysis to calculate the half-life (2.78 and 2.3 days for glyphosate and malathion, respectively) (data not shown). For treatments supplemented with SY, a commercial basal diet was crushed and mixed with 0.8 g selenium yeast (Saccharomyces cerevisiae) per kg of diet (Yeast Sel 2000, Ultra Bio-Logic Inc) containing 2.36 mg kg− 1 selenomethionine and 0.94 mg organic selenium. SY was generously provided by Kairouan Group Company, Egypt. The diet was pelletized, spread to dry, and stored at 4 °C for the feeding experiment. Generally, SY-treated and non-treated diets were administered orally to fish at a rate of approximately 3% of fish body weight, 2 times/day. Ingredients of the basal diet (g kg− 1 total diet) were: fish meal (300), soybean (350), vitamin and mineral mix (3), corn starch (150), soybean oil (25), wheat bran (25). Chemical composition by proximate analysis (% dry matter) included: dry matter (94.22), crude protein (42.01), crude lipid (6.3), crude fiber (4.9), ash (7.43), nitrogen-free extract (39.18), and gross energy (460 kcal/kg). The vitamin and mineral premix (per kg of mixture) contained the following: 15,000 IU vitamin A, 1500 IU vitamin D3, 2.0 mg vitamin E, 2 mg vitamin K3, 2.5 mg vitamin B2, 10 mg vitamin B3, 3 mg vitamin B6, 2 mg vitamin B1, 5 mg vitamin B12, 5.5 mg pantothenic acid, 1 mg niacin, 2 mg folic acid, 100 mg choline, 4 g copper, 300 mg iodine, 30 g iron, 60 g manganese, 50 g zinc, 855.5 g calcium carbonate. Some water quality parameters were measured daily [pH, DO, and temperature, using a Jenway 370 pH meter (UK) and a Crison OXI 45 P (EU)], twice weekly [un-ionized ammonia (NH3) and nitrate (NO3), following the spectrophotometric phenate method and the UV screening spectrophotometric method, respectively, according to APHA [52], using an 1100 Techocomp UV/visible spectrophotometer], and once weekly [total hardness (ethylenediaminetetraacetic acid (EDTA) titrimetric method), total alkalinity (titrimetric method), and chloride (argentometric method)]. Sampling procedures and analytical methods for both physical and chemical determinations were carried out according to APHA [52]. Samples were transferred to the laboratory of the Hygiene, Zoonoses and Animal Behavior Department, Faculty of Veterinary Medicine, Suez Canal University, without delay for immediate measurement. Experimental Oreochromis niloticus: A total of 250 apparently healthy Oreochromis niloticus, free from any skin lesions or microbial infections, with an average body weight of 14 ± 0.5 g, were obtained from nursery ponds at the Central Aquaculture Research Laboratory, Suez Canal University, Ismailia, Egypt. The fish were acclimatized for 2 weeks in two fiberglass tanks filled with aerated sterile freshwater with a holding capacity of 1000 L. Prior to the experiment, the fish were determined to be free of external parasites [53]. The DO was maintained at 5.8 ± 0.02 mg L− 1, the water temperature was kept at 22.15 ± 0.17 °C, and a 12 h light/12 h dark photoperiod was adopted [54].
Ammonia (NH3) levels in the water were measured 3 times a week and recorded as 0.03 ± 0.001 mg L− 1. Water quality was optimized by periodic water changes (30% daily), as recommended by Ahmed, Abdullah, Shuib and Abdul Razak [55], and by frequent siphoning of fish wastes. The fish were fed daily to apparent satiety on 1.5 mm commercial pellets (Skretting, 30% protein). Based on the results of the LC50-96 h probit analysis (data not shown), the experiment was conducted at sub-lethal doses by adding the LC15 (2 mg L− 1 for glyphosate and 0.5 mg L− 1 for malathion) for single-pollutant exposure, and the LC1 (1.6 mg L− 1 for glyphosate + 0.3 mg L− 1 for malathion) for co-exposure to both pollutants, consistent with the initial pollution levels found in water samples collected from different fishponds in a previously conducted survey. Briefly, out of the 250 acclimatized fish, 210 apparently healthy fish (14 ± 0.5 g) were randomly assigned to one of seven groups (n = 30) in triplicate and exposed to the following treatments: G1 (negative control); G2 (2 mg L− 1 glyphosate); G3 (0.5 mg L− 1 malathion); G4 (glyphosate 1.6 mg L− 1 and malathion 0.3 mg L− 1); G5 (glyphosate 2 mg L− 1 and SY 3.3 mg kg− 1); G6 (malathion 0.5 mg L− 1 and SY 3.3 mg kg− 1); and G7 (glyphosate 1.6 mg L− 1; malathion 0.3 mg L− 1 and SY 3.3 mg kg− 1). Fish were exposed to these treatments for a period of 60 days. In the trial, water was changed every 3 days to simulate field conditions, and pesticide concentrations were adjusted with each water change. Fish were observed daily for any symptoms, and performance parameters were measured and averaged every two weeks starting at 30 days. The following parameters were measured to evaluate the impact of both pollutants and the ameliorative effect of organic selenium. Growth parameters and feeding efficiency: Fish from each aquarium, anesthetized with clove oil (0.1 mL L− 1) [56] dissolved in ethanol [57], were collected, counted, and bulk weighed periodically (every 2 weeks). Growth performance was determined, and feed utilization was calculated as follows: feed intake (FI) was measured biweekly as described [58]; body weight (BW) was recorded; and body weight gain (BWG) (g) = final weight − initial weight [59]. Specific growth rate (SGR) was calculated according to the following equation [54]: $$SGR=\frac{\left(\text{final weight}-\text{initial weight}\right)\times 100}{\text{rearing period (days)}}$$ Feed conversion ratio (FCR) was estimated according to Fritz et al. (1969) as follows: $$FCR=\frac{\text{feed consumed (g)}}{\text{weight gain (g)}}\times \text{fish number}$$ Protein efficiency ratio (PER) was calculated by applying the following formula: $$PER=\frac{\text{weight gain (g)}\times \text{fish number}}{\text{protein intake}}$$ Cumulative mortalities: Abnormal clinical signs in each fish were recorded, and the mortality rate was analyzed according to Kaplan and Meier [60] to determine the differences among mortality curves; the postmortem examination of dead fish was also recorded.
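For concreteness, the growth and feed-utilization indices defined above can be computed directly from the recorded weights and feed records. The short Python sketch below simply transcribes the formulas as printed (protein intake is approximated here as feed consumed × dietary crude-protein fraction); the function name and the example values are ours and purely illustrative, not the authors' code or data.

```python
def growth_indices(initial_weight_g: float, final_weight_g: float,
                   feed_consumed_g: float, dietary_protein_fraction: float,
                   fish_number: int, rearing_days: int) -> dict:
    """Growth and feed-utilization indices, transcribed from the formulas above."""
    bwg = final_weight_g - initial_weight_g                    # body weight gain (g)
    sgr = (final_weight_g - initial_weight_g) * 100 / rearing_days
    fcr = feed_consumed_g / bwg * fish_number                  # FCR as printed in the text
    protein_intake_g = feed_consumed_g * dietary_protein_fraction
    per = bwg * fish_number / protein_intake_g
    return {"BWG_g": bwg, "SGR": sgr, "FCR": fcr, "PER": per}


# Placeholder values for illustration only (not experimental data).
print(growth_indices(initial_weight_g=14.0, final_weight_g=40.0,
                     feed_consumed_g=45.0, dietary_protein_fraction=0.42,
                     fish_number=1, rearing_days=60))
```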
Hematological and biochemical analysis: After 30, 45, and 60 days of agrochemical exposure, five fish were randomly selected from each group (15 fish/treatment), anesthetized, and blood samples were collected from the caudal vein. A whole blood sample was used for hematologic analyses in tubes containing 10% ethylenediaminetetraacetic acid (EDTA), and hematological values were measured using standard methods. Sahli's acid haematin method, as described by Zijlstra and Van Kampen [61], was used to determine hemoglobin (Hb). Neubauer's improved hemocytometer was used to count red blood cells (RBC) and white blood cells (WBC), using Hayem's and Turk's solutions as diluting fluids, as described by Shah and Altindağ [62]. The Zijlstra and Van Kampen [61] microhematocrit method was used to determine the hematocrit (HCT)/packed cell volume (PCV). The derived hematological indices mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), and mean corpuscular hemoglobin concentration (MCHC) were calculated using Lee's standard formulae [63]: MCV (fl) = (PCV/RBC) × 10; MCH (pg) = (Hb/RBC) × 10; and MCHC (g dl− 1) = (Hb in g per 100 ml of blood/PCV) × 100. In addition, smears stained with May-Grünwald/Giemsa were used for the counts of total numbers of leukocytes (WBC) and thrombocytes by the indirect method described by Martins et al. [64]. Differential leukocytic counts (neutrophils, lymphocytes, and monocytes) were determined using an Olympus oil-immersion light microscope at 1000× magnification; one hundred leukocytes were identified, and the percentage values of the different white cells were calculated according to Jain [65]. The total number of leukocytes was obtained by subtracting the percentage of thrombocytes from the total of leukocytes plus thrombocytes counted with a Neubauer chamber. Another blood sample was collected in anticoagulant-free centrifuge tubes, allowed to clot overnight at 4 °C, and then centrifuged at 3000 rpm for 10 min. The non-hemolyzed serum was collected and stored at − 20 °C for further biochemical analysis. Test procedures were performed as per the manufacturer's instructions (Diamond Diagnostic, Egypt) using an 1100 Techocomp UV/visible spectrophotometer. Total protein (TP) was measured by Weichselbaum's colorimetric method [66], based on a biuret reaction in an alkaline environment with photometric measurement of absorbance at a 550-nm wavelength, as described by Weichselbaum [66] and Hubbuch [67]. Albumin was measured using Rodkey's colorimetric method as modified by Doumas et al. [68], using bromocresol green in an acidic environment with photometric measurement of absorbance at a 600-nm wavelength. The serum globulin (g/dl) level was calculated according to Doumas et al. [68] by mathematical subtraction of the albumin value from total protein. The albumin/globulin (A/G) ratio was calculated from the albumin and globulin concentrations. Alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were determined colorimetrically according to the method described by Reitman and Frankel [69]. The AST and ALT activities were assayed by monitoring the concentrations of oxaloacetate hydrazone and pyruvate hydrazone, respectively, formed with 2,4-dinitrophenylhydrazine; the colour intensity was measured against the blank at 546 nm and 540 nm, respectively. Creatinine and urea (mg/dl) were determined by the Berthelot method [70]. Creatinine reacts with picric acid under alkaline conditions to form a yellow-red complex.
The absorbance of the color produced is measured at a wavelength of 505 nm and is directly proportional to the creatinine content of the sample. Urea was determined by the enzymatic UV kinetic method (urease–modified Berthelot reaction). Urea is hydrolyzed in the presence of water and urease to produce ammonia and carbon dioxide. The liberated ammonia reacts with α-ketoglutarate in the presence of NADH to yield glutamate. An equimolar quantity of NADH is oxidized during the reaction, resulting in a decrease in absorbance read at 340 nm that is directly proportional to the urea nitrogen concentration in the sample.
Oxidative stress biomarker analysis in livers and kidneys
Three fish were randomly collected from each group and euthanized; their livers and kidneys were homogenized (10% w/v) in 0.1 M Tris-HCl buffer (pH 7.4) at 4 °C and centrifuged at 11,000 rpm for 30 min to extract the post-mitochondrial supernatant (PMS), which was used to determine enzyme activities and lipid peroxides. The malondialdehyde (MDA) of the homogenates was measured immediately, and the rest of the homogenates were stored at − 20 °C until tissue superoxide dismutase (SOD) and glutathione peroxidase (GPx) assays were performed. LPO was estimated by a TBARS (thiobarbituric acid-reactive substances) assay, based on the reaction of malondialdehyde (MDA) with 2-thiobarbituric acid (TBA) according to the method of JA Buege and SD Aust [71], and the optical density was measured at 532 nm. SOD activity was measured according to M Paya, B Halliwell and JR Hoult [72]. SOD estimation was based on the generation of superoxide radicals produced by xanthine and xanthine oxidase, which react with 2-(4-iodophenyl)-3-(4-nitrophenol)-5-phenyltetrazolium chloride to form a red formazan dye. The SOD activity was then measured at 560 nm and constant temperature (25 °C) from the degree of inhibition of this reaction. GPx activity was measured using 5,5′-dithiobis-(2-nitrobenzoic acid) (DTNB) reagent, read at 412 nm, according to the modified Mills' procedure published by DG Hafeman, RA Sunde and WG Hoekstra [73]. All parameters were estimated from the homogenate by measuring optical density using an 1100 Techocomp UV/visible spectrophotometer. All measurements were made in duplicate. The collected data were subjected to statistical analysis using SPSS version 22 (SPSS Inc., New York, USA, 1989–2013); a sketch of the main statistical steps is given after this passage. Pearson correlation coefficients were calculated and arranged as a correlation matrix, a rectangular array giving the correlation coefficient between each pair of variables. To describe the major relationships between the observed parameters and the chemical exposures, principal component analysis (PCA) was used. A one-way analysis of variance (ANOVA) with the least significant difference (LSD) test was used to determine mean differences. The half-life was determined using probit analysis. To investigate differences in the mortality curves, the Kaplan-Meier test was used. The physicochemical characteristics (DO, pH, temperature, NH3, NO3, alkalinity, total hardness, and chloride) of the water samples in the aquariums of the experimental fish groups (Table 1) were within previously published acceptable limits, indicating that there was no stress condition related to water parameters and that the main effects could be attributed to the agrochemicals used and the possible effects of organic selenium.
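For readers without SPSS, the core of the statistical pipeline described above (correlation matrix, PCA with a Kaiser eigenvalue cut-off, and one-way ANOVA) can be reproduced with open-source tools. The sketch below is illustrative only: the data are randomly generated, the LSD post-hoc test and Varimax rotation are not shown, and none of the numbers correspond to the study's measurements.

```python
# Illustrative sketch of the statistical steps described above, using SciPy and
# scikit-learn instead of SPSS. All data here are randomly generated; LSD
# post-hoc comparisons and Varimax rotation are not reproduced.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
data = rng.normal(size=(21, 6))          # fake matrix: 21 fish x 6 measured variables

# Pearson correlation matrix between the measured variables
corr_matrix = np.corrcoef(data, rowvar=False)

# One-way ANOVA for one variable across three treatment groups (7 fish each)
g1, g2, g3 = data[:7, 0], data[7:14, 0] + 0.5, data[14:, 0] + 1.0
f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# PCA on standardized variables; keep components with eigenvalue > 1 (Kaiser criterion)
z = StandardScaler().fit_transform(data)
pca = PCA().fit(z)
kept = pca.explained_variance_ > 1.0
print("Explained variance of retained components (%):",
      np.round(pca.explained_variance_ratio_[kept] * 100, 1))
```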
Table 1 Physicochemical parameters (mean ± SE) of water in experimental Oreochromis niloticus groups
Growth parameters, feeding efficiency, and cumulative mortalities
Figures 1 and 2 summarize the effects of the investigated agrochemicals on growth parameters and feeding efficiency (BW, BWG, FI, SGR, PER, and FCR). Combined agrochemical exposure produced the most significant (P ≤ 0.05) impairment of the examined parameters, followed by the malathion- and then the glyphosate-exposed groups, compared with the control over the different evaluation periods. The enhancing effect of organic selenium supplementation was demonstrated as a significant (P ≤ 0.05) improvement in the measured parameters compared with the chemically stressed groups, and this effect was observed from the first 30 days of the trial. It is worth reporting that the cumulative mortality rate (Fig. 3) decreased in the following order: G4: 40% (n = 12), G3: > 30% (n = 9), G7: > 20% (n = 6), G2 and G6: > 10% (n = 6), and G5: > 6.7% (n = 2).
Fig. 1 Body weight gain (BWG), body weight (BW) and feed intake (FI) among the different treated groups during the different experimental periods. Means with different superscripts are statistically different at p ≤ 0.001 (BWG) and at p ≤ 0.05 (BW). A 30th day of experiment; B 45th day of experiment; C 60th day of experiment; D average mean
Fig. 2 Specific growth rate (SGR), feed conversion ratio (FCR) and protein efficiency ratio (PER) among the different treated groups during the different experimental periods. Means with different superscripts are statistically different at p ≤ 0.001 and at p ≤ 0.05 (FCR during the 60th day). A different experimental periods; B average mean
Fig. 3 Cumulative mortalities among different treatments during the experimental period. Means with different superscripts are statistically different (P ≤ 0.0001)
Hematological and biochemical parameters
The results shown in Table 2 demonstrate alterations in almost all hematological parameters at the different evaluation points over the experimental period. There was a significant decrease (P ≤ 0.05) in erythrocyte count (RBCs, 10^6/mm3), Hb (g dl−1), HCT (%), mean corpuscular hemoglobin concentration (MCHC), mean corpuscular volume (MCV) and mean corpuscular hemoglobin (MCH) compared with the negative control group at all examined time points, reflecting a general trend of agrochemical hazard that was more pronounced under combined exposure than under single exposure to each chemical. Consistent with these detrimental effects, fish exhibited pale bodies and fins (Fig. 5A) with pale organs, especially the liver (Fig. 5B). The ameliorative action of SY supplementation was observed as a cumulative significant increase (P ≤ 0.05) in the measured parameters compared with the chemically treated groups, and the maximum beneficial effect of SY was achieved on the 60th day of SY treatment. Similarly, blood platelets showed significant (P ≤ 0.01) thrombocytopenia in fish exposed to glyphosate and/or malathion, which was improved by supplementation with SY. Regarding total and differential leukocytes, our results revealed changes in the leukocyte profile in the form of significant leukocytosis, neutrophilia, and lymphocytopenia in the pesticide-exposed groups compared with the control group during the different experimental periods (Table 3).
Table 2 Hematological parameters of the experimental Oreochromis niloticus
Table 3 Total and differential leukocytic count of the experimental Oreochromis niloticus
A significant (P ≤ 0.0001) hypoproteinemia accompanied by a significant decrease in both albumin and globulin concentrations was observed in fish exposed to pesticides on the 30th, 45th and 60th days; malathion was more toxic to fish than glyphosate and had a greater adverse effect on total serum protein. The results of this study showed a significant increase in ALT and AST activity, together with creatinine and urea, in the blood samples of fish exposed to agrochemicals compared with controls at the 30th, 45th, and 60th days of exposure (Tables 4 and 5). The malathion-treated group exhibited the highest AST levels and the greatest negative impact on kidney function, increasing creatinine and urea to the highest significant levels (P ≤ 0.05), indicating that malathion is more toxic than glyphosate. On the other hand, the addition of SY decreased the hepato-renal toxicity of these chemicals (Tables 4 and 5).
Table 4 Serum protein parameters of the experimental Oreochromis niloticus
Table 5 Liver enzymes and kidney markers of the experimental Oreochromis niloticus
Principal component analysis (PCA) between the different variables
Principal component analysis (PCA) was used to estimate the distribution pattern of the individual associations of parameters with significant correlations, divided into components. After verification of the data's validity by Bartlett's sphericity test (< 0.001) and the KMO test for the measured parameters, the parameters produced three principal components (PCs) (eigenvalues > 1) explaining a total variance of 88.5%. The corresponding variable loadings and explained variance are presented in Fig. 4. PC1 explained 53.74% of the variance; negative loadings of the agrochemical doses were ranked as malathion > combined > glyphosate exposure, and these were directly correlated with increased urea > creatinine > AST > ALT > FCR > WBCs > A/G, while inversely correlated with the other examined parameters in declining order. In contrast, SY supplementation showed positive loadings correlated directly with total protein > platelet count > globulin > RBCs > Hb, HCT (%) and lymphocytes > albumin > SGR > PER > BWG, while inversely associated with the other parameters, confirming its corrective role.
Fig. 4 Principal component analysis of glyphosate and/or malathion intoxication with hematological, biochemical and growth performance parameters in Oreochromis niloticus. Data were extracted by principal component analysis (PCA) and rotated through Varimax with Kaiser normalization. Explained variance was 53.73, 12.35, 10.1, 7.6 and 4.7%, while cumulative variance was 53.73, 66.1, 76.16, 83.8 and 88.5% for components PC1, PC2, PC3, PC4 and PC5, respectively. Positive loading values in PC1 are represented by green upward arrows, while red downward arrows represent negative loadings
Fig. 5 Signs of anemia in groups exposed to agrochemicals. A Black arrows indicate anemic pale fish with pale fins due to the reduction in RBCs and Hb content; red arrows indicate dark pigmentation. B Postmortem examination showed a pale liver and a congested gall bladder
Table 6 shows that fish exposed to malathion had a strong positive correlation with MDA, SOD, and GPx in both tissues, while the glyphosate-exposed fish showed a positive correlation that was significant for liver MDA.
Fish exposed to a combination of agrochemicals showed a highly significant correlation with liver (MDA and SOD) and kidney GPX. The corrective role of SY, on the other hand, was shown by a highly significant negative correlation with all measurable oxidative stress biomarkers. Table 6 Correlation coefficient between agrochemicals' exposure, Selenium Yeast supplementation and oxidative stress markers in Oreochromis niloticus All results for DO, pH, Temperature, ammonia, alkalinity, chloride were within acceptable limits published before, whereas DO was higher than 5 mg/L in all experimental groups, this result agreed with described limits by R Lloyd [74] and A Bhatnagar and P Devi [75]. Also, pH and temperature showed averages ranging between 8 and 20 °C, respectively. These levels are within limits for growing tilapia. Levels of nitrogenous compounds: NH3 and NO3 lied within acceptable limits recommended by TB Lawson [76], TVR Pillay and MN Kutty [77], respectively. At the same time, hardness levels of chloride and alkalinity were within acceptable limits described by A Bhatnagar and P Devi [75], TB Lawson [76], B Santhosh and N Singh [78]. Our findings revealed that there was no stress condition correlated to measured water parameters, this revealed that the main effects could be attributed to agrochemicals used and their mitigation by using SY. Concerning the adverse effects of glyphosate on Oreochromis niloticus growth performance, the results were consistent with those obtained by PC Giaquinto, MB de Sá, VS Sugihara, BB Gonçalves, HC Delício and A Barki [79]. The latter authors observed that glyphosate-based herbicide at sublethal concentrations (1.8 ppm) affected feed intake in pacu fish and thus, inhibited its growth. Acute exposure of salmon to 1 ppm or higher of glyphosate resulted in reduced electro-olfactogram activity when it detected L-serine in the environment, glyphosate by its role resembles the amino acid glycine and there is some overlapping for the same active site for L-serine substance and the fish could not detect L-serine or respond to its presence [80]. Our findings are in agreement with those of UA Muhammad, NA Yasid, HM Daud and MY Shukor [57], who reported a negative correlation between glyphosate concentration and toxicity parameters such as specific growth rate (SGR). Generally, it was found that pollutants affected specific processes associated with bioenergetics, such as feeding, assimilation, excretion, and metabolism so delayed fish growth [81]. Furthermore, our result agreed with those reported by CA Laetz, DH Baldwin, TK Collier, V Hebert, JD Stark and NL Scholz [82] who indicated that histopathological damage to the liver, pancreas, and intestine may result in decreased feed digestion and metabolism efficiency because these tissues play critical roles in the regulation of biochemical parameters, particularly proteins, lipids, carbohydrates, and hormones, as well as the synthesis and secretion of digestive enzymes. Other important factors that explain the delay in fish growth could be the transformation into the energy of a portion of nutrients from the digestion of feed consumed to cope with chemical stress that constitutes the exposure to agricultural pesticides [83]. 
The highest reduction in growth in the combined agrochemical exposed group may be attributed to a synergistic impact between glyphosate and malathion; this effect could be explained in light of the fact that binary pesticide combinations generated synergistic acetyl-cholinesterase inhibition [82]. In the current study, there was a positive correlation between SY supplementation and the measured growth performance parameters that indicated improvements in the measured parameters accompanied by decreased cumulative mortalities in both the SY which was in some parameters very close to the non-treated control group. These results assured the fundamental usage of organic SY compounds as a protective antioxidant material to reduce the toxic effects of pesticides [84]. Generally, under normal culture conditions, M Abdel-Tawwab and M Wafeek [85] reported that feed supplemented with 5.54 mg kg− 1 improved growth in tilapia. Similarly, SBd Fonseca, JHVd Silva, EM Beltrão Filho, PdP Mendes, JBK Fernandes, ALL Amancio, J Jordão Filho, PBd Lacerda, FRPdJFS Silva and Technology [86] demonstrated that the diet containing 0.2 mg kg− 1 organic selenium produced weight gain, length gain, and feed conversion ratios comparable to the treatment containing 0.4 mg kg− 1 inorganic selenium. The authors previously mentioned attributing the improvement in growth performance parameters to an increase in glutathione peroxidase concentrations in the blood of tilapia-fed selenium in the diet. Furthermore, S Iqbal, U Atique, M Mughal, N Khan, M Haider, K Iqbal and M Akmal Rana [87] stated that supplementing selenium (2 mg kg− 1) in tilapia feed promotes better physiological performance and productivity, thereby enhancing fish growth and paving the way for an increased supply of selenium-fortified fish meat. Dietary organic Se incorporation at 0.45 mg Kg− 1 provided satisfactory results in various growth parameters and was an effective supplement in salmonid fish diets [88]. Similarly, A El-Kader, F Marwa, AF Fath El-Bab, MF Abd-Elghany, A-WA Abdel-Warith, EM Younis and MAJBTER Dawood [89] recommended Se nanoparticles at the rate of 0.5–1 mg kg− 1 diet to maintain the optimal growth performance of European seabass (Dicentrarchus labrax). Additionally, M Naiel, S Negm, S El-hameed and H Abdel-Latif [90] exemplified that dietary inclusion with 0.36–0.39 mg OS kg− 1 diet improved the growth, immunity and modulated the stress responses in Nile tilapia reared under sub-optimal temperature. Moreover, S Ghaniem, E Nassef, AI Zaineldin, A Bakr and S Hegazi [91] investigated the effects of different sources of selenium of 1 mg kg− 1 diet (inorganic (SSE), organic (OSE), and elemental nano-selenium (NSE)) on the performance Oreochromis niloticus and found that dietary selenium supplementation significantly improved growth performance parameters (P < 0.05), with the highest values recorded in the OSE supplemented and control groups. Severe anemia with marked thrombocytopenia was observed in fish exposed to glyphosate or malathion, as evidenced by significant reductions in RBCs (106/mm3), Hb (g/dl), MCHC %, and HCT (%) with a significant increase in MCV and MCH levels. The previous findings were confirmed by the inverse relationship between pesticide exposure and RBCs, Hb, and HCT values obtained by PCA, and these alterations were increased as the exposure period increased. Glyphosate-contaminated water contributes to changes in blood cell parameters [92]. 
This could be due to cell destruction and/or a decrease in cell volume because of the negative effects of pesticides [93]. Alterations in blood indices were directly associated with the concentration and exposure period of malathion, and this reduction could be attributed to the effect on the gills, a decrease in available oxygen, and hemolysis [94]. Furthermore, these effects may result from deleterious effects on the hematopoietic organs, reducing the supply of RBCs through decreased production and/or an increased rate of removal from the circulatory system. Besides, the decrease in Hb level may be due to the toxic effects of malathion and glyphosate on the synthesis of this molecule, which may also be disrupted through effects on the activity of the enzymes involved in its synthesis. Therefore, the detected anemia could be related to inhibition of erythropoiesis and hematopoiesis, osmotic dysregulation, or an increased rate of red blood cell destruction in the hematopoietic organs [95, 96]. The reduction in MCHC was probably characterized by an increase in the generation and secretion of reticulocytes, which are larger in size but contain less Hb than mature red blood cells [97]. In addition, SJ Gholami-Seyedkolaei, A Mirvaghefi, H Farahmand and AA Kosari [98] indicated that the increase in the number of immature RBCs could lead to increased values of the MCV and MCH indices. The protective role of SY supplementation in preventing anemia in fish exposed to pesticides and herbicides is supported by the positive correlation between dietary SY and the RBC and Hb levels in the present study. This could be attributed to the fact that Se increases the stability of the RBC and thrombocyte membranes and their survivability by protecting them against oxygen free radicals, which cause membrane damage, cell hemolysis, and thrombocytopenia. Se at a concentration of 0.7 mg kg−1 of feed had the ability to protect fish cells against oxidation due to chemical pollution [99]. Additionally, previous results showed similar enhancements of the Hb, RBC, and PCV indices with dietary Se in common carp [100]. Also, A El-Kader, F Marwa, AF Fath El-Bab, MF Abd-Elghany, A-WA Abdel-Warith, EM Younis and MA Dawood [89] reported significantly higher values of Hb, PCV, RBCs, and WBCs in fish fed 0.5–1 mg kg−1 Se nanoparticles. In the same context, S Ghaniem, E Nassef, AI Zaineldin, A Bakr and S Hegazi [91] reported that the selenium-supplemented groups had the highest packed-cell volume, hemoglobin, and red blood cell levels, with the highest values seen in the control group (P < 0.05). The WBC counts and PCA revealed leukocytosis, which could be attributed to the fact that when water quality is altered by toxic substances, leukocytosis occurs as a normal physiological response of fish to foreign substances, assisting in the elimination of cell debris and necrotic tissue and stimulating immune defense [101]. A significant increase in WBCs in common carp following glyphosate exposure was attributed to immunotoxic effects, with glyphosate-induced changes in cytokines that may lead to immune suppression or excessive activation in the treated groups as well as immune dysfunction or reduced immunity [98]. The significant increase in WBC count in the current study indicated hypersensitivity of leucocytes to malathion and glyphosate. These changes could be due to immunological reactions producing antibodies in response to the stress caused by organophosphorus pesticides [23].
Simultaneously, the leukocytosis observed probably reflected the increased leukocytic demand for the removal of cellular debris at a faster rate [102]. Furthermore, in the current study, the lymphocytopenia and neutrophilia are supported by the findings of SJ Gholami-Seyedkolaei, A Mirvaghefi, H Farahmand and AA Kosari [98] who explained the response as it could be considered as clear responses by the fish during exposure to a wide range of toxicants. The beneficial effect of SY supplementation on WBCs was illustrated as a non-significant difference between the negative control group and SY-treated groups for most of the experimental periods, additionally, a negative association between SY supplementation and WBCs was recorded. Our results were consistent with HS Hamed [23] who noted that the use of Se in the diet was an effective way to counteract the toxicity of malathion in tilapia fish and recommended the use of Se as a protective dietary supplement against malathion-induced toxicity to improve fish health. There was a negative relationship between pesticide exposure and blood proteins. Whereas, the correlation with liver enzymes and kidney markers was positive and this could be attributed to impaired albumin synthesis as in chronic hepatic insufficiency or hepatitis and chronic renal affections, as well as excessive protein loss due to alterations of necrosis in the kidney or hepatocytes destruction in the toxicity of organophosphates and the consequent impairment of protein synthesis [103, 104]. Moreover, the reduction in albumin and globulin levels could be a consequence of a decrease in blood viscosity or a decrease in body immunity because of the liver's inability to synthesize enough of them. Longer periods of herbicide exposure would be expected to trigger enough damage to mitochondrial membranes to release AST into the blood, furthermore, the activities of ALT and/or AST are well-known as stress bio-indicators of hepatotoxicity and liver, gill, and kidney damage [105]. Results were consistent with those of SJ Gholami-Seyedkolaei, A Mirvaghefi, H Farahmand and AA Kosari [98], who concluded that the activity of renal and hepatic AST and ALT in glyphosate-treated groups was significantly higher than the control group at different experimental periods. Regarding glyphosate's toxic effect on the kidney, our results were in agreement with those of S Ayoola [21] who recorded a deleterious toxic effect of glyphosate on the renal function of Oreochromis niloticus that showed a great susceptibility to herbicides. Our findings indicate that chronic malathion intoxication with a sub-lethal dose had the greatest impact on the fish, which could be explained by the fact that structural and soluble proteins were found to be decreased because of high proteolytic activity and inefficiency in protein biosynthesis following malathion exposure [106], which consequently decreases the protein content confirming the intoxication caused by malathion [107, 108]. There were also significant increases in creatinine and urea levels following acute malathion exposure, indicating that malathion had an adverse effect on the kidneys [109]. The results obtained are partially consistent with those obtained by R Magar and A Shaikh [110], who found the influence of malathion on damaging organs such as the kidney and liver in the fish Channa punctatus, exposed to sub-lethal quantities of malathion for a subsequent 7 days. 
The current study's increased oxidative stress markers in fish exposed to agrochemicals are the result of oxidative damage and a reduction in antioxidant defense, and these findings have been confirmed by Awasthi Y, Ratn A, Prasad R, Kumar M and Trivedi S [111]. MDA increased significantly because of oxidative stress [112]. Our investigation concluded that greater ROS generation altered the elevation of SOD and GPx. The fish's defensive strategy was to fight, eliminate, or neutralize the damaging effects of ROS and protect the system from oxidative stress [113]. In the present study, the addition of SY showed significant improvements in Oreochromis niloticus blood proteins, liver enzymes, and kidney markers compared to pesticide groups during different evaluated periods of agrochemicals exposure. These findings could be strengthened by the inverse relationship between SY supplementation and the previously mentioned parameters. This is consistent with other findings in our study proving that Se has hepatoprotective properties against organophosphorus pesticides and heavy metals that induce liver damage [114, 115] and its ability to protect the host cell against oxidation due to environmental challenges, with an optimum level between 0.15 and 0.8 mg/kg diet [99]. In the same context, M Abdel-Tawwab and M Wafeek [85] discovered that tilapia diets enriched with 0.54 mg kg− 1 organic Se reduced the adverse effects of pesticide stress. Furthermore, MA Naiel, A Nasr and M Ahmed [116] concluded that organic Se 0.6 mg kg− 1 supplementation decreased serum creatinine and uric acid in Nile tilapia. A El-Kader, F Marwa, AF Fath El-Bab, MF Abd-Elghany, A-WA Abdel-Warith, EM Younis and MAJBTER Dawood [89] found that the values of total serum protein and globulin were significantly higher in fish fed 0.25 and 0.5 mg nano-Se kg− 1. SY has powerful antioxidant activity and participates in the antioxidant defense system and immune system moderation, and it acted directly as support for organismal health [23, 117]. Therefore, dietary Se prevents the toxic effects of malathion by ameliorating oxidative damage and enhancing physiological alterations that may affect fish health. Moreover, it improved normal feeding, food assimilation, metabolism, and growth in Oreochromis niloticus, minimizing the hazards associated with pesticide exposure [82]. Besides, R Alvarez, A Morales and A Sanz [118] reported that organic selenium had an advantage in reducing oxidative stress and is incorporated into kidneys, liver, and gastrointestinal mucosa proteins as selenomethionine and selenocysteine and is an essential micronutrient for fish. It was shown that several diseases are associated with increased expression of protein sulfhydryl groups (−SH), which were then oxidized to inactive disulfides bonds by selenium (S-S) [119]. Generally, Se played a major cell reinforcement component due to its incorporation in selenocysteine in enzyme glutathione peroxidase (GPx), GPx scavenges H2O2 and lipid hydroperoxides, using glutathione-reducing counterparts and protecting membrane lipids and macromolecules from oxidative damage and enhancing the body's cell resistance and that acts against reactive oxygen species (ROS) [120, 121]. Because it is absorbed as an amino acid, selenomethionine is more easily assimilated into the body [51]. 
Similarly, A El-Kader, F Marwa, AF Fath El-Bab, MF Abd-Elghany, A-WA Abdel-Warith, EM Younis and MAJBTER Dawood [89], S Ghaniem, E Nassef, AI Zaineldin, A Bakr and S Hegazi [91] recorded a significant (P < 0.05) reduction in MDA levels in all selenium-supplemented fish groups compared with levels in the control. In conclusion, results of this study indicated that chronic exposure of Oreochromis niloticus to organophosphorus agrochemicals such as glyphosate (2 mg L− 1), malathion (0.5 mg L− 1) and their combination (1.6 mg L− 1 glyphosate and 0.3 mg L− 1malathion) resulted in detrimental effects on performance, hematobiochemical variables, as well as oxidative damage indicative parameters in liver and kidney tissues. Exposure to these agrochemicals' residues may potentially harm the health of tilapia species as well as the health of human consumers, therefore, their use near a fish farm or in areas close to the aquatic environment should be discouraged. The addition of SY to the fish diet (3.3 mg kg− 1 diet organic selenium) ameliorated the fish performance and health status even in the presence of organophosphorus agrochemicals intoxication. Dietary inclusion of SY can be used as a sustainable bioremediation strategy that mitigates many of the negative effects of glyphosate/ malathion exposure in fish by ameliorating oxidative damage and enhancing the physiological alterations which may affect the health of fish. Future studies are needed to assess the toxic effects of these agrochemicals individually or in a mixture with dietary inclusion of organic selenium for other common freshwater and marine fish species cultured in Egypt. All data generated or analyzed during this study are included in this published article [and its supplementary information files]. FAO. Fishery and Aquaculture Statistics. Italy, Roma: Yearbook-Food and Agriculture Organization of the United Nations (FAO); 2021. p. 110. Soliman N, Yacout D. Aquaculture in Egypt: status, constraints and potentials. Aquac Int. 2016;24:1201–27. Nassr Alla A. Egyptian aquaculture status, constraints and outlook. In: CIHEA M Analytical Notes, vol. 32; 2008. Diego N. Financial services for SME aquaculture producers: Egypt case study. In: Project report funded by the German Agency for Technical Cooperation for the benefit of developing countries; 2011. FAO. Fishery and aquaculture statistics. Italy, Roma: Yearbook- Food Agriculture Organization of the United Nations; 2016. p. 11. Sabullah MK, Khayat ME. Assessment of inhibitive assay for insecticides using acetylcholinesterase from Puntius schwanenfeldii. J Biochem Microbiol Biotechnol. 2015;3(2):26–9. USEPA. Reregistration eligibility decision for malathion: Case No. 0248. Office of prevention, pesticides, and toxic substances. Office of pesticide programs. USA: United States Environmental Protection Agency; 2006. p. 196. Shalaby S, El-Saadany S, Abo-Eyta A, Abdel-Satar A, Al-Afify A, Abd El-Gleel W. Levels of pesticide residues in water, sediment, and fish samples collected from Nile River in Cairo. Egypt Environ Forensic. 2018;19(4):228–38. Ibrahim YA. A regulatory perspective on the potential carcinogenicity of glyphosate. J Toxicol Health. 2015;2(1):1. Battaglin W, Kolpin D, Scribner E, Kuivila K, Sandstrom M. Glyphosate, other herbicides, and transformation products in Midwestern streams, 2002. J Am Water Resour Assoc. 2005;41:323–32. Villeneuve A, Montuelle B, Bouchez A. Effects of flow regime and pesticides on periphytic communities: evolution and role of biodiversity. 
Aquat Toxicol. 2011;102(3–4):123–33. Coupe RH, Kalkhoff SJ, Capel PD, Gregoire C. Fate and transport of glyphosate and aminomethylphosphonic acid in surface waters of agricultural basins. Pest Manag Sci. 2012;68(1):16–30. Derbalah A, Shaheen S. On the presence of organophosphorus pesticides in drainage water and its remediation technologies. Environ Eng Manag J. 2016;15(8):1777–87. Solomon KR, Dalhoff K, Volz D, Van Der Kraak G. Effects of herbicides on fish. In: Fish physiology. Volume 33.: Elsevier; 2013. p. 369–409. Tham LG, Perumal N, Ahmad SA, Sabullah MK. Characterisation of purified acetylcholinesterase (EC 3.1. 1.7) from Oreochromis mossambica brain tissues. J Biochem Microbiol Biotechnol. 2017;5(2):22–7. Yahia D, Elsharkawy EE. Multi pesticide and PCB residues in Nile tilapia and catfish in Assiut city, Egypt. Sci Total Environ. 2014;466:306–14. Clasen B, Loro VL, Murussi CR, Tiecher TL, Moraes B, Zanella R. Bioaccumulation and oxidative stress caused by pesticides in Cyprinus carpio reared in a rice-fish system. Sci Total Environ. 2018;626:737–43. Narra MR, Rajender K, Reddy RR, Rao JV, Begum G. The role of vitamin C as antioxidant in protection of biochemical and haematological stress induced by chlorpyrifos in freshwater fish Clarias batrachus. Chemosphere. 2015;132:172–8. Maqueda C, Undabeytia T, Villaverde J, Morillo E. Behaviour of glyphosate in a reservoir and the surrounding agricultural soils. Sci Total Environ. 2017;593:787–95. Lee EA, Strahan AP, Thurman EM: Methods of analysis by the US geological survey organic geochemistry research group-determination of glyphosate, aminomethylphosphonic acid, and glufosinate in water using online solid-phase extraction and high-performance liquid chromatography/mass spectrometry.: Department of the Interior Washington DC; 2002. Ayoola S. Toxicity of glyphosate herbicide on Nile Tilapia. Oreochromis niloticus; 2008. Epa E. Reregistration eligibility decision (RED) for malathion. Washington, DC: United States Environmental Protection Agency; 2009. Hamed HS. Impact of a short-term malathion exposure of Nile tilapia, (Oreochromis niloticus): the protective role of selenium. Int J Environ Monit Anal. 2015;3(5–1):30–7. Kieliszek M, Bano I, Zare H. A comprehensive review on selenium and its effects on human health and distribution in middle eastern countries. Biol Trace Elem Res. 2022;200(3):971–87. Kieliszek MJM. Selenium–fascinating microelement, properties and sources in food. 2019;24(7):1298. Hamilton SJ. Review of selenium toxicity in the aquatic food chain. Sci Total Environ. 2004;326(1–3):1–31. Meng T-T, Lin X, Xie C-Y, He J-H, Xiang Y-K, Huang Y-Q, et al. Nanoselenium and selenium yeast have minimal differences on egg production and Se deposition in laying hens. Biol Trace Elem Res. 2021;199(6):2295–302. Thoennessen MJRPP. Current status and future potential of nuclide discoveries. Rep Prog Phys. 2013;76(5):056301. Kieliszek M, Błażejak S. Current knowledge on the importance of selenium in food for living organisms: a review. Molecules. 2016;21(5):609. Woo SP, Liu W, Au DW, Anderson DM, Wu RSJJEMB. Ecology: Antioxidant responses and lipid peroxidation in gills and erythrocytes of fish (Rhabdosarga sarba) upon exposure to Chattonella marina and hydrogen peroxide: Implications on the cause of fish kills. J Exp Marine Biol Ecol. 2006;336(2):230–41. Avery JC, Hoffmann PRJN. Selenium, selenoproteins, Immunity. Nutrients. 2018;10(9):1203. Kieliszek M, Błażejak SJN. Selenium: Significance, and outlook for supplementation. Nutrition. 
2013;29(5):713–8. Akhtar M, Farooq A, Mushtaq M. Serum concentrations of copper, iron, zinc and selenium in cyclic and anoestrus Nili-Ravi buffaloes kept under farm conditions. Pak Vet J. 2009;29(1):47–8. Toppo S, Vanin S, Bosello V, Tosatto SC. Evolutionary and structural insights into the multifaceted glutathione peroxidase (Gpx) superfamily. Antioxidants & redox signaling. 2008;10(9):1501–14. Burk R, Levander O. Selenio, vol. 1. 9th ed. Madrid: MacGraw-Hill Interamericana; 2002. Nuttall KL. Evaluating selenium poisoning. Ann Clin Lab Sci. 2006;36(4):409–20. Lemly AD. Symptoms and implications of selenium toxicity in fish: the Belews Lake case example. Aquat Toxicol. 2002;57(1):39–49. Puccinelli M, Malorgio F, Pezzarossa B. Selenium enrichment of horticultural crops. Molecules. 2017;22(6):933. Kieliszek M, Błażejak S, Gientka I, Bzducha-Wróbel A. Accumulation and metabolism of selenium by yeast cells. Appl Microbiol Biotechnol. 2015;99(13):5373–82. Navarro-Alarcon M, Cabrera-Vique C. Selenium in food and the human body: a review. Sci Total Environ. 2008;400(1):115–41. Hilton Jw Fau - Hodson PV, Hodson Pv Fau - Slinger SJ, Slinger SJ: Absorption, distribution, half-life and possible routes of elimination of dietary selenium in juvenile rainbow trout (Salmo gairdneri). Comp Biochem Physiol C Comp Pharmacol 1982, 71C(1):49–55. Wu Z, Bañuelos GS, Lin ZQ, Liu Y, Yuan L, Yin X, Li M. Biofortification and phytoremediation of selenium in China. Front Plant Sci. 2015;6(136):1–8. Spallholz JE, Hoffman DJ. Selenium toxicity: cause and effects in aquatic birds. Aquat Toxicol. 2002;57(1–2):27–37. Mézes M, Balogh K. Prooxidant mechanisms of selenium toxicity–a review. Acta Biologica Szegediensis. 2009;53(suppl):15–8. Ip C. Lessons from basic research in selenium and cancer prevention. J Nutr. 1998;128(11):1845–54. Antony Jesu Prabhu P, Schrama JW, Kaushik SJ. Mineral requirements of fish: a systematic review. Rev Aquac. 2016;8(2):172–219. Kim YY, Mahan DC. Comparative effects of high dietary levels of organic and inorganic se on se toxicity of growing finishing pigs. J Anim Sci. 2001;79:942–8. Kuricova S, Boldizarova K, Gresakova L, Gresakova L, Bobcek R, Levkut M, et al. Chicken selenium status when fed a diet supplemented with se-yeast. Acta Vet (Brno). 2003;72:339–46. Mechlaoui M, Dominguez D, Robaina L, Geraert P-A, Kaushik S, Saleh R, et al. Effects of different dietary selenium sources on growth performance, liver and muscle composition, antioxidant status, stress response and expression of related genes in gilthead seabream (Sparus aurata). Aquaculture. 2019;507:251–9. Rayman MP. The use of high-selenium yeast to raise selenium status: how does it measure up? Br J Nutr. 2004;92(4):557–73. Schrauzer GN. Selenomethionine: a review of its nutritional significance, metabolism and toxicity. J Nutr. 2000;130(7):1653–6. APHA. Standard methods for the examination of water and wastewater. Washington, DC: American Public Health Association; 2017. AFS-FHS. Suggested procedures for the detection and identifica tion of certain finfish and shellfish pathogens, vol. 5: USA: Bethesd American Fisheries Society; 2003. Veras GC, Murgas LDS, Rosa PV, Zangeronimo MG, Ferreira MSS, Leon JAS-D. Effect of photoperiod on locomotor activity, growth, feed efficiency and gonadal development of Nile tilapia. Rev Bras Zootec. 2013;42:844–9. Ahmed M, Abdullah N, Shuib AS, Abdul Razak S. 
Influence of raw polysaccharide extract from mushroom stalk waste on growth and pH perturbation induced-stress in Nile tilapia, Oreochromis niloticus. Aquaculture. 2017;468:60–70. Banaee M, Sureda A, Mirvaghefi A, Ahmadi K. Effects of diazinon on biochemical parameters of blood in rainbow trout (Oncorhynchus mykiss). Pestic Biochem Physiol. 2011;99(1):1–6. Muhammad UA, Yasid NA, Daud HM, Shukor MY. Glyphosate herbicide induces changes in the growth pattern and somatic indices of crossbred red Tilapia (O. niloticus× O. mossambicus). Animals. 2021;11(5):1209. Eurell T, Lewis D, Grumbles L. Comparison of selected diagnostic tests for detection of motile Aeromonas septicemia in fish. Am J Vet Res. 1978;39(8):1384–6. Brady W. Measurements of some poultry performance parameters. Vet Rec. 1968;88:245–60. Kaplan EL, Meier P. Nonparametric estimation from incomplete observations. J Am Stat Assoc. 1958;53(282):457–81. Zijlstra W, Van Kampen E. Standardization of hemoglobinometry: I. the extinction coefficient of hemiglobincyanide at λ= 540 mμ: ε540HiCN. Clin Chim Acta. 1960;5(5):719–26. Shah SL, Altindağ A. Alterations in the immunological parameters of Tench (Tinca tinca L. 1758) after acute and chronic exposure to lethal and sublethal treatments with mercury, cadmium and lead. Turkish J Vet Anim Sci. 2005;29(5):1163–8. Lee GR. Wintrobe's clinical hematology. In: Wintrobe's clinical hematology, vol. 2; 1998. p. 2762–3. Martins M, Pilarsky F, Onaka E, Nomura D, Fenerick J, Ribeiro K, et al. Hematologia e resposta inflamatória em Oreochromis niloticus submetida aos estímulos único e consecutivo de estresse de captura. Bol Inst Pesca. 2004;30(1):71–80. Jain NC. "Hematological techniques". Schalm's Veterinary Hematology. Lea & Febiger; 1986. Weichselbaum CT. An accurate and rapid method for the determination of proteins in small amounts of blood serum and plasma. Am J Clin Pathol. 1946;16(3_ts):40–9. Hubbuch A. Results of the multicenter study of Tina-quant albumin in urine. Wien Klin Wochenschr Suppl. 1991;189:24–31. Doumas B, Biggs H, Arends R, Pinto P: Determination of Serum Albumin. Standard Methods of Clinical Chemistry, 7, 175–188; 1972. Reitman S, Frankel S. A colorimetric method for the determination of serum glutamic oxalacetic and glutamic pyruvic transaminases. Am J Clin Pathol. 1957;28(1):56–63. Burtis CA, Bruns DE. Tietz fundamentals of clinical chemistry and molecular diagnostics-e-book. USA: Elsevier Health Sciences; 2014. Buege JA, Aust SD. Microsomal lipid peroxidation. Methods Enzymol. 1978;52:302–10. Paya M, Halliwell B, Hoult JR. Interactions of a series of coumarins with reactive oxygen species. Biochem Pharmacol. 1992;44:205–14. Hafeman DG, Sunde RA, Hoekstra WG. Dietary selenium on erythrocyte and glutathione peroxidase in the rat. JNutr. 1974;104:567–80. Lloyd R. Pollution and freshwater fish. USA: Fishing News Books Ltd; 1992. Bhatnagar A, Devi P. Water quality guidelines for the management of pond fish culture. Int J Environ Sci. 2013;3(6):1980–2009. Lawson TB. Fundamentals of aquacultural engineering. USA: Springer Science & Business Media; 1994. Pillay TVR, Kutty MN. Aquaculture: principles and practices. Oxford: Blackwell publishing; 2005. Santhosh B, Singh N. Guidelines for water quality management for fish culture in Tripura. In: ICAR Research Complex for NEH Region, Tripura Center, Publication, vol. 29(10); 2007. Giaquinto PC, de Sá MB, Sugihara VS, Gonçalves BB, Delício HC, Barki A. 
Effects of glyphosate-based herbicide sub-lethal concentrations on fish feeding behavior. Bull Environ Contam Toxicol. 2017;98(4):460–4. Tierney KB, Ross PS, Jarrard HE, Delaney K, Kennedy CJ. Changes in juvenile coho salmon electro-olfactogram during and after short-term exposure to current-use pesticides. Environ Toxicol Chem. 2006;25(10):2809–17. Lal B, Sarang MK, Kumar P. Malathion exposure induces the endocrine disruption and growth retardation in the catfish, Clarias batrachus (Linn.). Gen Comp Endocrinol. 2013;181:139–45. Laetz CA, Baldwin DH, Collier TK, Hebert V, Stark JD, Scholz NL. The synergistic toxicity of pesticide mixtures: implications for risk assessment and the conservation of endangered Pacific salmon. Environ Health Perspect. 2009;117(3):348–53. Agbohessi TP, Toko II, N'tcha I, Geay F, Mandiki S, Kestemont P. Exposure to agricultural pesticides impairs growth, feed utilization and energy budget in African catfish Clarias gariepinus (Burchell, 1822) fingerlings. Int Aquat Res. 2014;6(4):229–43. Barbosa N, Rocha J, Soares J, Wondracek D, Gonçalves J, Schetinger M, et al. Dietary diphenyl diselenide reduces the STZ-induced toxicity. Food Chem Toxicol. 2008;46(1):186–94. Abdel-Tawwab M, Wafeek M. Response of Nile tilapia, Oreochromis niloticus (L.) to environmental cadmium toxicity during organic selenium supplementation. J World Aquacult Soc. 2010;41(1):106–14. Fonseca SB, Silva JHV, Beltrão Filho EM, Mendes PP, Fernandes JBK, Amancio ALL, et al. Technology: Influence of levels and forms of selenium associated with levels of vitamins C and E on the performance, yield and composition of tilapia fillet, vol. 33; 2013. p. 109–15. Iqbal S, Atique U, Mughal M, Khan N, Haider M, Iqbal K, et al. Effect of selenium incorporated in feed on the hematological profile of Tilapia (Oreochromis niloticus). J Aquac Res Dev. 2017;8:513. Nazari K, Shamsaie M, Eila N, Kamali A, Sharifpour I. The effects of different dietary levels of organic and inorganic selenium on some growth performance and proximate composition of juvenile rainbow trout (Oncorhynchus mykiss). Iran J Fish Sci. 2017;16(1):238–51. El-Kader A, Marwa F, Fath El-Bab AF, Abd-Elghany MF, Abdel-Warith A-WA, Younis EM, et al. Selenium nanoparticles act potentially on the growth performance, hemato-biochemical indices, antioxidative, and immune-related genes of European seabass (Dicentrarchus labrax). Biol Trace Elem Res. 2021;199(8):3126–34. Naiel M, Negm S, El-hameed S, Abdel-Latif H. Dietary organic selenium improves growth, serum biochemical indices, immune responses, antioxidative capacity, and modulates transcription of stress-related genes in Nile tilapia reared under sub-optimal temperature. J Therm Biol. 2021;99:102999. Ghaniem S, Nassef E, Zaineldin AI, Bakr A, Hegazi S. A Comparison of the Beneficial Effects of Inorganic, Organic, and Elemental Nano-selenium on Nile Tilapia: Growth, Immunity, Oxidative Status, Gut Morphology, and Immune Gene Expression. Biological Trace Element Research. Biol Trace Elem Res. 2022:1–16. https://doi.org/10.1007/s12011-021-03075-5. Kreutz LC, Barcellos LJG, de Faria VS, de Oliveira ST, Anziliero D, dos Santos ED, et al. Altered hematological and immunological parameters in silver catfish (Rhamdia quelen) following short term exposure to sublethal concentration of glyphosate. Fish Shellfish Immunol. 2011;30(1):51–7. Banaei M, MIR VA, Rafei G, Majazi AB. Effect of sub-lethal diazinon concentrations on blood plasma biochemistry; 2008. Venkataraman G, Rani PS. 
Acute toxicity and blood profile of freshwater fish, Clarias batrachus (Linn.) exposed to malathion. J Acad Ind Res. 2013;2(3):200–4. Ehler C, Douvere F. Navigating the future of the world heritage marine Programme: results from the first world heritage marine site managers meeting Honolulu, Hawaii, 1–3 December 2010, vol. 28: Hawaii: UNESCO; 2011. Al-Ghanim KA. Acute toxicity and effects of sub-lethal malathion exposure on biochemical and haematological parameters of Oreochromis niloticus. Sci Res Essays. 2012;7(16):1674–80. Lermen CL, Lappe R, Crestani M, Vieira VP, Gioda CR, Schetinger MRC, et al. Effect of different temperature regimes on metabolic and blood parameters of silver catfish Rhamdia quelen. Aquaculture. 2004;239(1–4):497–507. Gholami-Seyedkolaei SJ, Mirvaghefi A, Farahmand H, Kosari AA. Effect of a glyphosate-based herbicide in Cyprinus carpio: assessment of acetylcholinesterase activity, hematological responses and serum biochemical parameters. Ecotoxicol Environ Saf. 2013;98:135–41. Dawood MA, Koshio S, Zaineldin AI, Van Doan H, Moustafa EM, Abdel-Daim MM, et al. Dietary supplementation of selenium nanoparticles modulated systemic and mucosal immune status and stress resistance of red sea bream (Pagrus major). Fish Physiol Biochem. 2019;45(1):219–30. Saffari S, Keyvanshokooh S, Zakeri M, Johari SA, Pasha-Zanoosi H, Mozanzadeh MTJFP. Biochemistry: Effects of dietary organic, inorganic, and nanoparticulate selenium sources on growth, hemato-immunological, and serum biochemical parameters of common carp (Cyprinus carpio). Fish Physiol Biochem. 2018;44(4):1087–97. John PJ. Alteration of certain blood parameters of freshwater teleost Mystus vittatus after chronic exposure to Metasystox and Sevin. Fish Physiol Biochem. 2007;33(1):15–20. Madhu S, Pooja C. Acute toxicity of 4-nonylphenol on haemotological profile of fresh water fish Channa punctatus. Res J Recent Sci. 2015;2277:2502. El-Sayed YS, Saad TT. Subacute intoxication of a Deltamethrin-based preparation (Butox® 5% EC) in Monosex Nile Tilapia, Oreochromis niloticus L. Basic Clin Pharmacol toxicol. 2008;102(3):293–9. Ogueji E, Auta J. Investigations of biochemical effects of acute concentrations of lambda-cyhalothrin on African catfish, Clarias gariepinus–Teugels. J Fish Int. 2007;2(1):86–90. de Aguiar LH, Moraes G, Avilez IM, Altran AE, Corrêa CF. Metabolical effects of Folidol 600 on the neotropical freshwater fish matrinxã, Brycon cephalus. Environ Res. 2004;95(2):224–30. Patil VK, David M. Behaviour and respiratory dysfunction as an index of malathion toxicity in the freshwater fish, Labeo rohita (Hamilton). Turk J Fish Aquat Sci. 2008;8(2)233–7. Thenmozhi C, Vignesh V, Thirumurugan R, Arun S: Impacts of malathion on mortality and biochemical changes of freshwater fish Labeo rohita. 2011. Ibrahim A. Biochemical and histopathological response of Oreochromis niloticus to malathion hepatotoxicity. J Royal Sci. 2019;1(1):10–5. Zulfiqar A, Yasmeen R, Ijaz S. EFfect of malathion on blood biochemical parameters (urea and creatinine) in nile tilapia (Oreochromis niloticus). Pakistan J Sci. 2020;72(1). Magar R, Shaikh A. Effect of malathion toxicity on detoxifying organ of fresh water fish channa punctatus. Int J Pharmaceutical Chem Biol Sci. 2013;3(3):723–8. Awasthi Y, Ratn A, Prasad R, Kumar M, Trivedi S. An in vivo analysis of Cr6+ induced biochemical, geno toxicological and transcriptional profiling of genes related to oxidative stress, DNA damage and apoptosis in liver of fish, Channa punctatus (Bloch, 1793). 
Aquatic Toxicol. 2018;200:158–67. Sies H. Role of metabolic H2O2 generation: redox signaling and oxidative stress. J Biol Chem. 2014;289:8735–41. Reddy PB. Evaluation of malathion induced oxidative stress in Tilapia mossambica. Sci J. 2017;6(3):2319–4758. Monteiro DA, Rantin FT, Kalinin AL. The effects of selenium on oxidative stress biomarkers in the freshwater characid fish matrinxã, Brycon cephalus () exposed to organophosphate insecticide Folisuper 600 BR®(methyl parathion). Comp Biochem Physiol C Toxicol Pharmacol. 2009;149(1):40–9. Zaki MS, Mostafa SO, Nasr S, AI NED, Ata NS, Awad IM. Biochemical, Clinicophathlogical and microbial changes in Clarias. Rep Opin. 2009;12(17):6. Naiel MA, Nasr A, Ahmed M. Influence of organic selenium and stocking density on performance of Nile Tilapia (Oreochromis niloticus). Zagazig J Agric Res. 2012;39:1–13. Ali AM. The effect of antioxidant nutrition against fenvalerate toxicity in rat liver (histological and immune histo chemical studies). Annu Rev Res Biol. 2013;3:636–48. Alvarez R, Morales A, Sanz A. Antioxidant defenses in fish: biotic and abiotic factors. Rev Fish Biol Fish. 2005;15:75–88. Kieliszek M, Lipinski B. Selenium supplementation in the prevention of coronavirus infections (COVID-19). Med Hypotheses. 2020;143:109878. Aslam F, Khan A, Khan MZ, Sharaf S, Gul S, Saleemi MK. Toxico-pathological changes induced by cypermethrin in broiler chicks: their attenuation with vitamin E and selenium. Exp Toxicol Pathol. 2010;62(4):441–50. Rider SA, Davies SJ, Jha AN, Fisher AA, Knight J, Sweetman JW. Supra-nutritional dietary intake of selenite and selenium yeast in normal and stressed rainbow trout (Oncorhynchus mykiss): implications on selenium status and health responses. Aquaculture. 2009;295(3–4):282–91. We would like to appreciate Kairouan Group Company for their support and supply of Selenium yeast in the form of YEAST SEL 2000 and Dr. Gopal Reddy, Retired Professor, College of Veterinary Medicine, Tuskegee University, Tuskegee, Alabama, USA for reviewing this manuscript. Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). Not applicable. This research did not receive any specific grant from funding agencies in the public, commercial, or not- for-profit sectors. Faculty of Veterinary Medicine, Department of Animal Hygiene, Zoonoses and Behaviour, Suez Canal University, Ismailia, 41522, Egypt Marwa A. Hassan & Ahmed M. Hassan Animal Health Research Institute, Ismailia, 41522, Egypt Samaa T. Hozien & Mona M. Abdel Wahab Marwa A. Hassan Samaa T. Hozien Mona M. Abdel Wahab Ahmed M. Hassan All authors collaborated in work planning, experimental design, measurement of parameters and writing the manuscript. Marwa A. Hassan: Methodology, Investigation, Formal analysis, Validation, Writing - original draft, Writing - review & editing, Visualization. Samaa T. Hozien: Methodology, Investigation, Writing. Mona M. Abdel Wahab: Visualization, Supervision. A. M. Hassan: Investigation, Validation, Writing - review & editing, Visualization, resources, Supervision. All authors read and approved the final manuscript. Correspondence to Marwa A. Hassan. 
All procedures involving animals in this study were carried out in accordance with the Universal Directive on the Protection of Animals Used for Scientific Purposes and with the ethical guidelines of the scientific research ethics committee, Faculty of Veterinary Medicine, Suez Canal University, Ismailia, Egypt, and the study was approved by the scientific research committee (Approval number: 2016092). All methods are reported in accordance with the ARRIVE guidelines for reporting animal experiments (https://arriveguidelines.org). The authors report no declarations of interest. Hassan, M.A., Hozien, S.T., Abdel Wahab, M.M. et al. Ameliorative effect of selenium yeast supplementation on the physio-pathological impacts of chronic exposure to glyphosate and/or malathion in Oreochromis niloticus. BMC Vet Res 18, 159 (2022). https://doi.org/10.1186/s12917-022-03261-0 Keywords: Selenium yeast; Glyphosate; Malathion; Oreochromis niloticus
ROBOMECH Journal | Research article | Open Access | Published: 15 January 2018
Compound locomotion control system combining crawling and walking for multi-crawler multi-arm robot to adapt unstructured and unknown terrain
Kui Chen (ORCID: orcid.org/0000-0002-7473-5697)1, Mitsuhiro Kamezaki1, Takahiro Katano1, Taisei Kaneko1, Kohga Azuma1, Tatsuzo Ishida1, Masatoshi Seki2, Ken Ichiryu2 & Shigeki Sugano1
ROBOMECH Journal, volume 5, Article number: 2 (2018)
How to improve task performance and how to control a robot in extreme environments, when only a few sensors can be used to obtain environmental information, are two of the main problems for disaster response robots (DRRs). Compared with conventional DRRs, multi-arm multi-flipper crawler-type robots (MAMFRs) have high mobility and task-execution capabilities. Because crawler robots and quadruped robots have complementary advantages in locomotion, our aim is to combine both of these advantages in a MAMFR. MAMFRs (like the four-arm four-flipper robot OCTOPUS) are designed for working in extreme environments, such as those with heavy smoke and fog. Therefore, it is a necessary requirement that a DRR should be able to work even when vision and laser sensors are not available. To maximize terrain adaptability, self-balancing capability, and obstacle-climbing capability in unstructured disaster sites, as well as to reduce the difficulty of robot control, we propose a semi-autonomous control system that realizes this compound locomotion method for MAMFRs. In this control strategy, the robot can explore the terrain and obtain basic information about its surroundings through its structure and internal sensors, such as encoders and an inertial measurement unit. In addition, the control system can recognize the relative positional relationship between the robot and the surrounding environment from the states of its arms and crawlers while the robot is moving. Because the control rules are simple but effective, and each part can adjust its own state automatically according to the robot state and the explored terrain, MAMFRs achieve better terrain adaptability and stability. Experimental results with a virtual reality simulator indicated that the designed control system significantly improved the stability and mobility of the robot in tasks, and that the robot can adapt to complex terrain when controlled by the designed control system.
Since disaster response robots (DRRs) were used at Mt. Unzen in Japan [1], many kinds of robots with different functions have been developed, such as snake robots [2], jumping robots [3], and one-arm rescue robots [4]. DRRs with four sub-crawlers are often used in disaster response work [5, 6]. This kind of robot can shrink its footprint by lifting its sub-crawlers to turn around in narrow spaces and extend it by rotating the sub-crawlers downward to climb over obstacles [7]. Many DRRs have a mobile platform equipped with several sensors, and hence these robots can perform tasks based on the environmental information obtained from the sensors. However, current DRRs have problems balancing robot mobility and arm manipulation: most DRRs specialize in only one of them, although both are important in disaster response tasks.
In order to improve the performance of DRRs and enable them to be deployed to more complex disaster sites, we have developed the hydraulic-driven OCTOPUS (h-OCTOPUS), which has four multi-DOF (degrees of freedom) arms mounted on a four-sub-crawler robot [8, 9]. We then developed the electrically-driven OCTOPUS (e-OCTOPUS), especially for indoor applications, as shown in Fig. 1. The four-arm four-crawler structure provides excellent mobility and task-execution ability, both of which are important and useful functions for complex disaster response work.
Fig. 1 Electrically-driven OCTOPUS (e-OCTOPUS), a disaster response robot with four crawlers and four arms
The most basic requirement for a DRR is to reach the designated location. Thus, robot mobility should be considered first. The robot is required to be able to move in arbitrary environments under arbitrary conditions. Therefore, the mobility that should be improved basically includes the following two aspects.
Mobility in unstructured environment
The robot should traverse complex geometric ground, such as rough terrain, unstable and unstructured ground, high steps, and obstacles. To this end, the robot must have a suitable physical structure and a control system to provide stability and driving force in arbitrary directions, in particular, vertically upward.
Mobility in unknown (low-visibility) environment
The robot should move even when the visibility for human operators and/or the robot itself is quite low. In extreme environments with heavy smoke and fog, the vision sensors used for localization, mapping, and navigation cannot obtain sufficient environmental information. Moreover, even when such vision sensors are broken, the robot must keep moving to continue its tasks or return to the rescue base. To satisfy these two requirements together, we propose a compound locomotion method that integrates the control of crawlers and arms for robot locomotion. In this method, the crawlers mainly drive the robot, and the four arms assist the robot's movement like a quadruped robot. This method can not only compensate for the drawbacks of each locomotion method, but also enhance the advantages of each. Currently, many DRRs and construction machines adopt a crawler-type structure. TITAN XI [10] has four strong legs and two crawlers and can adapt to different terrains by changing locomotion methods, such as walking on a slope using four legs or crawling on flat ground using two crawlers. However, there is no control method that combines the advantages of crawling and walking at the same time for TITAN XI. On the other hand, some robots adopt a leg mechanism that would be more suitable for irregular terrains [11]. A MAMFR has both crawlers and legs (arms); therefore, it can move by using the crawlers and, moreover, improve its stability and mobility by making the arms contact the ground to support the robot body like legs. The proposed compound locomotion combines the advantages of crawling and walking, and thus the robot mobility would improve drastically. Moreover, the arms can grope the environment, so simple but necessary terrain information for locomotion can be estimated, such as the height of an obstacle and the terrain tilt angle, like a human groping in the dark (an invisible situation). Here, there are two essential technical issues to be solved to realize the compound locomotion method. The first issue is to integrate the control strategies of legged robots and flipper-crawler robots, which are completely different from each other.
The second one is to develop a method to explore the necessary terrain conditions by only using few vision sensors information. Laser range sensors are useful to obtain terrain information and robot position, which is called simultaneous localization and mapping (SLAM) system [12,13,14]. SLAM systems can describe the positional and simple postural relationship between the whole robot (robot body) and environment. But, it is difficult to describe the relationship between environment and every part of robot in detail, such as arm's contact state. Moreover, in extreme situation such as fog and strong radiation, such sensors cannot work correctly, as stated above. The purpose of this study is to develop a new locomotion method which combines crawling motion using crawlers and walking motion using arms, for multi-crawler multi-arm disaster response robots. The paper is structured in the following way: "Classification of locomotion modes" section explains the basic locomotion modes for multi-crawler multi-arm robots. "The advantages of CLM" section describes the basic requirements and functions of CLM, and "Requirements of CLM" section explains CLM and control system in detail. "Control system design for CLM" section then describes our experiments and setting. "Experimental setting" section explains the experimental results and analysis. "Results and discussion" section discusses the problems and improvement points in CLM. "Conclusion and future works" section summarizes our findings and discusses our future works. Classification of locomotion modes Multi-crawler multi-arm robots would provide flexibility for locomotion mode. According to the terrain state and collaborative relationships between the arms and crawlers, we can define several locomotion modes, namely, crawler crawling (CCM), arm walking (AWM) and compound locomotion mode (CLM), as shown in Fig. 2a–c. Multi-crawler multi-arm robot OCTOPUS and its locomotion modes Crawler-crawling mode (CCM) This mode has been widely used by current DDRs. It adopts deformable crawlers to adapt disaster terrain and get over an obstacle, as shown in Fig. 2a. In this mode, robot can move at a relatively high speed and get over low-height obstacle. CCM can be used when the robot moves on flat terrain. The DOFs to be controlled are no more than 5 (4 DOFs for crawlers and one for robot moving direction), so CCM can be realized by manual or well-designed automated control system [15,16,17]. Arm-walking mode (AWM) In this mode, robot walks like a quadruped robot [18, 19] on the ground, and the robot body is off the ground. The arms can be considered to be legs, as shown in Fig. 2b. Therefore, the arms should be power enough to support robot body. A large number of joints should be control at the same time according to certain rules (gait control rules), so autonomous control system is needed. Compound-locomotion mode (CLM) In this mode, crawlers and arms closely cooperate to perform tasks, as shown in Fig. 2c. During the moving process, crawlers will provide the main driving force. Based on the torque information obtained from sensors, the angle of flippers will be adjusted to adapt terrain in real time. Robot arms can support robot moving and keep balance. According to the robot posture and estimated terrains information, the joints on robot arms will be adjusted to keep the endpoints of robot arms (EPRA) contacting with ground. These contact points provide additional support for robot and make it owning larger stability margin and better mobility. 
If the terrain is known, the arms can be used to support the robot only when necessary, such as when climbing a high step. However, the terrain of a disaster site is often unknown, so in CLM the arms and crawlers keep cooperating throughout the motion. Because more than ten joints must be precisely controlled in each sampling period, it is very difficult to achieve CLM with manual control; only automatic control is suitable in this mode. Theoretically, a semi-autonomous control system combines the flexibility of human control with the accuracy of autonomous control and can adapt well to complex disaster sites. The advantages of CLM With the multi-crawler multi-arm structure, a robot controlled in CLM performs better in several respects. Better mobility In CLM, the robot is driven by crawling (crawlers) and walking (four legs) at the same time. Compared with a purely walking drive, CLM provides greater traction to move the robot forward, and with the support of the arms it provides a stronger lifting force to move the robot upward than a purely crawling drive. CLM therefore gives the robot more powerful mobility. Wider range of use While the robot is moving, the arms, flippers, and internal sensors (such as encoders and joint current sensors) can be used to explore and obtain environmental information. Even in extreme environments, or after accidents that make external environmental sensors unusable, CLM still works, so the range of situations in which the robot can operate is expanded. More stable locomotion Because CLM integrates crawling and walking, the robot has more contact points with the ground than with either crawling or walking alone. Since DRRs do not move fast at a disaster site, robot stability can be described by a static parameter, the stability margin, which is defined by the contact points between the robot and the ground. Because CLM provides more contact points than crawling or walking alone, it yields a larger stability margin during tasks, that is, a more stable locomotion state. Requirements of CLM Because the terrain of a disaster site is unknown and uneven, it is dangerous to let the robot run in such an environment without measures that keep it safe and balanced. To find such measures, we can learn from geese. When these birds fly in formation, no leader tells the others the correct positions and postures needed to keep the shape of the formation; indeed, the flock does not need one. Each bird simply adjusts its own position and posture according to certain rules, and the formation keeps a stable shape. These rules may involve the distance, angle, or other relations between a bird and its neighbors. This is a classic example of distributed control, which can be used to control multi-robot systems in complex environments without complete knowledge of environmental information [20,21,22]. Structurally, it is easy to imagine that multi-arm/leg multi-crawler robots achieve better mobility through the cooperation of arms/legs and crawlers. However, because of the complexity of the environment and the lack of information about the disaster site, it is very difficult to specify the correct position of every arm and flipper using the centralized control methods that are common in current robot systems.
Thus, current control methods cannot realize the structural advantages of these robots efficiently in unknown environment. Consequently, we need a more simple, efficient and reliable control system with high adaptability to fully play the role of structural advantages for this kind of robots. The originality of this paper is to design such a control system, which we call CLM mode. We transplanted distributed control form multiple robots to one robot in CLM control mode. The cooperation in CLM control mode is based on each part of the robot such as arms and crawlers. For distributed control, the most important part is to set control rules for each part. By setting suitable control rules for each arm and flipper, we can control OCTOPUS to get over obstacle and keep in balance in complex even if in unknown environment. In addition to OCTOPUS, other similar structure robots can also use this setting by changing the parameters. To build these control rules, some necessary information is needed, such as the key parameters (robot position, posture and stability margin) as well as the state of arms and flippers. Almost no robot has a similar structure with OCTOPUS in the past, and there is no report about CLM. As mentioned before, there are several problems in the design of CLM. To realize this mode, we first clarify some basic requirements. Then, we design the compound control strategies for crawlers and arms. Requirements of control system On the basis of the analysis in the previous sections, when multi-crawler multi-arm robots work in unstructured and unknown disaster sites, the semi-autonomous control system for CLM should have the following capabilities. Control system should have the capability to understand key parameters of surrounding environment, such as obstacle height and the distance between robot and obstacle. Control system could monitor robot stable state in real time, and maintain the robot state to be stable by controlling joints appropriately. This is because work site may be irregular and slipping the endpoints of arms is unavoidable when contacting it to the environment. Control system could monitor robot stability margin real time to avoid accidents. To realize the above requirements, the control method for arms and crawlers should be well designed. The control rules of them will be detailed in the following content. Preparation: coordinate systems Figure 3 shows the main dimensions of OCTOPUS, which has the three main joints on each arm, called swing, boom and elbow respectively. The D-H parameters of an arm are listed in Table 1. According to D-H parameters and forward kinematics, we can easily get the following equations about \(x^{\prime},y^{\prime},z^{\prime}\), which are EPRA represented in joint coordinate system {J}, as shown in Fig. 3. 
Main dimension of OCTOPUS (unit: mm) Table 1 D-H parameters of arm (unit: mm) $$x^{\prime} = l_{2}\cos(\theta_{1})\cos(\theta_{2}+\theta_{3}) + l_{1}\cos(\theta_{1})\cos(\theta_{2}) + d_{3}\sin(\theta_{1}),$$ $$y^{\prime} = l_{2}\sin(\theta_{1})\cos(\theta_{2}+\theta_{3}) + l_{1}\sin(\theta_{1})\cos(\theta_{2}) - d_{3}\cos(\theta_{1}),$$ $$z^{\prime} = -l_{2}\sin(\theta_{2}+\theta_{3}) - l_{1}\cos(\theta_{2}),$$ To facilitate control system development, we define a coordinate system that describes the robot state and the relationship between the robot and the environment, as shown in Fig. 4a, b. It is an ordinary right-handed coordinate system, which we call the robot coordinate system {R}. When the roll, pitch, and yaw angles of the robot are zero, {R} has the same axis directions as the world coordinate system {W}. To calculate the coordinates of an EPRA in \(\left\{ R \right\}\), we need a coordinate translation from {J} to \(\{ R\}\). In Eqs. 1–3, if the coordinate systems attached to each rotation axis are built with the same directions as {W}, the positive directions of \(\theta_{1}, \theta_{2}, {\text{ and }} \theta_{3}\) follow from the D-H convention. Taking the left front arm as an example, and assuming the position of the left front EPRA (P1 in Fig. 4a) in {R} is (x, y, z), then $$x = m/2 + x^{\prime},$$ $$y = n/2 + y^{\prime},$$ $$z = z^{\prime}.$$ e-OCTOPUS coordinate system Terrain exploration (estimation of obstacle height) As mentioned before, terrain exploration without vision sensors is one of the important abilities of CLM. For an OCTOPUS-type robot, important three-dimensional (3D) environmental information and the robot state within the environment can be obtained from the arm positions in the robot coordinate system and from the robot posture, i.e., the roll, pitch, and yaw angles of the robot. Once all four arms are in contact with the ground, the 3D coordinates of the four EPRAs can be obtained from the joint angles of each arm and forward kinematics. The height difference \(H_{EST}\) between the endpoints of the front arms and the rear arms can then be calculated, as shown in Fig. 4b, and terrain information is obtained as follows. Here z1–z4 are the z coordinates of the four EPRAs, which are called P1, P2, P3, and P4, respectively, as shown in Fig. 4a. σ1, σ2, and σ3 are control parameters that indicate terrain features such as flat ground and upward or downward steps, and they can be set according to the actual conditions. Case 1: Flat terrain \(\left| {z_{1} - z_{4}} \right| < \sigma_{1}\) and \(\left| {z_{2} - z_{3}} \right| < \sigma_{1}\). The robot is moving on flat ground, and the height differences between the four EPRAs are smaller than σ1. Case 2: Upward step or slope \(\sigma_{1} < (z_{1} - z_{4}) < \sigma_{2}\) and \(\sigma_{1} < (z_{2} - z_{3}) < \sigma_{2}\). There is an upward step or obstacle in front of the robot. Given the robot structure and driving force, the robot can get over steps whose height is lower than \(\sigma_{2}\).
Case 3: Downward step or pit \(-\sigma_{3} < (z_{1} - z_{4}) < -\sigma_{1}\) and \(-\sigma_{3} < (z_{2} - z_{3}) < -\sigma_{1}\). There is a downward step or a pit in front of the robot. The robot can descend steps whose height is lower than \(\sigma_{3}\). In addition, if the robot is on a slope, the inclination angles of the terrain can be calculated: $$T_{P} = (z_{1} + z_{2} - z_{3} - z_{4})/(x_{1} + x_{2} - x_{3} - x_{4}),$$ $$T_{R} = (z_{1} + z_{4} - z_{2} - z_{3})/(x_{1} + x_{4} - x_{2} - x_{3}),$$ where T_P is the pitch angle of the terrain in {W}, T_R is the roll angle of the terrain in {W}, R_P is the pitch angle of the robot, and R_R is the roll angle of the robot. Here, we assume there is an upward step in front of the robot (Case 2), as in Fig. 4b. From the coordinates of the four arm endpoints, the height of the step H_EST can be estimated as $$H_{EST} = (z_{1} + z_{2} - z_{3} - z_{4})/2.$$ This height is a very important parameter for getting over an obstacle, and it can be used to judge whether the robot has reached the top of the obstacle. Beyond these three cases, there are many other situations the robot may meet. As mentioned before, a semi-autonomous control system is suitable for CLM: if the robot encounters a situation outside these three cases, the autonomous control system regards the terrain as "undetectable", and the operator gives control commands based on the continuous feedback information.
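To make the exploration rules above concrete, the following Python sketch combines the forward kinematics of Eqs. 1–6 with the three terrain cases and the step-height estimate. It is a minimal illustration rather than the authors' implementation: the link lengths L1, L2, D3 and body dimensions M, N are placeholders for the values in Table 1 and Fig. 3, the thresholds σ1–σ3 are free parameters, and only the left-front-arm offset of Eqs. 4–6 is shown (the other arms presumably differ in the signs of these offsets).

```python
import numpy as np

# Placeholder dimensions; the real values are in Table 1 / Fig. 3 of the paper.
L1, L2, D3 = 0.4, 0.3, 0.05   # link lengths l1, l2 and offset d3 (m), assumed
M, N = 0.5, 0.4               # body length m and width n (m), assumed

def epra_in_R(theta1, theta2, theta3):
    """Endpoint of the left-front arm: joint frame {J} (Eqs. 1-3),
    then shifted to the robot frame {R} (Eqs. 4-6)."""
    xj = (L2*np.cos(theta1)*np.cos(theta2+theta3)
          + L1*np.cos(theta1)*np.cos(theta2) + D3*np.sin(theta1))
    yj = (L2*np.sin(theta1)*np.cos(theta2+theta3)
          + L1*np.sin(theta1)*np.cos(theta2) - D3*np.cos(theta1))
    zj = -L2*np.sin(theta2+theta3) - L1*np.cos(theta2)
    return np.array([M/2 + xj, N/2 + yj, zj])

def classify_terrain(z, sigma1, sigma2, sigma3):
    """Cases 1-3: z = [z1, z2, z3, z4] are the EPRA heights of P1..P4 in {R}."""
    d14, d23 = z[0] - z[3], z[1] - z[2]
    if abs(d14) < sigma1 and abs(d23) < sigma1:
        return "flat"
    if sigma1 < d14 < sigma2 and sigma1 < d23 < sigma2:
        return "upward step or slope"
    if -sigma3 < d14 < -sigma1 and -sigma3 < d23 < -sigma1:
        return "downward step or pit"
    return "undetectable"   # the operator takes over (semi-autonomous mode)

def step_height(z):
    """Estimated step height H_EST."""
    return (z[0] + z[1] - z[2] - z[3]) / 2.0
```

For example, classify_terrain([0.02, 0.01, -0.19, -0.18], 0.05, 0.4, 0.3) reports an upward step, and step_height of the same heights gives an estimate of about 0.20 m.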
Recognition of COG position of robot When the robot is moving, especially when getting over an obstacle, the position of the robot's center of gravity (COG) should be calculated in real time to identify the positional relationship between the robot and the step. The COG position can be calculated from internal sensor data and a mathematical model. Owing to the compact structure of OCTOPUS, the weight of the robot is concentrated in the chassis and the height of the COG is low, so the COG position does not change much while the robot moves. For convenience of calculation, we assume that the COG position is fixed with respect to the body while the robot is moving. To calculate the robot COG position in \(\{ W\}\), we need a calculation coordinate system \(\{ C\}\). It has the same coordinate origin as {R} and the same axis directions as {W}. On the real OCTOPUS, the yaw angle of the robot cannot be obtained, and in previous simulator experiments we found that the change of the robot yaw angle (moving direction) is very small in the designed tasks (the "rough terrain passing" and "getting over obstacle" tasks described below). So that the designed algorithms can also be used on the real robot, we assume the yaw angle does not change during these tasks. The pitch and roll angles of the robot can be obtained from the inertial measurement unit (IMU), and these two angles describe the relationship between \(\left\{ R \right\}\) and \(\{ C\}\), as shown in Fig. 5. P is the center point of the two rear arm endpoints; from forward kinematics, its 3D coordinates RP (X_R, Y_R, Z_R) in {R} are easily obtained. The coordinates of P in {C} are denoted CP (X_C, Y_C, Z_C), and clearly $$H_{1} = \left| {Z_{C}} \right|,$$ Calculation of robot COG height To calculate CP from RP, we need a transition coordinate system \(\{ M\}\) and two rotation factors \({}_{M}^{C} R\) and \({}_{R}^{M} R\). The frame \(\{ M\}\) has the same x-axis as \(\left\{ R \right\}\) and is rotated by A_R around that axis. Thus, CP is given by $${}_{{}}^{C} P = {}_{M}^{C} R\, {}_{R}^{M} R\, {}_{{}}^{R} P.$$ From the relationship between \(\left\{ R \right\}\), {M}, and \(\left\{ C \right\}\), the two rotation factors are $${}_{M}^{C} R = \left[ {\begin{array}{*{20}c} {\cos (A_{P})} & 0 & { - \sin (A_{P})} \\ 0 & 1 & 0 \\ {\sin (A_{P})} & 0 & {\cos (A_{P})} \\ \end{array} } \right],$$ $${}_{R}^{M} R = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 \\ 0 & {\cos (A_{R})} & {\sin (A_{R})} \\ 0 & { - \sin (A_{R})} & {\cos (A_{R})} \\ \end{array} } \right].$$ Since RP is known, $${}_{{}}^{R} P = \left[ {\begin{array}{*{20}c} {X_{R}} \\ {Y_{R}} \\ {Z_{R}} \\ \end{array} } \right],$$ $$\begin{aligned} {}_{{}}^{C} P & = \left[ {\begin{array}{*{20}c} {X_{C}} \\ {Y_{C}} \\ {Z_{C}} \\ \end{array} } \right] = {}_{M}^{C} R\, {}_{R}^{M} R\, {}_{{}}^{R} P \\ & = \left[ {\begin{array}{*{20}c} {\cos (A_{P})X_{R} + \sin (A_{P})\sin \left( {A_{R}} \right)Y_{R} - \sin (A_{P})\cos \left( {A_{R}} \right)Z_{R}} \\ {\cos \left( {A_{R}} \right)Y_{R} + \sin \left( {A_{R}} \right)Z_{R}} \\ {\sin (A_{P})X_{R} - \cos (A_{P})\sin \left( {A_{R}} \right)Y_{R} + \cos (A_{P})\cos \left( {A_{R}} \right)Z_{R}} \\ \end{array} } \right]. \\ \end{aligned}$$ Therefore, H1 can be calculated as $$H_{1} = \sin (A_{P})X_{R} - \cos (A_{P})\sin \left( {A_{R}} \right)Y_{R} + \cos (A_{P})\cos \left( {A_{R}} \right)Z_{R}.$$ H2 is calculated in the same way, and the height of the COG, H, is $$H = H_{1} - H_{2}.$$ Besides \(H\), the distance S between an obstacle edge and the front flipper (shown in Fig. 5) can also be calculated; when the robot starts to climb the obstacle, S equals 0. The sampling frequency of our simulator is 50 Hz, so we can assume that the robot pitch angle \(R_{P}(i)\) does not change within one sampling period. The crawler rotation speed is fixed in CLM, and the distance moved in one sampling period is denoted by L. Neglecting sliding, $$S = \mathop \sum \limits_{i = 0}^{n} L\cos (R_{P}(i)).$$ To make this method generally applicable, the robot posture in the 3D environment must be considered. For OCTOPUS we assume the yaw angle is fixed, because its change cannot be measured on the real robot. For other systems in which the roll, pitch, and yaw angles can all be measured precisely, the same method and concept remain useful: one simply adds another rotation factor to Eq. 11 to describe the yaw change. Since this is simple matrix multiplication, we do not repeat it here.
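A sketch of this transformation in code may help. It assumes, as in the text, that yaw is ignored and that H2 is obtained "in the same way" from a second reference point; the paper does not spell out which point that is, so the argument name front_mid_R below is only an assumption, as is the per-period travel distance used for S.

```python
import numpy as np

def point_in_C(p_R, pitch, roll):
    """Transform a point from the robot frame {R} to the calculation frame {C}
    using the two rotation factors of Eqs. 11-12 (yaw assumed unchanged)."""
    R_pitch = np.array([[ np.cos(pitch), 0.0, -np.sin(pitch)],
                        [ 0.0,           1.0,  0.0           ],
                        [ np.sin(pitch), 0.0,  np.cos(pitch)]])
    R_roll  = np.array([[1.0,  0.0,           0.0          ],
                        [0.0,  np.cos(roll),  np.sin(roll) ],
                        [0.0, -np.sin(roll),  np.cos(roll)]])
    return R_pitch @ R_roll @ np.asarray(p_R)

def cog_height(rear_mid_R, front_mid_R, pitch, roll):
    """COG height H = H1 - H2; H1 is |Z_C| of the rear-arm midpoint,
    H2 that of a second reference point (assumed here: front midpoint)."""
    h1 = abs(point_in_C(rear_mid_R, pitch, roll)[2])
    h2 = abs(point_in_C(front_mid_R, pitch, roll)[2])
    return h1 - h2

def travelled_distance(pitch_history, step_length):
    """Horizontal distance S accumulated since climbing started:
    S = sum over sampling periods of L * cos(R_P(i))."""
    return sum(step_length * np.cos(p) for p in pitch_history)
```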
Control system design for CLM In every sampling period, the control system detects the robot state, and each part is then controlled according to the following rules. In CLM, the crawlers are the main driving components; they are controlled individually and without affecting the arms' state. The arms cooperate with the crawlers to control the robot's motion. In the following sections, we choose two basic terrains and describe the control method in detail. Crawler controls Many studies have proposed autonomous or semi-autonomous control methods for multi-crawler DRRs moving over irregular terrain, and we designed a simple crawler control scheme on the basis of such methods. When the robot moves forward, the front flippers keep a suitable angle with the ground to obtain information about objects in front of the robot. Before and after a front flipper contacts an obstacle or step, the torque applied to its rotation axis changes greatly; for example, before the flipper contacts the obstacle the torque is positive, and afterwards it becomes negative. The force analysis of the front flippers is shown in Fig. 6. Combined with the terrain information explored by the arms, the semi-autonomous control system can determine whether the robot has come close to an obstacle and what it should do next; this is detailed in the arm control section. For the front flippers, the following cases may occur. Note that T′ and A′ are the front flipper torque and angle obtained in the last sampling period, respectively, T0 is a torque threshold representing the terrain condition, and \(A\) is the flipper angle command for the next control period. Case 1: \(\left| {T^{\prime}} \right| < T_{0}\) (low torque) Control rule: \(A = A^{\prime}\). The measured torque varies within a certain range even when the robot runs on a flat road, because of vibration. In this state the front flipper angle remains the same. Case 2: \(\left| {T^{\prime}} \right| > T_{0}\) (high torque) Control rule: \(A = A^{\prime} - \Delta A \cdot (T^{\prime}/\left| {T^{\prime}} \right|)\). ΔA is the adjustment amplitude in each sampling period; it can be set according to the system sampling frequency and the response time of the flipper. In this state the flippers rotate according to the terrain: when the robot encounters an obstacle the front flippers rotate up, and when it meets a pit they rotate down. Case 3: \(A^{\prime} > A_{0}\) (flipper at its limit position) Control rule: \(A = A^{\prime} - \Delta B\). A0 is the front flipper limit value and ΔB is another adjustment amplitude used for getting over obstacles. In this state the control system considers that there is an obstacle in front of the robot; combined with the arm states, the crawlers and arms are then controlled to get over the obstacle, as detailed in a later section. Force analysis of robot front flippers
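The flipper rules above can be written as a single decision function. The sketch below is an illustration under two assumptions not stated in the paper: the limit-angle case (Case 3) is given precedence over the torque cases, and all thresholds and amplitudes are treated as free tuning parameters.

```python
def next_flipper_angle(torque_prev, angle_prev, T0, A0, dA, dB):
    """Front-flipper angle command for the next control period (Cases 1-3).
    torque_prev (T') and angle_prev (A') come from the last sampling period."""
    if angle_prev > A0:                        # Case 3: flipper at its limit
        return angle_prev - dB                 #   -> hand over to obstacle climbing
    if abs(torque_prev) > T0:                  # Case 2: high torque (contact)
        return angle_prev - dA * (1.0 if torque_prev > 0 else -1.0)
    return angle_prev                          # Case 1: low torque, hold angle
```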
Arms and crawlers control in rough terrain passing Conventional four-crawler robots pass through rough terrain by driving their crawlers. The crawlers must stay close to the ground to provide driving force, so the posture of the robot body changes with the terrain in real time. In CLM, the support of the arms reduces the load on the crawlers, and the crawlers do not need to fit the ground completely to drive the robot. In addition, the support of the arms increases the robot's stability margin and can be used to adjust the robot's posture, helping the crawlers keep the robot balanced. Owing to the support of multiple crawlers and the distributed control strategy, the four arms adjust their positions and postures dynamically according to the state of the robot and of each part. Therefore, the arms do not need to keep strict contact with the ground at all times; slight slippage or lifting off the ground is acceptable and is corrected in the following control periods. CLM thus enables the robot to move more smoothly and stably. Robot gait design The four arms are regarded as legs, and for a quadruped robot a suitable gait is needed during locomotion. Several gait patterns can be used for quadruped walking, such as the crawl, pace, trot, and bounce gaits [18]. When passing through rough terrain the robot does not move fast, so the crawl gait is suitable for controlling the four legs [23]. Besides the support of the arms, there are still at least four stable support points provided by the four movable crawlers, and in most cases these contact points keep the robot balanced. Therefore, in CLM we do not need to shift the robot's center of gravity to one side [24] to keep it balanced while transferring a leg. Overall, the gait design of the legs is more flexible in CLM. The designed gait for the four legs is shown in Fig. 7. In this process, at least three arms support the robot at any time. The polygon formed by the support points is illustrated with blue lines in Fig. 7; the four crawlers are included in the blue polygon, which means that the robot in CLM always has a larger stability margin than in CCM. Robot gait in rough terrain Arm transfer The terrain of a disaster site is unstructured, so the arms should be able to adapt to unstructured terrain. Every transferred arm should have a suitable stop position, which enables the arms to follow the designed gait and to have a proper contact force with the ground. To realize this, we assume that the upcoming terrain is an extension of the ground already explored. Based on the robot structure and the size of each part, and in order to maximize the stability margin of the robot, the target coordinates of each EPRA after arm transfer in {R} are specified as listed in Table 2. From these coordinates and Eqs. 1–6, the joint angles (θ1, θ2, θ3) of each arm can be calculated; because the resulting expressions are long, we do not show them here. Table 2 Designed coordinate for four arms transfer in {R} (unit: mm) Trajectory plan of EPRA The current position of every EPRA can be calculated in real time, and its target position has been designed. In addition, the trajectory of the EPRA from the current position to the target position is also important. As mentioned before, one important function of the arm is to explore (grope) the terrain, especially the environment in front of the robot. So that the arm is not easily blocked during transfer and can reach the designed position to explore the terrain, the EPRA should move along a high path. We therefore designed the following simple but effective EPRA trajectory, formalized in the equations below. The motion of an arm during transfer consists of three phases: lifting the arm (0–T1), rotating the arm (T1–T2), and lowering the arm (T2–T3). The arm's control depends on the arm's state and the gait sequence. In the lifting phase, the boom joint (θ2) and elbow joint (θ3) rotate up from the current state while the swing joint (θ1) stays still. Once the boom joint reaches a certain angle A_B (for OCTOPUS, 0.6 rad for the rear arms and −0.6 rad for the front arms; ΔB and ΔC are positive for the rear arms and negative for the front arms), the first phase (T1) is finished. The swing joint then rotates forward, and the endpoint of the arm rotates forward with it; when the swing angle reaches A_S (for OCTOPUS, 1.16 rad for the rear arms and −0.65 rad for the front arms; ΔA is positive for the rear arms and negative for the front arms), the second phase (T2) is finished. After that, the swing joint stays still while the boom and elbow joints rotate down, and when the endpoint of the robot arm contacts the ground the last phase (T3) is finished.
$$\theta_{1} = \left\{ \begin{array}{ll} \theta_{1}^{\prime}; &\quad \theta_{2}^{\prime} < A_{B} {\text{ and }} t < T_{1} \\ \theta_{1}^{\prime} + \Delta A; &\quad \theta_{2}^{\prime} \ge A_{B} {\text{ and }} T_{1} \le t < T_{2} \\ \theta_{1}^{\prime}; &\quad T_{2} \le t \\ \end{array} \right.,$$ $$\theta_{2} = \left\{ \begin{array}{ll} \theta_{2}^{\prime} - K(0.6 - \theta_{2}^{\prime}); &\quad \theta_{2}^{\prime} < A_{B} {\text{ and }} t < T_{1} \\ \theta_{2}^{\prime}; &\quad \theta_{2}^{\prime} \ge A_{B} {\text{ and }} T_{1} \le t < T_{2} \\ \theta_{2}^{\prime} + K(0.6 - \theta_{2}^{\prime}); &\quad T_{2} \le t < T_{3} \\ \end{array} \right.,$$ $$\theta_{3} = \left\{ \begin{array}{ll} \theta_{3}^{\prime} + \Delta B; &\quad \theta_{2}^{\prime} < A_{B} {\text{ and }} t < T_{1} \\ \theta_{3}^{\prime}; &\quad \theta_{2}^{\prime} \ge A_{B} {\text{ and }} T_{1} \le t < T_{2} \\ \theta_{3}^{\prime} - \Delta B; &\quad T_{2} \le t < T_{3} \\ \end{array} \right.,$$ where θ1, θ2, θ3 are the angle commands for each joint in the next sampling period, and \(\theta_{1}^{\prime}, \theta_{2}^{\prime}, \theta_{3}^{\prime}\) are the joint angles obtained in the last sampling period. ΔA and \(\Delta B\) are adjustment amplitudes for each joint, and K can be set according to the response of the boom joint. Arm stop After the target position is given, the arm moves along the specified trajectory until it reaches the designed position. However, because of the complexity of a disaster site, the arms often cannot reach the target points exactly. If an arm collides with the ground or an obstacle, the torque of the related joints increases within a short time, so during arm transfer the arm stops as soon as such a collision signal is detected. Several cases may then occur, as follows. Here, \(P_{i\_a}(x, y, z)\) is the actual stop point of the EPRA and \(P_{i\_t}(x, y, z)\) is the target position of the EPRA. ΔC and ΔD are thresholds on the difference between the set value and the measured data, which can be chosen according to the arm control accuracy. Case 1: \(\left| {P_{i\_a}(x) - P_{i\_t}(x)} \right| > \Delta C\). There is an obstacle between the arm and the target position. Case 2: \(\left| {P_{i\_a}(x) - P_{i\_t}(x)} \right| < \Delta C\) and \(P_{i\_a}(z) - P_{i\_t}(z) > \Delta D\). There is an obstacle at the target position, and the EPRA has stopped on top of it. Case 3: \(\left| {P_{i\_a}(x) - P_{i\_t}(x)} \right| < \Delta C\) and \(P_{i\_a}(z) - P_{i\_t}(z) < -\Delta D\). There is a pit at the target position, and the arm endpoint has stopped at the bottom of this pit. In the last phase of arm transfer, the EPRA keeps moving downward until the control system detects a stop signal or the arm joints reach their limit positions. If the arm still has not contacted the ground when the joints reach the limit positions, there is a deep pit at that location, and a new target point is selected.
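A compact sketch of this transfer and stop logic is given below. It follows the rear-arm sign convention of Eqs. 19–21 (the front arms mirror the signs of A_B, A_S, ΔA, and ΔB), it assumes the undefined corner cases of the piecewise rules simply fall through to the lowering phase, and it treats the stop classification as a pure function of the actual and target endpoint positions; the torque-based collision detection that triggers the stop is assumed to happen elsewhere.

```python
def transfer_step(th1, th2, th3, t, T1, T2, A_B, dA, dB, K):
    """One control-period update of the three arm joints during transfer
    (rear-arm convention of Eqs. 19-21; A_B = 0.6 rad for rear arms)."""
    if th2 < A_B and t < T1:                 # phase 1: lift boom and elbow
        return th1, th2 - K*(A_B - th2), th3 + dB
    if th2 >= A_B and T1 <= t < T2:          # phase 2: swing forward
        return th1 + dA, th2, th3
    return th1, th2 + K*(A_B - th2), th3 - dB  # phase 3: lower until contact

def classify_stop(p_actual, p_target, dC, dD):
    """Interpret where the EPRA actually stopped (arm-stop Cases 1-3)."""
    dx = abs(p_actual[0] - p_target[0])
    dz = p_actual[2] - p_target[2]
    if dx > dC:
        return "obstacle between arm and target"
    if dz > dD:
        return "obstacle at target (stopped on top)"
    if dz < -dD:
        return "pit at target (stopped at bottom)"
    return "reached target as planned"
```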
Arm moving If the contact points between the EPRAs and the ground remained unchanged, the arms would provide stable and continuous support for the robot body, which benefits the robot's balance. However, when the robot is moving, its posture changes with the terrain, so it is impossible for the arms to keep the contact points unchanged. During arm motion, because the terrain is complex and unknown, small-scale changes of the contact points (slight slippage) are unavoidable even though the control system has some ability to predict the terrain. However, the multiple crawlers keep supporting the robot body, and the arm postures and positions are adjusted to contact the ground in every control period, so the slippage of an EPRA does not last long and does not cause the robot to lose balance; we can therefore regard the slippage as slight. To minimize the movement range of the contact points, one method is to adjust the arm joints in real time according to the robot's current state and the explored terrain information. To prevent the robot arms from being damaged during EPRA slippage, a two-axis passive roller structure is used in OCTOPUS, as shown in Fig. 8. When the robot is in state 1, the contact points are PA1 and PB1 in the robot coordinate system {R}; when the robot is in state 2, these two points are represented as PA2 and PB2, as shown in Fig. 8. Considering that the changes of the pitch angle and the predicted terrain angle are small between two samples, state 2 can be predicted from the state 1 information. Usually, when the robot moves forward, the change of the robot position along the y-axis is small, so we assume it remains constant. The relationship between the contact points in the two states is therefore, for the two rear arms: $$P_{A2}(x) = P_{A1}(x) + L\cos(T_{P} - R_{P}),$$ $$P_{A2}(y) = P_{A1}(y),$$ $$P_{A2}(z) = P_{A1}(z) + L\sin(T_{P} - R_{P}).$$ Robot moving state and arm end with two-axis Moreover, for the two front arms: $$P_{B2}(x) = P_{B1}(x) - L\sin(T_{P} - R_{P}),$$ $$P_{B2}(y) = P_{B1}(y),$$ $$P_{B2}(z) = P_{B1}(z) + L\sin(T_{P} - R_{P}),$$ where T_P is the predicted terrain angle in state 1, R_P is the robot pitch angle in state 1, and L is the crawler rotation distance between two samples.
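The prediction of Eqs. 22–27 translates directly into code. The sketch below reproduces the equations as printed (note that the front-arm x-update uses a sine term, exactly as in Eq. 25) and keeps the assumption that the y-coordinate is unchanged between the two states.

```python
import math

def predict_contact(p1, L, T_P, R_P, rear=True):
    """Predict the next contact point of an EPRA in {R} after the body
    advances by L over one sampling period (Eqs. 22-27)."""
    x, y, z = p1
    a = T_P - R_P                        # predicted terrain angle minus pitch
    if rear:                             # rear arms, Eqs. 22-24
        return (x + L*math.cos(a), y, z + L*math.sin(a))
    else:                                # front arms, Eqs. 25-27 as printed
        return (x - L*math.sin(a), y, z + L*math.sin(a))
```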
Arms and crawlers control in getting over obstacle A step is one of the most common obstacles in disaster sites, and step-climbing ability is a key performance index for DRRs. Compared with the conventional climbing method (using only crawlers and flippers), CLM greatly improves the climbing ability of OCTOPUS and at the same time improves the robot's stability during the climbing process. Before the front arms contact the step, the robot walks and adjusts its arm positions following the sequences in Figs. 7 and 9. In every sampling period, the control system calculates the height difference between the front-arm and rear-arm endpoints. When one of the front arms touches the top of the step, the system knows there is a step in front of the robot, as in Fig. 9b. The robot keeps moving forward and adjusts the other front arm until it also touches the "step". After both front arms are in contact with the "step", H_EST can be calculated. If H_EST is greater than the set value (as in Case 2 of the terrain exploration), the system confirms that there is a real step in front of the robot, as shown in Fig. 9c. After the front flipper angles reach their limit (85° for this robot), the robot stops moving forward and adjusts the posture of its four arms for climbing the step, as shown in Fig. 9d. Then the flippers (A and A′, as defined in the crawler controls section) and all the boom joints of the arms (θ2 and θ2′, as shown in Fig. 3) rotate down to lift the robot body. The control rules are given in Eqs. 28 and 29. $$\theta_{2} = \left\{ \begin{array}{ll} \theta_{2}^{\prime} + \Delta E; &\quad H < H_{EST} \;{\text{or}}\; S < L \\ \theta_{2}^{\prime} - \Delta E; &\quad H > H_{EST} \;{\text{and}}\; S > L \;{\text{and}}\; \theta_{2}^{\prime} > 0 \\ \end{array} \right.$$ $$A = \left\{ \begin{array}{ll} A^{\prime} - \Delta F; &\quad H < H_{EST} \;{\text{or}}\; S < L \\ A^{\prime} + \Delta F; &\quad H > H_{EST} \;{\text{and}}\; S > L \;{\text{and}}\; A^{\prime} < 0 \end{array} \right.$$ where ΔE and ΔF are adjustment amplitudes for the boom joints and flipper joints, and L is a value determined by the robot structure (0.6 × 500 mm, where 500 mm is the length of the robot body). At the same time, the friction force generated between the crawlers and the step lets the robot move forward slowly, as shown in Fig. 9e. In this stage we do not try to keep the positions of the arm endpoints fixed, because the robot pitch angle changes rapidly during climbing and the joints would have to move too far within one 0.02-s sampling period (about 5° for a boom joint) to compensate. When the robot COG height (H in Fig. 6) becomes higher than the estimated step height (H_EST in Eq. 9) and the moving distance (S in Fig. 5) reaches the set value (0.3 m, chosen from the robot structure shown in Fig. 3 so that the robot remains stable on the top of the step), the flippers and arms are released, as in Fig. 9f–h. Sequence of robot climbing step in CLM
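Returning to the climbing rules of Eqs. 28 and 29, the following sketch shows one way to apply them in each sampling period. The "hold" behaviour when neither branch applies (for example, when θ2′ has already returned to 0) is an assumption, since the equations define only the two listed branches; angles are assumed to be in radians, and ΔE, ΔF are the adjustment amplitudes defined above.

```python
def climb_step_update(theta2, A, H, H_EST, S, L, dE, dF):
    """Boom-joint and flipper commands while climbing a step (Eqs. 28-29).
    H: current COG height, H_EST: estimated step height,
    S: distance travelled since climbing started, L: required travel distance."""
    if H < H_EST or S < L:
        # Still lifting the body: booms push down, flippers rotate down.
        theta2_cmd = theta2 + dE
        A_cmd = A - dF
    else:
        # Body is above the step edge and far enough forward: release.
        theta2_cmd = theta2 - dE if theta2 > 0 else theta2
        A_cmd = A + dF if A < 0 else A
    return theta2_cmd, A_cmd
```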
Experimental setting In this section, we describe the implementation of the proposed system and its evaluation using a virtual reality (VR) simulator [8]. The VR simulator can log a large number of useful parameters, and a well-designed simulator allows control strategies to be verified quickly at low cost. Before the effectiveness and safety of the proposed control system are confirmed, experiments with the real robot would be dangerous. The e-OCTOPUS in the VR simulator has the same configuration as the real robot, including size, color, power, and weight. Control system and hardware The robot system mainly consists of the control interface, an upper computer, an on-board computer, and e-OCTOPUS, as shown in Fig. 10. The specifications of e-OCTOPUS are listed in Table 3, and the main dimensions of the robot are shown in Fig. 3. The control interface includes joysticks and monitors: operators input commands from four 7-DOF joysticks and four pedals, and visual information from environment cameras or in-vehicle cameras is displayed on two 42-in. monitors. The joysticks and pedals are connected to A/D boards, from which the robot controller reads the operator input. The VR simulator and the robot controller run on an upper computer with a Linux operating system and were developed on open-source software (ROS and Gazebo). For safety, a semi-autonomous control mode (one operator) and a fully manual control mode (two operators) are both integrated into the robot control; usually the semi-autonomous system is executed, and the two operators can take over control authority when necessary. In the semi-autonomous mode, the operator only inputs the moving direction, and the control system automatically drives the robot to realize that command. The upper computer communicates with the on-board computer through the TCP/IP protocol at 50 Hz. Robot-state information collected by the on-board computer is sent to the upper computer; combined with the operator input, the detailed control commands are then sent back to the on-board computer. Robot system frame Table 3 Detailed specification of e-OCTOPUS Tasks setting The aim of the proposed CLM and its control system is to give MAMFRs like OCTOPUS better mobility and terrain-adaptation capability in extreme environments, so the following experiments were designed with these two aspects in focus. Getting over obstacle (test of mobility in an unstructured environment) To facilitate comparison and analysis, we simplified the obstacle to a step; a robot that can get over a higher step has better obstacle-traversal capability. To test the CLM control mode, the robot had to finish three fundamental disaster response tasks: climbing a one-terrace step, climbing a two-terrace step from the front, and climbing a two-terrace step from the side. According to the structure of OCTOPUS shown in Fig. 3, OCTOPUS can climb a step lower than 250 mm (the length of the flippers) without using its arms. We therefore set the height of the steps to 400 mm [8], more than twice the height of steps in residential areas. Two locomotion modes (CCM and CLM) were used to get over the one-terrace step, and CLM was also used for the other two tasks. In CCM, the robot was manually controlled by two operators using ample visual feedback images; in CLM, a single operator only input the moving direction, based on an image from an in-vehicle camera showing the environment in front of the robot, and the CLM control system knew nothing about the environment in advance. Passing through rough road (test of mobility in an unknown environment) In this experiment, a 500-mm-high concrete block lay in the robot's path, with a gap of about 10 mm between the right side of the robot and the block, so some of the robot arms would collide with this obstacle and had to adapt to it automatically. This information was also unknown to the control system. According to the experimental aims, we analyzed the results from the following aspects. One-terrace step getting-over capability Figure 11 shows the locomotion sequences of OCTOPUS in the two locomotion modes. With the help of the environment cameras in the simulator, the operators could fully understand the robot state and the environment around the robot, and they could also estimate the step height and the distance between the robot and the step from the feedback images. In the semi-autonomous CLM, in contrast, the control system was totally unaware of this information; everything it used came from the sensors mounted on the robot, the same as on the real robot, so for the semi-autonomous control system the robot was running in an unknown, unstructured environment. The two locomotion methods, CCM and CLM, were compared in our experiment, as shown in Fig. 11. In CCM the robot was used as a conventional multi-crawler robot, and the arms were not used when climbing the step. Figure 11a shows the robot's original state, in which every joint angle is zero. Figure 11b shows the locomotion sequence in CCM: two skilled operators manually controlled the robot, obtaining robot-state information from the monitors and communicating with each other to control the robot cooperatively. We intended the arms not to contact the ground during the CCM experiment, because in the real world violent collisions between robot parts and the ground could break the robot and should be avoided.
Considering that such collisions would be inevitable if all four arms were in the lower position, we lifted the rear arms during this experiment. Because the front arms would not collide with the ground, and to reduce the disturbance to the COG position caused by lifting arms, we kept the front arms in their original positions. Figure 11c shows the locomotion sequence of the robot controlled by the semi-autonomous system based on CLM: one operator controlled only the robot's movement direction, while the gait adjustment and step climbing were controlled automatically by the system. Both CCM and CLM were tested five times. In CCM, the operators tried their best, but the robot could not reach the top of the step in any trial, giving a success rate of 0%; we can therefore regard a 400-mm step as exceeding the crawler-only climbing capability of OCTOPUS. For CLM, the success rate was 100%. Figure 12 shows the robot pitch and roll angles in the two experiments. The operators tried twice to climb the step in CCM, but the robot could not finish the task and turned over, as shown in Fig. 11a4. In CLM, with the assistance of the arms, OCTOPUS climbed the 400-mm step easily, and the largest pitch angle was only 0.6 rad. The roll angle of the robot in CLM did not change much during the task, its largest value being 0.03 rad. Compared with CLM, the roll angle in CCM was relatively large, reaching −0.14 rad, although this angle alone would not cause the robot to lose balance. Pitch angle of robot in two modes Compared with CCM, CLM has better obstacle-traversal capability in unstructured environments; it takes full advantage of the structural characteristics of MAMFRs and turns them into a functional superiority. In CLM the arms must be adjusted to follow the robot's motion, so the robot speed is lower than in CCM. Robot stability when getting over a one-terrace step While the robot is moving, its stability margin can be calculated by means of analytic geometry. Figure 13 shows the robot stability margins in the two experiments. The smallest stability margin (SM) of the robot in the whole process was 0.15 m, which is quite large given the structure of the robot. The robot running in CLM clearly has a larger margin throughout the process; especially while climbing the step, the support of the arms improves not only the climbing ability but also the stability. Stability margin of robot in two modes In CLM, the angle of each joint changes as the robot moves to adapt to the terrain. The corresponding joints (i.e., swing, boom, and elbow, as shown in Fig. 3) of the four arms follow almost the same pattern, so here we analyze only the right front (RF) arm and the right rear (RR) arm. Figure 14 shows the joint angles of the RF and RR arms while the robot walks on the ground and climbs the step. There are six phases in this process, corresponding to the design shown in Fig. 9. The first phase is initialization: the crawlers and four arms contact the ground to support the robot like a four-legged robot, as shown in Fig. 14 (phase A). As the robot moves, all the arm joints are adjusted in real time to keep the contact points stable, as shown in Fig. 14 (phase B). Phase B1 indicates that the RR arm was adjusting its posture to transfer its endpoint to a new contact point on the ground, and B2 indicates that after the RF arm reached a new contact point, the joints of the RR arm were controlled in real time to follow the motion of the robot body (see the arm moving section).
In phase C, the RR arm stopped because the robot's front arm had contacted the step, and the RF arm was waiting for the posture-adjustment order. Phase D shows the RR arm adjusting its posture in preparation for climbing the step, and phase E indicates the arms assisting the robot to climb the step. In the last phase, F, the robot COG reached a suitable position, and the robot released its flippers and arms so that it stopped steadily on top of the step. Joints angle of robot RR and RF arms in task Figure 14 also shows that not all the joints follow the set rules exactly. For example, according to Eqs. 19–21 the value of θ2 should be 0.6 rad at t = T2, but in Fig. 14 it is 0.7 rad. We attribute this to the response speed of the boom joint. Each joint angle is controlled in a half-closed-loop manner: we send the command to the joint controller, which executes it with its own closed-loop control. The communication frequency of the simulator is 50 Hz, so the joints should finish every command within 0.02 s. Although the PID gains of each joint have been tuned, the response is still not fast enough; for example, when the control command is 0.6 rad the encoder reads 0.55 rad, so the final boom angle ends up larger than the set value (0.6 rad). Getting over a two-terrace step from the front Besides the one-terrace step, the CLM control mode was also tested on a two-terrace step whose total height is also 400 mm. The locomotion sequence is shown in Fig. 15; the robot completes this task easily when OCTOPUS is controlled in CLM mode. Figure 16 shows the roll and pitch angles and the stability margin of the robot during this task. Compared with the data from the one-terrace task, the maximum pitch angle is smaller (0.59 rad) and the minimum stability margin is larger (0.17 m). We believe this is because a two-terrace step has two edges, which provide additional support; basically, for the same total height, a multi-terrace step is easier to get over. Climb two-terrace step from the front using CLM mode Key parameters during climbing two-terrace step from the front using CLM Getting over a two-terrace step from the side In real disaster response work, the objects under the left and right sides of the robot do not always have the same height, so the robot should be able to get over asymmetric obstacles; we simulated this situation in our simulator. In this task the robot had to get over a two-terrace step from the side; the locomotion sequence is shown in Fig. 17. Because the terrace heights under the left and right crawlers differ, the arms and flippers of the robot must adapt to the terrain separately, as shown in Fig. 17(5)–(9). Figure 18 shows the key parameters of the robot during this task. Because of the asymmetric terrain, the robot exhibits its largest roll angle in this task, reaching −0.25 rad, although the pitch angle does not differ much from the other two tasks. For the same reason, the smallest stability margin in this task is only 0.07 m, clearly less than the smallest margin in the other tasks. Climb two-terrace step from the side using CLM mode Key parameters during climbing two-terrace step from the side using CLM OCTOPUS controlled in CLM mode can thus handle the three most fundamental scenarios designed here, and we can reasonably expect it to adapt to other terrains such as slopes and irregular ground.
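The stability margins reported in Figs. 13, 16, and 18 are computed "by means of analytic geometry" over the support points, but the paper does not give the formula. The sketch below therefore uses the standard static definition, which is an assumption about the authors' method: the minimum distance from the horizontal projection of the COG to the edges of the convex support polygon formed by all crawler and EPRA contact points. The inside/outside sign check is omitted for brevity.

```python
import math

def stability_margin(contact_xy, cog_xy):
    """Static stability margin: minimum distance from the horizontal COG
    projection to the edges of the convex hull of the ground-contact points."""
    hull = _convex_hull(contact_xy)
    edges = [(hull[i], hull[(i + 1) % len(hull)]) for i in range(len(hull))]
    return min(_point_segment_dist(cog_xy, a, b) for a, b in edges)

def _cross(o, a, b):
    # z-component of (a - o) x (b - o); >0 means a counter-clockwise turn.
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def _convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(map(tuple, pts)))
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def _point_segment_dist(p, a, b):
    ax, ay = b[0]-a[0], b[1]-a[1]
    px, py = p[0]-a[0], p[1]-a[1]
    t = max(0.0, min(1.0, (px*ax + py*ay) / (ax*ax + ay*ay)))
    cx, cy = a[0] + t*ax, a[1] + t*ay
    return math.hypot(p[0]-cx, p[1]-cy)
```

For instance, adding an extra EPRA contact in front of a rectangular crawler footprint enlarges the support polygon, so a forward-shifted COG yields a larger returned margin, which is the effect seen when CLM is compared with CCM in Fig. 13.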
Robot terrain adaptability Figure 19 shows the robot locomotion sequences that robot RF arm collided with an obstacle and automatically adapted it. The angle of joints in RF arm and elbow torque are shown in Fig. 20. In phase A, robot moved forward, arms transferred and moved with robot motion. After arm contacted with obstacle, the torque of elbow joint would reach to the setting value as shown in Fig. 20. When system detected this, all of the joints in that arm stopped in this sample period, and arm state moved to arm moving phase from the next sample period. In Fig. 20, robot RF arm met the obstacle at t1, and moved on the top of this obstacle in phase B. After that, left the obstacle top and contact with the ground at t2, and then it was normal arm transfer and moving phases, which are shown as phases D and E in Fig. 20b. In short, depending on sensor information and robot mechanical characteristics, arms can well adapt unknown terrain in CLM. Robot locomotion sequences for adapting unknown terrain RF arm contact with obstacle and ground Compared with previous researches, the new locomotion method CLM has less dependency on environmental information obtaining sensors. It fills the gap that control DRRs in extreme environment without enough environmental sensors. Basically, OCTOPUS can finish the evaluated tasks under CLM control mode. Except that the advantages we have talked about in above, there are also some things could be improved. Firstly, the time efficiency is low. We find that the spending time of CLM in tasks is longer than conventional control method, and we think it is caused by low arms moving speed, because to adapt robot gait, crawler speed cannot be too high. In principle, crawler and wheeled robots have higher speed than quadrupled robots [18]. This is one of the disadvantages of CLM. Secondly, the physical and mathematical model is relatively simple. There are still some complex terrains and special cases that we did not consider in this paper. Thirdly, the control method is designed for four-arm four-crawler robot, for other kinds of multi-arm multi-crawler robot, the control algorithms should be changed depend on the features of robot. But the control ideas and methods are same. At last, the slippage of crawler and arms should be comprehensively considered, because it may cause recognition error. Conclusion and future works This paper addressed a locomotion control system based on CLM for simplifying the control of multi-arm multi-crawler robot and improving its mobility and irregular terrain adaptation in unstructured environment. Some key problems and the corresponding solutions were proposed. In addition, the related mathematical models were built, and control system for CLM was developed. Finally, this control system was verified using a VR simulator. Compared with manually control CCM, semi-autonomous controlled CLM has better mobility and stability in unstructured environment. Through the mathematical model and robot structure characters, some simple but important parameters can be calculated. Experimental results also shown that OCTOPUS has the ability to adapt unknown terrain in CLM. CLM combines crawling and walking locomotion mode. This is our first attempt, and we understand the method presented in this paper is not perfect. 
In the future work, we will verify this control mode in real robot and optimize our mathematic model, in addition, we also want to build a more reasonable evaluating system to comprehensively study this kind of locomotion and combine other terrain exploration method to further improve robot control performance. We also try to build an integrated control system which include CCM, AWM and CLM control modes to make robot adapt different situation. Due to the limit of simulator, not all the import functions and indexes about robot could be tested, so we will do more experiments using real robot. Ban Y (2002) Unmanned construction system: present status and challenges. In: 2002 International symposium on automation and robotics in construction. pp 241–246 Kamegawa T, Yamasaki T, Igarashit H, Matsuno F (2004) Development of the snake-like rescue robot "KOHGA". In: 2004 IEEE international conference on robotics and automation (ICRA). pp 5081–5086 Tsukagoshi H, Sasaki M, Kitagawa A, Tanaka T (2005) Jumping robot for rescue operation with excellent traverse ability. In: 2005 IEEE/RSJ int. conf. intelligent robots and systems (IROS). pp 841–848 Carey W, Kurz M, Matte D (2012) Novel EOD robot design with dexterous gripper and intuitive teleoperation. In: 2012 IEEE international conference on world automation congress. pp 1–6 Okada Y, Nagatani K, Yoshida K (2009) Semi-autonomous operation of tracked vehicles on rough terrain using autonomous control of active flippers. In: 2009 IEEE/RSJ international conference on intelligent robots and systems (IROS). pp 2815–2820 Nagatani K, Yamasaki A, Yoshida K (2008) Semi-autonomous traversal on uneven terrain for a tracked vehicle using autonomous control of active flippers. In: 2008 IEEE/RSJ international conference on intelligent robots and systems (IROS). pp 2667–2672 Quan Q, Ma S (2009) A modular crawler-driven robot: mechanical design and preliminary experiments. In: 2009 IEEE/RSJ international conference on intelligent robots and systems (IROS). pp 639–644 Kamezaki M (2016) Design of four-arm four-crawler disaster response robot OCTOPUS. In: 2016 IEEE international conference on robotics and automation (ICRA). pp 2840–2845 Chen K, Kamezaki M (2016) Fundamental development of a virtual reality simulator for four-arm disaster rescue robot OCTOPUS. In: 2016 IEEE international conference on advanced intelligent mechatronics (AIM). pp 721–726 Hodoshima R, Doi T, Fukuda Y (2004) Development of TITAN XI: a quadruped walking robot to work on slopes. In: 2004 IEEE/RSJ international conference on intelligent robots and systems (IROS). pp 790–797 González de Santos P, Garcia E, Estremera J (2006) Quadruped locomotion, 1st Edn. Springer, London, pp 17–19 Chaves M, Eustice M (2016) Efficient planning with the Bayes tree for active SLAM. In: 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS). pp 4664–4671 Mu B, Liu S, Paull L (2016) SLAM with objects using a nonparametric pose graph. In: 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS). pp 4602–4609 Dharmasiri T, Lui V, Drummond T (2016) MO-SLAM: multi object SLAM with run-time object discovery through duplicates. In: 2016 IEEE/RSJ int. conf. intelligent robots and systems (IROS). pp 1214–1221 Liu Y, Liu G (2010) Interaction analysis and online tip-over avoidance for a reconfigurable tracked mobile modular manipulator negotiating slopes. IEEE/ASME Trans Mechatron 15:623–635. 
https://doi.org/10.1109/TMECH.2009.2031174 Nagatani K, Yamasaki A, Yoshida K (2008) Improvement of the operability of a tracked vehicle on uneven terrain using autonomous control of active flippers. In: 2008 IEEE/RSJ international conference on intelligent robots and systems (IROS). pp 2717–2718 Choi D, Kim J, Cho S (2012) Rocker-Pillar: design of the rough terrain mobile robot platform with caterpillar tracks and rocker bogie mechanism. In: 2012 IEEE/RSJ international conference on intelligent robots and systems (IROS). pp 3405–3410 Fukuda T, Hasegawa Y, Sekiyama K, Aoyama T (2012) Multi-locomotion robotic systems. pp 127–129 Wooden D, Malchano M, Blankespoor K (2010) Autonomous navigation for BigDog. In: 2010 IEEE international conference on robotics and automation (ICRA). pp 4736–4741 Dang A, La H, Horn J (2017) Distributed formation control for autonomous robots following desired shapes in noisy environment. In: 2016 IEEE international conference on multisensor fusion and integration for intelligent systems (MFI). pp 285–290 Papatheodorou S, Tzes A, Giannousakis K (2017) Experimental studies on distributed control for area coverage using mobile robots. In: 2017 mediterranean conference on control and automation (MED). pp 690–695 Ning B, Jin J, Zuo Z (2017) Distributed fixed-time cooperative tracking control for multi-robot systems. In: 2017 IEEE international conference on robotics and automation (ICRA). pp 833–838 Formal'sky A, Chevallereau C, Perrin B (2000) On ballistic walking locomotion of a quadruped. Int J Robot Res 19(8):743–761 Chen W, Low K, Yeo S (1999) Adaptive gait planning for multi-legged robots with an adjustment of center-of-gravity. Robotica 17:391–403 KC took the lead in proposing CMP, experimentation, programming, and wrote this paper as corresponding author. MK helped to design and optimize gait, mathematic model, and also helped to revise the manuscript. TKt, TKn and KA helped to build electric OCTOPUS, implement experiments and analyze data. KI, MS and KI helped to build electric OCTOPUS. SS provided technical advice. All authors read and approved the final manuscript. This research was supported in part by the Industrial Cluster Promotion Project in Fukushima Pref., in part by the Institute for Disaster Response Robotics, Future Robotics Organization, Waseda University, in part by the Research Institute for Science and Engineering, Waseda University, and in part by the China Scholarship Council (CSC). Modern Mechanical Engineering, Graduate School of Creative Science and Engineering, Waseda University, 17 Kikui-cho, Shinjuku-ku, Tokyo, 162-0044, Japan: Kui Chen, Mitsuhiro Kamezaki, Takahiro Katano, Taisei Kaneko, Kohga Azuma, Tatsuzo Ishida & Shigeki Sugano. Kikuchi Seisakusho Co., Ltd., 2161-21 Miyama-cho, Hachioji-shi, Tokyo, 192-0152, Japan: Masatoshi Seki & Ken Ichiryu. Correspondence to Kui Chen. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Keywords: Disaster response robot, Multi-arm multi-crawler, Semi-autonomous, Self-adapt
Methodology of predicting novel key regulators in ovarian cancer network: a network theoretical approach. Md. Zubbair Malik, Keilash Chirom, Shahnawaz Ali, Romana Ishrat, Pallavi Somvanshi & R. K. Brojen Singh (ORCID: orcid.org/0000-0001-6693-0401). BMC Cancer volume 19, Article number: 1129 (2019).

Identification of key regulators in the ovarian cancer (OC) network is important for finding potential drug targets and for prevention of this cancer. This study proposes a method to identify the key regulators of this network and their importance. The protein-protein interaction (PPI) network of ovarian cancer is constructed from 600 curated genes obtained from six standard and important ovarian cancer databases (some of the genes are experimentally verified). We propose a method to identify key regulators (KRs) from the complex ovarian cancer network based on tracing backbone hubs that participate at all levels of organization, characterized by the Newman-Girvan community finding method. Knock-out experiments, the constant Potts model and survival analysis are used to characterize the importance of the key regulators in regulating the network. The PPI network of ovarian cancer is found to obey hierarchical scale-free features, organized by a topology of heterogeneous modules coordinated by diverse leading hubs. The network and modular structures follow fractal rules, with the absence of the centrality-lethality rule, which enhances the efficiency of signal processing in the network and constitutes loosely connected modules. Within the framework of network theory, we devise a method to identify a few key regulators (KRs) from a huge number of leading hubs; these KRs are deeply rooted in the network, serve as its backbone, and act as key regulators from the grassroots level up to the complete network structure. Using this method we were able to identify five key regulators, namely AKT1, KRAS, EPCAM, CD44 and MCAM, out of which AKT1 plays a central role in two ways: first, it serves as the main regulator of the ovarian cancer network, and second, it serves as the key cross-talk agent of the other key regulators, while exhibiting a disassortative property. The regulating capability of AKT1 is found to be the highest and that of MCAM the lowest. The popularities of these key hubs change in an unpredictable way at different levels of organization, and the absence of these hubs causes a massive amount of wiring/rewiring energy to propagate over the whole network. The network compactness is found to increase as one goes from the top level to the bottom level of the network organization.

Ovarian cancer (OC) is a heterogeneous cancer that begins in an ovary. Although most OCs are non-metastatic or have a low potential to migrate, ovarian tumors can metastasize to other parts of the body and can be fatal. In 2016, it was reported that 22,280 women would receive a new diagnosis of ovarian cancer and 14,240 women would die from it [1, 2]. As the eighth-most common cause of death, OC is considered the 'silent killer' because of the lack of symptoms in its initial stages [1, 2]. In the preceding few decades, genetic studies have identified genetic alterations that are crucial in the pathogenesis of ovarian cancer. The swift growth of next-generation sequencing technologies has recently made it possible to identify many somatic (genetic) alterations in OC. Some of these somatic alterations can be assessed as passengers, which on the other hand pose a challenge in classifying any cancer [3].
Identification of the molecular drivers associated with a specific cancer type/sub-type is crucial for understanding its heterogeneity and for seeking treatment. In recent studies, network-based calculations have been applied to multiple data sources, including copy number variation and microRNA expression, to identify driver genes [4]. Epithelial ovarian cancer (EOC) remains the most lethal gynecological cancer in developed nations. Among its sub-types, mucinous epithelial ovarian cancer (mEOC) represents approximately 3% of EOC. EOC can be divided into the more common type II (aggressive) and type I (slow growing) cancers [5]. Low-grade (type I) tumors present in young women and have a high prevalence of KRAS and BRAF mutations, but a low prevalence of Tp53 mutations (which characterize type II). These prevalent mutations qualify as a favourable prognostic factor for type I EOC, and identifying MAPK mutations is useful in guiding clinical treatment [6]. Most new cases present with advanced-stage disease, for which initial treatment consists of cyto-reductive surgery and chemotherapy [7], while patients who develop advanced EOC typically show an initial clinical response after primary therapy, and long-term cure is seen mainly in patients exposed to multiple chemotherapeutic agents [8]. Malignant ascites is the most common consequence of EOC; it causes significant symptoms and impacts the patient's quality of life, particularly in women with recurrent ovarian cancer [9]. The current paradigm for understanding OC revolves around the identification of critical regulators in the transcription factor (TF) networks present in OC cells, as these TFs might be important therapeutic targets [10]. Network approaches aim to understand the mechanisms and predict the complex interactions within biological networks, and how numerous basic functions are performed by the organization of their components. Large-scale omics data have been used to map genes to specific diseases [11]. Network theory provides a significant approach to understanding the dynamics and topological properties of complex systems and correlating them with their functional modules [12]. Most existing networks fall into a few classes, such as hierarchical, scale-free, random and small-world networks [13]. Among these, the hierarchical type attracts special attention from biologists because of its sparsely distributed hubs that regulate the network and the appearance of modules [13, 14]. The appearance of modules in this type of network is significant because they can correspond to independent functional factors in the network that comply with their own laws [13]. Therefore, we focus our study on the ovarian cancer network, developed from experimentally verified ovarian cancer genes and their interactions, to analyze potential key regulator (KR) genes which may serve as potential target genes. We also aim to explain its topological properties, from which we attempt to identify potential key regulators, a few of which have an elementary effect, and to describe their regulating activities and mechanisms [14].

Workflow of construction of ovarian cancer network and techniques of analysis

The detailed workflow of the ovarian cancer network construction and analysis is given in Fig. 1, and the detailed techniques are given in Additional file 1. We briefly describe the workflow of the analysis below.

Fig. 1: Schematic diagram of the workflow of the methods implemented in the study of the ovarian cancer network.

Acquisition of ovarian cancer data

We extracted six lists of ovarian cancer genes from six highly cited cancer databases and then integrated the common ovarian cancer genes from all the lists in order to retrieve a list of only experimentally verified ovarian cancer genes. The resources used are the COSMIC database, GeneCards database, Ovarian Kaleidoscope database, Dragon database of ovarian cancer, curated ovarian database and OCGene database, which focus on different aspects of cancer biology (see Additional file 1). We assimilated 2000 genes from these repositories. This list of genes was processed with CGI code written in Perl to remove duplicate genes, in terms of redundant names as well as aliases used for gene names (for details see Additional file 1). This filtering yields 660 unique genes out of the 2000. The list of genes was further curated manually as well as with a Cytoscape 3.7.1 plugin, then mapped to UniProt (January 2016), and finally we arrive at 600 genes (for details see Additional file 1).

Network construction of the curated genes

We followed the one gene-one protein concept to construct the primary protein-protein interaction network from the curated list of 600 genes using the GeneMANIA app [15], and verified and uploaded the file in Cytoscape 3.7.1 [16]. The constructed network consists of 4818 nodes with 16320 possible connections among them (for details see Additional file 1).

Method to identify key regulators

We first find the list of the first seventy (one can vary this number) leading hubs characterized by the degree distribution of the complete network. Communities of the network are then extracted using Newman and Girvan's community finding method [17]; this defines the first level of organization of the network. We then trace the leading hubs in each community, and isolated hubs are discarded. Next, sub-communities of all the communities are found using the same community finding method, and the leading hubs are traced in each sub-community at this second level of organization, followed by rejection of isolated or truncated hubs. The process is repeated until the smaller communities can no longer be broken (most probably at the motif level), and the hub genes reaching this level are traced. The set of leading hubs reaching this lowest level is termed the set of key regulators. These KRs can be considered the backbone of the network; they are deeply rooted and involved at each level of the organization, from the top level to the bottom level and vice versa (a code sketch of this procedure is given below, after the next subsections).

Topological analyses of the networks

The topological properties of the ovarian cancer PPI network are characterized by the degree distribution, clustering coefficient, connectivity and centrality (betweenness, closeness and eigenvector) measurements (for details see Additional file 1). We use these parameters to understand the topological changes when the network is perturbed [14, 18, 19].

Tracking of genes in ovarian cancer network

The most influential genes in the OC network were identified first by calculating the centrality measures.
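The tracing procedure described above can be summarized in a short sketch. This is an illustrative reading of the method, not the authors' code: the graph `G`, the list of leading hubs, the number of levels and the stopping size are placeholders, and NetworkX's Girvan-Newman routine stands in for the community finding step (the paper also mentions Louvain clustering for the tracking).

```python
# Illustrative sketch of the hub-tracing procedure, not the authors' original code.
import networkx as nx
from networkx.algorithms.community import girvan_newman

def trace_key_regulators(G, leading_hubs, max_levels=5, min_size=4):
    """Return hubs that remain connected inside some community at every level."""
    survivors = set(leading_hubs) & set(G.nodes)
    communities = [set(G.nodes)]                      # level 0: the whole network
    for _level in range(1, max_levels + 1):
        next_communities = []
        for comm in communities:
            sub = G.subgraph(comm)
            # stop splitting near the motif level or when nothing is left to split
            if sub.number_of_nodes() <= min_size or sub.number_of_edges() == 0:
                next_communities.append(set(comm))
                continue
            # first Girvan-Newman split of this community into sub-communities
            next_communities.extend(next(girvan_newman(sub)))
        # keep only hubs that are not isolated inside their sub-community
        survivors = {h for h in survivors
                     for c in next_communities
                     if h in c and G.subgraph(c).degree(h) > 0}
        communities = next_communities
    return survivors  # hubs surviving to the deepest level = candidate key regulators
```

In this reading, a hub is retained at a level only if it is still connected inside one of that level's sub-communities; the hubs that survive down to the deepest level correspond to the candidate key regulators.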
Since higher-degree nodes have higher centrality values, the top 70 highest-degree nodes were considered among the hub nodes of the network for tracing the key regulators which may play important roles in regulating the network. The tracing of nodes from the primary network up to the motif level was then done on the basis of the representation of the respective nodes (proteins) across the sub-modules obtained from the Louvain method of community detection/clustering. Finally, the hub nodes (proteins) that were represented in the modules at every hierarchical level were considered key regulators of the OC network.

Knock-out experiment

The role of KRs can be investigated by performing a knock-out experiment on the network and studying the variations in the topological properties of the OC PPI network. In this experiment the KRs are systematically removed from the ovarian cancer network, and the topological properties of the network and its sub-communities are calculated at each level of organization and compared with the original ones before the KRs are removed (for details see Additional file 1) [14].

Network compactness estimation

The degree to which the nodes are strongly linked in a network and in its associated communities, together with their sizes, can be estimated by the LCP-DP (local community paradigm decomposition plot) algorithm [20] (for details see Additional file 1). We used this method to analyze the organizational behavior of the ovarian cancer network.

Constant Potts model

The energy distribution in a network can be estimated by using the constant Potts model [21] (for details see Additional file 1). This technique was applied to understand the importance of the KRs in the network and their regulating activities.

Survival analysis

Survival analysis of the key regulators was performed using KM plotter [22, 23]. All the datasets were taken along with the TCGA dataset for the analysis, with a total overall-survival sample size of n = 1657, using the ovarian cancer probe set. Overall survival probabilities were plotted on the Y axis, time of survival in months on the X axis, and a log-rank p-value < 0.05 was taken as statistically significant between the low and high expressions of genes.

Gene ontology and pathways analysis

We performed GO and pathway analysis using the DAVID web server (Database for Annotation, Visualization and Integrated Discovery) [24, 25].

Ovarian cancer network follows hierarchical scale free features

The ovarian cancer network, the proposed complex regulatory network to be studied, is constructed from the experimentally verified curated genes (see Methods and Table 1). The topological properties of this network, namely the probability of degree distribution P(k), neighborhood connectivity CN(k) and clustering coefficient C(k), follow power-law behaviour as a function of k (Fig. 2, first row, denoting level 0). The power-law fits on the data sets of the topological variables of the ovarian cancer network are performed and confirmed following the standard statistical fitting procedure suggested by Clauset et al. [26], where all statistical p-values for all data sets, estimated against 2500 random samplings, are found to be ≥ 0.1 (critical value) and the goodness of fit is found to be ≤ 0.33 (Fig. 2, first row, blue fitting line). The exponent values are retrieved from the power-law fittings.
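As a rough companion to the fitting procedure described above, the sketch below estimates the degree-distribution exponent with the `powerlaw` package (which implements the Clauset-Shalizi-Newman maximum-likelihood fit) and a plain log-log slope for quantities such as C(k) or CN(k). The bootstrap p-value against 2500 random samplings and the goodness-of-fit threshold used in the paper are not reproduced here; `G` is assumed to be the OC PPI network.

```python
import networkx as nx
import numpy as np
import powerlaw

def degree_exponent(G):
    """MLE fit of P(k) ~ k^-gamma on the degree sequence (Clauset et al. style)."""
    degrees = [d for _, d in G.degree() if d > 0]
    fit = powerlaw.Fit(degrees, discrete=True, verbose=False)
    return fit.power_law.alpha, fit.power_law.xmin

def loglog_slope(k, y):
    """Least-squares slope of log(y) versus log(k), e.g. for C(k) or CN(k)."""
    k, y = np.asarray(k, float), np.asarray(y, float)
    m = (k > 0) & (y > 0)
    return np.polyfit(np.log(k[m]), np.log(y[m]), 1)[0]

# gamma, xmin = degree_exponent(G)   # gamma ~ 2.16 is reported for the OC network
```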
For the entire ovarian cancer network, the results are summarized as follows,

$$ \Gamma(k)=\begin{bmatrix}\Gamma_1\\ \Gamma_2\\ \Gamma_3\end{bmatrix}=\begin{bmatrix}P\\ C\\ C_N\end{bmatrix}=\begin{bmatrix}k^{-\gamma}\\ k^{-\alpha}\\ k^{-\beta}\end{bmatrix};\qquad \begin{bmatrix}\gamma\\ \alpha\\ \beta\end{bmatrix}\longrightarrow\begin{bmatrix}2.16\\ 0.9\\ 0.67\end{bmatrix} \qquad (1) $$

Table 1: Gene Ontology pathway enrichment analysis of level-5 communities of the ovarian cancer network.

Fig. 2: Topological properties of the ovarian cancer network. a. Behaviour of the degree distribution P(k), clustering coefficient C(k), neighbourhood connectivity CN(k), betweenness CB(k), closeness CC(k) and eigenvector centrality CE(k) as a function of degree k for the original network and the five-key-regulator knock-out network at different levels of organization. b. Changes in the exponents of the six topological parameters due to the key-regulator knock-out experiment. c. Energy distribution in the network quantified by the Hamiltonian as a function of network level. d. Changes in the network modules/sub-modules due to the five-key-regulator knock-out experiment; the dotted modules/sub-modules are the broken-down modules/sub-modules.

These topological properties of the ovarian cancer PPI network are very close to the ideal hierarchical properties of a network, whose exponent values are γ ≈ 2.26 (mean-field theoretical value) [13], α = 1 [27, 28] and β = 0.5 [29]. The topological functions Γi, i = 1, 2, 3, satisfy Mandelbrot's classical definition of a fractal [30], defined by the following self-affine process for any scale factor λ,

$$ \frac{\Gamma_i(\lambda k)}{\Gamma_i(k)}=\lambda^{D_i};\qquad D=\begin{bmatrix}D_1\\ D_2\\ D_3\end{bmatrix}=\begin{bmatrix}-\gamma\\ -\alpha\\ -\beta\end{bmatrix} \qquad (2) $$

where Di corresponds to the fractal dimension of the ith topological parameter. Hence the ovarian cancer network follows fractal, or hierarchical scale-free, features. The negative fractal dimensions indicate enriched randomness in the network organization with sample variability [31]. Since CN(k) has a negative power in k, i.e. β = 0.67, the ovarian cancer network exhibits disassortativity, which means that the formation of a rich club by a large number of the leading/major hubs in the network is unlikely [29]. The centrality measurements, namely closeness CC, betweenness CB and eigenvector centrality CE, characterize the importance of the hubs and their regulating mechanisms (see Methods) and obey the following power-law behaviours (Fig. 2, first row),

$$ \Lambda(k)=\begin{bmatrix}\Lambda_1\\ \Lambda_2\\ \Lambda_3\end{bmatrix}=\begin{bmatrix}C_B\\ C_C\\ C_E\end{bmatrix}=\begin{bmatrix}k^{\varepsilon}\\ k^{\eta}\\ k^{\delta}\end{bmatrix};\qquad \begin{bmatrix}\varepsilon\\ \eta\\ \delta\end{bmatrix}\longrightarrow\begin{bmatrix}1.15\\ 0.083\\ 1.11\end{bmatrix} \qquad (3) $$

The power-law behaviour of these centrality measurements CC, CB and CE is again confirmed by applying the Clauset et al. power-law fitting procedure, where the p-values are found to be greater than 0.1 and the goodness of fit greater than 3.5.
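The centrality scaling in Eq. (3) can be probed in the same spirit. The sketch below computes betweenness, closeness and eigenvector centrality with NetworkX and estimates the exponents ε, η and δ as log-log slopes against degree; it is only an approximation of the fitting actually used in the paper, and the `max_iter` value is an assumption made to help the eigenvector iteration converge.

```python
import networkx as nx
import numpy as np

def centrality_exponents(G):
    """Rough log-log slopes of C_B(k), C_C(k), C_E(k) against degree k."""
    k = dict(G.degree())
    measures = {
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
    }
    slopes = {}
    for name, c in measures.items():
        pts = [(k[n], c[n]) for n in G if k[n] > 0 and c[n] > 0]
        x = np.log([p[0] for p in pts])
        y = np.log([p[1] for p in pts])
        slopes[name] = np.polyfit(x, y, 1)[0]
    return slopes  # approximate (epsilon, eta, delta)
```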
Since only a small number of high-degree nodes have large CC, CB and CE values, the number of strongly regulating hubs that can regulate the ovarian cancer network is small. Hence, moderately low-degree nodes (proteins/genes) dominate the network, and the organization, functioning and regulation of the network are done mostly by these low-degree proteins/genes. However, the sparsely distributed few major/leading hubs can play significant roles in maintaining and regulating the stability of the ovarian cancer network. Further, the power-law behaviour of the centrality measurements given by equation (3) follows the following self-affine process for any scale factor c,

$$ \frac{\Lambda_i(ck)}{\Lambda_i(k)}=c^{\mathbb{D}_i};\qquad \mathbb{D}=\begin{bmatrix}\mathbb{D}_1\\ \mathbb{D}_2\\ \mathbb{D}_3\end{bmatrix}=\begin{bmatrix}\varepsilon\\ \eta\\ \delta\end{bmatrix} \qquad (4) $$

The positive values of 𝔻i indicate the distribution of these measurements in the network [31]. Since the topological variables show a fractal nature, as evident from equations (2) and (4), the ovarian cancer network exhibits fractal or hierarchical scale-free features.

Key regulators in ovarian cancer network and properties

Since the ovarian cancer PPI network has a hierarchical scale-free nature, the emergence of modules in the network is significant; both the sparsely distributed few leading hubs and the modules regulate and organize the network. Applying Newman and Girvan's standard community finding algorithm, the modular structure and its organization at different levels were established (see Methods) [14, 17, 32]. With this algorithm, the ovarian cancer network is found to be hierarchically organized through five levels of organization, as shown in Fig. 3a. The corresponding modularity QN and LCP-correlation per node, as functions of the level of organization, are found to decrease as one goes from a higher level to a lower level (Fig. 3b).

Fig. 3: Identification of key regulators of the ovarian cancer network. a. Organization of the modules/sub-modules of the network. b. Plots of QN and LCP-corr as a function of network level. c. Characterization of the seventy leading hubs of the network by degree (k) and identification of key regulators; colour codes indicate the popularities of the leading hubs.

Depending on the degree of the nodes in the ovarian cancer network, the first seventy leading hubs were identified (Fig. 3c). The pertinent question is whether these hubs are actual target genes that regulate the network at a fundamental level. Hence, we propose to define key regulators as those proteins/genes that are deeply rooted from the top level to the bottom level of the ovarian cancer network organization and vice versa, and that play the role of the backbone of the network organization (Fig. 3c). These KRs need not necessarily be major leading hubs in the ovarian cancer network; their popularities vary randomly at different levels of organization. Removal of the major hubs does not cause network disruption, since the ovarian cancer network exhibits hierarchical characteristics. However, the removal of key regulators from the ovarian cancer network can cause maximum perturbations, both locally and globally, especially at deeper levels of organization.
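A minimal way to reproduce the modularity-versus-level trend in Fig. 3b is sketched below, assuming `partitions_by_level` maps each level s to the list of node sets obtained from the community detection at that level; levels whose communities do not cover the full network are evaluated on the induced subgraph.

```python
import networkx as nx
from networkx.algorithms.community import modularity

def modularity_per_level(G, partitions_by_level):
    """Newman-Girvan modularity Q_N for the partition found at each level."""
    Q = {}
    for s, parts in partitions_by_level.items():
        covered = set().union(*parts)          # nodes actually partitioned at this level
        Q[s] = modularity(G.subgraph(covered), parts)
    return Q
```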
The perturbations then propagate through the different levels of organization, from the top level to the bottom and from the bottom level to the top, causing topological variation in the ovarian cancer network. Hence, we propose that these key regulators could be driver target genes of ovarian cancer. Following the definition of key regulators, we were able to identify five KRs, namely AKT1, KRAS, EPCAM, CD44 and MCAM (Fig. 3c, Figs. 4 and 5), which are the key regulators (organizers) of the ovarian cancer network. Surprisingly, the top eleven leading hubs are not found to be key regulators, as they fail to reach the deepest/lowest level of organization. Out of these five KRs, a few have a low profile/popularity (CD44 and MCAM) but are able to regulate down to the bottom level of organization. Further, these key regulators start separating from one another after the third level; KRAS and EPCAM go together, AKT1 moves alone, and CD44 and MCAM go together down to the motif (triangular-type) level (Figs. 4 and 5). These key regulators act as signal propagators from the top level to the bottom level and vice versa, controlling the inherent properties and the stability of the network.

Fig. 4: Tracing of key regulators of the network through the different levels of the network.

Fig. 5: Network/modules/sub-modules at different network levels which accommodate the leading hubs and key regulators, and the probability distributions of the key regulators as a function of level.

To understand the regulating capability of the five KRs, we define the probability Py(x[s]) that a KR y has x[s] links/edges at level s out of the total number of links/edges E[s] of the ovarian cancer network/module/sub-module in which that KR is accommodated, given by,

$$ P_y\!\left(x^{[s]}\right)=\frac{x^{[s]}}{E^{[s]}},\qquad \forall\, E^{[s]}\neq 0 \qquad (5) $$

The calculated Py of all five KRs increases as one goes from the top level to the bottom level (as s increases), and Py → 1 as s → 5 (Fig. 5, lowest panels). This reveals that the regulating capability of each key regulator becomes more prominent at deeper levels of organization. Further, the inherent regulating capability of each key regulator, \( P_y^{[I]} \), can be approximately measured by averaging over Py as follows,

$$ P_y^{[I]}=\frac{1}{M+1}\sum_{s=0}^{M=5} P_y\!\left(x^{[s]}\right) \qquad (6) $$

The calculated values of \( P_y^{[I]} \) show that \( P_{AKT1}^{[I]}>P_{KRAS}^{[I]}>P_{EPCAM}^{[I]}>P_{CD44}^{[I]}>P_{MCAM}^{[I]} \). The inherent regulating capability of AKT1 is the highest and that of MCAM is the lowest.

Local perturbations driven by ovarian cancer key regulators

The knock-out experiment of the five key regulators from the ovarian cancer network highlights the local perturbations driven by these KRs and their effect on the global network properties. The removal of these key regulators from the complete ovarian cancer network brings significant variations in the topological properties of the network (Fig. 2a, first row): γ and α change significantly at the complete-network level (Fig. 2b), whereas β changes slightly. Similarly, the variations in the centrality exponents (ε, η and δ) are also significant (Fig. 2b).
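Equations (5) and (6) translate directly into a few lines of code. In the sketch below, `levels` is assumed to be the list of module/sub-module subgraphs (one per level, s = 0…5) that contain the key regulator `y`; the averaging in Eq. (6) is taken over the levels actually counted, which matches M + 1 when the KR appears at every level.

```python
import networkx as nx

def regulating_capability(levels, y):
    """Level-wise P_y (Eq. 5) and its average, the inherent capability P_y^[I] (Eq. 6)."""
    P = []
    for sub in levels:                        # one module/sub-module subgraph per level
        E = sub.number_of_edges()
        if E == 0 or y not in sub:
            continue
        P.append(sub.degree(y) / E)           # Eq. (5): x^[s] / E^[s]
    inherent = sum(P) / len(P) if P else 0.0  # Eq. (6): average over the counted levels
    return P, inherent
```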
Since all five KRs are present in a single module/sub-module up to the third level of organization, we consider only that module/sub-module for the five-KR knock-out experiments (Fig. 2, second, third and fourth rows). It is noticeable from the variations in the exponents of the topological parameters (Fig. 2b) that the perturbation of the ovarian cancer network increases as one goes to deeper levels, i.e. from the top towards the bottom of the organization. After the third level, the removal of these KRs almost breaks down the sub-modules present in the remaining (deeper) levels (Fig. 2d). This demonstrates that the local perturbation caused by the five KRs together is maximum at deeper levels and propagates through the other levels from bottom to top. To understand the variation in energy distribution in the ovarian cancer network and its modules/sub-modules at the different levels of organization, we calculated the Hamiltonian of the network and of the modules/sub-modules in the five-KR knock-out experiment (Fig. 2c) (see Methods). If \( \Delta H_s = H_s^{[O]} - H_s^{[R]} \) is the change in the Hamiltonian due to the removal of the five KRs at level s, where \( H_s^{[O]} \) and \( H_s^{[R]} \) are the Hamiltonians of the original and KR-removed networks (and of the corresponding modules/sub-modules) respectively, then we obtain,

$$ \Delta H_s>0,\ \forall s;\qquad \begin{cases} \dfrac{\Delta H_s}{H_s}\longrightarrow 0\Big|_{s:3\to 0} & (\text{for } s\le 3)\\[2mm] \dfrac{\Delta H_s}{H_s}\longrightarrow 1\Big|_{\forall s} & (\text{for } s>3)\end{cases} \qquad (7) $$

where \( H_s = H_s^{[O]} \). This demonstrates that the removal of the key regulators causes a massive amount of wiring/rewiring energy that propagates over all the levels of the ovarian cancer network organization.

Network compactness preserves self-organization in ovarian cancer

The compactness of the network/modules/sub-modules as a function of size is calculated using the LCP-DP algorithm, which expresses \( \sqrt{LCL} \) (local community links) as a function of CN (common neighbourhood) (see Methods). We find that the number of strongly connected networks/modules/sub-modules (LCP-corr ≥ 0.8) is greater than the number of loosely connected ones (LCP-corr < 0.8) at s = 1 (the upper level of organization) (see Fig. 6). The sizes of the modules at s = 1 range from 10 to 180 nodes. However, as one moves from top to bottom (s > 1), the number of strongly connected modules decreases compared with the loosely connected modules/sub-modules. Since the ovarian cancer network is tightly bound at the upper level and at the complete-network level, the network is organized so as to maintain its own properties against any external and internal perturbations (both local and global).

Fig. 6: LCP correlation as a function of CN for different modules/sub-modules and their distribution.

The LCP-DP analysis of the network/modules/sub-modules shows that, except for one particular module and its corresponding sub-modules at different levels, all the other modules/sub-modules become loosely packed, with decreasing size, as one moves from the top to the bottom levels. The particular module/sub-module whose size and compactness do not change much up to the third level (LCP-correlation in the range 0.936–0.994 and size 175–180) is the module/sub-module in which all five KRs are accommodated.
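The Hamiltonian comparison above can be sketched with the standard constant Potts model energy, H = −Σc (ec − γ nc²), evaluated over the communities of one level before and after removing the key regulators. The resolution parameter γ is an assumed placeholder (the value actually used is specified in Additional file 1), and the partition is kept fixed when the KRs are removed.

```python
import networkx as nx

KEY_REGULATORS = ["AKT1", "KRAS", "EPCAM", "CD44", "MCAM"]

def cpm_hamiltonian(G, communities, gamma=0.5):
    """Constant Potts model energy of a partition: H = -sum_c (e_c - gamma * n_c**2)."""
    H = 0.0
    for c in communities:
        sub = G.subgraph(c)
        H += -(sub.number_of_edges() - gamma * sub.number_of_nodes() ** 2)
    return H

def delta_H(G, communities, key_regulators=KEY_REGULATORS, gamma=0.5):
    """Delta H_s = H_s^[O] - H_s^[R] for the KR knock-out at one level."""
    H_original = cpm_hamiltonian(G, communities, gamma)
    G_removed = G.copy()
    G_removed.remove_nodes_from([g for g in key_regulators if g in G_removed])
    comms_removed = [set(c) - set(key_regulators) for c in communities]
    H_removed = cpm_hamiltonian(G_removed, comms_removed, gamma)
    return H_original - H_removed
```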
This means that the module/sub-module is tightly regulated by these five KRs along with the nodes connected to them (Fig. 6, second panel in each row). However, the removal of these five KRs does not cause network breakdown (Fig. 2). Hence, this module/sub-module still tries to preserve its own properties against any local and global perturbations.

Centrality-lethality is ruled out in the ovarian cancer network

The ovarian cancer network is close to an ideal hierarchical type of network, and thus the emergent modules/sub-modules are tightly bound at the top levels of organization. The removal of the key regulators does not cause network breakdown (Fig. 2). Even though the one module in which the five KRs are accommodated, and a few of its corresponding sub-modules, break down after the third level, the other modules/sub-modules remain stable to protect the ovarian cancer network properties. Thus, the ovarian cancer network rules out the centrality-lethality rule [33]. Nevertheless, the identified key regulators have significant regulating activities in the ovarian cancer network, reflected in the variations in the topological properties (Fig. 2) and the other network parameters (Figs. 5 and 6) of the network and its associated communities at different levels of organization.

AKT1 plays central role in regulating ovarian cancer network

AKT1, which is a modulator of apoptotic signalling and an important therapeutic target gene in ovarian cancer [34], is found to be tightly bound with other important leading ovarian cancer regulator genes over a large range of network/modular sizes, from 400 down to 100 nodes depending on the level of organization, as indicated by the LCP-DP calculations (see Methods, Fig. 7). In these calculations, the networks/modules/sub-modules in which AKT1 is present are considered, and their LCP-correlations are found to be in the range [0.986 − 0.994], revealing the strong compactness of these networks/modules/sub-modules at different levels of organization. Further, AKT1 is found to act as a main regulator which cross-talks with the remaining KRs (CD44, MCAM, KRAS and EPCAM) and Tp53 (Fig. 7c). Since the clustering coefficients of all five KRs in the network extracted from these five KRs are equal to one (Fig. 7c), the identified five KRs are again found to interact strongly, which is the signature of rich-club formation in the network [35]. However, if we consider the whole network, we could not capture the signature of this rich-club formation of these KRs, as evident from the negative-exponent dependence of the network connectivity, CN(k) ∼ k−β [29, 36] (Fig. 2a), and the negative-exponent dependence of the rich-club parameter R on the degree k, R ∼ k−θ [35]. Since the rich-club data (R versus k) for the whole network and for all modules and sub-modules follow the same scaling dependence, R ∼ k−θ, the network organization exhibits the absence of a central controlling mechanism by AKT1 and its rich club with the other KRs. Hence, even though AKT1 is a significantly important KR in the ovarian cancer network, it never dominates the network organization at the different levels of organization.

Fig. 7: Properties of AKT1. a. Tracing of AKT1 in the network/modules/sub-modules at different network levels. b. The variation of √LCL as a function of CN for different levels. c. Organization of the five key regulators with Tp53. d. Directional tracing of AKT1 at different network levels.
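The rich-club check discussed above can be approximated with NetworkX's rich-club coefficient and a log-log fit of R(k) ∼ k^(−θ); a negative slope over the hub range is consistent with the reported absence of a dominant rich club at the whole-network scale. The `normalized=False` choice and the simple least-squares fit are simplifications for illustration.

```python
import networkx as nx
import numpy as np

def rich_club_exponent(G):
    """Rough log-log slope of the rich-club coefficient R(k) against degree k."""
    g = nx.Graph(G)                              # rich-club requires a simple undirected graph
    g.remove_edges_from(nx.selfloop_edges(g))
    rc = nx.rich_club_coefficient(g, normalized=False)
    pts = [(k, v) for k, v in rc.items() if k > 0 and v > 0]
    logk = np.log([k for k, _ in pts])
    logR = np.log([v for _, v in pts])
    return np.polyfit(logk, logR, 1)[0]          # a negative slope approximates -theta
```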
e. Rich-club parameter as a function of k. f. PH and √LCP as a function of level.

Now, the relative energy that AKT1 can have at different levels of network organization can be obtained as follows. Consider Hs to be the Hamiltonian at any level of network organization s, where s = 0, 1, …, 5 (the network corresponding to s = 0 is the complete network). If ms is the number of modules/sub-modules at level s, then the Hamiltonian per module/sub-module at level s is given by,

$$ H_s=\frac{1}{m_s}\sum_{j=1}^{m_s}\sum_{c_j}\left(e_{c_j}-\gamma\, n_{c_j}^2\right) $$

where \( n_{c_j} \) is the size of the jth module/sub-module at level s. Then the Hamiltonian of AKT1 in the module it belongs to can be obtained as,

$$ H^{[AKT1]}=-\left(e^{[AKT1]}-\gamma\, n_s^{[AKT1]}\right) $$

where \( e^{[AKT1]} \) and \( n_s^{[AKT1]} \) are the number of edges AKT1 has and the size of the module/sub-module to which AKT1 belongs, respectively. Now the relative energy AKT1 can have at any level s can be obtained by,

$$ U_{AKT1}(s)=\frac{H_{AKT1}}{H_s}\sim e^{-\phi s},\qquad \forall s\in I \qquad (8) $$

where ϕ is a constant. This relative energy of AKT1, UAKT1, represents the energy associated with AKT1 constrained by the level of organization, which could be related to the activities of AKT1 at different levels s. In the ovarian cancer network, the activity of AKT1 decreases as one goes down from the top to the bottom of the network (Fig. 7f), indicating that its regulating activity is more important at the complete-network level than at the basic level. Further, we calculate the relative compactness of the module/sub-module which accommodates AKT1 at each level s by using,

$$ W_{LCP}=\frac{L_{AKT1}}{\sum_{j=1}^{m_s} L_j};\qquad L\to LCP\text{-}corr,\ \forall s\in I \qquad (9) $$

where the sum is over the non-zero LCP-correlations of the modules/sub-modules at each level s. The estimated values of WLCP (Fig. 7f) show that the relative compactness increases as one goes down from the top to the bottom level, indicating strong interaction of the nodes at the lower levels of organization.

Ovarian cancer network exhibits active regulating mechanism of key regulators with modules

Since the ovarian cancer network follows hierarchical network features, the emergent modules/sub-modules become important regulating units at the different levels of organization, along with the active participation of the KRs in network phenotypes. The multi-functionality of the network could be the manifestation of the interacting emergent modules/sub-modules at each network level, keeping the network properties stable. The KRs could be important workers integrating the components in each module/sub-module they belong to for efficient functioning, through optimal signal processing among the components organized by these KRs. The five identified KRs in fact form rich-club phenomena at each level of organization; however, the impact of this rich-club activity at each level is weak enough that this perturbation is unable to cause a significant variation in the overall network topological properties. Besides, the ovarian cancer network/modules/sub-modules are mostly tightly bound due to the strong interaction among the nodes/genes. Hence, the removal of these important KRs does not cause network breakdown, indicating the absence of a central control system, which is a signature of self-organization [37]. The network also exhibits topological properties close to the ideal hierarchical network, as indicated by equation (3) (Fig.
2), and therefore the regulating mechanism in the network is active (far from equilibrium) in order to maintain the network properties. At the same time, the topological properties of the ovarian cancer network show a power-law nature, suggesting that the OC network obeys fractal behaviour. This fractal nature, arising from the self-affine process in the network, could be a signature of self-organization in the OC network [38]. The knock-out experiment of the five key regulators from the original ovarian cancer network shows that the properties altered by the knock-out do not cause a significant disruption of the network topology. This suggests that the system does not change drastically because of the perturbation communicated by the KR knock-out; instead, the network reorganizes itself and adapts to the transformed topological properties. The ability to adapt to change for a better network organization, without breakdown of the system, is another signature of self-organization in the network [39].

Key regulators are correlated to disease progression in ovarian cancer

Ovarian cancer patients with higher expression of the AKT1 and CD44 genes had higher probabilities of survival than those with lower expression (Fig. 8). This might be an indication that these genes act against the progression of the cancer and could play tumor-suppressing roles. On the other hand, higher expression of the KRAS and EPCAM genes lowers the probability of survival among OC patients; thus their higher expression could have tumorigenic effects related to disease progression. In the case of the MCAM gene, higher and lower expression had an equal impact on the overall survival of the patients (Fig. 8). Hence, the expression of the key regulators AKT1, KRAS, EPCAM and CD44 can be correlated with an increased or decreased risk of disease progression, so they can be potential prognostic markers or drug targets in ovarian cancer.

Fig. 8: Kaplan-Meier curves of the key regulators AKT1, KRAS, EPCAM, CD44 and MCAM. p-values were calculated using the log-rank test to compare overall survival between low expression (black) and high expression (red) of the key regulator genes.

Gene ontology (GO) and pathways analysis of the community at the level-5 module

We analysed the pathways of level 5. The most significant associated pathways are given in Table 1. The p-value denotes the statistical significance of the pathways in the network. The other associated pathways that are also statistically significant, such as RAS signalling, leucocyte trans-endothelial migration and thyroid signalling, are listed in Table 1. These pathways have also been reported in many cancer types.

The complex ovarian regulatory network generated from the experimentally verified set of genes shows hierarchical features, which allows the genes to organize into a few different pathways (modules/sub-modules) in a complicated way, exhibiting the multi-functionality of the system. Since the network bears hierarchical properties, the activities of individual genes are not of primary importance; rather, their co-ordination gives rise to the different important functional activities.
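The Kaplan-Meier comparison reported above was done with KM plotter; an equivalent offline analysis is sketched below with the `lifelines` package. The data-frame columns (`os_months`, `event`, and a gene-expression column such as `AKT1`) and the median split are assumptions made purely for illustration.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_expression(df, gene="AKT1"):
    """Compare overall survival between high and low expression of one key regulator."""
    high = df[gene] >= df[gene].median()      # split the cohort at the median expression
    kmf = KaplanMeierFitter()
    for label, mask in [("high", high), ("low", ~high)]:
        kmf.fit(df.loc[mask, "os_months"], df.loc[mask, "event"], label=label)
    test = logrank_test(df.loc[high, "os_months"], df.loc[~high, "os_months"],
                        df.loc[high, "event"], df.loc[~high, "event"])
    return test.p_value                       # p < 0.05 taken as significant in the paper
```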
At the same time, some of the leading hubs in the network have significantly important functions, for example, integrating a large number of lower-degree nodes in the network for organizing and regulating activities, serving as a means of intra- and inter-module cross-talk among other essential genes, maintaining the network's inherent properties and stability, and optimizing signal processing in the network. However, out of these leading hubs, a few hubs, which we term key regulators, acquire significantly more important roles in keeping the network properties in a better perspective (the ability to adapt to a fit change) [40]. In the ovarian cancer network, out of the seventy leading hubs we were able to identify five such KRs, namely AKT1, CD44, MCAM, KRAS and EPCAM. These KRs are deeply rooted in the network, act as the backbone of the network for any network regulation and activity, and could be possible target genes for disease control mechanisms. Surprisingly, the first few most popular hubs (eleven hubs) do not fall among the KRs, and the KRs need not necessarily be the most popular hubs; some of them keep a low profile in the network. These KRs form a tightly bound rich club, but the regulating activity of this rich club does not show up in the network properties because the number of members in the rich club is negligibly small compared to the whole network. Further, these five KRs fall in a single module/sub-module up to the fourth level of organization, indicating close working of the KRs, and then start separating afterwards. Some of these identified KRs have been shown experimentally to be important backbone genes in ovarian cancer. For example, AKT1 has experimentally been found to be a therapeutic target gene [34]; CD44 is found to be a target gene which serves as the backbone for paclitaxel prodrugs [41]; MCAM is reported to be an important metastasis marker involved in the invasion of ovarian cancer cells [42]; and KRAS is identified as an important genetic marker of ovarian cancer [43]. Even though EPCAM is known to be involved in ovarian cancer regulation [44], we propose that EPCAM could also be an important possible target gene in ovarian cancer. Since the ovarian cancer network bears hierarchical properties, removal of these KRs does not cause network breakdown; rather, the network reorganizes to another perspective and adapts to it. Since the five KRs are associated with a single module/sub-module, one can target (as possible drug target genes) these KRs and their accommodating module/sub-module in ovarian cancer. However, removal of the KRs from the module they belong to causes modular breakdown after a certain level of organization in the network. Hence, one needs to investigate this module/sub-module for the critical target of this disease. Higher and lower expression of the key regulatory genes can be correlated with the progression of tumorigenesis and overall survival among ovarian cancer patients. In addition to the topological properties of the modules of the protein-protein interaction network of ovarian cancer, the important pathways predicted from the network modules are found to be associated with different other types of cancer. This study proposes a new method to identify key regulators of the ovarian cancer network.
The ovarian cancer network is a tightly bound network and follows certain properties: first, the network rules out the centrality-lethality rule (there is no central control system); second, the network topology obeys fractal laws; third, out of the KRs, AKT1 plays a central role in regulating the ovarian cancer system. However, a large-scale analysis of the dynamical network, involving different biologically well-defined modules, is still needed to understand the time evolution of ovarian cancer and the spatio-temporal behaviour of the target genes. The data used in the current study are available from the corresponding author on reasonable request.

Abbreviations: C(k): clustering coefficient; CB: betweenness centrality; CC: closeness centrality; CE: eigenvector centrality; CN(k): neighborhood connectivity; CN: common neighborhood; DAVID: Database for Annotation, Visualization and Integrated Discovery; EOC: epithelial ovarian cancer; Hs: Hamiltonian function; KRs: key regulators; LCL: local community links; LCP-DP: local community paradigm decomposition plot; OC: ovarian cancer; P(k): degree distribution; PPI: protein-protein interaction; TCGA: The Cancer Genome Atlas; TF: transcription factor.

Siegel RL, Miller KD, Jemal A. Cancer statistics. CA Cancer J Clin. 2016;66(1):7–30.
Lengyel E. Ovarian cancer development and metastasis. Am J Pathol. 2010;177(3):1053–64.
Bell D, Berchuck A, Birrer M, Chien J, Cramer DW, Dao F, et al. Cancer Genome Atlas Research Network. Integrated genomic analyses of ovarian carcinoma. Nature. 2011;474(7353):609–15.
Zhang D, Chen P, Zheng CH, Xia J. Identification of ovarian cancer subtype-specific network modules and candidate drivers through an integrative genomics approach. Oncotarget. 2016;7(4):4298.
Xu W, Rush J, Rickett K, Coward JI. Mucinous ovarian cancer: a therapeutic review. Crit Rev Oncol Hematol. 2016;102:26–36.
Singer G, Oldt R, Cohen Y, Wang BG, Sidransky D, Kurman RJ, Shih IM. Mutations in BRAF and KRAS characterize the development of low-grade ovarian serous carcinoma. J Natl Cancer Inst. 2003;95(6):484–6.
Bristow RE, Chang J, Ziogas A, Anton-Culver H. Adherence to treatment guidelines for ovarian cancer as a measure of quality care. Obstet Gynecol. 2013;121(6):1226–34.
Ozols RF, Bundy BN, Greer BE, Fowler JM, Clarke-Pearson D, Burger RA, Baergen R. Phase III trial of carboplatin and paclitaxel compared with cisplatin and paclitaxel in patients with optimally resected stage III ovarian cancer: a Gynecologic Oncology Group study. J Clin Oncol. 2003;21(17):3194–200.
Rudy SS, Charlotte CS, Shannon NW, Robert LC, Gordon B, Millsb C, Larissa AM. The management of malignant ascites and impact on quality of life outcomes in women with ovarian cancer. Expert Review of Quality of Life in Cancer Care. 2016;1(3):231–8.
Kulbe H, Iorio F, Chakravarty P, Milagre CS, Moore R, Thompson RG, Braicu I. Integrated transcriptomic and proteomic analysis identifies protein kinase CK2 as a key signaling node in an inflammatory cytokine network in ovarian cancer cells. Oncotarget. 2016;7(13):15648–61.
Dittrich MT, Klau GW, Rosenwald A, Dandekar T, Müller T. Identifying functional modules in protein-protein interaction networks: an integrated exact approach. Bioinformatics. 2008;24(13):i223–31.
Barzel B, Barabási AL. Universality in network dynamics. Nat Phys. 2013;9(10):673–81.
Ravasz E, Somera AL, Mongru DA, Oltvai ZN, Barabási AL. Hierarchical organization of modularity in metabolic networks. Science. 2002;297(5586):1551–5.
Ali S, Malik MZ, Singh SS, Chirom K, Ishrat I, Singh RKB. Exploring novel key regulators in breast cancer network. PLoS One. 2018;13(6):e0198525.
Sara M, Ray D, Farley DW, Grouios C, Morris Q. GeneMANIA: a real-time multiple association network integration algorithm for predicting gene function. Genome Biol. 2008;9(1):S4. Paul S, Markiel A, Ozier O, Baliga NS, Jonathan T, Wang DR, Nada A, Schwikowski B, Ideker T. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13(11):2498–504. Newman ME, Girvan M. Finding and evaluating community structure in networks. Phys Rev E. 2004;69(2):026113. Malik MZ, Ali S, Singh SS, Ishrat R, Singh RKB. Dynamical states, possibilities and propagation of stress signal. Sci Rep. 2017;7:40596. Malik MZ, Alam MJ, Ishrat R, Agarwal SM, Singh RKB. Control of apoptosis by SMAR1. Mol BioSyst. 2017;13(2):350–62. Cannistraci CV, Alanis-Lobato G, Ravasi T. From link-prediction in brain connectomes and protein interactomes to the local-community-paradigm in complex networks. Sci Rep. 2013;3:1613. Traag VA, Krings G, Van Dooren P. Significant scales in community structure. Scietific Report. 2013;3:2930. Nagy A, Ĺanczky A, Menyhárt O, Győrffy B. Validation of miRNA prognostic power in hepatocellular carcinoma using expression data of independent datasets. Sci Rep. 2018;8:9227. Gyorffy B, Lanczky A, Szallasi Z. (2012). Implementing an online tool for genome-wide validation of survival-associated biomarkers in ovarian-cancer using microarray data of 1287 patients. Endocrine-Related Cancer, 10;19(2):197-208. Huang DW, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID Bioinformatics Resources. Nature Protoc. 2009;4(1):44–57. Huang DW, Sherman BT, Lempicki RA. Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists. Nucleic Acids Res. 2009;37(1):1–13. Clauset A, Shalizi CR, Newman ME. Power-law distributions in empirical data. SIAM Rev. 2009;51(4):661–703. Ravasz E, et al. Hierarchical organization in complex networks, Phys. Rev., E. 2013;67:026112. Barabasi AL, Oltvai ZN. Network biology: understanding the cell's functional organization. Nat Rev Genet. 2004;5(2):101–13. Pastor-Satorras R, Vázquez A, Vespignani A. Dynamical and correlation properties of the Internet. Physical review letters. 2001;87(25):258701. Mandelbrot B, Fisher A & Calvet L. A multifractal model of asset returns. Cowles Foundation Discussion, 1997; Paper No. 1164. Mandelbrot BB. Negative fractal dimensions and multifractals. Physica A: Statistical Mechanics and its Applications. 1990;163(1):306–15. Anam F, Tazyeen S, Ahmed MM, Alam A, Ali S, Malik MZ, Ali S, Romana I. Assessment of the key regulatory genes and their Interologs for Turner Syndrome employing network approach. Sci Rep. 2018;8(1):10091. Jeong SP, Mason AL, Barabási AL, Oltvai ZN. Lethality and centrality in protein networks. Nature. 2001;411:41–2. Altomare DA, Wang HQ, Skele KL, Rienzo A, Klein-Szanto AJ, Godwin AK, Testa JR. AKT and mTOR phosphorylation is frequently detected in ovarian cancer and can be targeted to disrupt ovarian tumor cell growth. Oncogene. 2004;23(34):5853–7. Colizza V, Flammini A, Serrano MA, Vespignani A. Detecting rich-club ordering in complex networks. Nat Phys. 2006;2(2):110–5. Barrat A, Barthelemy M, Pastor-Satorras R, Vespignani A. The architecture of complex weighted networks. PNAS, USA. 2004;101(11):3747–52. Kauffman SA. The origins of order: Self organization and selection in evolution. USA: Oxford University Press; 1993. Heylighen F. The science of self-organization and adaptivity. 
The encyclopedia of life support systems. 2001;5(3):253–80. Ashby WR. Principles of the self-organizing system. In Facets of Systems Science, Springer US. 1991:521–36. Holland JH. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology. Control and Artificial Intelligence: MIT Press, Cambridge MA; 1992. Auzenne E, Ghosh SC, Khodadadian M, Rivera B, Farquhar D, Price RE, Klostergaard J. Hyaluronic acid-paclitaxel: antitumor efficacy against CD44 (+) human ovarian carcinoma xenografts. Neoplasia. 2007;9(6):479–86. Wu Z, Wu Z, Li J, Yang X, Wang Y, Yu Y, Zhang Z. MCAM is a novel metastasis marker and regulates spreading, apoptosis and invasion of ovarian cancer cells. Tumor Biol. 2012;33(5):1619–28. Ratner E, Lu L, Boeke M, Barnett R, Nallur S, Chin LJ, Hui P. A KRAS-variant in ovarian cancer acts as a genetic marker of cancer risk. Cancer research. 2010;70(16):6509–15. Baeuerle PA, Gires O. EpCAM (CD326) finding its role in cancer. Br J Cancer. 2007;96(3):417–23. The authors thank to Irengbam Rocky Mangangcha, Department of Zoology, Deshbandhu College, University of Delhi, New Delhi India for numerous discussions of our work. M.Z.M. was financially supported by the Department of Health and Research, Ministry of Health and Family Welfare, Government of India under young scientist File No.R.12014/01/2018-HR, FTS No. 3146887. R.K.B.S. is financially supported by UPE-II, New Delhi, India, under sanction no. 101, DST PURSE, Jawaharlal Nehru University, New Delhi, India, DBT (COE), India. Md. Zubbair Malik and Keilash Chirom contributed equally to this work. School of Computational & Integrative Sciences, Jawaharlal Nehru University, New Delhi, 110067, India Md. Zubbair Malik & R. K. Brojen Singh Centre for Interdisciplinary Research in Basic Sciences, Jamia Millia Islamia, New Delhi, 110025, India Shahnawaz Ali & Romana Ishrat Department of Biotechnology, TERI University, New Delhi, 110070, India Keilash Chirom & Pallavi Somvanshi Md. Zubbair Malik Keilash Chirom Shahnawaz Ali Romana Ishrat Pallavi Somvanshi R. K. Brojen Singh M.Z.M. and R.K.B.S. conceived the model and did the numerical experiment. M.Z.M. and K. C. prepared the figures of the numerical results. M.Z.M. and analyzed and interpreted the simulation results and wrote the manuscript. M.Z.M., K. C., S.A., P.S., R.I. and R.K.B.S. involved in the study and reviewed the manuscript. All authors read and approved the final manuscript. Correspondence to R. K. Brojen Singh. Supplementary file of methods. Malik, M.Z., Chirom, K., Ali, S. et al. Methodology of predicting novel key regulators in ovarian cancer network: a network theoretical approach. BMC Cancer 19, 1129 (2019). https://doi.org/10.1186/s12885-019-6309-6
Coherent query scheme for wireless backscatter communication systems with single tag. Aminolah Hasanvand, Ali Khaleghi and Ilangko Balasingham. Accepted: 19 June 2018.

An un-coded multi-transmitter query scheme is introduced for wireless backscatter communication systems in which M transmitters and N receivers are used for single-tag connectivity (M × 1 × N). The main idea is to harden the wireless communication channel with a tag device for high data rate readings. The proposed method is designed for multipath fading channels in which the backscatter channel is a multiplicative Rayleigh channel. A coherent transmit query scheme is used to increase the tag-reflected signals and simultaneously alter the fading statistics in the forward path by implementing receiver feedback. Full-diversity performance and array gain are achieved using receiver diversity without requiring any tag antenna diversity; therefore, the tag device remains simple. Mathematical expressions for the probability density function (PDF) of the backscatter channel are presented using closed-form equations. Bit error rate (BER) simulations for binary phase shift keying (BPSK) modulation are computed numerically. A diversity gain of 10 dB is obtained by using a 2 × 1 × 1 scheme. The results show that transmit diversity for single-tag usage performs the same as tag antenna diversity, at the expense of a moderate transmitter complexity. The tag device remains intact, as required by simplicity and size constraints. Also, the system realization becomes more feasible due to the available space on the transmitter side for achieving uncorrelated forward channel conditions. The feasibility is demonstrated using software-defined radio (SDR) implementations.

Keywords: Backscatter communication, Transmit diversity, Antenna diversity, Backscatter channel feedback

Radio frequency identification (RFID) is well known in the commercial sector. Passive or semi-passive tag devices are used in RFID to modulate the radiated RF carrier signals in the wave propagation channel, from which the reader can demodulate the tag-specific data [1]. This wireless communication approach avoids the use of an active transmitter in the tag device, saving power and space. By including a sensory data source in the tag device, real-time sensing and data transmission become feasible for applications in wireless sensor networks (WSN) [2–4]. An innovative implementation is for data-intensive applications such as video streaming using semi-passive devices. The importance is that the power associated with the transmitter task can be eliminated from the tag device. This permits cost, space and power savings for semi-passive devices, with guaranteed longevity given the available power resources. It is also possible to use the ambient signals from radio stations for the backscatter communication; the ambient signals are used to illuminate the tag device instead of the reader's transmitter [5–7]. Considering high data rate communication for data-intensive applications, we target hardening the communication channel with a semi-passive tag device in multipath fading channels. Backscatter communication is a radar approach that differs from one-way wireless communication in that the wave propagation channel comprises two distinct paths.
In the forward path, the RF carrier is emitted from the transmitter and has no information. In the return path, the tag device modulates the tag's antenna impedance that appeared as a variable radar cross section (RCS) [8]. The backward channel includes the tag-specific information in the form of amplitude or phase modulation of the radiated carrier. The remote receiver decodes the tag antenna reflections and extracts the tag's data [9]. The modulation level is determined by the number of the loads used in the tag device for switching among [10]. Thus, ultra-low power tag becomes feasible thanks to using an ultra-low power switch device instead of an active transmitter with the power-hungry frontend. The communication link performance can be calculated using Radar equations. One specific environment for the backscatter application is an indoor quasi-stationary channel. The channel can be modeled as a slow fading multipath environment. By assuming the modulation bandwidth, which is the tag antenna switching speed, less than the coherence bandwidth of the backward channel, the frequency selectivity of the channel is neglected. Thus, a Rayleigh channel in the forward and backward paths is realized for the non-line of sight (NLOS) scenarios. Considering independent channels in the forward and backward links, which can be attained by separating the transmitter and the receiver antennas within a distance more than the spatial correlation of the channel, the cascade channel becomes a multiplication of two independent Rayleigh channels. The equivalent channel is no longer Rayleigh [11], and it observes deeper fading conditions. To establish a reliable link, the statistics of the multiplicative Rayleigh channel should be positively altered in a way to convert it back to a Rayleigh or Gaussian channel. Thus, the channel can be hardened for high data rate connectivity. Antenna diversity is a practical approach that can be implemented in the data path, i.e., tag or receiver antenna diversity [12, 13]. 1.1 Related works The application of antenna diversity in the receiver side has been demonstrated [14, 15]. The diversity gain is marginal because the forward channel is always Rayleigh, and the overall channel is the multiplication of Rayleigh and diversity combined channel. Tag antenna diversity has been proposed to combat fading in the forward channel [16]. The requirement is that the tag antennas should be physically spaced with large distances, more than the spatial correlation of the channel. The minimum distance for uncorrelated channels depends on the wave angle of arrivals (AoAs) and is about half a wavelength for uniformly distributed AoAs. The correlation distance is larger if this condition cannot be attained. Therefore, full diversity gain fails to achieve in a compact tag device despite the simple structure. Further, using multiple tag antennas requires multiple and synchronous electronic switch circuits to alter the antenna impedances simultaneously over all the antennas that complicate the tag system. Using multiple transmitters does not influence the statistics of the multiplicative channels [11, 17, 18], because the underlying assumption is that the transmitter is not the information source. The only achievement with using multiple transmitter schemes is a reduced spatial correlation distance by increasing the AoAs. The multi-transmitting effect in backscatter communication has been studied in [19, 20] as a unitary query method of transmission with tag-specific space-time coding (STC). 
This method is a breakthrough and provides a significant gain by using time diversity in the forward channel besides the spatial diversity (STC coding) in the backward channel, and it outperforms the uniform query method of transmission. The drawback is the increased tag complexity and power consumption, in addition to the sophisticated transmitter scheme that should be operated in a synchronous manner with the tag device for improved performance. Another form of the multi-transmitting scheme is presented in [21, 22], where a multi-sine transmitting scheme (frequency diversity) is used to transfer more power to the tag device to achieve better performance in the reading range. 1.2 Motivation and contributions In this paper, we propose an uncoded transmit diversity scheme using minor feedback from the receiver to improve the BER (bit error rate) performance and increase the communication range by altering the overall channel statistics. We call it "minor" because only the received signal strength is used to establish coherency in the query transmitters. The receiver feedback is feasible because the transceivers are closely located, and the received signal strength can be accessed and used in the transmitter for optimal system operation. The proposed transmit diversity performs the same as tag antenna diversity [11], but with a single-tag antenna. The system performance is better than the unitary query method of transmission [19] without added complexity in the tag device. Our approach is optimum for single-tag usage. Furthermore, it can provide time reversal spatial focusing at the tag location for a targeted data reading. The main contributions of this paper are as follows: Transmit diversity with equal gain combining (EGC) is proposed at the query end with channel feedback from one receiver for intended tag connectivity. It is shown that the probability density function (PDF) and BER performances can be improved significantly with the main complexity at the reader side. The analytical expressions for the PDF of the channel for a single tag are derived, followed by BER performance calculations for binary phase shift keying (BPSK) modulation. The receiver diversity is added to show the BER performance. The transmitter query scheme is demonstrated using software-defined radio implementations. 1.3 Organization The paper is organized as follows. In Section 2, the mathematical expressions for the statistics of the backscatter communication channel are presented. The analytical expressions for the statistics of the proposed transmit query scheme with feedback channel are calculated in Section 3. Numerical simulation results for the BER performance of the transmit query for BPSK modulation are given in Section 4. Experimental validation of the approach for a single-channel implementation using a software-defined radio (SDR) is given in Section 5. In Section 6, the receiver diversity is added to the proposed backscatter system, and we show that the feedback from one receiver is sufficient to achieve full-diversity performance. Section 8 concludes this paper. 2 Backscatter communications Any multiple input multiple output (MIMO) backscatter communication system consists of three operational parts: query end, tag end, and receiving end [19]. Figure 1 shows the scheme of a backscatter communication system in a single-tag scenario. An unmodulated continuous wave (CW) signal is transmitted in the wave propagation channel. The signal paths that experience tag reflections carry tag-specific data.
The signal paths that do not experience the tag device are considered as stationary interference. At the receiving end, the interference signal is removed and the signal demodulation is conducted. The backscatter channel can be interpreted as a pinhole channel [23], and the only difference is that the data is generated at the tag (pinhole) instead of the transmitter. Illustration of backscatter communication system. The data alters the differential radar cross section (ΔRCS) of the tag device, which results in backscatter signal modulation A nominal backscatter communication setup uses a single-transmitter antenna and multiple receiver antennas, or it can include multiple antennas in the tag device. Using multiple transmitter antennas does not influence the communication system performance [11, 17, 24]. Consider an M × 1 × N backscatter communication system with M transmitters at the query end, a single-tag antenna, and N receivers, as shown in Fig. 2. Assume that the modulation bandwidth generated by the tag switching is less than the coherence bandwidth of the backward channel. Thus, the frequency selectivity of the channel is disregarded, and inter-symbol interference (ISI) is neglected. Therefore, the channel model in the forward and backward paths is expressed as Rayleigh. As shown in Fig. 2, the transmitted carrier signals are expressed as X, and the forward channel is defined by \( \mathbf{H}^{f} \), in which \( h_i^f=|h_i^f|\, e^{j\angle h_i^f}\in \mathbb{C} \) are the channel coefficients. The superposition of these signals at the tag antenna is modulated by the tag data signal s (e.g., s ∈ {−1, 1} for BPSK modulation). The backward channel, \( \mathbf{H}^{b} \), is expressed by its coefficients, \( h_i^b=|h_i^b|\, e^{j\angle h_i^b}\in \mathbb{C} \). The received signal is corrupted by the noise N (\( n_i=|n_i|\, e^{j\angle n_i}\in \mathbb{C} \)) at the i-th receiver. Scheme of a conventional M × 1 × N cascaded backscatter channel The transmitted signals might be received at the receivers without experiencing the tag path. The addition of these direct coupling signals is considered as stationary interference. Considering the narrowband signaling in the multipath channel, the channel becomes flat fading, and the received signal is expressed as: $$ \mathbf{Y}_{N\times 1}=\mathbf{X}_{1\times M}\,\mathbf{H}_{M\times 1}^{f}\,\mathbf{H}_{N\times 1}^{b}\,s+\mathbf{N}_{N\times 1}+\mathbf{D}_{N\times 1} $$ where D is the direct coupling and $$ \mathbf{x}=\left[x_1\ \ x_2\ \dots\ x_M\right],\qquad x_i=A_i e^{j\varphi_i},\quad i=1,2,\dots,M $$ $$ \mathbf{H}^{f}=\left[\begin{array}{c}h_1^f\\ h_2^f\\ \vdots\\ h_M^f\end{array}\right],\qquad \mathbf{H}^{b}=\left[\begin{array}{c}h_1^b\\ h_2^b\\ \vdots\\ h_N^b\end{array}\right],\qquad \mathbf{N}=\left[\begin{array}{c}n_1\\ n_2\\ \vdots\\ n_N\end{array}\right] $$ The term \( \mathbf{D}_{N\times 1} \) can be eliminated after signal demodulation by applying proper filtering for removing the stationary signals. This term consists of a tag structural radar cross section (RCS) and environmental clutter that do not include the tag's information [8, 25]. For a stationary channel, \( \mathbf{D}_{N\times 1} \) appears as a DC component in the demodulator, and a simple high-pass filter can remove it.
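To make the channel model above concrete, the following short Python/numpy sketch (ours, not the authors' code; the function name, parameter values, and the unit-variance channel assumption are illustrative) draws one realization of the flat-fading M × 1 × N link of Eq. (1), with the direct-coupling term D assumed to be already filtered out:

import numpy as np

rng = np.random.default_rng(0)

def backscatter_link(M=2, N=1, n_symbols=1000, snr_db=20, phases=None):
    # i.i.d. complex Gaussian (Rayleigh-envelope) forward and backward coefficients
    h_f = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    h_b = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    # transmitter carrier phases; co-phasing would set phases = -np.angle(h_f)
    if phases is None:
        phases = np.zeros(M)
    x = np.exp(1j * phases) / np.sqrt(M)          # equal power split, |x_i| = 1/sqrt(M)
    s = rng.choice([-1.0, 1.0], size=n_symbols)   # BPSK tag data
    z_f = np.sum(x * h_f)                         # carrier superposition at the tag antenna
    noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
    noise = noise_std * (rng.standard_normal((N, n_symbols))
                         + 1j * rng.standard_normal((N, n_symbols)))
    # received samples: tag-modulated backscatter plus receiver noise
    return z_f * np.outer(h_b, s) + noise

y = backscatter_link()
print(y.shape)   # (1, 1000)

Such a sketch only reproduces the statistical structure of Eq. (1); the gains collected in K and the direct-coupling term are deliberately omitted.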
By considering the constant and equally distributed transmission power for the query end, i.e., \( |x_i|=A_i=\frac{1}{\sqrt{M}},\ i=1,2,\dots,M \), the received signal at the i-th receiver becomes $$ y_i=K\left(\frac{1}{\sqrt{M}}\sum_{k=1}^{M} h_k^f\, e^{j\varphi_k}\right) h_i^b\, s+\widehat{n}_i $$ where K is a deterministic scalar coefficient including the filtering gain, the transmitted signal amplitude, and other gains, and \( \widehat{n}_i \) is colored Gaussian noise. For a single antenna in transmission, i.e., 1 × 1 × N, the received signal statistics at the i-th receiver is proportional to the product of two i.i.d. complex Gaussian variables \( h_1^f \) and \( h_i^b \) as follows, $$ y_i=K\left(h_1^f\, e^{j\varphi_1}\right) h_i^b\, s+\widehat{n}_i\ \propto\ h_1^f\, h_i^b $$ Thus, the envelope of the received signal has the PDF of a product of two Rayleigh variables. Considering the M transmitters, i.e., M × 1 × N, the superposition of the carriers at the tag antenna is modulated and then reflected into the channel. Therefore, the received signal is written as $$ y_i\ \propto\ \left(\sum_{k=1}^{M} h_k^f\, e^{j\varphi_k}\right) h_i^b=\left(h_1^f\, e^{j\varphi_1}+h_2^f\, e^{j\varphi_2}+\cdots+h_M^f\, e^{j\varphi_M}\right) h_i^b=z_i^f\times h_i^b $$ For flat fading channels with the i.i.d. complex Gaussian property, the linear combination of M such channels is again a complex Gaussian variable. The envelope of this combination is Rayleigh, and the received signal at the i-th receiver has the PDF of a product of two Rayleigh variables [26]. Therefore, the above multi-transmitter system has no improvement compared to a single-transmitter scheme [17]. 3 Multi-antenna transmission with channel feedback Under the condition that the transmitter signals are coherently summed at the tag antenna, the maximum amount of the backscatter signal is achieved at the receiver. We show how this combination alters the channel statistics. For this purpose, the forward channel must be maximized at the tag's position. Consequently, the overall gain of the forward channel should be maximized, $$ \mathrm{if}\quad \max_{\varphi_k}\left|\sum_{k=1}^{M} h_k^f\, e^{j\varphi_k}\right|,\qquad \alpha_k=\measuredangle h_k^f+\varphi_k\quad \text{results in:}\quad \left\{\forall\, m,n=1,2,\dots,M \;|\; \alpha_m=\alpha_n=\alpha\right\} $$ To obtain the condition above, the transmitters must be synchronized with a single reference clock (see Fig. 3). Also, it is necessary to alter the phase of each transmitter to achieve the co-phase summation (maximize the amplitude of the received signal) at the tag location. Therefore, applying the condition in (5) to (4) results in Illustration of M × 1 × 1 backscatter communication system with a common reference clock. The dashed line shows feedback implementation for coherent focusing of signals at the tag location $$ y_i(t)\ \propto\ \left(\sum_{k=1}^{M} h_k^f\, e^{j\varphi_k}\right) h_i^b=\left(\sum_{k=1}^{M}|h_k^f|\right) e^{j\alpha}\times |h_i^b|\, e^{j\measuredangle h_i^b}=\left(\sum_{k=1}^{M}|h_k^f|\,|h_i^b|\right) e^{j\left(\alpha+\measuredangle h_i^{b}\right)} $$ As a result, from (6), the PDF of the received signal envelope is the sum of M Rayleigh products. The forward and backward channels terminating at the tag may be correlated.
The amount of the correlation, ρ, depends on the distance between the transmitter and the receiver antennas. The elements of the forward and backscatter links can be written in Cartesian form, respectively, as \( h^f = u^f + jv^f \) and \( h^b = u^b + jv^b \), where \( u^{f,b} \) and \( v^{f,b} \) are zero-mean Gaussian random variables, \( \sim \mathrm{N}\left(0,\sigma_{f,b}^2/2\right) \), in which \( \sigma_f^2 \) and \( \sigma_b^2 \) are the variances of \( h_k^f \) and \( h_i^b \), respectively. The envelope PDF of the M × 1 × N cascaded backscatter channel is derived from that of the product of two Rayleigh random variables. The closed-form PDF for dependent and independent Rayleigh products is given in [26]. Therefore, by considering the random variable Z as the product of \( |h_k^f| \) and \( |h_i^b| \), i.e., \( Z=|h_k^f|\times |h_i^b| \), its closed-form PDF is expressed as [26]: $$ f_{z}\left(z,\rho\right)=\frac{4z\left(1-|\rho|^2\right)}{\sigma_b^2\sigma_f^2\gamma^2}\,\mathrm{I}_0\!\left(\frac{2z|\rho|}{\sigma_b\sigma_f\gamma}\right)\mathrm{K}_0\!\left(\frac{2z}{\sigma_b\sigma_f\gamma}\right),\qquad \gamma=\left(1-\rho^2\right),\ z\ge 0 $$ where \( \mathrm{I}_0 \) is the zero-order modified Bessel function of the first kind and \( \mathrm{K}_0 \) is the zero-order modified Bessel function of the second kind. The characteristic function, Φ(·), of the sum of i.i.d. random variables, \( R=\sum_{k=1}^{M} |h_k^f|\,|h_i^b| \) (the term in parentheses in (6), which is the envelope of the received signal at the i-th receiver), is the product of their individual characteristic functions (CF), and consequently, the PDF of the sum can be derived by using the inverse Hankel transform. The CF of (7) is found using the Hankel transform [27] and is raised to the power of M to yield the CF of the cascaded backscatter channel for the M × 1 × 1 system: $$ \Phi\left(\omega;\rho\right)=\left[\frac{\sigma_b^4\sigma_f^4}{16}\,\frac{\gamma^4}{\left(1-|\rho|^2\right)^2}\left(\omega^2+\frac{4\left(|\rho|-1\right)^2}{\sigma_b^4\sigma_f^4\gamma^2}\right)\times\left(\omega^2+\frac{4\left(|\rho|+1\right)^2}{\sigma_b^4\sigma_f^4\gamma^2}\right)\right]^{-M/2} $$ By using the inverse Hankel transform on (8), the PDF of the M × 1 × 1 channel can be derived. However, it is difficult to solve the related integral analytically. Therefore, two special cases are considered: ρ = 0, in which \( h_k^f \) and \( h_i^b \) are independent, and ρ = 1, where they are completely correlated. The former case can be achieved by separating the receiver and the transmitter, which is known as the bistatic topology. The latter is known as the monostatic topology, where the receiver and transmitter are co-located. The analytical calculations of the PDF for the bistatic and monostatic models, as upper and lower limits of dependency of the forward and the backward channels, are conducted. Considering the bistatic topology (ρ = 0), the CF becomes $$ \Phi\left(\omega;0\right)=\left[\frac{\sigma_b^2\sigma_f^2}{4}\,\omega^2+1\right]^{-M} $$ Applying the inverse Hankel transform to (9) yields the PDF of the M × 1 × 1 topology as $$ f_r\left(r,\rho=0\right)=r^{M}\left(\frac{2}{\sigma_b\sigma_f}\right)^{1+M}\frac{2^{1-M}}{\Gamma(M)}\,K_{(1-M)}\!\left(\frac{2r}{\sigma_b\sigma_f}\right) $$ where \( K_{(1-M)}(\cdot) \)
is the modified Bessel function of the second kind of order (1 − M), Γ(·) is the gamma function, and r is the channel envelope. By considering the monostatic topology (ρ = 1), the CF is derived as $$ \Phi\left(\omega;\rho=1\right)=\left[\sigma_b^2\sigma_f^2\,\omega^2+1\right]^{-M/2} $$ Applying the inverse Hankel transform to (11) yields the PDF of the M × 1 × 1 topology as $$ f_{r}\left(r,\rho=1\right)=r^{M/2}\left(\frac{1}{\sigma_b\sigma_f}\right)^{1+M/2}\frac{2^{1-M/2}}{\Gamma\left(M/2\right)}\,\mathrm{K}_{\left(1-M/2\right)}\!\left(\frac{r}{\sigma_b\sigma_f}\right) $$ The PDF of the envelope of the channel at the n-th receiver antenna based on the above analytical expressions is plotted in Fig. 4a. PDF of backscatter channel by increasing the number of co-phased TX transmitters. a PDF results from analytical expressions for M = 1, 2, 3, 4 transmitters with the proposed phase adjustment and coherent transmitter (the PDF for monostatic, bistatic, and Rayleigh are also illustrated for comparison). b The numerical results of PDF with channel realization Monte-Carlo channel realizations are provided to compute the PDF numerically. For this purpose, \(10^6\) realizations are conducted, and the PDF of the envelope is obtained for the bistatic and monostatic topologies for M × 1 × 1 (M = 1, 2, 3, 4). Figure 4b shows the numerical simulation results. As shown, the results of the simulation and the above analytical calculations are in complete agreement. The PDF of a conventional one-way Rayleigh fading channel is also illustrated. We note that the PDFs are normalized to unit transmit power. Deep fading in the PDF of the cascaded backscatter channel compared to the one-way Rayleigh channel can be seen in Fig. 4, as indicated by the left-to-right arrow. By increasing the number of transmit antennas, the corresponding envelope PDF shifts to the right, indicating a reduced fading probability. The most significant change is seen in the transition from a fully correlated channel, where the PDF changes from the exponential distribution of the monostatic channel to a product-of-Rayleigh distribution for the bistatic channel with M transmitters. By increasing M, the PDF curves approach the Rayleigh distribution, so that the cascaded channel can be described by Rayleigh fading for massive numbers of coherent transmitters. The backscatter communication performance is simulated by assuming BPSK modulation with additive white Gaussian noise (AWGN). Coherent detection is used to evaluate the BER performance. The number of random channel realizations is \(10^6\) to achieve the ensemble average of the BER. In the simulations with M transmitter antennas, the total power is normalized to unity. Figure 5a shows the calculated BER versus SNR (dB). As shown, by increasing the number of transmit antennas and by applying the phase compensation at the transmitter side, given in (10), the BER curves shift toward the left side of the graph. For instance, by applying two transmitter antennas, we can observe a gain of 10 dB for a BER level of \(10^{-4}\) compared to a single antenna in transmission. The gain increases for M = 3 and 4.
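As an illustration of the Monte-Carlo procedure just described, the following Python/numpy sketch (ours, not the authors' code; the function name and trial count are assumptions) estimates the BPSK error rate over the ideally co-phased, bistatic M × 1 × 1 cascaded Rayleigh channel with the total transmit power normalized to unity and coherent detection:

import numpy as np

rng = np.random.default_rng(1)

def ber_cophased(M, snr_db, n_trials=200_000):
    # independent unit-variance forward (M) and backward (1) Rayleigh coefficients
    h_f = (rng.standard_normal((n_trials, M)) + 1j * rng.standard_normal((n_trials, M))) / np.sqrt(2)
    h_b = (rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials)) / np.sqrt(2)
    # ideal co-phasing at the tag: forward magnitudes add; 1/sqrt(M) keeps total power fixed
    g = np.abs(h_f).sum(axis=1) / np.sqrt(M) * np.abs(h_b)
    s = rng.choice([-1.0, 1.0], size=n_trials)          # BPSK tag symbols
    noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
    n = noise_std * (rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials))
    y = g * s + n                                       # coherent reception, real channel gain
    return np.mean(np.sign(y.real) != s)

for M in (1, 2, 4):
    print(f"M={M}: estimated BER at 20 dB ~ {ber_cophased(M, 20):.4f}")

Under these assumptions the estimated error rate should fall as M grows, in line with the leftward shift of the BER curves in Fig. 5a.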
The improvement in BER performance can be described by the transmit array gain and the modification of the channel statistics as stated in (6) and Fig. 4. Average BER curves for cascaded backscatter channels with independent Rayleigh fading in the forward and backscatter links. Uncoded BPSK modulation with coherent detection in the presence of AWGN is applied. Each curve represents the average BER of the signal at the n-th receiver with and without diversity. a Normalized total transmitted power with M transmitters (diversity gain and antenna array gain are represented). b BER normalized to constant EIRP (diversity gain due to the altered channel statistics is illustrated) Figure 5b shows the BER performance for the scenario in which the effective isotropic radiated power (EIRP) is normalized to unity. Therefore, the array gain is removed in the BER calculations, and the improvement caused by the transmit diversity is obtained. As shown, a gain of 7 dB is achieved for a BER level of \(10^{-4}\) due to the changes in the channel statistics. The remaining 3 dB of gain in Fig. 5a is related to the antenna array gain in transmission. In an ultimate scenario, with massive numbers of antennas in the forward channel, the overall channel statistics approaches the Rayleigh performance (M × 1 × 1, M → ∞). 4 Performance analysis by numerical simulations The implementation of the proposed transmit diversity is simple. As a result of (5), for a co-phase signal combination at the tag location, the transmitters' phases (\(\varphi_i\), i = 1, 2, …, M) must be adjusted. Figure 3 shows the co-phase multi-transmit backscatter communication scheme. In this implementation, all the transmitters should have a common reference clock, and a feedback from one receiver is used for the phase tuning. In the phase tuning process, one of the transmitters sends a continuous wave (CW) signal into the channel. The tag reflects its data into the backscatter channel, and the reflections are received and demodulated by the receiver system. The signal level at the receiver is monitored. The second transmitter starts its operation by transmitting a CW signal, and the phase of the transmitted carrier is swept in the range [− π, π] to obtain the maximum data signal at the receiver side. This process is similar to the phasor summation of wave vectors. The procedure is continued for the subsequent transmitters, and the phase tuning is performed for each newly added transmitter. Therefore, relative phases among the transmitters are selected that increase the SNR at the receiver. The proposed co-phasing algorithm is a simple approach, and it may be replaced by other fast and more adaptive algorithms, which are not considered in this work. According to the approach above, the phase of each transmitter must be tuned to increase the SNR at the receiver. Therefore, the sensitivity of the scheme to phase deviations among the transmitters is studied. Figure 6 shows the simulated average BER performance for the 2 × 1 × 1 backscatter channel with independent forward and backward Rayleigh fading channels. As shown, a phase deviation of 0° provides the best BER performance, in which the two transmitter signals are coherently added at the tag location. As the phase deviates from the optimal value, the performance is degraded. However, the performance is almost intact for phase deviations up to 120°. The reason is the vector summation of the complex Gaussian random variables.
From 120° to 170°, the performance is degraded rapidly, and the system performance becomes similar to a single antenna in transmission (see Fig. 6). In the case that the phase error lies in the range of 170°–180°, the system performance is worse than the single antenna configuration. In this case, the signals at the tag are destructive. Therefore, the probability of worsening the performance with dual antennas compared to a single antenna in transmission is about 10/180, or 5.5%. This result shows that if the second transmitter comes into operation with a random phase value, there is a probability of 5.5% that the system performance is degraded. However, with the phase sweeping, the system performance can be improved for a wide range of phase values, for instance 66% (i.e., 120/180). Therefore, the most essential characteristic of the co-phase multi-transmit scheme in backscatter communication is its low sensitivity to phase deviation from the optimal value. BER versus SNR for phase deviation (Δφ, in degrees) from the optimal value (Δφ = 0) for a system with two query transmitters (2 × 1 × 1). The system performance is less sensitive to phase misalignments up to 120° 5 Experimental results The multi-transmitter coherent query scheme for backscatter communication is demonstrated using the experimental setup shown in Fig. 7. The plan of the indoor environment is illustrated in Fig. 7a. The transmitter antennas are log-periodic, and the tag antenna is a circular patch on a ground plane with a resonance frequency of 2.5 GHz (Fig. 7b, c). The tag antenna port is connected to an RF switchboard which is controlled by the data source from a computer program. The tag reflection is maximized by self-grounding the antenna (resonance mode) and is minimized by an open circuit condition. Backscatter implementation setup using the two-transmitter query scheme a The plan of the indoor measurement environment, b two transmit query and receiving antennas, c tag antenna, d transmitter and receiver radios based on USRP N210 modules and clock reference Octoclock-G The transmitter antennas are located in one corridor and transmit a CW signal at 2.5 GHz. To generate NLOS conditions, the tag antenna is placed in the second corridor to remove the possible direct path from the transmitters. Two transmitters and one receiver are used for wireless communication with the semi-passive tag device. The transceivers are realized using software-defined radio (SDR), USRP N210. The two SDR transmitters are synchronized by using an external clock source, Octoclock-G (Fig. 7d). Therefore, the relative phase of the transmitter carriers can be tuned via the system's software program. One of the SDRs is also used as the receiver. MATLAB Simulink is used to implement the baseband processing functions. All SDRs are connected to their host PCs through Gigabit Ethernet, ensuring enough baseband bandwidth. MATLAB Simulink block diagram for the backscatter M-query scheme using USRP N210 a implemented USRP No. 1 as the transmitter and receiver b 2-PAM non-coherent receiver chain for the backscattered data extraction c transmitter module USRP No. 2 with phase tuning feature The tag switch is controlled by the data from a PC which transfers the content of an image in a defined protocol. All the requirements for data extraction at the receiver side are considered in this protocol. In the experiment, we consider constant net power at the query end.
Thus, in the dual transmitter scheme, the transmitted power of each radio is reduced by 3 dB compared to the single transmitter. Figure 8 shows the block diagram of the realized 2 × 1 × 1 backscatter communication system using USRP N210 and the related MATLAB Simulink target. The transmitter part can be tuned to operate at a defined carrier frequency. The gain of the transmitters can be adjusted independently. In addition, a phase shifter is used that can modify the relative phase of the synchronized transmitters. The receiver includes automatic gain control (AGC), a receiver matched filter, coarse and fine frequency compensators, clock recovery, and data decoding blocks. Figure 9a shows the baseband received signal detected by the SDR receiver; the eye diagram for both in-phase and quadrature components is shown in Fig. 9b. The overall system reads the transmitted image data via the backscatter link. Experimental received signal a Baseband received signal after removing the direct coupling term b eye diagram after clock recovery for I & Q channels To demonstrate the applicability of the coherent query scheme, the backscatter signal level detected by the SDR receiver is used as the basis for the performance evaluation (Fig. 9a). MATLAB Simulink is used for the real-time monitoring of the tag data in the receiver. First, one SDR transmitter with constant power is applied. Figure 10 shows the level of the detected signal for the one-transmitter query system in the quasi-static channel. Using the two-transmitter scheme, by assigning half of the power to each transmitter, the received signal is detected. By sweeping the phase of one transmitter in the range [− π, π] with 5° steps per second, we can observe the received signal as the blue curve. As shown, the detected backscatter signal level is increased from 0.8 to 1.4 mV, which is about 5 dB in terms of the received signal power. Also, for 60% of the phase interval, the baseband signal amplitude using the 2 × 1 × 1 scheme is above that of the 1 × 1 × 1 scheme, confirming the simulation results in Section 4. Measured data signal amplitude after demodulation using the one-transmitter query 1 × 1 × 1 (red curve) and the two-transmitter query 2 × 1 × 1 (blue curve) with relative phase variations The reason for the improved signal level at the receiver is EM wave focusing in the forward path. To assess this fact, a spectrum analyzer is connected to the tag antenna port, and the received signal level is monitored by sweeping the transmitter phase. By monitoring the received carrier signal power at the tag antenna, it is observed that the transmitted signals with the above-defined optimum phase values are coherently summed at the tag antenna location. 6 Receiver diversity Receiver diversity is applied to further improve the BER performance by altering the channel statistics in the backward path. For this purpose, we use M transmit and N receive antennas. In the implementation procedure, feedback from one receiver is applied to the transmitter chain and the phase tuning task is completed to ensure the coherent summation of the forward paths at the tag location. Thus, the transmit diversity task is completed. The receiver diversity is applied to coherently combine the data signals at the receivers. To determine the amount of the improvement, Monte-Carlo simulations are conducted for \(10^6\) channel realizations, and EGC is used as the signal combining method. Figure 11 shows the results of the simulations.
The average BER curves for BPSK modulation are plotted for the 2 × 1 × 2, 2 × 1 × 3, and 3 × 1 × 3 schemes. The BER curves for the multiplicative Rayleigh, Rayleigh, and AWGN channels are also illustrated. As shown, by using the 2 × 1 × 2 configuration, the BER performance at a level of \(10^{-4}\) is improved by 19 dB compared to the non-diversity (1 × 1 × 1) backscatter system. By using three transceivers (3 × 1 × 3), the improvement is 4.5 dB compared to the dual transceivers. In an ultimate state with massive numbers of transceivers, the system performance approaches that of the AWGN channel. BER versus SNR for BPSK modulation in the backscatter channel with multiple transmitters and receivers. The transmit diversity in the forward channel is used so that the signals add up coherently at the tag location, and receiver diversity with EGC is used in the backward channel A coherent query scheme is proposed for backscatter wireless communication with a single-tag device in a Rayleigh multipath channel. The proposed method uses the receiver feedback for providing appropriate phase information at the query end and thus spatial wave focusing at the tag antenna. Using the proposed method, the multiplicative Rayleigh fading channel statistics are altered in favor of the BER. An analytical approach is used to derive the fading channel statistics, and Monte-Carlo numerical simulations confirm the results. The BER performance is calculated numerically for the M-transmitter scheme with channel feedback. The provided method shifts the tag device complexity to the transmitter side, where a moderate complexity is acceptable. The approach is validated experimentally by using SDR implementations. Two transmitters at the query end are synchronized by using a common clock distributor network, and feedback from one receiver is used to control the phase difference between the transmitter carrier signals. Details of the system implementation using MATLAB Simulink are provided, and backscatter communication with a data-intensive tag device is demonstrated. Measurements show coherent summation of the transmitted signals at the tag location, using the coherent query scheme, which results in improved data connectivity in multipath channels. Transmit diversity is proposed for backscatter wireless communication with a single-tag antenna. The primary objective is to combat deep fading in the backscatter channel for possible high data rate communication with a tag device. It is shown that using coherent query antennas on the reader side, with a feedback channel from one receiver, the overall backscatter link can be improved by 10 dB for BER = \(10^{-4}\) and BPSK modulation using two transmitter antennas. The improvement is caused by the coherent summation of the multipath signals in the forward path at the tag location, thanks to the receiver feedback. The channel statistics are positively modified by using multiple coherent and phase-tuned transmitters, in which the equivalent channel statistics can be converted from a multiplicative Rayleigh to a Rayleigh channel by increasing the number of phase-tuned transmitters. The transmit array gain adds further to the system performance. The analytical expressions of the channel statistics are provided. It is also indicated that the transmit diversity performance is only weakly sensitive to phase deviations among the transmitters. Therefore, there is a broad range of relative phase values over which a backscatter system with multiple transmitters can operate close to optimally.
The system implementation is demonstrated by experimental measurements in an indoor channel. Using receive diversity in addition to the transmit diversity is recommended to improve the channel statistics and obtain better BER performance than in a Rayleigh channel. In the case of receiver diversity, only feedback from one receiver in the chain is needed for tuning the phase differences among the transmitters. The diversity combining at the receiver can be applied following the forward channel focusing, which improves the BER performance significantly. AGC: Automatic gain control; AWGN: Additive white Gaussian noise; BER: Bit error rate; BPSK: Binary phase shift keying; CW: Continuous wave; EGC: Equal gain combining; EIRP: Effective isotropic radiated power; ISI: Inter-symbol interference; MIMO: Multiple input multiple output; NLOS: Non-line of sight; PDF: Probability density function; RCS: Radar cross section; RFID: Radio frequency identification; SDR: Software-defined radio; STC: Space-time coding; WSN: Wireless sensor network. The authors would like to thank the Wireless Test Terminal Lab (WTTL), KNTU, for the help and support of the experiments, as well as the Department of Electronic Systems at NTNU and the Research Council of Norway, under the project WINNOW, grant no. 270957/O70, for supporting the visiting scholar. The authors have contributed jointly to all parts of the preparation of this manuscript, and all authors read and approved the final manuscript. Aminolah Hasanvand is a Ph.D. student at K. N. Toosi University of Technology, Tehran, Iran. Ali Khaleghi is an adjunct professor at K. N. Toosi University of Technology and is a scientist at the Norwegian University of Science and Technology (NTNU), Trondheim, Norway. Ilangko Balasingham is a professor at the Norwegian University of Science and Technology (NTNU), Trondheim, Norway. K. N. Toosi University of Technology (KNTU), Tehran, Iran Norwegian University of Science and Technology (NTNU), Trondheim, Norway Signal Processing Group, Department of Electronic Systems, Norwegian University of Science and Technology, Electro Building, NTNU, N-7491 Trondheim, Norway D M Dobkin, The RF in RFID: UHF RFID in Practice (Newnes, 2012) S Roy, V Jandhyala, JR Smith, D Wetherall, BP Otis, R Chakraborty, M Buettner, DJ Yeager, Y-C Ko, AP Sample, RFID: from supply chains to sensor nets. Proc. IEEE 98(9), 1583–1592 (2010) K Han, K Huang, Wirelessly powered backscatter communication networks: modeling, coverage, and capacity. IEEE Trans. Wirel. Commun. 16(4), 2548–2561 (2017) C Psomas, I Krikidis, Backscatter communications for wireless powered sensor networks with collision resolution. IEEE Wireless Communications Letters 6(5), 650–653 (2017) G Wang, F Gao, R Fan, C Tellambura, Ambient backscatter communication systems: detection and performance analysis. IEEE Trans. Commun. 64(11), 4836–4846 (2016) X Zhou, G Wang, Y Wang, J Cheng, An approximate BER analysis for ambient backscatter communication systems with tag selection. IEEE Access 5, 22552–22558 (2017) Y Liu, G Wang, Z Dou, Z Zhong, Coding and detection schemes for ambient backscatter communication systems. IEEE Access 5, 4947–4953 (2017) P Nikitin, K Rao, R Martinez, Differential RCS of RFID tag. Electron. Lett. 43(8), 431–432 (2007) D Kim, MA Ingram, WW Smith, Measurements of small-scale fading and path loss for long range RF tags. IEEE Trans. Antennas Propag.
51(8), 1740–1749 (2003) SJ Thomas, MS Reynolds, A 96 Mbit/sec, 15.5 pJ/bit 16-QAM modulator for UHF backscatter communication. IEEE International Conference. 185–190 (2012) JD Griffin, GD Durgin, Gains for RF tags using multiple antennas. IEEE Trans. Antennas Propag. 56(2), 563–570 (2008) J-S Kim, K-H Shin, S-M Park, W-K Choi, N-S Seong, Polarization and space diversity antenna using inverted-F antennas for RFID reader applications. Antennas and Wireless Propagation Letters, IEEE 5(1), 265–268 (2006) MS Abouzeid, L Lopacinski, E Grass, T Kaiser, R Kraemer, Efficient and low-complexity space time code for massive MIMO RFID systems. 12th Iberian Conference on IEEE. 1–6 (2017) MA Ingram, MF Demirkol, D Kim, Transmit diversity and spatial multiplexing for RF links using modulated backscatter, in Proceedings of the 2001 International Symposium on Signals, Systems, and Electronics (ISSSE '01) (Tokyo, 2001) A Rahmati, L Zhong, M Hiltunen, R Jana, Reliability techniques for RFID-based object tracking applications (2007), pp. 113–118 JD Griffin, GD Durgin, Multipath fading measurements for multi-antenna backscatter RFID at 5.8 GHz. IEEE International Conference on. 322–329 (2009) C Boyer, S Roy, Space time coding for backscatter RFID. IEEE Trans. Wirel. Commun. 12(5), 2272–2280 (2013) C He, X Chen, ZJ Wang, W Su, On the performance of MIMO RFID backscattering channels. EURASIP J. Wirel. Commun. Netw. 2012(1), 1–15 (2012) C He, ZJ Wang, VCM Leung, Unitary query for the MxLxN MIMO backscatter RFID channel. IEEE Trans. Wirel. Commun. 14(5), 2613–2625 (2015) C He, ZJ Wang, C Miao, Query diversity schemes for backscatter RFID communications with single-antenna tags. IEEE Trans. Veh. Technol. 66(8), 6932–6941 (2017) AJS Boaventura, NB Carvalho, The design of a high-performance multisine RFID reader. IEEE Transactions on Microwave Theory and Techniques 65(9), 3389–3400 (2017) H-C Liu, Y-F Chen, Y-T Chen, A frequency diverse Gen2 RFID system with isolated continuous wave emitters. J. Networks 2(5), 54–60 (2007) D Chizhik, G Foschini, R Valenzuela, Capacities of multi-element transmit and receive antennas: correlations and keyholes. Electron. Lett. 36(13), 1 (2000) C Boyer, S Roy, Invited paper-backscatter communication and RFID: coding, energy, and MIMO analysis. IEEE Trans. Commun. 62(3), 770–785 (2014) D Hotte, R Siragusa, Y Duroc, S Tedjini, Radar cross-section measurement in millimetre-wave for passive millimetre-wave identification tags. IET Microwaves, Antennas & Propagation 9(15), 1733–1739 (2015) M. K. Simon, Probability distributions involving Gaussian random variables: a handbook for engineers and scientists (Springer Science & Business Media, New York, 2007) A. D. Poularikas, Transforms and applications handbook (CRC Press, Boca Raton, 2010)
CommonCrawl
University of Virginia Library Research Data Services + Sciences The Wilcoxon Rank Sum Test Posted on Thursday, January 5th, 2017 at 5:35 pm. Written by jcf2d The Wilcoxon Rank Sum Test is often described as the non-parametric version of the two-sample t-test. You sometimes see it in analysis flowcharts after a question such as "is your data normal?" A "no" branch off this question will recommend a Wilcoxon test if you're comparing two groups of continuous measures. So what is this Wilcoxon test? What makes it non-parametric? What does that even mean? And how do we implement it and interpret it? Those are some of the questions we aim to address in this post. First, let's recall the assumptions of the two-sample t test for comparing two population means: 1. The two samples are independent of one another 2. The two populations have equal variance or spread 3. The two populations are normally distributed There's no getting around #1. That assumption must be satisfied for a two-sample t-test. When assumptions #2 and #3 (equal variance and normality) are not satisfied but the samples are large (say, greater than 30), the results are approximately correct. But when our samples are small and our data are skewed or non-normal, we probably shouldn't place much faith in the two-sample t-test. This is where the Wilcoxon Rank Sum Test comes in. It only makes the first two assumptions of independence and equal variance. It does not assume our data have a known distribution. Known distributions are described with math formulas. These formulas have parameters that dictate the shape and/or location of the distribution. For example, variance and mean are the two parameters of the Normal distribution that dictate its shape and location, respectively. Since the Wilcoxon Rank Sum Test does not assume known distributions, it does not deal with parameters, and therefore we call it a non-parametric test. Whereas the null hypothesis of the two-sample t test is equal means, the null hypothesis of the Wilcoxon test is usually taken as equal medians. Another way to think of the null is that the two populations have the same distribution with the same median. If we reject the null, that means we have evidence that one distribution is shifted to the left or right of the other. Since we're assuming our distributions are equal, rejecting the null means we have evidence that the medians of the two populations differ. The R statistical programming environment, which we use to implement the Wilcoxon rank sum test below, refers to this as a "location shift". Let's work a quick example in R. The data below come from Hogg & Tanis, example 8.4-6. It involves the weights of packaging from two companies selling the same product. We have 8 observations from each company, A and B. We would like to know if the distribution of weights is the same at each company. A quick boxplot reveals the data have similar spread but may be skewed and non-normal. With such a small sample it might be dangerous to assume normality. A <- c(117.1, 121.3, 127.8, 121.9, 117.4, 124.5, 119.5, 115.1) B <- c(123.5, 125.3, 126.5, 127.9, 122.1, 125.6, 129.8, 117.2) dat <- data.frame(weight = c(A,B), company = rep(c("A","B"), each=8)) boxplot(weight ~ company, data = dat) Now we run the Wilcoxon Rank Sum Test using the wilcox.test function. Again, the null is that the distributions are the same, and hence have the same median. The alternative is two-sided.
We have no idea if one distribution is shifted to the left or right of the other. wilcox.test(weight ~ company, data = dat) data: weight by company W = 13, p-value = 0.04988 alternative hypothesis: true location shift is not equal to 0 First we notice the p-value is a little less than 0.05. Based on this result we may conclude the medians of these two distributions differ. The alternative hypothesis is stated as the "true location shift is not equal to 0". That's another way of saying "the distribution of one population is shifted to the left or right of the other," which implies different medians. The Wilcoxon statistic is returned as W = 13. This is NOT an estimate of the difference in medians. This is actually the number of times that a package weight from company B is less than a package weight from company A. We can calculate it by hand using nested for loops as follows (though we should note that this is not how the wilcox.test function calculates W): W <- 0 for(i in 1:length(B)){ for(j in 1:length(A)){ if(B[j] < A[i]) W <- W + 1 } } Another way to do this is to use the outer function, which can take two vectors and perform an operation on all pairs. The result is an 8 x 8 matrix consisting of TRUE/FALSE values. Using sum on the matrix counts all instances of TRUE. sum(outer(B, A, "<")) Of course we could also go the other way and count the number of times that a package weight from company A is less than a package weight from company B. This gives us 51. sum(outer(A, B, "<")) If we relevel our company variable in data.frame dat to have "B" as the reference level, we get this same count, 51, as the test statistic in the wilcox.test output. dat$company <- relevel(dat$company, ref = "B") So why are we counting pairs? Recall this is a non-parametric test. We're not estimating parameters such as a mean. We're simply trying to find evidence that one distribution is shifted to the left or right of the other. In our boxplot above, it looks like the distributions from both companies are reasonably similar but with B shifted to the right, or higher, than A. One way to think about testing if the distributions are the same is to consider the probability of a randomly selected observation from company A being less than a randomly selected observation from company B: P(A < B). We could estimate this probability as the number of pairs with A less than B divided by the total number of pairs. In our case that comes to \(51/(8\times8)\) or \(51/64\). Likewise we could estimate the probability of B being less than A. In our case that's \(13/64\). So we see that the statistic W is the numerator in this estimated probability. The exact p-value is determined from the distribution of the Wilcoxon Rank Sum Statistic. We say "exact" because the distribution of the Wilcoxon Rank Sum Statistic is discrete. It is parametrized by the two sample sizes we're comparing. "But wait, I thought the Wilcoxon test was non-parametric?" It is! But the test statistic W has a distribution which does not depend on the distribution of the data. We can calculate the exact two-sided p-values explicitly using the pwilcox function (they're two-sided, so we multiply by 2): For W = 13, \(P(W \leq 13)\): pwilcox(q = 13, m = 8, n = 8) * 2 For W = 51, \(P(W \geq 51)\), we have to get \(P(W \leq 50)\) and then subtract from 1 to get \(P(W \geq 51)\): (1 - pwilcox(q = 51 - 1, m = 8, n = 8)) * 2 By default the wilcox.test function will calculate exact p-values if the samples contain fewer than 50 finite values and there are no ties in the values.
(More on "ties" in a moment.) Otherwise a normal approximation is used. To force the normal approximation, set exact = FALSE. dat$company <- relevel(dat$company, ref = "A") wilcox.test(weight ~ company, data = dat, exact = FALSE) Wilcoxon rank sum test with continuity correction When we use the normal approximation the phrase "with continuity correction" is added to the name of the test. A continuity correction is an adjustment that is made when a discrete distribution is approximated by a continuous distribution. The normal approximation is very good and computationally faster for samples larger than 50. Let's return to "ties". What does that mean and why does it matter? To answer those questions first consider the name "Wilcoxon Rank Sum test". The name is due to the fact that the test statistic can be calculated as the sum of the ranks of the values. In other words, take all the values from both groups, rank them from lowest to highest according to their value, and then sum the ranks from one of the groups. Here's how we can do it in R with our data: sum(rank(dat$weight)[dat$company=="A"]) Above we rank all the weights using the rank function, select only those ranks for company A, and then sum them. This is the classic way to calculate the Wilcoxon Rank Sum test statistic. Notice it doesn't match the test statistic provided by wilcox.test, which was 13. That's because R is using a different calculation due to Mann and Whitney. Their test statistic, sometimes called U, is a linear function of the original rank sum statistic, usually called W: \[U = W - \frac{n_1(n_1 + 1)}{2}\] where \(n_1\) is the number of observations in the group whose ranks were summed. We can verify this relationship for our data sum(rank(dat$weight)[dat$company=="A"]) - (8*9/2) This is in fact how the wilcox.test function calculates the test statistic, though it labels it W instead of U. The rankings of values have to be modified in the event of ties. For example, in the data below 7 occurs twice. One of the 7's could be ranked 3 and the other 4. But then one would be ranked higher than the other and that's not correct. We could rank them both 3 or both 4, but that wouldn't be right either. What we do then is take the average of their ranks. Below this is \((3 + 4)/2 = 3.5\). R does this by default when ranking values. vals <- c(2, 4, 7, 7, 12) rank(vals) [1] 1.0 2.0 3.5 3.5 5.0 The impact of ties means the Wilcoxon rank sum distribution cannot be used to calculate exact p-values. If ties occur in our data and we have fewer than 50 observations, the wilcox.test function returns a normal approximated p-value along with a warning message that says "cannot compute exact p-value with ties". Whether exact or approximate, p-values do not tell us anything about how different these distributions are. For the Wilcoxon test, a p-value is the probability of getting a test statistic as large or larger assuming both distributions are the same. In addition to a p-value we would like some estimated measure of how these distributions differ. The wilcox.test function provides this information when we set conf.int = TRUE. wilcox.test(weight ~ company, data = dat, conf.int = TRUE) 95 percent confidence interval: -8.5 -0.1 sample estimates: difference in location This returns a "difference in location" measure of -4.65.
The documentation for the wilcox.test function states this "does not estimate the difference in medians (a common misconception) but rather the median of the difference between a sample from x and a sample from y." Again we can use the outer function to verify this calculation. First we calculate the difference between all pairs and then find the median of those differences. median(outer(A,B,"-")) [1] -4.65 The confidence interval is fairly wide due to the small sample size, but it appears we can safely say the median weight of company A's packaging is at least -0.1 less than the median weight of company B's packaging. If we're explicitly interested in the difference in medians between the two populations, we could try a bootstrap approach using the boot package. The idea is to resample the data (with replacement) many times, say 1000 times, each time taking a difference in medians. We then take the median of those 1000 differences to estimate the difference in medians. We can then find a confidence interval based on our 1000 differences. An easy way is to use the 2.5th and 97.5th percentiles as the upper and lower bounds of a 95% confidence interval. Here is one way to carry this out in R. First we load the boot package, which comes with R, and create a function called med.diff to calculate the difference in medians. In order to work with the boot package's boot function, our function needs two arguments: one for the data and one to index the data. We have arbitrarily named these arguments d and i. The boot function will take our data, d, and resample it according to randomly selected row numbers, i. It will then return the difference in medians for the resampled data. library(boot) med.diff <- function(d, i) { tmp <- d[i,] median(tmp$weight[tmp$company=="A"]) - median(tmp$weight[tmp$company=="B"]) } Now we use the boot function to resample our data 1000 times, taking a difference in medians each time, and saving the results into an object called boot.out. boot.out <- boot(data = dat, statistic = med.diff, R = 1000) The boot.out object is a list object. The element named "t" contains the 1000 differences in medians. Taking the median of those values gives us a point estimate of the difference in medians. Below we get -5.05, but you will likely get something different. median(boot.out$t) Next we use the boot.ci function to calculate confidence intervals. We specify type = "perc" to obtain the bootstrap percentile interval. boot.ci(boot.out, type = "perc") BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 1000 bootstrap replicates CALL : boot.ci(boot.out = boot.out, type = "perc") Intervals : Level Percentile 95% (-9.399, -0.100 ) Calculations and Intervals on Original Scale We notice the interval is not too different from what the wilcox.test function returned, but it is somewhat wider at the lower bound. Like the Wilcoxon rank sum test, bootstrapping is a non-parametric approach that can be useful for small and/or non-normal data. Hogg, R.V. and Tanis, E.A., Probability and Statistical Inference, 7th Ed, Prentice Hall, 2006. R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/. For questions or clarifications regarding this article, contact the UVA Library StatLab: [email protected] View the entire collection of UVA Library StatLab articles.
Clay Ford, Statistical Research Consultant, University of Virginia Library
CommonCrawl
Start wearing purple http://math.stackexchange.com/users/73025/l-g $\;$ top accounts reputation activity bookmarks subscriptions How can one prove that $e<\pi$? inequality asked Sep 10 '13 at 16:12 math.stackexchange.com Unexpected approximations which have led to important mathematical discoveries soft-question math-history big-list experimental-mathematics asked May 11 '13 at 10:48 Fibonacci numbers from $998999$ sequences-and-series puzzle fibonacci-numbers asked May 1 '14 at 10:09 Free online mathematical software big-list math-software experimental-mathematics asked May 14 '13 at 9:34 How to compute $\sum_{n\text{ odd}}\frac{1}{n\sinh n\pi\sqrt 3}$? sequences-and-series closed-form asked Jun 18 '15 at 21:01 Why is the dimension of $SL(2,\mathbb{H})$ equal to $15$? reference-request lie-groups quaternions asked May 23 '13 at 12:28 Name for determinant identity linear-algebra matrices terminology determinant asked Oct 7 '15 at 14:06 Indefinite integral $\int \arcsin \left(k\sin x\right) dx$ special-functions closed-form elliptic-integrals polylogarithm asked Nov 6 '13 at 15:55 Identity $i\int_0^{\pi}\left[\mathrm{Li}_2\left(-1-e^{ix}\right)-\mathrm{Li}_2\left(-1-e^{-ix}\right)\right]dx=\frac{7}{3}\zeta(3) $ complex-analysis special-functions riemann-zeta asked Apr 29 '13 at 12:45 Top Answers 'Obvious' theorems that are actually false Are there any open mathematical puzzles? Reasoning that $ \sin2x=2 \sin x \cos x$ How to prove $\int_{-\infty}^{+\infty} f(x)dx = \int_{-\infty}^{+\infty} f\left(x - \frac{1}{x}\right)dx?$ What is the oldest open problem in geometry? An integral involving Airy functions $\int_0^\infty\frac{x^p}{\operatorname{Ai}^2 x + \operatorname{Bi}^2 x}\mathrm dx$ Closed form for $\int_0^1\log\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\mathrm dx$ How can I find $\sum\limits_{n=0}^{\infty}\left(\frac{(-1)^n}{2n+1}\sum\limits_{k=0}^{2n}\frac{1}{2n+4k+3}\right)$? Laplace, Legendre, Fourier, Hankel, Mellin, Hilbert, Borel, Z…: unified treatment of transforms? Visually deceptive "proofs" which are mathematically wrong What are some conceptualizations that work in mathematics but are not strictly true? Fun Geometric Series Puzzle Closed form for $\sum_{n=-\infty}^{\infty}\frac{1}{(n-a)^2+b^2}$. Proof of $\sum_{n=1}^{\infty}\frac1{n^3}\frac{\sinh\pi n\sqrt2-\sin\pi n\sqrt2}{{\cosh\pi n\sqrt2}-\cos\pi n\sqrt2}=\frac{\pi^3}{18\sqrt2}$ Closed form for $\prod_{n=1}^\infty\sqrt[2^n]{\frac{\Gamma(2^n+\frac{1}{2})}{\Gamma(2^n)}}$ How to show $\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1} = \frac{n(n + 1)}{2}$? Prove ${\large\int}_0^\infty\left({_2F_1}\left(\frac16,\frac12;\frac13;-x\right)\right)^{12}dx\stackrel{\color{#808080}?}=\frac{80663}{153090}$ What are some examples of mathematics that had unintended useful applications much later? Examples of $ \sqrt 2$ and $\sqrt[3]{3}$ in nature? How to solve $\int_0^\infty J_0(x)\ \text{sinc}(\pi\,x)\ e^{-x}\,\mathrm dx$? Closed form for $\sum_{n=0}^\infty\frac{\operatorname{Li}_{1/2}\left(-2^{-2^{-n}}\right)}{\sqrt{2^n}}$ Motivation for/history of Jacobi's triple product identity How do solve this integral $\int_{-1}^1\frac{1}{\sqrt{1-x^2}}\arctan\frac{11-6\,x}{4\,\sqrt{21}}\mathrm dx$? Why doesn't a Taylor series converge always? Why does the inverse of the Hilbert matrix have integer entries? 
An awful identity A logarithmic integral $\int^1_0 \frac{\log\left(\frac{1+x}{1-x}\right)}{x\sqrt{1-x^2}}\,dx$ Extract real and imaginary parts of $\operatorname{Li}_2\left(i\left(2\pm\sqrt3\right)\right)$ Closed form for $\int_0^{\infty}\frac{\arctan x\ln(1+x^2)}{1+x^2}\sqrt{x}\,dx$ Literary statements that are false as mathematics 1 2 3 4 5 … 10 next
CommonCrawl
New Cosmography From Universe in Problems problem id: cs-1 Using the cosmographic parameters introduced above, expand the scale factor into a Taylor series in time. We can write the scale factor in terms of the present time cosmographic parameters: \[a(t)\sim 1+H_{0} \Delta t-\frac{1}{2} q_{0} H_{0}^{2} \Delta t^{2} +\frac{1}{6} j_{0} H_{0}^{3} \Delta t^{3} +\frac{1}{24} s_{0} H_{0}^{4} \Delta t^{4} +\frac{1}{120} l_{0} H_{0}^{5} \Delta t^{5} \] This decomposition describes the evolution of the Universe on the time interval $\Delta t$ directly through the measurable cosmographic parameters. Each of them describes a certain characteristic of the evolution. In particular, the sign of the deceleration parameter $q$ indicates whether the dynamics is accelerated or decelerated. In other words, a positive deceleration parameter indicates that standard gravity predominates over the other species, whereas a negative sign indicates a repulsive effect which overcomes the standard attraction due to gravity. Evolution of the deceleration parameter is described by the jerk parameter $j$. In particular, a positive jerk parameter indicates that there exists a transition time when the Universe modifies its expansion. In the vicinity of this transition the deceleration parameter tends to zero and then changes its sign. The two terms, i.e., $q$ and $j$, fix the local dynamics, but they may not be sufficient to remove the degeneracy between different cosmological models, and one will need higher terms of the decomposition. Using the cosmographic parameters, expand the redshift into a Taylor series in time. \[1+z=\left[1+H_{0}(t-t_{0})-\frac{1}{2}q_{0}H_{0}^{2}(t-t_{0})^{2}+\frac{1}{3!}j_{0}H_{0}^{3}\left(t-t_{0}\right)^{3}+\frac{1}{4!}s_{0}H_{0}^{4}\left(t-t_{0}\right)^{4}+\frac{1}{5!}l_{0}H_{0}^{5}\left(t-t_{0}\right)^{5}+{\rm O}\left(\left(t-t_{0}\right)^{6}\right)\right]^{-1};\] \[z=H_{0}(t_{0}-t)+\left(1+\frac{q_{0}}{2}\right)H_{0}^{2}(t-t_{0})^{2}+\cdots.\] What is the reason for the statement that the cosmographic parameters are model-independent? The cosmographic parameters are model-independent quantities for a simple reason: these parameters are not functions of the EoS parameters $w$ or $w_{i}$ of the cosmic fluid filling the Universe in a concrete model. Obtain the following relations between the deceleration parameter and the Hubble parameter $$q(t)=\frac{d}{dt}\left(\frac{1}{H}\right)-1;\,\,q(z)=\frac{1+z}{H}\frac{dH}{dz}-1;\,\,q(z)=\frac{d\ln H}{dz}(1+z)-1.$$ Show that the deceleration parameter can be defined by the relation \[q=-\frac{d\dot{a}}{Hda} \] \[q=-\frac{d\dot{a}}{Hda} =-\frac{\ddot{a}dt}{Hda} =-\frac{\ddot{a}}{aH^{2} } .\] It corresponds to the standard definition of the deceleration parameter \[q=-\frac{\ddot{a}}{aH^{2} } .\] Classify models of the Universe based on the two cosmographic parameters -- the Hubble parameter and the deceleration parameter. When the rate of expansion never changes, and $\dot{a}$ is constant, the scaling factor is proportional to time $t$, and the deceleration term is zero. When the Hubble term is constant, the deceleration term $q$ is also constant and equal to $-1$, as in the de Sitter and steady-state Universes. In most models of the Universe the deceleration term changes in time.
Classify models of the Universe based on the two cosmographic parameters -- the Hubble parameter and the deceleration parameter.

When the rate of expansion never changes and $\dot{a}$ is constant, the scale factor is proportional to the time $t$ and the deceleration term is zero. When the Hubble term is constant, the deceleration term $q$ is also constant and equal to $-1$, as in the de Sitter and steady-state Universes. In most models of the Universe the deceleration term changes in time. One can classify models of the Universe on the basis of the time dependence of the two parameters. All models can be characterized by whether they expand or contract, and accelerate or decelerate:
(a) $H>0,\; q>0$: expanding and decelerating
(b) $H>0,\; q<0$: expanding and accelerating
(c) $H<0,\; q>0$: contracting and decelerating
(d) $H<0,\; q<0$: contracting and accelerating
(e) $H>0,\; q=0$: expanding, zero deceleration
(f) $H<0,\; q=0$: contracting, zero deceleration
(g) $H=0,\; q=0$: static.
Of course, generally speaking, both the Hubble parameter and the deceleration parameter can change their sign during the evolution, so the evolving Universe can transit from one type to another. It is one of the basic tasks of cosmology to follow this evolution and clarify its causes. There is little doubt that we live in an expanding Universe, and hence only (a), (b), and (e) are possible candidates. Evidence that the expansion is presently accelerating continues to grow in number, and therefore the current dynamics belongs to type (b).

Show that the deceleration parameter $q$ can be presented in the form
$$q(x)=\frac{H'(x)}{H(x)}x-1,\quad x=1+z,\quad H'\equiv\frac{dH}{dx}.$$

Show that for the deceleration parameter the following relation holds:
$$q(a)=-\left(1+\frac{dH/dt}{H^2}\right)=-\left(1+\frac{a\,dH/da}{H}\right).$$

problem id: cs-9
Show that
$$\frac{dq}{d\ln (1+z)}=j-q(2q+1).$$

problem id: cs-10
Let
\[C_{n} \equiv \gamma _{n} \frac{a^{(n)} }{aH^{n} } ,\]
where $a^{(n)}$ is the $n$-th time derivative of the scale factor, $n\ge 2$, $\gamma _{2} =-1,\; \gamma _{n} =1$ for $n>2$. Then $C_{2} =q,\; C_{3} =j,\; C_{4} =s,\ldots$ Obtain $dC_{n} /d\ln (1+z)$.

\[\begin{array}{l} {\frac{dC_{n} }{d\ln (1+z)} =-a\frac{dC_{n} }{da} =-\frac{1}{H} \frac{dC_{n} }{dt} ;} \\ {\frac{dC_{n} }{d\ln (1+z)} =-\gamma _{n} \left(\frac{a^{\left(n+1\right)} }{aH^{n+1} } -\frac{a^{\left(n\right)} }{aH^{n} } -n\frac{a^{(n)} \dot{H}}{aH^{n+2} } \right)=} \\ {=-\gamma _{n} \left(\frac{1}{\gamma _{n+1} } C_{n+1} -\frac{1}{\gamma _{n} } C_{n} -\frac{1}{\gamma _{n} } nC_{n} \frac{\dot{H}}{H^{2} } \right);} \\ {\frac{dC_{n} }{d\ln (1+z)} =-\frac{\gamma _{n} }{\gamma _{n+1} } C_{n+1} +C_{n} -nC_{n} (1+q)} \end{array}\]
(in the last step we used $\dot{H}/H^{2}=-(1+q)$, which follows from $q=-\dot{H}/H^{2}-1$ obtained above).

Use the result of the previous problem to find $dC_{n} /dt$.

\[\frac{dC_{n} }{dt} =-H\frac{dC_{n} }{d\ln (1+z)} =H\left[\frac{\gamma _{n} }{\gamma _{n+1} } C_{n+1} -C_{n} +nC_{n} (1+q)\right]\]

Use the general formula for $dC_{n} /dt$ obtained in the previous problem to obtain the time derivatives of the cosmographic parameters $q,j,s,l$.

\[\begin{array}{l} {\dot{q}=-H\left(j-2q^{2} -q\right),} \\ {\dot{j}=H\left[s+j(2+3q)\right],} \\ {\dot{s}=H\left[l+s(3+4q)\right],} \\ {\dot{l}=H\left[m+l(4+5q)\right]} \end{array}\]

Find the derivatives of the cosmographic parameters w.r.t. the redshift.

Using the results of the previous problem, one can transit from the time derivatives to the derivatives w.r.t. the redshift according to the relation $\frac{d}{dz} =-\frac{1}{H(1+z)} \frac{d}{dt}$:
\[\begin{array}{l} {\frac{dH}{dz} =H\frac{1+q}{1+z} ,} \\ {\frac{dq}{dz} =\frac{j-2q^{2} -q}{1+z} ,} \\ {\frac{dj}{dz} =-\frac{s+j(2+3q)}{1+z} ,} \\ {\frac{ds}{dz} =-\frac{l+s(3+4q)}{1+z} ,} \\ {\frac{dl}{dz} =-\frac{m+l\left(4+5q\right)}{1+z} } \end{array}\]
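As a cross-check of the formulas above, the relation $\dot{q}=-H\left(j-2q^{2}-q\right)$ can be verified directly from the definitions $q=-\ddot{a}/(aH^{2})$ and $j=\dddot{a}/(aH^{3})$ for a completely generic scale factor. A minimal sympy sketch (an illustration added here, not part of the original page):

    import sympy as sp

    t = sp.Symbol('t')
    a = sp.Function('a')(t)                    # generic symbolic scale factor

    H = sp.diff(a, t) / a                      # Hubble parameter
    q = -sp.diff(a, t, 2) / (a * H**2)         # deceleration parameter
    j = sp.diff(a, t, 3) / (a * H**3)          # jerk parameter

    # dq/dt + H*(j - 2*q**2 - q) should vanish identically
    print(sp.simplify(sp.diff(q, t) + H * (j - 2*q**2 - q)))   # -> 0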
Let $1+z=1/a\equiv x$. Find $q(x)$ and $j(x)$.

It is easy to see that $\dot{H}=-H'H/a$, where $H'=dH/dx$. Then
\[q=-\frac{\dot{H}}{H^{2} } -1=\frac{H'}{H} x-1.\]
Calculating $j$ and making use of $a'=-a^{2}$, we obtain
\[j(x)=1-2\frac{H'}{H} x+\left(\frac{H'^{2} }{H^{2} } +\frac{H''}{H} \right)x^{2}.\]

Express the derivatives $d^{2} H/dz^{2}$, $d^{3} H/dz^{3}$ and $d^{4} H/dz^{4}$ in terms of the cosmographic parameters.

Using the results of the two previous problems, one finds
\[\begin{array}{l} {\frac{d^{2} H}{dz^{2} } =\frac{j-q^{2} }{(1+z)^{2} } H,} \\ {\frac{d^{3} H}{dz^{3} } =\frac{H}{(1+z)^{3} } \left(3q^{2} +3q^{3} -4qj-3j-s\right),} \\ {\frac{d^{4} H}{dz^{4} } =\frac{H}{(1+z)^{4} } \left(-12q^{2} -24q^{3} -15q^{4} +32qj+25q^{2} j+7qs+12j-4j^{2} +8s+l\right).} \end{array}\]

Find the decomposition of the inverse Hubble parameter $1/H$ in powers of the redshift $z$.

\[\begin{array}{l} {\frac{d}{dz} \left(\frac{1}{H} \right)=-\frac{1}{H^{2} } \frac{dH}{dz} =-\frac{1+q}{1+z} \frac{1}{H} ;} \\ {\frac{d^{2} }{dz^{2} } \left(\frac{1}{H} \right)=2\left(\frac{1+q}{1+z} \right)^{2} \frac{1}{H} -\frac{j-q^{2} }{\left(1+z\right)^{2} } \frac{1}{H} =\frac{2+4q+3q^{2} -j}{(1+z)^{2} } \frac{1}{H} ;} \\ {\frac{1}{H(z)} =\frac{1}{H_{0} } \left[1-\left(1+q_{0} \right)z+\frac{2+4q_{0} +3q_{0}^{2} -j_{0} }{2} z^{2} +\ldots \right]} \end{array}\]

Obtain the relations for the transition from the time derivatives to those w.r.t. the redshift.

\[\begin{array}{l} {\frac{d^{2} }{dt^{2} } =(1+z)H\left[H+(1+z)\frac{dH}{dz} \right]\frac{d}{dz} +(1+z)^{2} H^{2} \frac{d^{2} }{dz^{2} } ,} \\ {\frac{d^{3} }{dt^{3} } =-(1+z)H\left\{H^{2} +(1+z)^{2} \left(\frac{dH}{dz} \right)^{2} +(1+z)H\left[4\frac{dH}{dz} +(1+z)\frac{d^{2} H}{dz^{2} } \right]\right\}\frac{d}{dz} -3(1+z)^{2} H^{2} \left[H+(1+z)\frac{dH}{dz} \right]\frac{d^{2} }{dz^{2} } -(1+z)^{3} H^{3} \frac{d^{3} }{dz^{3} } ,} \\ {\frac{d^{4} }{dt^{4} } =(1+z)H\left[H^{3} +11(1+z)H^{2} \frac{dH}{dz} +11(1+z)^{2} H\left(\frac{dH}{dz} \right)^{2} +(1+z)^{3} \left(\frac{dH}{dz} \right)^{3} +7(1+z)^{2} H^{2} \frac{d^{2} H}{dz^{2} } +4(1+z)^{3} H\frac{dH}{dz} \frac{d^{2} H}{dz^{2} } +(1+z)^{3} H^{2} \frac{d^{3} H}{dz^{3} } \right]\frac{d}{dz} } \\ {+(1+z)^{2} H^{2} \left[7H^{2} +22(1+z)H\frac{dH}{dz} +7(1+z)^{2} \left(\frac{dH}{dz} \right)^{2} +4(1+z)^{2} H\frac{d^{2} H}{dz^{2} } \right]\frac{d^{2} }{dz^{2} } +6(1+z)^{3} H^{3} \left[H+(1+z)\frac{dH}{dz} \right]\frac{d^{3} }{dz^{3} } +(1+z)^{4} H^{4} \frac{d^{4} }{dz^{4} } .} \end{array}\]

problem id:
Can $dH^{n} /dz^{n}$ generally be expressed in terms of the cosmographic parameters?

Show that
$$q(z)=\frac 12 \frac{d\ln H^2}{d\ln(1+z)}-1.$$

Show that the time derivatives of the Hubble parameter can be expressed through the cosmographic parameters as follows:
$$\dot{H}=-H^2(1+q);$$
$$\ddot{H}=H^3(j+3q+2);$$
$$\dddot{H}=H^4\left(s-4j-3q(q+4)-6\right);$$
$$\ddddot{H}=H^5\left(l-5s+10(q+2)j+30(q+2)q+24\right).$$

Show that
\[j=\frac{\ddot{H}}{H^{3} } +3\frac{\dot{H}}{H^{2} } +1.\]

Using the results of the previous problem, we find
\[j=\frac{\ddot{H}}{H^{3} } -3q-2.\]
Substituting
\[q=-\frac{\dot{H}}{H^{2} } -1,\]
one finally obtains
\[j=\frac{\ddot{H}}{H^{3} } +3\frac{\dot{H}}{H^{2} } +1.\]
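The expression for $d^{2}H/dz^{2}$ can also be cross-checked against the first-derivative relations $dH/dz$ and $dq/dz$ listed earlier: differentiating $dH/dz$ once more and eliminating the first derivatives must reproduce $(j-q^{2})H/(1+z)^{2}$. A minimal sympy sketch of this check (an illustration added here, not part of the original page; $j$ is treated as the local value of the jerk parameter):

    import sympy as sp

    z = sp.Symbol('z')
    H = sp.Function('H')(z)
    q = sp.Function('q')(z)
    j = sp.Symbol('j')

    dH = H * (1 + q) / (1 + z)           # dH/dz from the list above
    dq = (j - 2*q**2 - q) / (1 + z)      # dq/dz from the list above

    # Differentiate dH/dz once more and substitute the known first derivatives
    d2H = sp.diff(dH, z).subs({sp.Derivative(H, z): dH, sp.Derivative(q, z): dq})

    # Compare with (j - q**2)*H/(1+z)**2
    print(sp.simplify(d2H - (j - q**2) * H / (1 + z)**2))   # -> 0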
Express the total pressure in a flat Universe through the cosmographic parameters.

Excluding the density $\rho$ from the Friedmann equations
\[\begin{array}{l} {H^{2} =\frac{1}{3} \rho ,} \\ {\frac{\ddot{a}}{a} =H^{2} +\dot{H}=-\frac{1}{6} \left(\rho +3p\right),} \end{array}\]
one finds
\[p=-\left(3H^{2} +2\dot{H}\right).\]
Using the above obtained expression $\dot{H}=-H^{2} (1+q)$, we obtain
\[p=-H^{2} \left(1-2q\right).\]
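A quick consistency check (added here as an illustration, not part of the original page): combining $p=-H^{2}(1-2q)$ with $\rho=3H^{2}$ gives the effective equation-of-state parameter of a flat Universe, $w=p/\rho=(2q-1)/3$, i.e. $q=\frac{1}{2}(1+3w)$. A minimal sympy sketch:

    import sympy as sp

    H, q = sp.symbols('H q')

    rho = 3 * H**2                     # first Friedmann equation (units with 8*pi*G = c = 1)
    Hdot = -H**2 * (1 + q)             # dH/dt expressed through the deceleration parameter
    p = -(3 * H**2 + 2 * Hdot)         # total pressure in a flat Universe

    print(sp.expand(p))                # -> 2*H**2*q - H**2, i.e. -H**2*(1 - 2*q)
    print(sp.simplify(p / rho))        # -> 2*q/3 - 1/3, the effective EoS parameter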